13 posts tagged with "platform-engineering"

Your CS Team Built a Shadow Agent. That's Your Roadmap.

9 min read
Tian Pan
Software Engineer

A senior CSM in your support org spent a weekend wiring up an internal Slack bot. They wrote the system prompt themselves. They pointed it at the public docs, a Zendesk export of resolved tickets, and the changelog. Six weeks later it answers about 40% of the tier-1 questions their team used to type out by hand. Nobody on your engineering org chart knows it exists. The first time the platform team finds out, somebody from security will be asking why a service account is hitting Zendesk's API at 3am.
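
For concreteness, here is a minimal sketch of the shape these shadow bots almost always take: naive retrieval over the sources the CSM pointed it at, a prompt that forces the model to answer from context or escalate, and a stubbed `call_llm` standing in for whatever provider client they wired up. All names and documents here are illustrative, not taken from the bot in the story.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str  # "public_docs" | "resolved_tickets" | "changelog"
    text: str

CORPUS = [
    Doc("public_docs", "To rotate an API key, open Settings, then API Keys, then Rotate."),
    Doc("resolved_tickets", "Ticket #4812 (resolved): billing CSV exports are UTC-only by design."),
    Doc("changelog", "2026-01-14: webhook retries now use exponential backoff."),
]

def retrieve(question: str, k: int = 2) -> list[Doc]:
    """Naive keyword-overlap ranking; production versions swap in embeddings."""
    terms = set(question.lower().split())
    return sorted(CORPUS, key=lambda d: -len(terms & set(d.text.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for whatever provider client the bot actually uses.
    # Returning ESCALATE makes the sketch runnable and fail-safe by default.
    return "ESCALATE"

def answer(question: str) -> str:
    context = "\n---\n".join(f"[{d.source}] {d.text}" for d in retrieve(question))
    prompt = (
        "Answer the support question using ONLY the context below. "
        "If the context does not cover it, reply with exactly: ESCALATE.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    reply = call_llm(prompt)
    return "Routing you to a human." if reply.strip() == "ESCALATE" else reply

print(answer("Why are my billing exports in UTC?"))
```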

The default reaction is panic. Lock down the API token. Send a company-wide email about unsanctioned AI. Add a slide to the next governance review. Then promise that the platform team will build "the official version" next quarter, on the proper roadmap.

That reaction misses what actually happened. The CS team didn't go rogue — they built a working prototype of a product the engineering team hasn't shipped. They have real usage data, real prompt iteration cycles, and real user feedback. Your platform roadmap has none of those. Treating the bot as a compliance violation throws away the most accurate prioritization signal your AI program is going to get this year.

The Hidden Edges Between Your AI Features: When One Prompt Edit Regresses Three Other Teams

9 min read
Tian Pan
Software Engineer

A platform engineer changes the opening sentence of the company's "house style" preamble — a single line that anchors voice across customer-facing assistants. The change ships behind a flag. By Tuesday, the search team's relevance regression has spiked, the support bot's eval pass-rate has dropped four points, and the onboarding agent's retry rate has doubled. None of those teams touched their own code. None of them got a heads-up. The platform engineer has no idea any of this happened, because nobody was on the receiving end of an alert that said "your edit just broke three downstream features."

This is the failure mode that defines the second year of an AI org's life. The first year, every team builds its own thing in a corner. The second year, those corners start sharing artifacts — a prompt fragment here, a seeded eval set there, a tool schema reused as a contract — and the moment that sharing becomes implicit, the dependency graph between AI features becomes invisible. You now have a distributed system whose edges no one can name.

The discipline that fixes this is not a new platform. It's drawing the graph.
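
In practice, "drawing the graph" can start as something small and checked in: a map from each shared artifact to the features that consume it, queried before an edit ships rather than after the regressions land. A minimal sketch, with illustrative artifact, feature, and channel names:

```python
EDGES = {  # shared artifact -> features that consume it
    "prompt:house_style_preamble": ["search_relevance", "support_bot", "onboarding_agent"],
    "eval_set:seeded_tier1": ["support_bot"],
    "tool_schema:crm_lookup": ["onboarding_agent", "sales_assistant"],
}

OWNERS = {
    "search_relevance": "#search-team",
    "support_bot": "#support-eng",
    "onboarding_agent": "#growth-eng",
    "sales_assistant": "#sales-eng",
}

def blast_radius(artifact: str) -> dict[str, str]:
    """Who to alert before, not after, an edit to `artifact` ships."""
    return {feature: OWNERS[feature] for feature in EDGES.get(artifact, [])}

for feature, channel in blast_radius("prompt:house_style_preamble").items():
    print(f"notify {channel}: re-run evals for {feature} behind the flag")
```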

AI Office Hours Don't Scale: When Your One Expert Becomes the Release Gate

11 min read
Tian Pan
Software Engineer

Open the calendar of the one engineer at your company who has shipped real AI features into production for more than six months. Count the recurring "30 min sync — questions about the agent" invites, the ad-hoc "can I grab you for 15?" Slack pings that ended up booked, the architecture-review attendances marked "optional" that they actually have to be at, and the office hours block that started as one Friday afternoon and now eats two hours every weekday. Then look at the roadmap and trace which features depend on a decision that engineer hasn't made yet. The intersection is your real release schedule. The Jira board is fiction.

This is the AI office hours bottleneck, and it is the load-bearing constraint inside more 2026 AI orgs than anyone in those orgs would say out loud. The team scaled AI feature work fast — every product squad got a model budget, every PM got a prompt — and routed every "is this the right model," "should we use RAG here," "is our eval design valid," "why is the cache hit rate weird" question to the one engineer who's actually shipped enough production AI to answer. Six months in, that engineer's calendar is the limiting reagent for half the roadmap, and "I need to grab 30 minutes with them" is the unwritten escalation path your incident response process was supposed to make explicit.

The Internal LLM Gateway Is the New Service Mesh

10 min read
Tian Pan
Software Engineer

Walk into any company with fifty engineers writing LLM code in production and you will find seven gateway-shaped artifacts. The recommendations team built one to route between OpenAI and Anthropic. The support-bot team wrote one to attach their prompt registry. The platform team has a half-finished proxy that handles auth but not rate limiting. The growth team has a Lambda that does PII redaction on its way out. The data-science team is calling the vendor SDK directly and nobody has told them to stop. There is no shared gateway. There are seven shared problems, each solved poorly in isolation, and a CFO who is about to ask why the AI bill grew 40% quarter over quarter with no clear owner for any of it.

This is the same architectural beat the industry hit with microservices in 2016 and 2017. A thousand service-to-service calls, the same shared concerns at every team — auth, retries, observability, policy — and a choice between solving them once or rediscovering them everywhere. The answer then was the service mesh. The answer now is the internal LLM gateway, and most companies are still in the rediscovering-everywhere phase.
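
To make the claim concrete, here is a minimal sketch of a gateway's core loop under illustrative names: one choke point that owns auth, retries with backoff, and per-team usage records that the CFO's attribution question can be answered from. The provider callables and keys are stand-ins, not a real vendor SDK.

```python
import time
import uuid

TEAM_KEYS = {"key-recs": "recommendations", "key-support": "support_bot"}

class Gateway:
    def __init__(self, providers: dict):
        self.providers = providers  # model name -> callable(prompt) -> completion text
        self.usage_log: list[dict] = []  # one owner-attributed record per call

    def complete(self, api_key: str, model: str, prompt: str, retries: int = 3) -> str:
        team = TEAM_KEYS.get(api_key)
        if team is None:
            raise PermissionError("unknown key; no SDK side doors around the gateway")
        for attempt in range(retries):
            try:
                reply = self.providers[model](prompt)
            except ConnectionError:
                time.sleep(2 ** attempt)  # exponential backoff, solved once for everyone
                continue
            self.usage_log.append({"id": str(uuid.uuid4()), "team": team,
                                   "model": model, "prompt_chars": len(prompt)})
            return reply
        raise RuntimeError(f"{model} unavailable after {retries} attempts")

gw = Gateway(providers={"fast-model": lambda p: f"echo: {p[:20]}"})
print(gw.complete("key-recs", "fast-model", "rank these items for user 42"))
```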

The Model Deprecation Treadmill: Discipline That Has to Exist Before the Sunset Email

13 min read
Tian Pan
Software Engineer

The team that treats "we use the latest model" as a virtue is one sunset email away from a quarter of unplanned work. By the time the deprecation notice lands, the architectural decisions that determine whether you can absorb it have already been made — months ago, by people who weren't thinking about migrations at all. The eval suite was implicitly trained against a specific checkpoint. The prompts were tuned against a specific refusal style. The cost projections assumed a specific token-per-task baseline. The router has a hardcoded fallback to a model that is itself about to disappear. None of these decisions look like risks until the email arrives, and then all of them look like the same risk.

Model deprecation is now the most predictable surprise in the AI stack. Anthropic gives a minimum of 60 days' notice on publicly released models. OpenAI's notice windows range from three months for specialized snapshots to 18 months for foundational models, but in practice a recent batch of ChatGPT model retirements landed with as little as two weeks' warning for some teams. GitHub deprecated a slate of Anthropic and OpenAI models in February 2026 in a single coordinated changelog entry. The pattern is no longer "if a model retires" — it's "every quarter, at least one model your stack depends on enters a retirement window, and the calendar isn't synchronized to your roadmap."
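
The countermeasure is mechanical enough to sketch: a registry of every pinned model, fallbacks included, alongside its announced retirement date, plus a check that starts failing while there is still runway. Model names, dates, and call sites below are illustrative.

```python
from datetime import date

RETIREMENTS = {  # model -> announced retirement date (illustrative)
    "vendor-a/model-3-fast": date(2026, 5, 1),
    "vendor-b/model-7": date(2026, 9, 30),
}

PINNED_SITES = {  # every place a model id is hardcoded, fallbacks included
    "vendor-a/model-3-fast": ["support_bot.router", "search.fallback_model"],
    "vendor-b/model-7": ["onboarding_agent.main"],
}

def deprecation_report(today: date, runway_days: int = 90) -> list[str]:
    """Flag every pinned call site whose model enters retirement within the runway."""
    warnings = []
    for model, retired_on in RETIREMENTS.items():
        days_left = (retired_on - today).days
        if days_left <= runway_days:
            warnings += [f"{site}: {model} retires in {days_left} days"
                         for site in PINNED_SITES.get(model, [])]
    return warnings

for line in deprecation_report(date(2026, 3, 1)):
    print(line)  # fail CI here instead of paging after the sunset email
```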

The Shared-Prompt Flag Day: When One Edit Becomes Thirty Teams' Regression

10 min read
Tian Pan
Software Engineer

The first edit to a shared system prompt feels like good engineering. Three teams all paste the same eighteen-line safety preamble at the top of their agents, someone notices, and an internal platform team says the obvious thing: let's centralize it. A prompts.common.safety_preamble@v1 lands in a registry. Thirty teams adopt it within a quarter because it's the path of least resistance — and because security is happy that one team owns the wording. For two quarters, this looks like a clean DRY win.

Then the security team needs a small wording change. Maybe a new compliance regulation tightens what an assistant is allowed to volunteer about a user's account. Maybe a red-team finding requires a one-sentence addition to the refusal clause. The platform team makes the edit, ships v2, and within a day the support queue fills with messages from consumer teams: our eval dropped, our format broke, our tool-call rate halved, our tone changed, our latency went up because the model started reasoning more. Each team wants the edit reverted. The security team needs it shipped. Nobody can roll forward without a re-eval, and nobody owns the re-eval. Welcome to the shared-prompt flag day.
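
One way out, sketched minimally with illustrative names: consumers pin an exact version, and rolling forward requires a recorded eval pass for that consumer, so v2 ships team by team under a deadline instead of as a registry-wide flip.

```python
REGISTRY = {
    "prompts.common.safety_preamble": {
        "v1": "Never volunteer details about a user's account...",
        "v2": "Never volunteer details about a user's account. If asked, refuse and cite policy.",
    }
}

PINS = {"support_bot": "v1", "search_assistant": "v1"}  # per-consumer version pins
EVAL_PASSES = {("support_bot", "v2")}  # recorded re-eval results, per consumer and version

def get_prompt(consumer: str, name: str) -> str:
    return REGISTRY[name][PINS[consumer]]

def roll_forward(consumer: str, version: str) -> None:
    """Upgrading is a per-team act with evidence attached, not a global flag day."""
    if (consumer, version) not in EVAL_PASSES:
        raise RuntimeError(f"{consumer}: no passing eval recorded against {version}")
    PINS[consumer] = version

roll_forward("support_bot", "v2")  # allowed: a re-eval is on record
print(get_prompt("support_bot", "prompts.common.safety_preamble"))
```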

The kWh Column Missing From Your Inference Span: Carbon Attribution Per Request

10 min read
Tian Pan
Software Engineer

Your inference flame graph has a cost axis. It does not have an energy axis. That gap is fine right up until the morning a customer's procurement team sends you a spreadsheet with twenty-three columns of vendor sustainability disclosures, and one of them is kgCO2e per 1,000 inferences. You have no way to fill that cell, your provider's answer is a methodology paper, and the deal closes in nine days. The token-cost dashboard your platform team has been polishing for two years suddenly looks like it was solving the wrong problem.

The shift here is not abstract. Sustainability disclosure is moving from corporate aggregate to product-level granularity. The first wave of that movement landed inside CSRD and ESRS in 2025, and the second wave is landing in B2B procurement contracts right now. Engineering organizations that built observability for cost are about to discover they need observability for carbon, and the two are not the same column on the same span.
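
The per-request arithmetic itself is simple enough to sketch; the hard part is the coefficients. The energy-per-token figures, grid intensity, and PUE below are illustrative placeholders rather than published vendor numbers, which is exactly why the provider's answer is a methodology paper.

```python
WH_PER_1K_TOKENS = {"small-model": 0.3, "large-model": 3.0}  # assumed, per 1,000 tokens
GRID_KG_CO2E_PER_KWH = 0.4                                   # assumed grid intensity
PUE = 1.2                                                    # assumed datacenter overhead

def kg_co2e(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated emissions for one inference, suitable as a span attribute."""
    wh = (input_tokens + output_tokens) / 1000 * WH_PER_1K_TOKENS[model] * PUE
    return wh / 1000 * GRID_KG_CO2E_PER_KWH  # Wh -> kWh -> kgCO2e

span = {"model": "large-model", "input_tokens": 1200, "output_tokens": 400, "cost_usd": 0.018}
span["kg_co2e"] = kg_co2e(span["model"], span["input_tokens"], span["output_tokens"])
print(span)  # summing kg_co2e over 1,000 inferences fills the procurement cell
```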

The Inference Budget Committee: Governance When Token Spend Crosses Seven Figures

12 min read
Tian Pan
Software Engineer

At $50,000 a month, the "compute + tokens" line on your infra bill is rounding error. At $5,000,000 a month, it is a CFO question. The transition between those two states is not gradual — it is a phase change in how an organization talks about model spend, and most engineering orgs are unprepared for the social and political work that follows. The bill stays a single line; the conversation around it does not.

What changes is who has standing to ask "why." When three product teams share one API key and one capacity reservation, every quota argument has the same structure: someone is currently winning at the expense of someone else, and there is no neutral party to call it. The first time a team's launch is throttled because another team shipped a chatty agent, the absence of a governance body is felt by the entire engineering org at once. Calling a meeting and inventing a process under pressure is the worst time to design one.
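
The mechanism such a body ends up ratifying is usually simple. A minimal sketch with illustrative numbers: per-team token budgets drawn against the shared reservation, so throttling becomes a policy decision with an owner instead of a race to the API key.

```python
MONTHLY_BUDGET_TOKENS = {  # ratified by the committee, not by whoever ships first
    "search": 400_000_000,
    "support_bot": 300_000_000,
    "growth": 100_000_000,
}
spent = {team: 0 for team in MONTHLY_BUDGET_TOKENS}

def admit(team: str, tokens: int) -> bool:
    """Admit a request only if the team is inside its own budget."""
    if spent[team] + tokens > MONTHLY_BUDGET_TOKENS[team]:
        return False  # this team gets throttled; nobody else's launch does
    spent[team] += tokens
    return True

print(admit("growth", 99_000_000), admit("growth", 2_000_000))  # True False
```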

The Shared Prompt Service Problem: Multi-Team LLM Platforms and the Dependency Nightmare

10 min read
Tian Pan
Software Engineer

On a Tuesday afternoon, the platform team at a mid-size AI startup merged a "minor improvement" to the shared system prompt. By Thursday, three separate product teams had filed bugs. One team's evaluation suite dropped from 87% to 61% accuracy. Another team's RAG pipeline started producing hallucinated citations. A third team's safety filter stopped catching a category of harmful outputs entirely. Nobody connected the dots for four days.

This is the shared prompt service problem, and it's coming for every organization that has more than one team building on a common LLM platform.
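
Connecting the dots faster is mostly a tagging discipline: stamp every eval run with the versions of the shared artifacts it ran against, and a cross-team score drop points at the edit instead of producing unrelated bug reports. A minimal sketch, with illustrative names and scores:

```python
from dataclasses import dataclass

@dataclass
class EvalRun:
    team: str
    score: float
    shared_versions: dict[str, str]  # e.g. {"prompts.common.system": "v7"}

def suspects(before: EvalRun, after: EvalRun, drop_threshold: float = 0.05) -> list[str]:
    """Name the shared artifacts that changed across a significant score drop."""
    if before.score - after.score < drop_threshold:
        return []
    return [name for name, version in after.shared_versions.items()
            if before.shared_versions.get(name) != version]

old = EvalRun("support_bot", 0.87, {"prompts.common.system": "v6"})
new = EvalRun("support_bot", 0.61, {"prompts.common.system": "v7"})
print(suspects(old, new))  # -> ['prompts.common.system'], on Thursday, not four days later
```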

The AI Dependency Footprint: When Every Feature Adds a New Infrastructure Owner

9 min read
Tian Pan
Software Engineer

Your team shipped a RAG-powered search feature last quarter. It required a vector database, an embedding model, an annotation pipeline, a chunking service, and an evaluation harness. Each component made sense individually. But six months later, you discover that three of those five components have no clear owner, two are running on engineers' personal cloud accounts, and one was quietly deprecated by its vendor without anyone noticing. The 3am page comes from a component nobody even remembers adding.

This is the AI dependency footprint problem: the compounding accumulation of infrastructure that each AI feature requires, combined with the organizational reality that teams rarely plan ownership for any of it before shipping.
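
A minimal countermeasure, sketched with illustrative component names: ship an ownership manifest with the feature and let CI audit it, so "no clear owner" fails review instead of paging someone at 3am six months later.

```python
MANIFEST = [  # declared at ship time, one entry per piece of AI infrastructure
    {"component": "vector_db", "owner": "search-platform", "account": "org-prod"},
    {"component": "embedding_model", "owner": "ml-infra", "account": "org-prod"},
    {"component": "chunking_service", "owner": None, "account": "alice-personal"},
    {"component": "annotation_pipeline", "owner": "data-eng", "account": "org-prod"},
    {"component": "eval_harness", "owner": None, "account": "bob-personal"},
]

def audit(manifest: list[dict]) -> list[str]:
    """Everything this CI check flags is a future 3am page."""
    problems = []
    for entry in manifest:
        if entry["owner"] is None:
            problems.append(f"{entry['component']}: no owner")
        if entry["account"].endswith("-personal"):
            problems.append(f"{entry['component']}: running on a personal account")
    return problems

for problem in audit(MANIFEST):
    print(problem)
```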

Hot-Path vs. Cold-Path AI: The Architectural Decision That Decides Your p99

10 min read
Tian Pan
Software Engineer

Every AI feature you ship makes an architectural choice before it makes a product one: does this model call live inside the user's request, or does it run somewhere the user isn't waiting for it? The choice is usually made by whoever writes the first prototype, never revisited, and silently determines your p99 latency for the rest of the feature's life. When the post-mortem asks why a shipping dashboard became unusable at 10 a.m. every Monday, the answer is almost always that something which should have been cold-path got welded into the hot path — and a model that is fine at p50 becomes catastrophic at p99 when traffic fans out.

The hot-path / cold-path distinction is older than LLMs. CQRS, streaming architectures, the Lambda architecture — they all draw the same line between "must respond now" and "can arrive eventually." What's different about AI workloads is that the cost of crossing the line in the wrong direction is an order of magnitude higher than it used to be. A synchronous database query that takes 50 ms turning into 200 ms is a regression. A synchronous LLM call that takes 1.2 s at p50 turning into 11 s at p99 is a business decision you didn't know you made.
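
The structural difference is easy to show. In this sketch the hot path never blocks on the model: it serves whatever was precomputed and enqueues a refresh, while the cold-path worker is the only place anything waits on inference. The `call_llm` stand-in and the in-process queue are illustrative; a real deployment would use a job queue and a cache.

```python
import queue

refresh_queue: "queue.Queue[str]" = queue.Queue()
summaries: dict[str, str] = {}  # cold-path output, keyed by entity id

def handle_request(entity_id: str) -> str:
    """Hot path: never blocks on the model, so the model's p99 is not your p99."""
    refresh_queue.put(entity_id)  # cheap signal; a worker picks it up later
    return summaries.get(entity_id, "Summary is being generated...")

def drain_cold_path(call_llm) -> None:
    """Cold path: the only place anything waits on inference."""
    while not refresh_queue.empty():
        entity_id = refresh_queue.get()
        summaries[entity_id] = call_llm(f"Summarize this week's activity for {entity_id}")

print(handle_request("acct-42"))                          # instant placeholder
drain_cold_path(lambda prompt: "3 shipments, 1 delayed")  # stand-in model call
print(handle_request("acct-42"))                          # now served from precomputed data
```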

MCP Is the New Microservices: The AI Tool Ecosystem Is Repeating Distributed Systems Mistakes

8 min read
Tian Pan
Software Engineer

If you lived through the microservices explosion of 2015–2018, the current state of MCP should feel uncomfortably familiar. A genuinely useful protocol appears. It's easy to spin up. Every team spins one up. Nobody tracks what's running, who owns it, or how it's secured. Within eighteen months, you're staring at a dependency graph that engineers privately call "the Death Star."

The Model Context Protocol is following the same trajectory, at roughly three times the speed. Unofficial registries already index over 16,000 MCP servers. GitHub hosts north of 20,000 public repositories implementing them. And Gartner is predicting that 40% of agentic AI projects will fail by 2027 — not because the technology doesn't work, but because organizations are automating broken processes. MCP sprawl is a symptom of exactly that problem.