17 posts tagged with "engineering-leadership"


The Agentic Debugger's Trap: When Your Agent Patches Faster Than You Can Diagnose

· 10 min read
Tian Pan
Software Engineer

A staff engineer I worked with last quarter caught a bug that had already been "fixed" three times in the previous six weeks. Three different engineers. Three different files. Three green CI runs. Three accepted agent-generated patches. Each patch made the failing test pass and the user-reported error disappear. Each one moved the bug somewhere else, where it waited until a different surface area triggered it again. The fourth time it surfaced, the data corruption it caused had been silently compounding for forty days.

The bug was a single off-by-one in a pagination cursor. The agent had been right that the symptom would go away. It had been wrong about why. And the engineers — competent, senior, well-intentioned — had each accepted a passing patch before they understood the failure mechanism.
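
To make the mechanism concrete, here is a minimal, hypothetical sketch of an off-by-one pagination cursor of the kind described above; the function names, cursor scheme, and output are invented for illustration, not taken from the incident.

```python
# Hypothetical illustration of an off-by-one pagination cursor; not the
# actual code from the incident described above.

def fetch_page(rows, cursor, page_size):
    """Return one page of rows plus the cursor for the next page.

    Bug: the next cursor points at the last row returned instead of the
    row after it, so every page boundary re-emits a row. Consumers that
    assume each row appears once then double-count, which is the kind of
    quiet corruption that compounds for weeks without failing a test.
    """
    page = rows[cursor:cursor + page_size]
    next_cursor = cursor + page_size - 1  # off by one: should be cursor + page_size
    return page, next_cursor

def fetch_all(rows, page_size=2):
    cursor, out = 0, []
    while cursor < len(rows):
        page, cursor = fetch_page(rows, cursor, page_size)
        out.extend(page)
        if len(page) < page_size:
            break
    return out

print(fetch_all([1, 2, 3, 4, 5]))  # [1, 2, 2, 3, 3, 4, 4, 5, 5]: duplicates, not a crash
```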

This is the agentic debugger's trap: your agent can produce a fix faster than you can build the mental model needed to evaluate whether the fix is correct. Patch velocity outruns diagnosis. The bug count drops, the CI dashboard goes green, and you ship a codebase whose failure modes you no longer understand.

Your AI Feature Has No DRI: Why It's Drifting Without a Quarterly Goal Owner

· 11 min read
Tian Pan
Software Engineer

Walk into a quarterly business review and ask whose name is on the AI feature. Watch what happens. The PM points at the platform team. The platform team points at the research engineer who wrote the eval harness. The research engineer points at the FinOps analyst who keeps emailing about the cost graph. The FinOps analyst points back at the PM. Four people, one feature, zero owners. The eval score has been drifting downward for six weeks and nobody has triaged it because the dashboard lives in a Notion page that was last edited the day after launch.

This is the most predictable outcome of how organizations actually ship AI features in 2026. The feature was launched by a tiger team that got disbanded the moment the launch press release went out. The instrumentation was bolted on by an infra group that has no product mandate. The prompt is a prompts/v3.txt file in the repo whose blame is split across nine engineers, none of whom remember why line 47 says what it does. The user-facing tile has a PM whose OKRs moved on to the next launch two quarters ago. The feature is technically in production, technically owned, and structurally orphaned.

AI Procurement Clauses Your Lawyers Haven't Learned to Ask For Yet

· 11 min read
Tian Pan
Software Engineer

The 14-month-old AI vendor contract on your shared drive was drafted from a SaaS template. It guarantees uptime, names a security contact, and caps liability at twelve months of fees. It says nothing about whether your prompts get fed into the next training run, what happens when the model you depend on is quietly swapped for a smaller variant, or which region your inference logs sit in when a regulator asks. The lawyer who drafted it did a competent job with the vocabulary they had. The vocabulary is a generation behind the surface area.

Procurement teams are still optimizing for the wrong contract. The standard MSA fights battles from the 2010s — outage credits, breach notification windows, indemnification for IP that makes it into the source repository. AI vendor relationships have a different attack surface, and the clauses that matter most are the ones that don't have a heading in your existing template. The team that lets last year's procurement playbook handle this year's vendor stack is signing away leverage they will need within a year.

AI Office Hours Don't Scale: When Your One Expert Becomes the Release Gate

· 11 min read
Tian Pan
Software Engineer

Open the calendar of the one engineer at your company who has shipped real AI features into production for more than six months. Count the recurring "30 min sync — questions about the agent" invites, the ad-hoc "can I grab you for 15?" Slack pings that ended up booked, the architecture-review attendances marked "optional" that they actually have to be at, and the office hours block that started as one Friday afternoon and now eats two hours every weekday. Then look at the roadmap and trace which features depend on a decision that engineer hasn't made yet. The intersection is your real release schedule. The Jira board is fiction.

This is the AI office hours bottleneck, and it is the load-bearing constraint inside more 2026 AI orgs than anyone in those orgs would say out loud. The team scaled AI feature work fast — every product squad got a model budget, every PM got a prompt — and routed every "is this the right model," "should we use RAG here," "is our eval design valid," "why is the cache hit rate weird" question to the one engineer who has actually shipped enough production AI to answer. Six months in, that engineer's calendar is the rate-limiting reagent for half the roadmap, and "I need to grab 30 minutes with them" is the de facto escalation path your incident-response process was supposed to make explicit.

The AI Feature OKR Mismatch: Why Quarterly Cadence Breaks AI Roadmaps

· 10 min read
Tian Pan
Software Engineer

The team commits to "ship the AI summarizer this quarter," gets it past the technical bar by week ten, takes a victory lap at the all-hands, and ships. Six weeks later the telemetry curve starts bending the wrong way — quietly, slowly, in a way nobody dashboards because nobody owns the shape. The OKR is already marked green. The next quarter's OKRs are already drafted around new launches. The summarizer is now somebody's second-priority maintenance job, and by quarter-end review the team is wondering why customer satisfaction on the feature dropped fifteen points without anything obvious changing.

This is not a bug in the team. It's a bug in the operating model. Quarterly OKRs were calibrated for software where a feature can be scoped, built, shipped, and then largely left alone until the next major rev. AI features don't have that shape. They have a launch curve and a sustain curve, and the sustain curve is where most of the value — and most of the risk — actually lives. The OKR template that treats them as deliverables with launch dates quietly produces a portfolio of demos that decay before the next planning cycle.

The Rubber-Stamp Collapse: Why AI-Authored PRs Are Hollowing Out Code Review

· 10 min read
Tian Pan
Software Engineer

A senior engineer approves a 400-line PR in four minutes. The diff is clean. Names are sensible. Tests pass. Two weeks later the on-call engineer is paging through a query that returns the right shape of rows but from the wrong column — user.updated_at where user.created_at was meant — and the cohort analysis dashboard has been quietly lying to the CFO for nine days. The reviewer was competent. The code was well-structured. The bug was invisible in the diff because it wasn't a syntactic smell. It was a semantic one, and the reviewer had nothing to anchor against because no one had written down what the change was supposed to do.
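
A hypothetical reduction of that diff shows why it survives a diff-only review; the query, table, and alias below are invented, and only the wrong column is the point.

```python
# Hypothetical reconstruction of the wrong-column bug; not the actual diff.
# The query is syntactically valid, returns the expected row shape, and has
# a sensible-looking alias, so nothing in the diff itself smells wrong.

COHORT_QUERY = """
    SELECT user.id, user.updated_at AS cohort_date  -- intent was user.created_at
    FROM user
    WHERE user.updated_at >= :since                 -- cohorts now shift whenever a row is touched
"""

# Catching this requires an anchor the diff does not contain: a written
# statement that cohorts are keyed to signup date, not last modification.
```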

This is the failure mode that shows up once the majority of diffs in your repo start life as model output. Reviewers stop asking "is this correct?" and start asking "does this look like code?" The answer is almost always yes. AI-authored code is grammatically fluent in a way that bypasses the review heuristics engineers spent a decade sharpening on human-written slop.

Communicating AI Limitations Across the Organization: A Framework for Engineering Leaders

· 11 min read
Tian Pan
Software Engineer

The demo worked perfectly. Legal had signed off. Sales was already promising customers the feature would ship next quarter. Then the first production failure happened — the model confidently drafted a clause that cited a contract term that didn't exist, sales forwarded it to a customer, and legal spent three weeks in damage control.

This is not a story about a bad model. It's a story about miscommunication. The engineering team knew the model could hallucinate. Legal assumed it wouldn't. Sales assumed any failure would be caught before reaching customers. Ops assumed someone else was monitoring for exactly this. Nobody was lying. Everyone was working from a different mental model of the same system.

The root cause of most AI project failures isn't the AI. According to RAND Corporation's analysis of failed AI initiatives, "misunderstood problem definition" — which includes miscommunication about capability limits — is the single most common cause. Between 70 and 95% of enterprise AI initiatives fail to deliver their intended outcomes, and the technology is rarely the limiting factor. The limiting factor is that every team in your organization is quietly building a different theory of what your AI system does, and nobody has explicitly corrected any of them.

Board-Level AI Governance: The Five Decisions Only Executives Can Make

· 9 min read
Tian Pan
Software Engineer

A major insurer's AI system was denying coverage claims. When humans reviewed those decisions, 90% were found to be wrong. The insurer's engineering team had built a performant model. Their MLOps team had solid deployment pipelines. Their data scientists had rigorous evaluation metrics. None of that mattered, because no one at the board level had ever answered the question: what is our acceptable failure rate for AI decisions that affect whether a sick person gets treated?

That gap — between functional technical systems and missing executive decisions — is where AI governance most often breaks down in practice. The result is organizations that are simultaneously running AI in production and exposed to liability they've never formally acknowledged.

The Mental Model Shift That Separates Good AI Engineers from the Rest

· 10 min read
Tian Pan
Software Engineer

The most common pattern among engineers who struggle with AI work isn't a lack of technical knowledge. It's that they keep asking the wrong question. They want to know: "Does this work?" What they should be asking is: "At what rate does this fail, and is that rate acceptable for this use case?"

That single shift — from binary correctness to acceptable failure rates — is the core of what experienced AI engineers think differently about. It sounds simple. It isn't. Everything downstream of it is different: how you debug, how you test, how you deploy, what you monitor, what you build your confidence on. Engineers who haven't made this shift will keep fighting their tools and losing.
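
In practice the shift often shows up as a release gate expressed against a failure budget rather than a pass/fail check; a minimal sketch, with the 2% budget and the eval harness entirely hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Minimal sketch of gating a release on an observed failure rate instead of a
# binary "does it work" check. The 2% budget and the harness are hypothetical.

FAILURE_BUDGET = 0.02  # acceptable failure rate for this particular use case

@dataclass
class EvalCase:
    prompt: str
    is_acceptable: Callable[[str], bool]  # judges one output against this case

def failure_rate(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    failures = sum(1 for c in cases if not c.is_acceptable(model(c.prompt)))
    return failures / len(cases)

def release_gate(model: Callable[[str], str], cases: list[EvalCase]) -> bool:
    # The question is not "is the output correct" but "does it fail rarely
    # enough for this use case"; monitoring and rollback key off the same number.
    return failure_rate(model, cases) <= FAILURE_BUDGET
```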

The Shared Prompt Service Problem: Multi-Team LLM Platforms and the Dependency Nightmare

· 10 min read
Tian Pan
Software Engineer

On a Tuesday afternoon, the platform team at a mid-size AI startup merged a "minor improvement" to the shared system prompt. By Thursday, three separate product teams had filed bugs. One team's evaluation suite dropped from 87% to 61% accuracy. Another team's RAG pipeline started producing hallucinated citations. A third team's safety filter stopped catching a category of harmful outputs entirely. Nobody connected the dots for four days.

This is the shared prompt service problem, and it's coming for every organization that has more than one team building on a common LLM platform.

Why '92% Accurate' Is Almost Always a Lie

· 8 min read
Tian Pan
Software Engineer

You launch an AI feature. The model gets 92% accuracy on your holdout set. You present this to the VP of Product, the legal team, and the head of customer success. Everyone nods. The feature ships.

Three months later, a customer segment you didn't specifically test is experiencing a 40% error rate. Legal is asking questions. Customer success is fielding escalations. The VP of Product wants to know why no one flagged this.

The 92% figure was technically correct. It was also nearly useless as a decision-making input — because headline accuracy collapses exactly the information that matters most.
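
The arithmetic behind that collapse is simple to reproduce; the segment sizes below are invented, and only the aggregation is the point:

```python
# Invented segment sizes to show how a 92% headline can coexist with a 40%
# error rate in one segment; only the aggregation is the point.

segments = {
    #              (examples, correct)
    "enterprise":  (700, 660),   # ~94% accurate
    "mid_market":  (250, 230),   # 92% accurate
    "intl_smb":    (50, 30),     # 60% accurate, i.e. a 40% error rate
}

total = sum(n for n, _ in segments.values())
correct = sum(c for _, c in segments.values())
print(f"headline accuracy: {correct / total:.1%}")  # 92.0%

for name, (n, c) in segments.items():
    print(f"{name}: {c / n:.1%} accurate, {(n - c) / n:.1%} error rate")
```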

When Everyone Has an AI Coding Agent: The Team Dynamics Nobody Warned You About

· 10 min read
Tian Pan
Software Engineer

A team of twelve engineers adopts AI coding tools enthusiastically. Six months later, each engineer is merging nearly twice as many pull requests. The engineering manager celebrates. Then the on-call rotation starts paging. Debugging sessions last twice as long. Nobody can explain why a particular module was structured the way it was. The engineer who wrote it replies honestly: "I don't know — the AI generated most of it and it seemed fine."

This scenario is playing out at companies everywhere. The individual productivity story is real: developers finish tasks faster, write more tests, and clear backlogs more efficiently. The team-level story is more complicated, and most organizations aren't ready for it.