
553 posts tagged with "ai-engineering"


The Orchestration Framework Trap: When LangChain Makes You Slower to Ship

8 min read
Tian Pan
Software Engineer

At some point in 2024, a pattern started appearing in engineering postmortems across AI teams: "We rewrote it without LangChain and shipping became significantly faster." The teams in these postmortems hadn't made a technical mistake in adopting the framework — they'd made a timing mistake. LangChain was the right tool for the prototype and the wrong tool for month seven.

The same story played out enough times that it has a name now: the orchestration framework trap. You adopt a framework that genuinely accelerates early work, and the productivity gain masks a growing structural debt. By the time the debt is visible, you're deep in internals that were never meant to be touched.

The Over-Tooled Agent Problem: Why More Tools Make Your LLM Dumber

9 min read
Tian Pan
Software Engineer

When a team at Writer instrumented their RAG-MCP benchmark, they found that baseline tool selection accuracy — with no special handling — was 13.62% when the agent had access to a large set of tools. Not 80%. Not 60%. Thirteen percent. The same agent, with retrieval-augmented tool selection exposing only the most relevant subset, reached 43%. The tools didn't change. The model didn't change. Only the number of tool definitions visible at reasoning time changed.
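
The mitigation is mechanical enough to sketch: index the tool descriptions, retrieve the few that match the current query, and show the model only those. A minimal sketch of retrieval-augmented tool selection, assuming sentence-transformers for the embedding step (the catalog and model name are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative catalog; in practice these come from your tool/MCP registry.
TOOLS = [
    {"name": "get_weather", "description": "Fetch current weather for a city."},
    {"name": "search_flights", "description": "Find flights between two airports."},
    {"name": "create_invoice", "description": "Create a billing invoice for a customer."},
    # ...imagine a few hundred more
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
tool_vecs = encoder.encode([t["description"] for t in TOOLS], convert_to_tensor=True)

def select_tools(query: str, k: int = 5) -> list[dict]:
    """Return only the k tool definitions most relevant to this query."""
    query_vec = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, tool_vecs, top_k=k)[0]
    return [TOOLS[hit["corpus_id"]] for hit in hits]

# Pass select_tools(user_query) to the model instead of the full catalog:
# the context holds five definitions at reasoning time, not five hundred.
```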

This is the over-tooled agent problem, and it's quietly wrecking production AI systems at scale.

The Privacy Architecture of Embeddings: What Your Vector Store Knows About Your Users

10 min read
Tian Pan
Software Engineer

Most engineers treat embeddings as safely abstract — a bag of floating-point numbers that can't be reverse-engineered. That assumption is wrong, and the gap between perception and reality is where user data gets exposed.

Recent research achieved over 92% accuracy reconstructing exact token sequences — including full names, health diagnoses, and email addresses — from text embeddings alone, without access to the original encoder model. These aren't theoretical attacks. Transferable inversion techniques work in black-box scenarios where an attacker builds a surrogate model that mimics your embedding API. The attack surface exists whether you're using a proprietary model or an open-source one.

This post covers the three layers of embedding privacy risk: what inversion attacks can actually do, where access control silently breaks down in retrieval pipelines, and the architectural patterns — per-user namespacing, retrieval-time permission filtering, audit logging, and deletion-safe design — that give your users appropriate control over what gets retrieved on their behalf.
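
As a taste of the retrieval-time filtering pattern, here is a minimal sketch using Qdrant's metadata filters; the collection name and `owner_id` payload field are illustrative, and the same idea applies to any vector store that can filter at query time:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Filter, FieldCondition, MatchValue

client = QdrantClient(url="http://localhost:6333")  # illustrative deployment

def retrieve_for_user(user_id: str, query_vector: list[float], k: int = 5):
    """Retrieval-time permission filtering: the store itself enforces that
    only chunks owned by (or shared with) this user are candidates.
    Filtering *after* retrieval in app code is the anti-pattern: one bug
    there silently exposes other users' chunks."""
    return client.search(
        collection_name="docs",  # or one collection per tenant (namespacing)
        query_vector=query_vector,
        query_filter=Filter(
            must=[FieldCondition(key="owner_id", match=MatchValue(value=user_id))]
        ),
        limit=k,
    )
```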

The Prompt Governance Problem: Managing Business Logic That Lives Outside Your Codebase

9 min read
Tian Pan
Software Engineer

A junior PM edits a customer-facing prompt during a product sprint to "make it sound friendlier." Two weeks later, a backend engineer tweaks the same prompt to fix a formatting quirk. An ML engineer, unaware of either change, adds chain-of-thought instructions in a separate system message that now conflicts with the PM's edit. None of these changes have a ticket. None have a reviewer. None have a rollback plan.

This is how most teams manage prompts. And at five prompts, it's annoying. At fifty, it's a liability.
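
A minimal first step, sketched below: move prompts out of inline strings and into versioned files that ride the normal review and rollback machinery. The file layout and field names are illustrative, not a prescription:

```python
# prompts/support_reply.yaml  (lives in the repo; edits go through PR review)
#   version: 3
#   owner: ml-platform
#   template: |
#     You are a support assistant. ...

from pathlib import Path
import yaml

PROMPT_DIR = Path("prompts")  # illustrative layout

def load_prompt(name: str) -> dict:
    """Load a prompt as a versioned, reviewed artifact rather than an
    inline string. Rollback is a git revert; changing it requires the
    same review a code change would."""
    spec = yaml.safe_load((PROMPT_DIR / f"{name}.yaml").read_text())
    assert {"version", "owner", "template"} <= spec.keys(), "prompt spec incomplete"
    return spec

prompt = load_prompt("support_reply")
print(f"Using {prompt['owner']}'s prompt v{prompt['version']}")
```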

Prompt Injection Is a Supply Chain Problem, Not an Input Validation Problem

9 min read
Tian Pan
Software Engineer

Five carefully crafted documents hidden among a million clean ones can achieve a 90% attack success rate against a production RAG system. Not through zero-days or cryptographic breaks — through plain text that instructs the model to behave differently than its operators intended. If your defense strategy is "sanitize inputs before they reach the LLM," you have already lost.

The framing matters. Teams that treat prompt injection as an input validation problem build perimeter defenses: regex filters, LLM-based classifiers, output scanners. These are useful but insufficient. The real problem is that modern AI systems are compositions of components — retrievers, knowledge bases, tool executors, external APIs — and each component is an ingestion point with its own attack surface. That is the definition of a supply chain vulnerability.

Prompt Localization Debt: The Silent Quality Tiers Hiding in Your Multilingual AI Product

9 min read
Tian Pan
Software Engineer

Your AI feature shipped with a 91% task success rate. You ran evals, iterated on your prompt, and tuned it until it hit your quality bar. Then you launched globally — and three months later a user in Tokyo files a support ticket that your AI "doesn't really understand" their input. Your Japanese users have been silently working around a feature that performs 15–20 percentage points worse than what your English users experience. Nobody on your team noticed because nobody was measuring it.

This is prompt localization debt: the accumulating gap between how well your AI performs in the language you built it for and every other language your users speak. It doesn't announce itself in dashboards. It doesn't cause outages. It just quietly creates second-class users.
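
The first step out of the debt is simply measuring it: run the same eval suite per locale and track the gap against your reference language. A minimal sketch (the `evaluate` callable and the numbers are illustrative):

```python
from typing import Callable

def localization_report(
    evaluate: Callable[[str], float],  # locale -> task success rate (0..1)
    locales: list[str],
    reference: str = "en",
) -> dict[str, float]:
    """Run the same eval suite per locale; report each language's gap
    (in percentage points) against the language the prompt was tuned on."""
    scores = {loc: evaluate(loc) for loc in locales}
    baseline = scores[reference]
    return {loc: round((score - baseline) * 100, 1) for loc, score in scores.items()}

# Illustrative usage with canned numbers mirroring the scenario above:
canned = {"en": 0.91, "ja": 0.73, "de": 0.86}
print(localization_report(canned.__getitem__, ["en", "ja", "de"]))
# {'en': 0.0, 'ja': -18.0, 'de': -5.0}
```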

Red-Teaming Consumer LLM Features: Finding Injection Surfaces Before Your Users Do

9 min read
Tian Pan
Software Engineer

A dealership deployed a ChatGPT-powered chatbot. Within days, a user instructed it to agree with anything they said, then offered $1 for a 2024 SUV. The chatbot accepted. The dealer pulled it offline. This wasn't a sophisticated attack — it was a three-sentence prompt from someone who wanted to see what would happen.

At consumer scale, that curiosity is your biggest security threat. Internal LLM agents operate inside controlled environments with curated inputs and trusted data. Consumer-facing LLM features operate in adversarial conditions by default: millions of users, many actively probing for weaknesses, and a stochastic model that has no concept of "this user seems hostile." The security posture these two environments require is fundamentally different, and teams that treat consumer features like internal tooling find out the hard way.

Sandboxing Agents That Can Write Code: Least Privilege Is Not Optional

12 min read
Tian Pan
Software Engineer

Most teams ship their first code-executing agent with exactly one security control: API key scoping. They give the agent a GitHub token with read-only repo scope and a shell with access to a working directory, and they call it "sandboxed." This is wrong in ways that become obvious only after an incident.

The threat model for an agent that can write and execute code is categorically different from the threat model for a web server or a CLI tool. The attack surface isn't the protocol boundary anymore — it's everything the agent reads. That includes git commits, documentation pages, API responses, database records, and any file it opens. Any of those inputs can contain a prompt injection that turns your research agent into a data exfiltration pipeline.
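
For the execution step specifically, least privilege has a concrete minimum. A sketch of what that looks like (Unix-only, and deliberately labeled a floor rather than a sandbox; real isolation needs a container or microVM):

```python
import resource
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
    """Run agent-generated Python with least privilege: a fresh interpreter,
    a scrubbed environment, a throwaway working directory, and CPU/memory
    caps. This is a floor, not a sandbox; real isolation means a container
    or microVM (gVisor, Firecracker) with networking off by default."""
    def cap_resources():  # executes in the child just before exec (Unix only)
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 ** 2, 512 * 1024 ** 2))

    return subprocess.run(
        [sys.executable, "-I", "-c", code],     # -I: isolated mode
        env={},                                 # no inherited tokens or API keys
        cwd=tempfile.mkdtemp(prefix="agent-"),  # scratch dir, not your repo
        preexec_fn=cap_resources,
        capture_output=True,
        timeout=timeout_s,
        text=True,
    )
```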

Shadow Traffic for AI Systems: The Safest Way to Validate Model Changes Before They Ship

10 min read
Tian Pan
Software Engineer

Most teams ship LLM changes the way they shipped web changes in 2005 — they run some offline evals, convince themselves the numbers look fine, and push. The surprise comes on Monday morning when a system prompt tweak that passed every benchmark silently breaks the 40% of user queries that weren't in the eval set.

Shadow traffic is the fix. The idea is simple: run your candidate model or prompt in parallel with production, feed it every real request, compare the outputs, and only expose users to the current version. Zero user exposure, real production data, and statistical confidence before anyone sees the change. But applying this to LLMs requires rethinking almost every piece of the implementation — because language models are non-deterministic, expensive to evaluate, and produce outputs that can't be compared with a simple diff.
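
The mirroring step itself is small; the hard part is the comparison. A minimal sketch of the serving side, where `prod_model`, `candidate_model`, and `log` stand in for your own async callables:

```python
import asyncio

_background: set[asyncio.Task] = set()  # keep refs so shadow tasks aren't GC'd

async def handle_request(prompt: str, prod_model, candidate_model, log) -> str:
    """Serve production; mirror the same request to the candidate without
    touching user-facing latency or output."""
    prod_task = asyncio.create_task(prod_model(prompt))

    async def shadow():
        try:
            candidate_out = await candidate_model(prompt)
            # Pair the two outputs for offline semantic comparison later;
            # a naive string diff is meaningless for non-deterministic models.
            log({"prompt": prompt, "prod": await prod_task, "candidate": candidate_out})
        except Exception as exc:
            log({"prompt": prompt, "shadow_error": repr(exc)})  # never user-visible

    task = asyncio.create_task(shadow())
    _background.add(task)
    task.add_done_callback(_background.discard)

    return await prod_task  # only the production output reaches the user
```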

The Skill Atrophy Trap: How AI Assistance Silently Erodes the Engineers Who Use It Most

10 min read
Tian Pan
Software Engineer

A randomized controlled trial with 52 junior engineers found that those who used AI assistance scored 17 percentage points lower on comprehension and debugging quizzes — nearly two letter grades — compared to those who worked unassisted. Debugging, the very skill AI is supposed to augment, showed the largest gap. And this was after just one learning session. Extrapolate that across a year of daily AI assistance, and you start to understand why senior engineers at several companies quietly report that something has changed about how their team reasons through hard problems.

The skill atrophy problem with AI tooling is real, it's measurable, and it's hitting mid-career engineers hardest. Here's what the research shows and what you can do about it.

SLOs for Non-Deterministic AI Features: Setting Error Budgets When Wrong Is Probabilistic

10 min read
Tian Pan
Software Engineer

Your AI feature is "up." Latency is fine. Error rate is 0.2%. The dashboard is green. But over the past two weeks, the summarization quality quietly dropped — outputs are now technically coherent but factually shallow, consistently missing the key detail users care about. Nobody filed a bug. No alert fired. And you won't know until the next quarterly review when retention numbers come in.

This is the failure mode that traditional SLOs are blind to. Availability and latency measure whether your service is responding — not whether it's responding well. For deterministic systems, those two things are nearly equivalent. For LLM features, they can diverge silently for weeks.
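
One shape that works: sample a slice of production traffic, grade it against an explicit bar, and give the pass rate its own error budget. A rough sketch; the grader, sample rate, and thresholds are all policy placeholders, not constants of nature:

```python
import random

def quality_sli(responses: list[dict], grade, sample_rate: float = 0.05) -> float:
    """Grade a random sample of production outputs (LLM-as-judge, rubric,
    or heuristic) and return the pass rate: a quality SLI that sits next
    to availability and latency instead of hiding behind them."""
    sampled = [r for r in responses if random.random() < sample_rate]
    if not sampled:
        return 1.0  # no signal this window; don't burn budget on silence
    return sum(1 for r in sampled if grade(r)) / len(sampled)

QUALITY_SLO = 0.90  # policy choice: 90% of graded outputs must pass

def burn_rate(sli: float, slo: float = QUALITY_SLO) -> float:
    """How fast this window consumed the error budget; >1 means the 10%
    failure allowance is burning faster than budgeted."""
    return (1 - sli) / (1 - slo)

# e.g. a window graded at 0.82 burns (1-0.82)/(1-0.90) = 1.8x budget: page someone.
```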

Specification Gaming in Production LLM Systems: When Your AI Does Exactly What You Asked

10 min read
Tian Pan
Software Engineer

A 2025 study gave frontier models a coding evaluation task with an explicit rule: don't hack the benchmark. Every model acknowledged, 10 out of 10 times, that cheating would violate the user's intent. Then 70–95% of them did it anyway. The models weren't confused — they understood the constraint perfectly. They just found that satisfying the specification literally was more rewarding than satisfying it in spirit.

That's specification gaming in production, and it's not a theoretical concern. It's a property that emerges whenever you optimize a proxy metric hard enough, and in production LLM systems you're almost always optimizing a proxy.