
8 posts tagged with "ai-governance"


The Indexing Policy Committee Nobody Convened: RAG Corpus Governance Beyond the One-Time Migration

· 9 min read
Tian Pan
Software Engineer

Two years ago, a team pointed their retrieval index at the wiki, the Zendesk export, and a snapshot of the public docs. Last week, that same index returned a deprecated runbook that told an SRE to restart a service that no longer exists. The runbook had been deprecated for eighteen months. Nobody owned its retirement, so nobody retired it. The agent confidently cited it. The model wasn't wrong; the corpus was.

This is the failure mode that doesn't show up in retrieval evals: the corpus is treated as a one-time engineering decision when it's actually an ongoing governance problem. The team that scoped the initial ingestion is long gone. The legal review that should have flagged the customer-confidential PDFs never happened, because nobody told legal there was a pipeline. The "freshness strategy" is a Slack message from someone who left in Q3. The retrieval index has become a shared inbox for every document anyone ever scraped, and the bar for inclusion has drifted to "whatever was easy to ingest."
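One way to make that governance explicit is to require every source to carry an ownership and freshness contract that the indexing pipeline can enforce. A minimal sketch, assuming a schema of my own invention (the `CorpusSource` fields are illustrative, not from the post):

```typescript
// Hypothetical manifest a team might require before a source is ingested.
// Field names are illustrative; the post does not prescribe a schema.
interface CorpusSource {
  id: string;              // e.g. "zendesk-export-2024"
  owner: string;           // team accountable for retiring stale content
  legalReviewed: boolean;  // has someone outside engineering seen this?
  reviewEveryDays: number; // freshness cadence the pipeline enforces
  lastReviewedAt: string;  // ISO date of the last review
  expiresAt?: string;      // hard cutoff after which documents are dropped
}

/** Sources that should be pulled from the index until they are re-reviewed. */
function staleSources(sources: CorpusSource[], today: Date): CorpusSource[] {
  return sources.filter((s) => {
    const ageDays =
      (today.getTime() - new Date(s.lastReviewedAt).getTime()) / 86_400_000;
    const expired = s.expiresAt !== undefined && new Date(s.expiresAt) < today;
    return expired || ageDays > s.reviewEveryDays || !s.legalReviewed;
  });
}
```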

Content Provenance for AI Outputs: C2PA, SynthID, and the Audit Trail You Will Soon Owe

· 10 min read
Tian Pan
Software Engineer

A model's output used to be a string. By August 2026 it will be a signed artifact with a chain-of-custody manifest, and any team treating it as anything less will be retrofitting under deadline pressure.

That sentence sounds dramatic until you read Article 50 of the EU AI Act, which becomes fully enforceable on August 2, 2026, and requires that any synthetic content from a generative system be machine-detectable as AI-generated. The Code of Practice published in March 2026 is explicit that a single marking technique is not sufficient — providers must combine metadata embedding (C2PA) with imperceptible watermarking, and the output must survive common transformations like cropping, compression, and screenshotting. Penalties for non-compliance reach €15 million or 3% of global turnover. This is not a labeling guideline; it is a signed-artifact mandate, and it lands on every team shipping a generative feature into the EU market.
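To make "signed artifact" concrete, here is a rough sketch of the kind of record a pipeline might attach alongside a watermark. The shape loosely echoes C2PA concepts (claim generator, assertions, signature) but is not the C2PA schema or any SDK's API; treat every field as an assumption.

```typescript
// Illustrative shape only: loosely echoes C2PA ideas but is NOT the spec.
interface ProvenanceManifest {
  claimGenerator: string;                      // e.g. "acme-image-service/2.3"
  generatedBy: { model: string; version: string };
  assertions: Array<{ label: string; data: Record<string, unknown> }>;
  createdAt: string;                           // ISO timestamp
  signature: string;                           // detached signature over the manifest bytes
}

// A generative endpoint returns both marking layers the Code of Practice asks
// for: embedded metadata (the manifest) plus an imperceptible watermark
// applied by whatever scheme the provider uses (SynthID-style or otherwise).
interface SignedOutput {
  content: Uint8Array;          // watermarked bytes
  manifest: ProvenanceManifest; // metadata layer, carried as a sidecar
}
```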

The Agent Backfill Problem: Your Model Upgrade Is a Trial of the Last 90 Days

· 12 min read
Tian Pan
Software Engineer

Here is a Tuesday-morning conversation that nobody on your AI team is prepared for. The new model lands in shadow mode. Within an hour the eval dashboard lights up: it categorizes 4% of refund requests differently than the model you have been running for the last quarter. Most of those flips look like the new model is right. Someone in the room — usually the one with the most lawyers in their reporting line — asks the question that ends the celebration: so what are we doing about the ninety days of decisions the old model already shipped?

That is the agent backfill problem. The moment a smarter model starts producing outputs that look more correct than your previous model's, every durable decision the previous model made becomes a contested record. You did not intend to indict the past. The new model did it for you, automatically, the first time you compared traces. And now you have an engineering question (can we replay history?), a legal question (do we have to disclose corrected outcomes?), and a product question (do users see retroactive changes?), and they collide.
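The engineering half of that collision is the easiest to sketch, assuming you retained request traces. The names below (`Decision`, `backfillDiff`) are illustrative, not from the post:

```typescript
// Hypothetical replay harness: re-run archived decisions through the new
// model in shadow mode and surface the ones it would have decided differently.
interface Decision {
  requestId: string;
  input: string;     // the original request payload, as archived
  oldLabel: string;  // what the previous model shipped
  decidedAt: string; // ISO timestamp
}

type Classifier = (input: string) => Promise<string>;

async function backfillDiff(history: Decision[], newModel: Classifier) {
  const flips: Array<Decision & { newLabel: string }> = [];
  for (const d of history) {
    const newLabel = await newModel(d.input); // shadow call, no side effects
    if (newLabel !== d.oldLabel) flips.push({ ...d, newLabel });
  }
  // The flip rate answers the engineering question; what to do with the
  // flips is the legal and product question the post describes.
  return { flips, flipRate: history.length ? flips.length / history.length : 0 };
}
```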

Your Fine-Tuning Corpus Is a GDPR Data Artifact, Not Just an ML Asset

· 11 min read
Tian Pan
Software Engineer

The moment your first fine-tune lands in production, your weights become a new kind of record your privacy program has never cataloged. A customer support transcript that made it into your training mix is no longer just a row in a database you can DELETE — it is now encoded, redundantly and irreversibly, into the parameters your API serves. The original record can be scrubbed from S3, erased from your warehouse, and removed from your RAG index, while the model continues to complete prompts with fragments of that customer's name, account ID, or medical history. The Data Processing Agreement your sales team signed promised you'd honor erasure requests. Nobody asked the ML team whether that was technically possible.

Research on PII extraction shows this is not hypothetical. The PII-Scope benchmark reports that adversarial extraction rates can increase up to fivefold against pretrained models under realistic query budgets, and membership inference attacks using self-prompt calibration have pushed AUC from 0.7 to 0.9 on fine-tuned models. Llama 3.2 1B, a small and widely copied base, has been demonstrated to memorize sensitive records present in its training set. The takeaway for anyone shipping fine-tunes on production traces is blunt: you cannot assume your weights forgot.

This matters because most fine-tuning pipelines were designed by ML engineers optimizing for loss, not by data stewards optimizing for Article 17. The result is an artifact whose legal status is ambiguous, whose lineage is rarely documented, and whose "delete user X" workflow doesn't exist.
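One concrete piece of that missing workflow, sketched under assumptions (the `TrainingLineage` shape and function name are hypothetical), is a lineage index that records which data subjects contributed to which fine-tune runs, so an erasure request can at least enumerate the affected checkpoints instead of silently skipping them:

```typescript
// Hypothetical lineage index: which users' records fed which fine-tune runs.
interface TrainingLineage {
  runId: string;       // fine-tune job, e.g. "support-ft-2026-02"
  checkpoint: string;  // artifact actually serving traffic
  exampleIds: string[]; // rows sampled into the training mix
  subjectsByExample: Record<string, string>; // exampleId -> data-subject id
}

/** Checkpoints that contain at least one example from the erased subject. */
function checkpointsToRemediate(lineage: TrainingLineage[], subjectId: string): string[] {
  return lineage
    .filter((run) =>
      run.exampleIds.some((ex) => run.subjectsByExample[ex] === subjectId))
    .map((run) => run.checkpoint);
  // Remediation itself (retrain, unlearn, or retire the checkpoint) is a
  // separate decision; this only makes the blast radius enumerable.
}
```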

The Prompt Ownership Problem: When Conway's Law Comes for Your Prompts

· 11 min read
Tian Pan
Software Engineer

Every non-trivial AI product eventually develops a prompt that nobody is allowed to touch. It has three conditional branches, two inline examples pasted in during a customer-reported incident, and a sentence that begins with "IMPORTANT:" followed by a tone instruction nobody remembers writing. The prompt is 1,400 tokens. The PR that last modified it was reviewed by an engineer who has since changed teams. When a new model comes out, nobody is confident the prompt will still work. When evals regress, nobody is sure whether the prompt, the model, the retrieval pipeline, or a downstream tool caused it. The string is shared across four services. Every team has a local override. None of the overrides are documented.

This is the prompt ownership problem, and it is the single most under-discussed failure mode in multi-team AI engineering. It is not a technical problem. It is Conway's law reasserting itself at the token level. An organization's prompts end up mirroring its org chart, its RACI gaps, and its coordination tax — and the model, which does not care about your Jira hierarchy, produces correspondingly incoherent behavior for end users who do not care either.
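A minimal counter-example to the shared-string-with-undocumented-overrides pattern, assuming nothing beyond what the post describes (the `PromptRecord` fields are illustrative): one registry entry per prompt, with an owner and a version that consuming services reference instead of copying.

```typescript
// Hypothetical registry record: the prompt becomes a versioned, owned artifact
// that services resolve by id and version instead of pasting a local copy.
interface PromptRecord {
  id: string;                 // e.g. "support-triage-system"
  version: number;
  ownerTeam: string;          // one accountable team, not four
  text: string;               // the 1,400 tokens live here, once
  compatibleModels: string[]; // models this version has been evaluated against
  changelog: Array<{ version: number; reason: string; pr: string }>;
}

function resolvePrompt(
  registry: Map<string, PromptRecord[]>,
  id: string,
  pinnedVersion?: number,
): PromptRecord {
  const versions = registry.get(id) ?? [];
  const match = pinnedVersion !== undefined
    ? versions.find((p) => p.version === pinnedVersion)
    : versions.at(-1); // latest if the caller does not pin
  if (!match) throw new Error(`unknown prompt ${id}@${pinnedVersion ?? "latest"}`);
  return match;
}
```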

Decision Provenance in Agentic Systems: Audit Trails That Actually Work

· 13 min read
Tian Pan
Software Engineer

An agent running in your production system deletes 10,000 database records. The deletion matches valid business logic — the records were flagged correctly. But three months later, a regulator asks a simple question: who authorized this, and on what basis did the agent decide? You open your logs. You find the SQL statement. You find the timestamp. You find nothing else.

This is the decision provenance problem. You can prove that your agent acted; you cannot prove why, or whether that action was ever sanctioned by a human who understood what they were approving. With autonomous agents now executing workflows that span hours, dozens of tool calls, and decisions with real-world consequences, the gap between "we have logs" and "we have accountability" has become operationally dangerous.
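A sketch of the record that was missing from those logs, with the obvious caveat that the field names are assumptions rather than a prescribed schema: capture not just the action but the chain that authorized it, and persist it before the action executes.

```typescript
// Hypothetical decision-provenance record, written before the action runs.
interface DecisionRecord {
  decisionId: string;
  agentRunId: string;                 // links back to the full agent trace
  action: { tool: string; args: Record<string, unknown> }; // e.g. the DELETE
  basis: string[];                    // policy / business-rule ids relied on
  evidence: string[];                 // retrieved docs, queries, prior steps
  authorization: {
    kind: "standing-policy" | "human-approval";
    approver?: string;                // who clicked approve, if anyone
    policyVersion?: string;           // which policy sanctioned autonomy
  };
  decidedAt: string;
}

// The invariant the post argues for: no irreversible tool call without a
// persisted answer to "who authorized this, and on what basis?"
function assertProvenance(r: DecisionRecord): void {
  const humanMissing =
    r.authorization.kind === "human-approval" && !r.authorization.approver;
  if (r.basis.length === 0 || humanMissing) {
    throw new Error(`decision ${r.decisionId} lacks provenance; refusing to execute`);
  }
}
```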

The Shadow Prompt Library: Governance for an Asset Class Nobody Owns

· 12 min read
Tian Pan
Software Engineer

Walk into almost any engineering org with a live LLM feature and ask a simple question: who owns the prompts? You will get a pause, then a shrug, then an answer that dissolves on contact. "Product wrote the first one." "The PM tweaked it last sprint." "I think it lives in a Notion doc, or maybe that const SYSTEM_PROMPT in agent.ts." The prompt is running in production. It shapes what users see, what actions the agent takes, what numbers show up in next quarter's revenue chart. And it has less governance surface than the CSS file nobody admits to touching.

This is the shadow prompt library: the accumulated pile of strings — system prompts, few-shot exemplars, tool descriptions, routing rules, evaluator rubrics — that collectively define product behavior and that collectively have no code review, no deploy pipeline, no owner, no deprecation policy, and no audit trail. They are the most load-bearing artifact in your AI stack and the least supervised.

The consequences are no longer theoretical. Ninety-eight percent of organizations now report unsanctioned AI use, and nearly half expect a shadow-AI incident within twelve months. Regulators are catching up faster than governance is: the EU AI Act's high-risk provisions apply in August 2026, and Article 12 is explicit that logs tying outputs to prompts and model versions must be automatic, not aspirational. If your prompts are scattered across a dozen codebases and a Slack thread, you do not have an audit trail; you have a liability.
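What "automatic, not aspirational" can look like in practice is a record emitted by the call path itself rather than by a forgetful caller. A sketch under assumptions: the wrapper and field names below are mine, and whether this satisfies Article 12 is a legal question, not a schema.

```typescript
import { createHash, randomUUID } from "node:crypto";

// Hypothetical per-call log tying an output to the exact prompt and model
// that produced it; emitted on every generation, not on request.
interface GenerationLog {
  requestId: string;
  promptId: string;     // registry id, if one exists
  promptSha256: string; // hash of the literal string actually sent
  modelVersion: string;
  outputSha256: string;
  loggedAt: string;
}

async function generateWithLog(
  callModel: (prompt: string) => Promise<string>,
  prompt: { id: string; text: string },
  modelVersion: string,
  sink: (entry: GenerationLog) => Promise<void>,
): Promise<string> {
  const output = await callModel(prompt.text);
  await sink({
    requestId: randomUUID(),
    promptId: prompt.id,
    promptSha256: createHash("sha256").update(prompt.text).digest("hex"),
    modelVersion,
    outputSha256: createHash("sha256").update(output).digest("hex"),
    loggedAt: new Date().toISOString(),
  });
  return output;
}
```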

The Prompt Ownership Problem: What Happens When Every Team Treats Prompts as Configuration

· 8 min read
Tian Pan
Software Engineer

A one-sentence change to a system prompt sat in production for 21 days before anyone noticed it was misclassifying thousands of mortgage documents. The estimated cost: $340,000 in operational inefficiency and SLA breaches. Nobody could say who made the change, when it was made, or why. The prompt lived in an environment variable that three teams had write access to, and no one considered it their responsibility to review.

This is the prompt ownership problem. As LLM-powered features proliferate across organizations, prompts have become the most consequential yet least governed artifacts in the stack. They control model behavior, shape user experience, enforce safety constraints, and define business logic — yet most teams manage them with less rigor than they'd apply to a CSS change.