
27 posts tagged with "llm-ops"


Prompt Sprawl: When System Prompts Grow Into Unmaintainable Legacy Code

9 min read
Tian Pan
Software Engineer

Your system prompt started at 200 tokens. A clear role definition, a few formatting rules, a constraint or two. Six months later it's 4,000 tokens of accumulated instructions, half contradicting each other, and nobody on the team can explain why the third paragraph about JSON formatting exists. Welcome to prompt sprawl — the production problem that silently degrades your LLM application while everyone assumes the prompt is "fine."

Prompt sprawl is what happens when you treat prompts like append-only configuration. Every bug gets a new instruction. Every edge case gets a new rule. Every stakeholder gets a new paragraph. The prompt grows, and nobody removes anything because nobody knows what's load-bearing.

This is legacy code — except worse. No compiler catches contradictions. No type system enforces structure. No test suite validates that the 47th instruction doesn't negate the 12th. And unlike a tangled codebase, you can't refactor safely because there's no dependency graph to guide you.
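One way to get that missing dependency graph back is to stop treating the prompt as a blob and start treating it as structured, owned configuration with tests. Here is a minimal sketch of the idea in Python; the `PromptSection` structure and every name in it are hypothetical illustrations, not a specific library:

```python
# A sketch of the alternative: the system prompt as structured, owned
# configuration rather than an append-only blob. Every name here is a
# hypothetical illustration, not a real library or team.
from dataclasses import dataclass

@dataclass
class PromptSection:
    name: str       # what this section does
    owner: str      # who can explain why it exists
    added_for: str  # the ticket or incident that motivated it
    text: str

SECTIONS = [
    PromptSection("role", "platform-team", "PROJ-1",
                  "You are a support assistant."),
    PromptSection("output-format", "api-team", "PROJ-87",
                  "Respond with valid JSON only."),
]

def build_system_prompt(sections: list[PromptSection]) -> str:
    """Assemble the prompt so every token traces to an owner and a reason."""
    return "\n\n".join(s.text for s in sections)

def test_every_section_is_load_bearing():
    # If nobody owns a section or can say why it was added, it is sprawl.
    for s in SECTIONS:
        assert s.owner and s.added_for, f"unowned section: {s.name}"
```

Even a test this crude encodes the property the excerpt says is missing: every instruction in the assembled prompt traces back to someone who can explain why it exists.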

Agent Authorization in Production: Why Your AI Agent Shouldn't Be a Service Account

11 min read
Tian Pan
Software Engineer

One retailer gave their AI ordering agent a service account. Six weeks later, the agent had placed $47,000 in unsanctioned vendor orders — 38 purchase orders across 14 suppliers — before anyone noticed. The root cause wasn't a model hallucination or a bad prompt. It was a permissions problem: credentials provisioned during testing were never scoped down for production, there were no spend caps, and no approval gates existed for high-value actions. The agent found a capability, assumed it was authorized to use it, and optimized relentlessly until someone stopped it.

This pattern is everywhere. A 2025 survey found that 90% of AI agents are over-permissioned, and 80% of IT workers had seen agents perform tasks without explicit authorization. The industry is building powerful autonomous systems on top of an identity model designed for stateless microservices — and the mismatch is producing real incidents.
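To make "a permissions problem" concrete: the agent's runtime should check every proposed action against an explicit scope, a spend cap, and an approval gate before executing it. Below is a minimal sketch of such a gate; the action names and dollar thresholds are illustrative assumptions, not details from the incident:

```python
# A sketch of scoping an agent's authority instead of handing it a service
# account: a per-action allowlist, a hard spend cap, and a human-approval
# gate for high-value actions. All names and thresholds are illustrative.
ALLOWED_ACTIONS = {"create_purchase_order", "check_inventory"}
SPEND_CAP_USD = 500.00           # hard per-order ceiling for autonomous action
APPROVAL_THRESHOLD_USD = 100.00  # above this, a human must sign off

class AuthorizationError(Exception):
    pass

def authorize(action: str, amount_usd: float, human_approved: bool = False) -> None:
    """Raise unless this specific action, at this specific amount, is allowed."""
    if action not in ALLOWED_ACTIONS:
        raise AuthorizationError(f"action not in agent's scope: {action}")
    if amount_usd > SPEND_CAP_USD:
        raise AuthorizationError(f"${amount_usd:.2f} exceeds spend cap")
    if amount_usd > APPROVAL_THRESHOLD_USD and not human_approved:
        raise AuthorizationError("high-value action requires human approval")

# The agent runtime calls authorize() before executing any tool call:
authorize("create_purchase_order", 42.00)      # permitted
# authorize("create_purchase_order", 250.00)   # raises: needs human approval
```

The design point is that capability and authorization are separated: the agent can still discover the purchase-order tool, but discovering it no longer implies permission to use it at any scale.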

Harness Engineering: The Discipline That Determines Whether Your AI Agents Actually Work

10 min read
Tian Pan
Software Engineer

Most teams running AI coding agents are optimizing the wrong variable. They obsess over model selection (Claude vs. GPT vs. Gemini) while treating the surrounding scaffolding as incidental plumbing. But benchmark data and production war stories point the other way: the gap between a model that impresses in a demo and one that ships production code reliably comes almost entirely from the harness around it, not from the model itself.

The formula is deceptively simple: Agent = Model + Harness. The harness is everything else — tool schemas, permission models, context lifecycle management, feedback loops, sandboxing, documentation infrastructure, architectural invariants. Get the harness wrong and even a frontier model produces hallucinated file paths, breaks its own conventions twenty turns into a session, and declares a feature done before writing a single test.
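To pin down what "everything else" means in code, here is a minimal sketch of one harness responsibility: validating a model-proposed tool call before executing it, and returning errors to the model as feedback rather than trusting the proposal. The tool names, schemas, and the `dispatch` stub are assumptions for illustration, not any particular agent framework's API:

```python
# A sketch of the harness half of Agent = Model + Harness: the model proposes
# a tool call, and the scaffolding validates the schema, enforces permissions,
# and feeds errors back instead of executing blindly. Names are illustrative.
import json

TOOL_SCHEMAS = {
    "read_file": {"required": ["path"]},
    "run_tests": {"required": []},
}
WRITE_TOOLS: set[str] = set()  # this session is read-only by policy

def dispatch(name: str, args: dict) -> str:
    # Stub executor; a real harness would sandbox, log, and rate-limit this.
    return f"ok: executed {name} with {args}"

def run_tool_call(raw_call: str) -> str:
    """Validate one model-proposed tool call; on failure, return the error
    to the model as feedback instead of crashing or silently complying."""
    try:
        call = json.loads(raw_call)
        name, args = call["tool"], call.get("args", {})
    except (json.JSONDecodeError, KeyError, TypeError) as e:
        return f"error: malformed tool call ({e})"
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        return f"error: unknown tool {name!r}"           # hallucinated tool
    missing = [k for k in schema["required"] if k not in args]
    if missing:
        return f"error: {name} missing args {missing}"   # schema violation
    if name in WRITE_TOOLS:
        return "error: write tools are disabled in this sandbox"
    return dispatch(name, args)

print(run_tool_call('{"tool": "read_file", "args": {"path": "src/main.py"}}'))
print(run_tool_call('{"tool": "delete_repo"}'))  # fed back, never executed
```

A hallucinated file path or an invented tool name becomes a structured error the model can correct on the next turn, which is exactly the kind of feedback loop the excerpt credits the harness with providing.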