
82 posts tagged with "security"


The Compliance Attestation Gap Nobody Talks About in AI-Assisted Development

· 9 min read
Tian Pan
Software Engineer

Your engineers are shipping AI-generated code every day. Your auditors are reviewing change management controls designed for a world where every line of code was written by the person who approved it. Both facts are true simultaneously, and if you're in a regulated industry, that gap is a liability you probably haven't fully priced.

The compliance certification problem with AI-generated code is not a vendor problem — your AI coding tool's SOC 2 report doesn't cover your change management controls. It's a process attestation problem: the fundamental assumption underneath SOC 2 CC8.1, the HIPAA Security Rule's change controls, and PCI DSS Requirement 6 is that the person who approved the code change understood it. That assumption no longer holds.
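One way to see the gap concretely is to compare what your approval record captures with what the control assumes it captures. Here is a minimal sketch of a per-change attestation, using hypothetical field names rather than anything the frameworks prescribe:

```python
# A hedged sketch of a per-change attestation record. Field names are
# illustrative, not something SOC 2 CC8.1 or PCI DSS Requirement 6 mandates;
# the point is that the approval captures AI provenance and the reviewer's
# actual claim, instead of assuming author == reviewer == person who understood it.
from dataclasses import dataclass, field

@dataclass
class ChangeAttestation:
    change_id: str
    ai_assisted: bool                  # was any portion of the diff generated by an AI tool?
    ai_tool: str | None                # which tool, if so
    reviewer: str
    reviewer_understands_change: bool  # the explicit claim change controls quietly assume
    tests_exercising_change: list[str] = field(default_factory=list)
```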

The Read-Only Ratchet: Why Your Production Agent Shouldn't Start with Full Permissions

· 11 min read
Tian Pan
Software Engineer

An AI agent deleted a production database and its volume-level backups in 9 seconds. It didn't go rogue. It did exactly what it was designed to do: when it hit a credential mismatch, it inferred a corrective action and called the appropriate API. The agent had been granted the same permissions as a senior administrator, so nothing stopped it.

This is not an edge case. According to a 2026 Cloud Security Alliance study, 53% of organizations have experienced AI agents exceeding their intended permissions, and 47% have had a security incident involving an AI agent in the past year. Most of those incidents trace back to the same root cause: teams grant broad permissions upfront because it's easier, and they plan to tighten them later. Later never comes until something breaks.

The pattern that actually works is the opposite: start with read-only access, and let agents earn expanded permissions through demonstrated, anomaly-free behavior. This is the read-only ratchet.
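What the ratchet can look like in practice is a small policy, not a platform. A minimal sketch, assuming hypothetical tier names, a 30-day anomaly-free window, and an anomaly signal you already collect:

```python
# Hypothetical sketch of a read-only ratchet policy: the agent starts at the
# lowest tier and is promoted one step at a time, only after a sustained window
# of anomaly-free behavior. Tier names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import timedelta

TIERS = ["read_only", "write_scoped", "write_broad"]  # never granted all at once

@dataclass
class AgentRecord:
    tier: str
    anomaly_free: timedelta  # continuous time since the last flagged action

def next_tier(record: AgentRecord, required: timedelta = timedelta(days=30)) -> str:
    """Promote one tier at a time, and only after the anomaly-free window."""
    if record.anomaly_free < required:
        return record.tier
    idx = TIERS.index(record.tier)
    return TIERS[min(idx + 1, len(TIERS) - 1)]

def on_anomaly(record: AgentRecord) -> AgentRecord:
    """Any anomaly resets the clock and drops the agent back to read-only."""
    return AgentRecord(tier=TIERS[0], anomaly_free=timedelta(0))
```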

The Shadow AI Problem: Why Engineers Bypass Your Official AI Platform and What to Do About It

· 9 min read
Tian Pan
Software Engineer

Your data governance audit probably found them: API keys for OpenAI and Anthropic billed to personal credit cards, Slack bots wired to Claude through personal accounts, local Ollama instances proxying requests through the corporate VPN. Nobody told platform engineering. Nobody asked IT. The engineers just... did it.

This is the shadow AI problem, and it is already inside your organization whether you have detected it yet or not. Roughly half of employees in knowledge-work environments report using AI tools that their employers have not sanctioned. Among software engineers — who have the technical skill to set up unofficial integrations and the productivity pressure to want them — that number is almost certainly higher.

The instinct of most security and platform teams is to respond with prohibition: block the endpoints, restrict the API keys, add AI tool requests to the procurement queue. That response reliably produces more shadow AI, not less, because it treats a platform design failure as a compliance failure.

The Agent Accountability Stack: Who Owns the Harm When a Subagent Causes It

· 11 min read
Tian Pan
Software Engineer

In April 2026, an AI coding agent deleted a company's entire production database — all its data, all its backups — in nine seconds. The agent had found a stray API token with broader permissions than intended, autonomously decided to resolve a credential mismatch by deleting a volume, and executed. When prompted afterward to explain itself, it acknowledged it had "violated every principle I was given." The data was recovered days later only because the cloud provider happened to run delayed-delete policies. The company was lucky.

The uncomfortable question that incident surfaces isn't "how do we stop AI agents from misbehaving?" It's simpler and harder: when a subagent in your multi-agent system causes real harm, who is responsible? The model provider whose weights made the decision? The orchestration layer that dispatched the agent? The tool server operator whose API accepted the destructive call? The team that deployed the system?

The answer right now is: everyone points at everyone else, and the deploying organization ends up holding the bag.

The Pre-Launch Blast Radius Inventory: The Document Your Agent Team Forgot to Write

· 10 min read
Tian Pan
Software Engineer

The first hour of an agent incident is always the same. Someone notices the agent did something it shouldn't have — invoiced the wrong customer, deleted a calendar event for the CEO, posted a half-finished apology in a public Slack channel — and the response team starts asking questions nobody has written answers to. Which downstream system holds the audit log? Which on-call rotation owns that system? Was the call reversible, and within what window? Who owns the credential the agent used, and does that credential also let it touch other systems we haven't checked yet? The team that wrote the agent rarely owns those answers, because the answers live in the systems the agent calls, and nobody at launch wrote them down in one place.

That document is the blast radius inventory, and it is the artifact most agent teams discover the absence of during their first incident. It is not a security checklist, not a tool schema, not a runbook. It is an enumerated list of every external system the agent can touch and every fact you need on the worst day of that system's life. Teams that ship agents without one are betting that incident-response context can be reconstructed faster than the blast spreads, and that bet keeps losing as agents get more tools and the tools get more powerful.
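The shape of the inventory matters less than its existence, but if you want a starting point, here is an illustrative sketch of a single entry — one per external system the agent can touch. The field names are assumptions, not a standard:

```python
# Illustrative shape of one blast radius inventory entry. Every field answers a
# question the incident response team will ask in the first hour.
from dataclasses import dataclass

@dataclass
class BlastRadiusEntry:
    system: str                        # e.g. "billing-api"
    owning_oncall: str                 # rotation that gets paged when this system misbehaves
    audit_log_location: str            # where the agent's calls to this system are recorded
    reversible: bool                   # can a bad call be undone at all?
    reversal_window: str               # e.g. "30 days via soft delete", "none"
    credential_owner: str              # who can revoke the credential the agent uses
    credential_also_grants: list[str]  # other systems reachable with that same credential
```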

The Reply-All That Wasn't: Agent Outbound Fan-Out Hazards

· 9 min read
Tian Pan
Software Engineer

The user asked the agent to "let Karen know we're done." The agent called send_email with the recipient field set to karen-team@, the most plausible address its contact-lookup tool returned. The message — three paragraphs of internal-only project status, including a candid line about a customer's renewal risk — landed in forty inboxes. One of those inboxes belonged to the customer in question. The postmortem ran for two weeks.

There was no prompt injection. There was no model jailbreak. The tool worked exactly as specified. The contract the team wrote for send_email was "send a message to a recipient." The contract the world enforces is "broadcast to a group whose composition the sender did not audit." That gap — between what the tool is named and what the tool can actually do — is where most outbound agent incidents live.
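One hedged mitigation is to make fan-out explicit in the tool contract itself, so a recipient that expands to forty inboxes is surfaced before the send rather than discovered in the postmortem. A sketch, with directory_expand and require_human_ack as hypothetical hooks into whatever directory and approval flow you already run:

```python
# A sketch of a narrower send_email contract: expand the recipient before the
# send, and treat anything beyond one mailbox as a broadcast that needs an
# explicit acknowledgment. The hooks are illustrative, not a prescribed design.
FANOUT_LIMIT = 1  # the user asked to "let Karen know", not to broadcast

def guarded_send_email(recipient: str, body: str, directory_expand, send, require_human_ack):
    """Send only after the real blast radius of the recipient has been surfaced."""
    members = directory_expand(recipient)  # a distribution list resolves to many inboxes
    if len(members) > FANOUT_LIMIT:
        # Surface the expansion to the caller instead of silently broadcasting.
        if not require_human_ack(recipient=recipient, members=members):
            raise PermissionError(
                f"{recipient} expands to {len(members)} inboxes; send not acknowledged")
    send(recipient, body)
```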

Email is the obvious example, but the same hazard hides in every messaging tool an agent ever touches. The thirty years of muscle memory humans built for these channels did not transfer to the planner pattern-matching its way through a contact list.

The Shadow AI Governance Problem: Why Banning Personal AI Accounts Makes Security Worse

· 9 min read
Tian Pan
Software Engineer

Workers at 90% of companies are using AI tools — ChatGPT, Claude, Gemini — to do their jobs, and 73.8% of those accounts are non-corporate, personal ones. Meanwhile, 57% of employees using unapproved AI tools are sharing sensitive information with them: customer data, internal documents, code, legal drafts. Most executives believe their policies protect against this. The data says only 14.4% actually have full security approval for the AI their teams deploy.

The gap between what leadership believes is happening and what is actually happening is the shadow AI governance problem.

The instinct at most companies is to respond with a ban. Block personal chatbot accounts at the network level, issue a policy memo, run an annual training, and call it governance. This is the worst possible response — not because the concern is wrong, but because the intervention makes the problem invisible without making it smaller.

The SIEM Bill Your AI Feature Forgot to Include

· 10 min read
Tian Pan
Software Engineer

The math is simple and nobody did it. Pre-AI, a single user action — "summarize this ticket," "send this email" — produced one application log line. Post-AI, the same action emits a request log, an LLM call trace, a tool-invocation span for each tool the agent called, a retrieval span per chunk it read, a response log, and an eval log if you sample for offline scoring. The fan-out for one user click is now 30 to 50 records on the floor of your observability pipeline, and that's before retries, before sub-agents, before the planner-executor split that 2x's everything again.
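The back-of-envelope version of that math, using the fan-out figures above with assumed per-record sizes and traffic, is short enough to run before the budget review instead of after:

```python
# Illustrative fan-out arithmetic. The per-span counts, record size, and traffic
# are assumptions; plug in your own numbers.
request_log     = 1
llm_call_traces = 3    # planner + executor + one retry, for example
tool_spans      = 6    # one per tool invocation
retrieval_spans = 20   # one per chunk the agent read
response_log    = 1
eval_log        = 1    # if this request is sampled for offline scoring

records_per_click = sum([request_log, llm_call_traces, tool_spans,
                         retrieval_spans, response_log, eval_log])
print(records_per_click)            # ~32, inside the 30-50 range above

clicks_per_day   = 50_000
bytes_per_record = 2_000            # assumed average structured-log size
daily_ingest_gb  = clicks_per_day * records_per_click * bytes_per_record / 1e9
print(f"{daily_ingest_gb:.1f} GB/day into the SIEM")  # vs ~0.1 GB/day pre-AI
```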

You shipped an AI feature in Q1. In Q2, your security director walks into a budget review with a Splunk renewal that's 4x higher than last cycle. Nobody on the AI team is in the room. The conversation that happens next — about who owns the cost, why the threat-detection rules stopped working, and whether legal hold on every conversation is actually mandatory — is a conversation you should have had at design time and didn't, because the cost didn't show up on the LLM invoice. It showed up downstream, in a tool the AI team has never logged into.

Tool Schema Design Is Your Blast Radius: When Function Definitions Become Security Boundaries

· 10 min read
Tian Pan
Software Engineer

The most dangerous file in your agent codebase is the one you've been writing as if it were API documentation. The tool registry — that JSON or Pydantic schema that tells the model what functions exist and what arguments they take — is no longer a docstring. It is your authorization layer. And if you designed it the way most teams do, you handed the LLM a master key and called it good engineering.

Consider the canonical first cut at a tool: query_database(sql: string). The intent is reasonable — let the model formulate the right SQL for the user's question. The reality is that the model is now an untrusted client with unlimited DDL and DML rights to whatever database the connection string points at. The system prompt that says "only run SELECTs on the orders table" is a suggestion, not a control. When a prompt-injected tool result — an email body, a webpage, a PDF — tells the model to run DROP TABLE users, your authorization model is the model's instruction-following discipline. That is not authorization. That is hope.
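The alternative is to make the schema itself the boundary: enumerate the operations the agent is allowed to express, and keep the SQL on the server side. A sketch of what that narrowing could look like, assuming Pydantic v2 and a hypothetical orders lookup — the field names and queries are illustrative, not a prescribed design:

```python
# The model can only construct this typed request; it cannot express SQL at all.
from typing import Literal
from pydantic import BaseModel, Field

class OrdersLookup(BaseModel):
    """Tool input the model is allowed to construct; nothing here is executable SQL."""
    operation: Literal["list_recent_orders", "get_order_by_id"]
    customer_id: str = Field(pattern=r"^[A-Z0-9-]{1,32}$")
    order_id: str | None = None
    limit: int = Field(default=20, ge=1, le=100)

def run_orders_lookup(args: OrdersLookup, db) -> list:
    # The server, not the model, owns the SQL. A prompt-injected "DROP TABLE"
    # has nowhere to go: it cannot be expressed in this schema.
    if args.operation == "get_order_by_id":
        return db.execute(
            "SELECT * FROM orders WHERE customer_id = ? AND id = ?",
            (args.customer_id, args.order_id)).fetchall()
    return db.execute(
        "SELECT * FROM orders WHERE customer_id = ? ORDER BY created_at DESC LIMIT ?",
        (args.customer_id, args.limit)).fetchall()
```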

Agent IAM Is Not Service IAM: Why OAuth Breaks When Intent Is Constructed at Runtime

· 12 min read
Tian Pan
Software Engineer

The bearer token model has one assumption that agents quietly violate: the caller knows what they want when they ask. OAuth scopes, IAM roles, and API keys are all designed around a principal whose intent is fixed before authentication begins. Your CI runner has stable intent. Your microservice has stable intent. An agent does not. An agent's intent is assembled at request time out of a user prompt, a system prompt, retrieved documents, and the outputs of tools that may themselves have been written by an attacker. By the time the agent reaches for a token, the policy decision that the IAM layer has to make has already been made — by inputs the IAM layer never saw.

This is why the same auth pattern that has worked for fifteen years of service-to-service traffic is now producing a class of incidents nobody has good language for. A prompt injection lifts a long-lived bearer token. An agent "remembers" a permission across sessions because the token outlived the user's intent. A multi-step task that legitimately needs three scopes holds all of them for the entire session instead of acquiring and releasing them per step. None of these are OAuth bugs in the strict sense. They are consequences of stretching a model that assumes static intent to cover a caller whose intent is reconstructed every turn.
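The per-step alternative is mechanically simple even if the surrounding token plumbing is not. A sketch, with issue_token and revoke standing in for whatever your token service actually exposes:

```python
# A sketch of per-step scope acquisition: the agent holds only the scope the
# current step needs, and the token dies with the step, not with the session.
from contextlib import contextmanager

@contextmanager
def scoped_token(issue_token, revoke, scope: str, ttl_seconds: int = 60):
    """Acquire a short-lived, single-scope token for one step, then revoke it."""
    token = issue_token(scope=scope, ttl=ttl_seconds)
    try:
        yield token
    finally:
        revoke(token)  # the token does not outlive the step, let alone the session

# Instead of one session token carrying calendar.read + crm.write + mail.send,
# each step runs inside its own narrow grant:
#   with scoped_token(issue, revoke, "calendar.read") as t:
#       events = calendar_client(t).list_today()
```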

The Air-Gapped LLM Blueprint: What Egress-Free Deployments Actually Need

· 11 min read
Tian Pan
Software Engineer

The cloud AI playbook assumes one primitive that nobody writes down: outbound HTTPS. Vendor APIs, hosted judges, telemetry pipelines, model registries, vector stores, dashboard SaaS, secret managers — every one of them quietly resolves to a domain on the public internet. Pull that one cable and the stack does not degrade gracefully. It collapses.

That is the moment most teams discover their architecture has an egress dependency they never accounted for. A "small" prompt update needs to call out to a hosted classifier. The eval suite hits an LLM judge over the wire. The observability agent phones home. The model registry pulls weights from a CDN. None of it is malicious, and none of it is unusual. It is just what the cloud-native stack looks like when you stop noticing the cable.
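A cheap way to find those dependencies before the cable gets pulled is to audit every outbound endpoint your deployment declares against an in-boundary allowlist. A minimal sketch, assuming a hypothetical allowlist and a flat list of declared URLs:

```python
# Illustrative egress audit for CI: flag every declared endpoint that would
# need the public internet. Hostnames and config format are assumptions.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"models.internal", "vectors.internal", "metrics.internal"}

def audit_egress(declared_urls: list[str]) -> list[str]:
    """Return every declared endpoint that resolves outside the boundary."""
    return [u for u in declared_urls if urlparse(u).hostname not in ALLOWED_HOSTS]

violations = audit_egress([
    "https://models.internal/v1/chat",        # in-boundary: fine
    "https://api.openai.com/v1/embeddings",   # hosted judge / classifier: egress
    "https://telemetry.vendor.com/ingest",    # observability phone-home: egress
])
if violations:
    raise SystemExit(f"egress dependencies found: {violations}")
```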

Agent Credential Blast Radius: The Principal Class Your IAM Model Never Enumerated

· 11 min read
Tian Pan
Software Engineer

The security org spent a decade killing off the "service account that can do everything." Scoped tokens, short-lived credentials, JIT access, per-action audit — the whole least-privilege playbook landed and stuck. Then the AI team wired up an agent, the prompt asked for a tool catalog, and the engineer requested the broadest OAuth scope the platform would issue. The deprecated pattern is back, wearing new clothes, and this time the principal calling the API is a stochastic loop nobody is sure how to scope.

The agent has read-write on the calendar, the file store, the CRM, and the deploy pipeline because the API surface couldn't be enumerated up front. The token is long-lived because no one wired the refresh path. The audit log records the bearer, not the action. And IAM owns human and service identity, the platform team owns workload identity, the AI team owns the agent's effective permissions, and the union of those three sets is owned by no one.
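The counter-pattern is the same least-privilege playbook, applied to the new principal class: enumerate the tools, bind each to its narrowest scope, and make the audit record name the agent and the action rather than the bearer. An illustrative sketch, with every name and scope assumed:

```python
# Hypothetical tool-call wrapper: each tool maps to one narrow scope, tokens are
# short-lived, and every invocation is logged with the acting agent and action.
import json, time

TOOL_SCOPES = {
    "read_calendar":   "calendar.readonly",
    "search_files":    "drive.readonly",
    "update_crm_note": "crm.notes.write",
    # deliberately absent: anything touching the deploy pipeline
}

def call_tool(agent_id: str, tool: str, args: dict, issue_token, dispatch, audit_sink):
    scope = TOOL_SCOPES.get(tool)
    if scope is None:
        raise PermissionError(f"{tool} is not in the enumerated tool set for {agent_id}")
    token = issue_token(subject=agent_id, scope=scope, ttl=60)  # short-lived, single scope
    result = dispatch(tool, args, token)
    # The audit record names the acting agent and the action, not just the bearer.
    audit_sink(json.dumps({"ts": time.time(), "agent": agent_id,
                           "tool": tool, "scope": scope, "args": args}))
    return result
```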