
4 posts tagged with "risk-management"


The AI Risk Register: What Your CRO Will Demand the Morning After

· 12 min read
Tian Pan
Software Engineer

The morning after the first six-figure agent incident, the directors will not ask whether the model was state-of-the-art. They will ask to see the row in the risk register that named this scenario, the owner who signed off, and the date the board last reviewed it. If your enterprise risk register has lines for cyber, vendor, regulatory, and operational risk, but no row for "an autonomous agent took an action under our credentials that produced a customer-visible loss," you are about to spend a board meeting explaining why the risk that just lost you money never got the artifact every other category merits.

This is not a hypothetical anymore. Gartner projects that more than a thousand legal claims for harm caused by AI agents will be filed against enterprises by the end of 2026. AI-related risk has moved from tenth to second on the Allianz Risk Barometer in a single year. Insurers are now asking, in D&O renewal questionnaires, how the board has integrated AI into the corporate risk register and how third-party agentic exposures are being tracked. The line items below are what a defensible answer looks like, and the cadence on which the AI feature owner has to defend them.
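A register row for this category of risk can be made concrete as a record with named fields. The schema below is a minimal sketch, not a prescribed standard: every field name and example value here is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterRow:
    """One row of an enterprise risk register. Field names are illustrative."""
    risk_id: str
    scenario: str            # the named failure mode, in plain language
    owner: str               # the person who signed off on this risk
    last_board_review: date  # the date the board last reviewed the row
    severity: str            # e.g. "six-figure customer-visible loss"
    mitigations: list[str]   # controls the owner must defend on review

# Hypothetical example row; all values are placeholders, not real data.
row = RiskRegisterRow(
    risk_id="AI-001",
    scenario="Autonomous agent acts under our credentials and causes "
             "a customer-visible loss",
    owner="AI feature owner",
    last_board_review=date(2025, 1, 15),
    severity="six-figure customer-visible loss",
    mitigations=["per-run spend cap", "human approval above threshold"],
)
print(row.risk_id, "-", row.owner)
```

The point of the structure is that each field answers one of the questions the directors will ask: which scenario was named, who owned it, and when the board last saw it.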

AI Cyber Insurance: The Coverage Gap Your Agent Will Find First

· 11 min read
Tian Pan
Software Engineer

A coding agent merges a change at 2 a.m. that takes a customer's production database offline for ninety minutes. A customer-support agent fans out and sends fourteen thousand misworded refund-denial emails before the loop is killed. An autonomous reconciliation workflow charges 2,800 cards twice. The damages are real, the audit trail names your company, and your finance team files the claim against the cyber policy that was renewed six weeks ago. The carrier's response is a polite letter explaining that the policy covers "unauthorized access by malicious third parties" and "social engineering of an employee" — and the agent was authenticated, the action was authorized, and no employee was deceived. Coverage denied. The loss sits on your balance sheet.

This is not a hypothetical edge case. It is the modal claim profile for the next eighteen months, and the insurance industry knows it. Cyber, E&O, and D&O policy language was calibrated against a threat model where breach severity is a function of records exfiltrated and incident response is a function of forensic hours billed. Agentic AI does not produce that shape of incident. It produces a shape the underwriter has no actuarial baseline for, and the carrier's first instinct — when the actuarial baseline is missing — is to write the exposure out of the policy entirely.

The Model-of-the-Week Roadmap: When Vendor Promises Become Committed Dependencies

· 9 min read
Tian Pan
Software Engineer

A product manager pulls up the next-quarter roadmap. Three features are marked "depends on next-gen model." Nobody asks what happens if next-gen slips, arrives 20% less capable than the demo suggested, or ships gated behind an enterprise tier your customers do not qualify for. Six months later, all three of those scenarios have happened, and the team is now rebuilding two quarters of architecture against the model that actually shipped — a different shape from the one they planned for.

This is the model-of-the-week roadmap: treating unreleased capability claims as committed dependencies. It is one of the most reliable ways to turn a twelve-month plan into a thirty-month plan, and it rarely looks risky in the moment because every vendor demo feels inevitable. The schedule damage is invisible until the slip compounds.
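The compounding is simple arithmetic. In the sketch below, every number is an assumption chosen for illustration: three gated features, and each gate costing a few months of slip plus a few months of rework against the model that actually shipped.

```python
# Illustrative arithmetic only; the slip and rework figures are assumptions,
# not data from any real roadmap.
planned_months = 12   # the original plan
features_gated = 3    # features marked "depends on next-gen model"
slip_per_gate = 3     # assumed months each unreleased dependency slips
rework_per_gate = 3   # assumed months rebuilding against what actually ships

# Each gated feature contributes its own slip plus its own rework.
actual_months = planned_months + features_gated * (slip_per_gate + rework_per_gate)
print(actual_months)  # 12 + 3 * (3 + 3) = 30
```

Under these assumed figures, three gated dependencies are enough to turn a twelve-month plan into a thirty-month one, which is why the damage stays invisible until the slips land together.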

Board-Level AI Governance: The Five Decisions Only Executives Can Make

· 9 min read
Tian Pan
Software Engineer

A major insurer's AI system was denying coverage claims. When humans reviewed those decisions, 90% were found to be wrong. The insurer's engineering team had built a performant model. Their MLOps team had solid deployment pipelines. Their data scientists had rigorous evaluation metrics. None of that mattered, because no one at the board level had ever answered the question: what is our acceptable failure rate for AI decisions that affect whether a sick person gets treated?
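The board-level question has a testable shape: an acceptable error rate that engineering can measure against. The sketch below is illustrative only; the sample size and the threshold are assumptions, and only the 90% figure comes from the incident described above.

```python
# Illustrative only: the review sample and the acceptable threshold are
# hypothetical. The 90% overturn rate mirrors the incident described above.
reviewed = 100    # assumed sample of AI-denied claims sent to human review
overturned = 90   # decisions found to be wrong on review

observed_error_rate = overturned / reviewed  # 0.9

# A board-set ceiling for decisions that affect whether a sick person
# gets treated; the value here is a placeholder, not a recommendation.
acceptable_error_rate = 0.01

must_halt = observed_error_rate > acceptable_error_rate
print(must_halt)
```

Without an executive answer for `acceptable_error_rate`, the comparison never runs, and a system failing nine times out of ten keeps making decisions in production.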

That gap — between functional technical systems and missing executive decisions — is where AI governance most often breaks down in practice. The result is organizations that are simultaneously running AI in production and exposed to liability they've never formally acknowledged.