5 posts tagged with "enterprise"

Foundation Model Vendor Strategy: What Enterprise SLAs Actually Guarantee

· 12 min read
Tian Pan
Software Engineer

Enterprise teams pick LLM vendors based on benchmarks and demos. Then they hit production and discover what the SLA actually says — which is usually much less than they assumed. The 99.9% uptime guarantee you negotiated doesn't cover latency. The data processing agreement your legal team signed doesn't prohibit training on your inputs unless you explicitly added that clause. And the vendor concentration risk that nobody quantified becomes painfully obvious when your core product is down for four hours because a telemetry deployment cascaded through a Kubernetes control plane.

This is not a procurement problem. It's an engineering problem that procurement can't solve alone. The people who build AI systems need to understand what these contracts actually say — and what they don't.

The Enterprise AI Capability Discovery Problem

· 10 min read
Tian Pan
Software Engineer

You shipped the AI feature. You put it in the product. You wrote the help doc. And still, six months later, your most sophisticated enterprise users are copy-pasting text into ChatGPT to do the same thing your feature already does natively. This is not a training problem. It is a discoverability problem, and it is one of the most consistent sources of wasted AI investment in enterprise software today.

The pattern is well documented: 49% of workers report they never use AI in their role, and 74% of companies struggle to scale value from AI deployments. But the interesting failure mode is not the late adopters who explicitly resist. It is the engaged users who open your product every day, never knowing that the AI capability they would have paid for is sitting one click away from where their cursor already is.

Why Your Document Extractor Breaks on the Contracts That Matter Most

· 13 min read
Tian Pan
Software Engineer

Your invoice parser probably works fine. Feed it a clean, digital PDF from a Fortune 500 vendor — structured rows, consistent column widths, machine-generated text — and it will extract line items with near-perfect accuracy. Then someone uploads a multi-page contract from a regional supplier, a scanned form with handwritten amendments, or a financial statement where the table header lives on page 3 and the rows continue through page 6. The extractor fails silently, returns partial data, or confidently produces structured output that is wrong in ways no downstream validation catches.

This is the central problem with enterprise document intelligence: the documents that break your system are not the edge cases. They are the ones with the highest business value.

The Insider Threat You Created When You Deployed Enterprise AI

· 9 min read
Tian Pan
Software Engineer

Most enterprise security teams have a reasonably well-developed model for insider threats: a disgruntled employee downloads files to a USB drive, emails a spreadsheet to a personal account, or walks out with credentials. The detection playbook is known — DLP rules, egress monitoring, UEBA baselines. What those playbooks don't account for is the scenario where you handed every one of your employees a tool that can plan, execute, and conceal multi-stage operations at machine speed. That's what deploying AI coding assistants and RAG-based document agents actually does.

The problem isn't that these tools are insecure in isolation. It's that they dramatically amplify what a compromised or malicious insider can accomplish in a single session. The average cost of an insider incident has reached $17.4 million per organization annually, and 83% of organizations experienced at least one insider attack in the past year. AI tools don't introduce a new threat category — they multiply the capability of every threat category that already exists.

The Pilot Graveyard: Why Enterprise AI Rollouts Fail After the Demo

· 10 min read
Tian Pan
Software Engineer

Your AI demo was genuinely impressive. The executive audience nodded, the VP of Engineering said "this is the future," and the pilot was approved with real budget. Six months later, weekly active users have plateaued at 12%. The tool gets a polite mention in all-hands. Nobody has the heart to call it dead. This is the pilot graveyard — where good demos go to die.

It's not a rare failure. Roughly 88% of enterprise AI pilots never reach production. Only 6% of enterprises have successfully moved generative AI projects beyond pilot to production at any meaningful scale. The gap between "impressive in the conference room" and "load-bearing in the daily workflow" is where most enterprise AI investment disappears.

The reason isn't the model. It's everything that happens after the demo.