
LLMs in the Security Operations Center: Acceleration Without Liability

· 11 min read
Tian Pan
Software Engineer

A senior analyst I respect described her team's first six months with an LLM-powered triage agent like this: "It made the easy alerts disappear, and made the hard ones harder to trust." The phrase has stayed with me because it captures the actual shape of the trade. AI in the security operations center is not a productivity story. It is a confidence calibration story, and most teams are getting the calibration wrong in the same direction.

The seductive version goes: drop a model in front of the alert queue, let it cluster duplicates, summarize raw events, and auto-close obvious noise. The MTTR graph drops. The pager quiets. The Tier-1 backlog evaporates. The version that actually gets you breached goes: the model confidently mis-attributes a real intrusion as a benign backup job, and a tired analyst — told that "the AI already triaged this, it's clean" — never opens the case. The first version is real. So is the second. They are the same system viewed at different confidence levels.