"Vibe Coding" Was Collins Word of the Year — Now It's Shipping to Production and Breaking Everything

A year ago, Andrej Karpathy posted a tweet that changed the vocabulary of our entire industry. On February 2, 2025, he wrote:

“There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

That tweet got over 4.5 million views. By November 2025, Collins Dictionary named “vibe coding” the Word of the Year. And now, in early 2026, we’re living in the world that tweet created — and I’m not sure we were ready for it.

The Numbers Are Staggering

Here’s where we are right now: 92% of US developers use AI coding tools daily. Globally, 82% of developers use them at least weekly. According to recent industry data, 41% of all code is now AI-generated, representing roughly 256 billion lines written in 2024 alone. 87% of Fortune 500 companies have adopted at least one vibe coding platform.

The tools have matured fast. Cursor is the go-to for developers who still want to see the code. Replit and Bolt.new let you go from a conversation to a deployed app without ever opening an editor. Lovable hit $100M ARR in just 8 months — potentially the fastest-growing startup in history. Replit’s revenue jumped from $10M to $100M in 9 months after launching their Agent feature.

This isn’t a trend anymore. This is the new default.

What “Vibe Coding” Actually Looks Like in Practice

For those who haven’t tried it: you describe what you want in plain English. The AI generates the code. You run it. If it works, you ship it. If it doesn’t, you describe the problem back to the AI and it fixes it. At no point do you necessarily read, understand, or review the underlying code.

Karpathy himself described it as “fully giving in to the vibes” — you accept suggestions, don’t question the implementation, and “forget that the code even exists.” For weekend projects and prototypes, this feels magical. I’ve personally used Cursor to build internal tools in a fraction of the time it would’ve taken to write them from scratch. The productivity gains are real: teams report 51% faster task completion with vibe coding approaches.

But here’s where it gets uncomfortable.

The Security Problem Nobody Wants to Talk About

The 2025 GenAI Code Security Report found that AI-generated code introduces security flaws in 45% of cases. Only 55% of AI-generated code was secure across 80 coding tasks spanning four programming languages. Java was the worst offender with a 72% security failure rate.

A December 2025 analysis by CodeRabbit of 470 open-source GitHub pull requests found that AI co-authored code contained approximately 1.7x more “major” issues compared to human-written code. XSS vulnerabilities were 2.74x more likely. Improper password handling was 1.88x more likely. And here’s the kicker: newer, larger models don’t generate significantly more secure code than their predecessors. The models are getting better at writing code that works but not code that’s safe.

The Startup Problem

The vibe coding ecosystem has created a new category of startup: built almost entirely by AI, shipped to production by founders who can’t read the code their product runs on. I’ve talked to founders who proudly describe their stack as “95% AI-generated.” They’re moving fast. They’re raising money. And their codebases are ticking time bombs.

When I ask them about their security posture, I get blank stares. When I ask about dependency management, they say “the AI handles it.” When I ask about technical debt, they say “we’ll refactor later.” But refactoring code you didn’t write and don’t understand isn’t refactoring — it’s starting over.

So Where Does This Leave Us?

I’m not anti-AI. I use these tools every day. But there’s a massive difference between using AI as a force multiplier for experienced developers and using AI as a replacement for understanding what you’re building.

Vibe coding as Karpathy described it — giving in to the vibes, forgetting the code exists — is fine for throwaway projects. But we’re now in an era where this philosophy is being applied to production systems, financial services, healthcare applications, and infrastructure.

The question I keep coming back to: Is vibe coding the future of software development, or is it creating an entire generation of developers who can’t debug their own systems?

I’d genuinely love to hear from folks across the stack — security, infrastructure, leadership. What are you seeing in your orgs?

Alex, I want to print this post out and staple it to every “move fast and break things” poster I see in startup offices.

Vibe coding is my literal nightmare scenario as a security engineer. And I don’t say that for dramatic effect — I say it because I’ve seen what happens when nobody reads the code.

The Audit That Broke Me

Last quarter, I was brought in to audit a Series A startup’s application before their SOC 2 compliance review. The founders told me the app was “95% AI-generated” like it was a badge of honor. In three hours, I found 12 critical vulnerabilities, including:

  • Hardcoded API keys in the frontend JavaScript (the AI had helpfully included them inline)
  • SQL injection vectors in three separate endpoints
  • An authentication bypass that let you access any user’s data by modifying a URL parameter
  • Unvalidated file uploads that accepted executable files
  • CORS misconfiguration that essentially turned their API into a public endpoint

The code worked perfectly from a functional standpoint. Every feature did what it was supposed to do. The AI had generated clean, readable, well-structured code. But security wasn’t part of the “vibe.” Nobody had asked the AI about input validation, output encoding, or authentication boundaries, so the AI didn’t implement them.
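
None of these flaw classes are exotic, and the vulnerable pattern is usually the same one everywhere. A minimal sketch of the SQL injection case in Python with sqlite3 — the schema and function names here are made up for illustration, not taken from the audited codebase:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

def get_user_vulnerable(user_id: str):
    # The pattern AI tools reach for: user input interpolated straight into SQL.
    # Passing "1 OR 1=1" returns every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()

def get_user_safe(user_id: str):
    # Parameterized query: the driver treats user_id as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()

print(get_user_vulnerable("1 OR 1=1"))  # leaks the whole table
print(get_user_safe("1 OR 1=1"))        # returns nothing: no row has that id
```

The safe version is the same length as the vulnerable one. Security here costs nothing at write time — but it has to be asked for or reviewed in, and in a pure vibe-coding loop, nobody does either.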

The Core Problem

Here’s what people don’t understand: security is adversarial thinking. It’s not about whether the code works — it’s about whether the code works when someone is actively trying to break it. AI models are trained to generate code that fulfills requirements, not code that resists attack. When the developer doesn’t read the code and the AI doesn’t think like an attacker, you get functionally correct but fundamentally insecure systems.

The stat Alex cited — 45% of AI-generated code has security flaws — matches what I see in practice. And it’s actually worse than it sounds, because those flaws tend to be systemic. The AI makes the same mistakes everywhere because it’s following the same patterns. One bad pattern becomes 50 vulnerabilities across your codebase.

What I’m Telling Teams

My new standard advice: developers who can’t explain their own code can’t secure it. If you vibe-coded it and can’t walk me through the authentication flow, you can’t tell me it’s safe. Full stop. The 2.74x increase in XSS vulnerabilities in AI-generated code isn’t a statistical curiosity — it’s a prediction of what your next breach report is going to look like.

Great thread, Alex. And Sam, that audit story is exactly the kind of thing that keeps me up at night.

I’ll offer the CTO perspective here, because I think there’s a middle ground that people keep missing.

I Use Vibe Coding. I Also Have Rules.

I’m not going to pretend I don’t vibe code. I absolutely do. When I’m prototyping a new feature concept for a board presentation, or spinning up a quick internal tool for the ops team, or exploring a technical approach before committing resources — AI-assisted development is genuinely transformative. I’ve gone from idea to working demo in an afternoon. That used to take a sprint.

But here’s my team’s hard line: vibe-code the MVP, then rewrite with understanding.

That means the AI-generated prototype is a throwaway. It’s there to validate the concept, test the UX, get stakeholder buy-in. Once we decide to build it for real, we start over with engineers who understand every line they’re writing. The AI can help with the rewrite too — but now the developer is driving, not riding.

The Problem Is Startups Skip the Rewrite

What I see in the ecosystem terrifies me. Startups are shipping the prototype directly to production. They vibe-code an MVP for a demo day, raise a seed round, and then that “demo” becomes the production system. There’s no rewrite. There’s no security review. There’s no architecture discussion. They tell themselves they’ll “clean it up later” but later never comes because they’re too busy building the next feature the same way.

These companies are building on quicksand and they don’t know it because they’ve never looked down.

My Policy, For What It’s Worth

Here’s what I’ve implemented at my org:

  1. AI-generated code requires the same review standards as human code. No exceptions. If you can’t explain the PR, it doesn’t merge.
  2. Prototype vs. production is an explicit gate. We literally have a “rewrite checkpoint” in our development process.
  3. Security scanning on every commit, with additional manual review for AI-generated code flagged by our tooling.
  4. Engineers must document their understanding, not just what the code does but why it does it that way.
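
For point 3, the commit-level scan doesn’t have to start as a big tooling project. Here’s a toy version of a hardcoded-secret check in Python — the patterns and filenames are illustrative only, and a real setup would use an off-the-shelf scanner like gitleaks or trufflehog with far broader coverage:

```python
import re

# Illustrative patterns; production scanners ship hundreds of these.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic "sk-" style API key
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan(text: str, filename: str = "<stdin>") -> list[str]:
    """Return one finding per line that matches any secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"{filename}:{lineno}: possible hardcoded secret")
                break
    return findings

snippet = 'API_KEY = "sk-abc123abc123abc123abc123"\nname = "ok"\n'
for finding in scan(snippet, "config.py"):
    print(finding)
```

Wire something like this into a pre-commit hook and Sam’s first audit finding — API keys shipped inline in frontend JavaScript — gets caught before it ever reaches a branch.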

Is this slower than pure vibe coding? Absolutely. But as Sam’s audit story illustrates, the companies that don’t do this aren’t saving time — they’re borrowing it. They’ll have plenty of time when the breach happens. Or when the codebase becomes so tangled that no one — human or AI — can modify it safely.

The future of development includes AI. But it includes AI as a tool, not as the developer.

Coming at this from a slightly different angle — I lead design systems at my company, and vibe coding is forcing me to rethink the entire purpose of what I do.

The Design System Paradox

Here’s the tension I’m living with daily: if AI can generate any component on the fly, does a design system matter more or less?

The argument for more: If developers are vibe coding and the AI is generating UI components, the design system becomes the only consistent quality guardrail. Without a well-documented component library with clear patterns, the AI will generate a different button style for every page. The design system becomes the source of truth that keeps AI-generated interfaces coherent. In this world, my job matters more than ever.

The argument for less: If the AI can generate anything from a description, why maintain a rigid component library at all? Just tell the AI “make a button that looks like our brand” and let it figure it out. Several teams in my org are already doing this — they’ve stopped using our design system entirely because “the AI can just build it.” Why constrain yourself with pre-built components when AI can generate bespoke ones for every context?

What I’m Actually Seeing

In practice, teams that skip the design system with vibe coding produce interfaces that look fine at first glance but fall apart on closer inspection:

  • Inconsistent spacing and typography that creates a subtly “off” feeling
  • Accessibility violations everywhere — the AI generates visually correct components but misses ARIA labels, focus management, keyboard navigation
  • Components that look identical but behave differently across pages
  • No responsive behavior beyond basic media queries

The AI generates components that look like a design system was followed, but it’s all surface-level. There’s no semantic consistency underneath.
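
To make “surface-level” concrete, here’s a toy check, using nothing but Python’s standard html.parser, for one of the most common misses: icon-only buttons with no accessible name. This is a sketch for illustration — real accessibility audits use tools like axe-core, which check far more than this:

```python
from html.parser import HTMLParser

class ButtonNameChecker(HTMLParser):
    """Count <button> elements with neither text content nor an aria-label."""
    def __init__(self):
        super().__init__()
        self.in_button = False
        self.has_name = False
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self.in_button = True
            self.has_name = any(k == "aria-label" and v for k, v in attrs)

    def handle_data(self, data):
        if self.in_button and data.strip():
            self.has_name = True  # visible text gives the button a name

    def handle_endtag(self, tag):
        if tag == "button":
            if not self.has_name:
                self.violations += 1
            self.in_button = False

def count_unnamed_buttons(html: str) -> int:
    checker = ButtonNameChecker()
    checker.feed(html)
    return checker.violations

# Renders fine visually, but a screen reader announces it as just "button".
print(count_unnamed_buttons('<button><svg></svg></button>'))
print(count_unnamed_buttons('<button aria-label="Close">×</button>'))
```

The first button passes every visual review and fails every screen reader. That gap — looks right, behaves wrong — is exactly what generated UI produces at scale.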

Where I’ve Landed

I’m now repositioning my design system not as a component library but as an AI constraint layer. Instead of providing pre-built components for developers to use, I’m providing design tokens, accessibility rules, and interaction patterns that AI tools can reference when generating components. It’s less “here’s a button” and more “here are the rules any button must follow.”
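
Here’s roughly what I mean by a quality contract, in its smallest possible form — sketched in Python with entirely made-up token values: generated styles get linted against approved tokens instead of using whatever the model felt like emitting.

```python
# Hypothetical token set; a real system would load these from a shared file.
TOKENS = {
    "spacing": {"4", "8", "16", "24", "32"},      # px values on a 4px grid
    "color":   {"#1a1a2e", "#0f3460", "#e94560"}, # approved brand palette
}

def check_style(prop: str, value: str) -> bool:
    """Is this (property, value) pair drawn from the approved tokens?"""
    if prop in ("margin", "padding", "gap"):
        return value.removesuffix("px") in TOKENS["spacing"]
    if prop in ("color", "background-color"):
        return value.lower() in TOKENS["color"]
    return True  # properties outside the contract pass by default

def lint_styles(styles: dict[str, str]) -> list[str]:
    """Return the properties whose values fall outside the token contract."""
    return [p for p, v in styles.items() if not check_style(p, v)]

# An AI-generated style block: close to the system, but not on the grid.
print(lint_styles({"padding": "14px", "color": "#e94560", "gap": "8px"}))
```

The point isn’t this particular linter — it’s that the contract is machine-checkable. An AI can be handed the token file as context, and anything it generates can be validated against it, whether or not a human ever reads the output.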

Whether this works long-term, I honestly don’t know. But I’m convinced that design systems aren’t going away — they’re evolving from component catalogs into quality contracts. And in a world where nobody reads the code, having explicit quality contracts might be the only thing standing between us and complete UI chaos.