I Use Copilot Daily - Here's How I Handle the Security Tradeoffs

After Sam’s post about AI code vulnerabilities, I wanted to share what actually works for me as a developer who uses AI coding tools every day.

I’m not going to pretend the security issues don’t exist. They do. But I’ve also seen the productivity gains firsthand, and blanket bans aren’t the answer.

My Workflow

What I Let AI Do Freely

  • Boilerplate generation - React components, API route scaffolding, test setup
  • Documentation - JSDoc, README sections, inline comments
  • Refactoring - “Convert this to TypeScript,” “Extract this into a custom hook”
  • Data transformations - Array manipulations, object reshaping, format conversions
  • CSS/styling - Tailwind classes, responsive layouts, animations

For these, I barely review beyond “does it work.” The security surface area is minimal.

What Gets Extra Scrutiny

  • Database queries - I always check for injection vectors, even with ORMs
  • API integrations - Especially anything touching payments or external services
  • User input handling - Any data from forms, URLs, or file uploads
  • File operations - Path construction, uploads, downloads

For these, I read every line and often ask the AI “what are the security implications of this approach?”
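To make that concrete, here's the kind of difference I'm looking for in database code. This is a minimal sketch using node-postgres (pg); the pool setup and the users table are just for illustration:

    import { Pool } from "pg";

    const pool = new Pool(); // connection settings come from environment variables

    // Risky: user input is interpolated straight into the SQL text.
    async function findUserUnsafe(email: string) {
      return pool.query(`SELECT * FROM users WHERE email = '${email}'`); // injection vector
    }

    // Safer: a placeholder keeps the data out of the SQL text entirely.
    async function findUserSafe(email: string) {
      return pool.query("SELECT * FROM users WHERE email = $1", [email]);
    }

The same question applies even with an ORM: does anything user-controlled ever end up in the query text, for example through a raw-query escape hatch?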

What I Never Delegate

  • Authentication flows - Login, session management, password handling
  • Authorization checks - Permission verification, role-based access
  • Cryptographic operations - Encryption, hashing, token generation
  • Secret management - API keys, credentials, environment variables

For these, I write the code myself or use well-vetted libraries.

The Prompting Difference

I’ve found that HOW you prompt matters enormously for security:

Bad prompt:
“Create a login function”

Better prompt:
"Create a login function that:

  • Uses bcrypt for password comparison
  • Implements rate limiting
  • Logs failed attempts for security monitoring
  • Returns generic error messages (don’t reveal if user exists)
  • Sets secure cookie flags"

The explicit security requirements dramatically improve output quality.
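To show what I mean, here's roughly the shape of output that prompt is steering toward. Treat it as a hedged sketch rather than production auth code: it assumes an Express-style handler and the bcrypt npm package, the declared findUserByEmail and createSessionToken helpers are placeholders, and the in-memory rate limiter stands in for something like express-rate-limit:

    import bcrypt from "bcrypt";
    import type { Request, Response } from "express";

    // Hypothetical helpers assumed to exist elsewhere in the codebase.
    declare function findUserByEmail(
      email: string
    ): Promise<{ id: string; passwordHash: string } | null>;
    declare function createSessionToken(userId: string): Promise<string>;

    // Stand-in rate limiter: track recent failures per IP in memory.
    const failedAttempts = new Map<string, { count: number; windowStart: number }>();
    const WINDOW_MS = 15 * 60 * 1000;
    const MAX_ATTEMPTS = 5;

    function isRateLimited(ip: string): boolean {
      const entry = failedAttempts.get(ip);
      if (!entry || Date.now() - entry.windowStart > WINDOW_MS) return false;
      return entry.count >= MAX_ATTEMPTS;
    }

    function recordFailure(ip: string): void {
      const entry = failedAttempts.get(ip);
      if (!entry || Date.now() - entry.windowStart > WINDOW_MS) {
        failedAttempts.set(ip, { count: 1, windowStart: Date.now() });
      } else {
        entry.count += 1;
      }
    }

    export async function login(req: Request, res: Response) {
      const { email, password } = req.body as { email: string; password: string };
      const ip = req.ip ?? "unknown";

      if (isRateLimited(ip)) {
        return res.status(429).json({ error: "Too many attempts, try again later" });
      }

      const user = await findUserByEmail(email);
      const passwordOk =
        user !== null && (await bcrypt.compare(password, user.passwordHash));

      if (!user || !passwordOk) {
        recordFailure(ip);
        console.warn("Failed login attempt", { ip }); // feed this into security monitoring
        // Generic message: never reveal whether the account exists.
        return res.status(401).json({ error: "Invalid credentials" });
      }

      const token = await createSessionToken(user.id);
      res.cookie("session", token, { httpOnly: true, secure: true, sameSite: "strict" });
      return res.status(200).json({ ok: true });
    }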

Tools in My Stack

  • Snyk in CI/CD catches most OWASP issues
  • GitHub Advanced Security for dependency scanning
  • Pre-commit hooks for secrets detection
  • ESLint security plugins for JavaScript-specific issues

None of these are AI-specific, but they become essential when you’re generating more code faster.
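For the ESLint piece, the setup can be a single config entry. This is a sketch assuming eslint-plugin-security and ESLint's flat config format; the exact export name may differ between plugin versions, so check the plugin's README for yours:

    // eslint.config.js
    import pluginSecurity from "eslint-plugin-security";

    export default [
      // Flags risky patterns such as eval with expressions, child_process calls
      // with variable input, and non-literal regular expressions.
      pluginSecurity.configs.recommended,
    ];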

The Mental Model

I treat AI like a very fast junior developer who’s read a lot of Stack Overflow. Great at patterns, terrible at threat models. Use accordingly.

What tools and processes are others using?

Alex, your tiered approach is exactly what I recommend to teams. The key insight is that not all code has the same risk profile.

Let me add some guardrails that I’ve seen work at scale:

Automated Guardrails

  1. Semgrep rules for AI-specific patterns - You can write custom rules targeting the exact vulnerability patterns AI tends to produce. For example:
     rules:
       - id: ai-sql-concatenation
         pattern: f"SELECT ... {$VAR} ..."
         message: "Possible SQL injection - use parameterized queries"
         languages: [python]
         severity: WARNING
  2. Mandatory security linters in pre-commit - Make it impossible to commit without passing security checks. This catches issues before they hit CI.

  3. PR labels for AI-generated code - Automatically tag PRs that contain AI-generated code (some IDEs can track this). Security reviewers can prioritize accordingly.

  4. Dependency pinning enforcement - AI loves suggesting npm install latest-shiny-thing. Require version pinning and vulnerability scanning (a quick pre-commit check for this is sketched below).
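That last check really can be tiny. Here's a sketch of a Node script a pre-commit hook could run from the repo root; the patterns it flags (^, ~, *, latest) are a starting point, not a complete policy:

    import { readFileSync } from "node:fs";

    const pkg = JSON.parse(readFileSync("package.json", "utf8"));
    const deps: Record<string, string> = {
      ...(pkg.dependencies ?? {}),
      ...(pkg.devDependencies ?? {}),
    };

    // Treat ^/~ ranges, wildcards, and "latest" as unpinned.
    const unpinned = Object.entries(deps).filter(
      ([, version]) => /^[~^]/.test(version) || version === "*" || version === "latest"
    );

    if (unpinned.length > 0) {
      console.error("Unpinned dependencies found:");
      for (const [name, version] of unpinned) {
        console.error(`  ${name}: ${version}`);
      }
      process.exit(1); // a non-zero exit makes the pre-commit hook block the commit
    }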

Human Guardrails

  1. Security champions per team - One person who gets extra training and reviews all security-sensitive changes.

  2. Threat modeling before coding - For any feature touching auth, payments, or PII, do 15 minutes of threat modeling BEFORE writing code. AI can’t threat model for you.

  3. “Explain to me” reviews - For security-critical code, ask the author to explain the security model. If they can’t, they shouldn’t commit it.

The Meta-Point

Your “junior developer” mental model is perfect. You wouldn’t let a junior push auth code without review. Same rules apply to AI.

The teams that fail are the ones treating AI output as if it came from a senior engineer. It doesn’t.

The individual workflow Alex describes is solid, but I want to address how this scales to a team of 40+ engineers with varying levels of security awareness.

Team-Level Policies We’ve Implemented

  1. Code ownership files for security-critical paths - We use CODEOWNERS to require security team review for any changes to auth, payments, or PII handling directories. AI or not, those paths get extra eyes.

  2. AI usage logging - We configured our IDE plugins to log when AI assistance is used. Not for surveillance - for understanding our actual AI code percentage when auditors ask.

  3. Security training refresh - We updated our security onboarding to specifically cover AI-generated code risks. New hires learn the tiered approach from day one.

  4. Shared prompt libraries - Instead of every developer figuring out security-aware prompts individually, we maintain a team wiki of “blessed” prompts for common security-sensitive operations.

The Incentive Problem

Here’s what I’ve learned: policies without incentives fail.

If you measure developers on velocity alone, they’ll use AI to go fast and skip security review. If you measure them on incidents, they’ll avoid risky work entirely.

We now track:

  • Velocity (PRs shipped)
  • Security posture (vulnerabilities found in their code)
  • Review quality (vulnerabilities caught in their reviews)

The balanced scorecard matters. Developers who ship fast AND catch security issues are the ones who get promoted.

The 80/20 of Team Security

80% of your security issues will come from 20% of your code - the security-critical paths. Focus your human review there. Let AI and automation handle the rest.

Alex’s personal workflow is great. The challenge is making 40 engineers follow it consistently.

Adding a design perspective here, because security defaults are a UX problem as much as a code problem.

Alex’s workflow relies on the developer making the right choices about what code needs scrutiny. But developers are humans, and humans take shortcuts when they’re under pressure.

Designing for Secure Defaults

The best security isn’t “review everything carefully.” It’s “make the secure path the easy path.”

Examples:

  1. Component libraries with security baked in - If your form component automatically sanitizes inputs, developers don’t need to remember to do it manually.

  2. API clients that prevent injection - If your database client only accepts parameterized queries, SQL injection becomes impossible regardless of what AI suggests.

  3. Auth wrappers that handle the hard parts - If developers use withAuth(handler) instead of rolling their own session checks, the footgun is removed.
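To make the third example concrete, here's a minimal sketch of the withAuth idea, assuming Express-style handlers and a hypothetical verifySession helper:

    import type { NextFunction, Request, Response } from "express";

    type Handler = (req: Request, res: Response, next: NextFunction) => unknown;

    // Hypothetical session check - placeholder body just for the sketch;
    // the real one would validate a signed session token.
    async function verifySession(
      cookie: string | undefined
    ): Promise<{ userId: string } | null> {
      return cookie ? { userId: "demo-user" } : null;
    }

    export function withAuth(handler: Handler): Handler {
      return async (req, res, next) => {
        const session = await verifySession(req.headers.cookie);
        if (!session) {
          // The wrapper decides the failure mode once, so handlers can't get it wrong.
          return res.status(401).json({ error: "Unauthorized" });
        }
        // Hand the verified identity to the wrapped handler.
        (req as Request & { userId?: string }).userId = session.userId;
        return handler(req, res, next);
      };
    }

    // Usage: the secure path is also the shortest path.
    // app.get("/api/me", withAuth(meHandler));

The specific implementation matters less than the fact that once the wrapper exists, AI-suggested route code that uses it inherits the session check for free.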

The AI-Specific Design Problem

AI coding tools have a UX issue: they present suggestions with equal confidence regardless of security implications. A function that concatenates strings looks just as “correct” as a function that uses parameterized queries.

What if AI tools could:

  • Flag security-sensitive suggestions with visual indicators?
  • Require explicit confirmation for auth/crypto/input handling code?
  • Default to the more secure pattern when multiple options exist?

Secure Defaults vs Security Review

Alex’s workflow is excellent for catching issues. But it’s reactive.

The proactive approach is designing your codebase so that the default path is secure. Then AI-generated code is more likely to be secure because it’s following secure patterns.

Make the pit of success the only pit available.