The hiring landscape just shifted again, and I don’t think most companies have caught up.
Karat’s 2026 Engineering Interview Trends report confirms what many of us have been sensing: a growing number of companies now allow candidates to use AI tools — Copilot, ChatGPT, Cursor — during technical interviews. This isn’t a fringe experiment anymore. It’s becoming policy at companies that collectively hire tens of thousands of engineers per year.
The Rationale Makes Sense (On Paper)
The logic is straightforward. Developers use AI tools daily at work. GitHub’s data shows that over 70% of professional developers use some form of AI coding assistant regularly. Banning these tools in interviews creates an artificial environment that doesn’t reflect how people actually build software. It’s like testing a carpenter’s skill but telling them they can’t use a power drill — sure, you’ll learn something, but is it the right something?
But Here’s the Problem
If a candidate uses Copilot to solve a LeetCode-style problem, what exactly did you learn about their ability? You tested their prompting skill and their ability to accept or reject suggestions. You didn’t test their problem-solving, their ability to reason about edge cases, or their understanding of algorithmic tradeoffs. The interview became a test of a different skill — one that matters, but maybe not the one you thought you were evaluating.
What We Tried
At my company, we ran an experiment for three months. We allowed candidates to use any AI tool during take-home assignments. The theory was that it would mirror real work and surface candidates who knew how to leverage AI effectively.
The result? Every single submission looked polished. The code was clean, well commented, and had test coverage. We couldn’t differentiate between a strong senior engineer and a junior developer who spent extra time prompting. The signal-to-noise ratio went to near zero.
So we pivoted. We switched to live pairing sessions where we watch how candidates use AI, not just the output. This was revelatory. We saw candidates who:
- Used Copilot as a starting point but then restructured the code based on their own mental model
- Blindly accepted AI suggestions without reviewing them (red flag)
- Caught subtle bugs in AI-generated code that would have caused production issues
- Knew when to turn AI off and reason from first principles
The delta between these behaviors was massive and immediately visible.
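To make the bug-catching point concrete, here’s a hypothetical snippet of the shape we’re talking about: plausible-looking AI-generated Python that reads fine on a skim but carries a subtle state-sharing bug (a mutable default argument). The function and scenario are invented for illustration, not taken from an actual interview.

```python
# Hypothetical AI-generated helper: looks clean, passes a casual read.
# Bug: the mutable default `tags=[]` is created ONCE and shared across
# calls, so tags from one event leak into every later event.
def make_event(name, tags=[]):
    tags.append("logged")
    return {"name": name, "tags": tags}

# What a strong candidate rewrites it to: a fresh list per call.
def make_event_fixed(name, tags=None):
    tags = list(tags) if tags is not None else []
    tags.append("logged")
    return {"name": name, "tags": tags}

a = make_event("deploy")
b = make_event("rollback")
print(b["tags"])  # ['logged', 'logged'] -- state leaked between calls

c = make_event_fixed("deploy")
d = make_event_fixed("rollback")
print(d["tags"])  # ['logged'] -- independent per call
```

A candidate who blindly accepts the first version ships a bug that only shows up under repeated calls in production; a candidate who catches it in review is demonstrating exactly the judgment the pairing format surfaces.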
The Emerging Interview Patterns
Across my network, I’m seeing three new interview formats gaining traction:
- “Explain this AI-generated code” — Give the candidate a block of AI-generated code with subtle issues. Can they identify the problems? Do they understand what it does?
- “Debug this AI mistake” — Present code that an LLM produced with a known flaw. Watch how they diagnose and fix it.
- “Architect a system (whiteboard, no AI)” — Strip away tools entirely and test raw system design thinking. This is where genuine understanding shows through.
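As a sketch of what the “Debug this AI mistake” format can look like in practice, here is an invented exercise: an LLM-style batching helper with one known flaw for the candidate to find. The function name and the specific bug are illustrative assumptions, not from the report or any real interview loop.

```python
# Exercise code handed to the candidate: find and fix the flaw.
def batched(items, size):
    """Split items into consecutive batches of `size`."""
    batches = []
    # Flaw: the range stops at the last FULL batch, so a trailing
    # partial batch is silently dropped.
    for i in range(0, len(items) // size * size, size):
        batches.append(items[i:i + size])
    return batches

# The fix a candidate should converge on: iterate over the full length
# and let slicing handle the short final batch.
def batched_fixed(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

print(batched([1, 2, 3, 4, 5], 2))        # [[1, 2], [3, 4]] -- the 5 is lost
print(batched_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

The value of the format is less the fix itself than watching how the candidate diagnoses it: do they write a failing case first, or just eyeball the arithmetic?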
The Speed vs. Quality Tradeoff
Here’s the tension nobody talks about enough: project-based and pairing-based interviews are dramatically better at evaluating candidates. They’re also 3x slower. A take-home assignment plus a pairing session plus a system design whiteboard takes a week of the candidate’s time and 8-10 hours of interviewer time.
Top candidates ghost slow processes. I’ve lost three excellent candidates this quarter to companies with 2-day hiring loops. When the market is competitive, speed is a feature.
My Current Framework
After a lot of iteration, here’s where we’ve landed:
- Phone screen (30 min, no AI) — Basic technical conversation, culture fit signal
- Live pairing WITH AI tools (90 min) — Real problem, real tools, we observe their process
- System design (60 min, whiteboard only, no AI) — Architecture thinking from first principles
- Culture and values conversation (45 min) — Cross-functional, not just engineering
Total candidate time commitment: ~4 hours. Total interviewer time: ~6 hours across four people. It’s not perfect, but it’s workable.
The Open Question
I’m still not satisfied with our process, and I don’t know anyone who is. The AI-enabled interviews give us better signal than the old LeetCode grind, but we’re still figuring out what “good” looks like.
Has anyone here found a hiring process that actually works in the AI era? Specifically:
- How do you calibrate evaluation when candidates have wildly different comfort levels with AI tools?
- How do you keep the process fast enough to not lose candidates?
- How do you evaluate AI-native junior developers who’ve never written code without AI assistance?
Genuinely curious what others are seeing. This feels like one of those problems that every company is solving independently when we should be sharing notes.