Okay, I need to be honest here: My startup wasted months on bad hires because interviews just “felt good.”
We’d have great conversations, people seemed smart and friendly, everyone liked them… and then 2 months in, we’d realize they couldn’t actually do the job. Or worse, they could do the job but their work style was completely incompatible with how we operated.
The Problem: Great Talkers Aren’t Always Great Doers
This became painfully clear when we hired a designer whose portfolio was stunning and who interviewed beautifully. Three months in:
- Couldn’t translate abstract requirements into concrete designs
- Needed excessive hand-holding on every project
- Defensive about feedback that contradicted their initial approach
The interview “felt right.” The actual work was a struggle.
That’s when I fell into a research spiral about assessment methods.
What I Tried (The Good, Bad, and Ugly)
Whiteboard Design Challenges
Asked candidates to “design an app for [random use case]” on the spot.
Problem: Stressful and artificial; it didn’t reflect real work conditions. We lost good candidates who simply don’t perform well under pressure.
Portfolio Reviews Only
Looked at past work, asked them to talk through their process.
Problem: Can’t verify how much was their work vs team effort. Can’t see how they think through new problems.
Take-Home Design Projects (with caveats)
Gave real brief, asked for deliverable within a week.
What worked: Saw actual design thinking, craft quality, presentation skills
What didn’t: 8-hour time commitment—lost candidates to better interview experiences
Real Work Samples (Current Approach)
Give anonymized past project brief: “Here’s a problem we faced—redesign this flow.”
Why this works better:
- Real problem, not contrived case study
- Shows actual thinking process
- See how they handle ambiguity
- Candidates get insight into our real work
The 40% Reduction Data Point
I kept coming back to this research finding: Skills-based assessments reduce bad hires by 40%.
That’s massive. If you’re making 10 hires a year and 3 of them fail (a 30% failure rate, pretty typical), skills assessments could reduce that to 1.8 failures.
At $17K per bad hire (a conservative estimate), those 1.2 avoided failures work out to roughly $20K saved annually just from better assessment.
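The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures (10 hires, 30% baseline failure rate, 40% reduction, $17K per bad hire) are the post’s assumptions, not measured data:

```python
# Back-of-the-envelope: annual savings from skills-based assessment.
# All inputs are assumptions from the post, not verified benchmarks.
hires_per_year = 10
baseline_failure_rate = 0.30   # ~typical bad-hire rate
reduction = 0.40               # claimed reduction from skills assessments
cost_per_bad_hire = 17_000     # conservative estimate, in dollars

baseline_failures = hires_per_year * baseline_failure_rate     # ~3.0
improved_failures = baseline_failures * (1 - reduction)        # ~1.8
savings = (baseline_failures - improved_failures) * cost_per_bad_hire

print(f"Bad hires avoided per year: {baseline_failures - improved_failures:.1f}")
print(f"Estimated annual savings: ${savings:,.0f}")  # ~$20,400
```

Plug in your own hire volume and failure rate; the conclusion is sensitive to both, so it’s worth checking against your actual numbers.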
My Current Hybrid Assessment Model
Here’s what I’ve evolved to for design hiring (I think the principle applies across roles):
Phase 1: Portfolio + Conversation (1 hour)
Understanding their past work and communication style
Phase 2: Work Sample (4-6 hours, compensated)
Real past project: “Our signup flow had 60% drop-off at step 3. Redesign it.”
Phase 3: Presentation + Collaboration (1.5 hours)
Present their solution to the team, get feedback, iterate in real-time
What this reveals:
- Design craft and thinking
- Communication and presentation skills
- How they respond to feedback
- Collaboration style
- Strategic thinking about business goals
The Balance: Respect Candidate Time
Here’s my biggest learning: If we wouldn’t work for free, why expect candidates to?
We started compensating for work samples:
- $150 for junior roles
- $300 for senior roles
Unexpected benefits:
- Better completion rates (80% vs 50% before)
- Stronger candidate experience
- Candidates who value their time appropriately
My Question to This Forum
What assessment methods have you actually validated as predictive?
I’m especially curious about:
- How you assess skills without overwhelming candidates
- What signals you’ve found correlate with long-term success
- Failed experiments (what seemed like it should work but didn’t)
- How you balance standardization with role-specific needs
I suspect every role has different assessment challenges. Engineers can do coding tests. Designers can do visual work. But what about PMs? Marketers? Operations folks?
What’s worked for you? What just created false confidence?