Here are two stats that sound like progress: 78% of companies now claim to use skills-based hiring, and only 28% of tech job postings require a four-year degree. Google, IBM, Microsoft, and Apple have all publicly removed degree requirements from large swaths of their job listings. On the surface, this is a sea change — the doors are theoretically open for bootcamp graduates, self-taught developers, career changers, and anyone else who was previously filtered out by a credential requirement they could not afford or did not pursue.
But I have been living inside this transition for 18 months, and I need to tell you: the reality is far messier than the headlines suggest.
The Performative Inclusion Problem
The Brookings Institution published a warning that has stuck with me — they called the trend of removing degree requirements without building rigorous skills assessment “performative inclusion.” Companies rewrite their job ads to look progressive, generate positive press, and expand their top-of-funnel applicant numbers. But the actual hiring process — the interviews, the evaluation rubrics, the decision-making criteria — remains unchanged. The visible gate comes down. The invisible gates stay firmly in place.
This is exactly what happened at my company. We removed the CS degree requirement 18 months ago. Applications from non-traditional backgrounds increased 340%. Hiring from that expanded pool? Increased only 12%.
Why the gap? Because our interview process was still designed by and for CS graduates. Algorithm puzzles on whiteboards. System design questions that assume familiarity with academic distributed systems theory. Cultural fit assessments that unconsciously reward candidates who “look like” existing team members — same schools, same vocabulary, same reference points. We took down the front door but left every interior door locked.
The Skills Assessment Gap
Here is the core problem nobody has solved: if you are not filtering by degree, what are you filtering by?
- Portfolio reviews are subjective and favor people with free time to build side projects (which correlates with socioeconomic privilege, not skill).
- Take-home projects are increasingly AI-generated, making it nearly impossible to distinguish genuine work from prompted output.
- Live coding tests disadvantage people with test anxiety, neurodivergent candidates, and anyone who does not perform well under artificial pressure.
- Certifications vary wildly in rigor — a Google Cloud Professional certification and a weekend Udemy course sit on the same resume with no way to distinguish quality.
The industry has removed the old signal (degrees) without establishing reliable new signals. We are in a measurement vacuum, and in that vacuum, bias fills the gap.
What Is Actually Working
I have been studying this both inside my organization and across my network, and a few approaches are showing real results:
- Structured work sample tests. Give candidates a realistic task drawn from the actual job — not an algorithmic brain teaser, but a genuine problem your team recently solved. Evaluate the process they follow, the questions they ask, and how they communicate their approach, not just whether the output compiles.
- Paid trial periods. Two-week paid engagements where the candidate works embedded with the team on real tasks. This is expensive and logistically complex, but it produces the most reliable hiring signal we have found. The candidate also gets to evaluate the company, which improves retention.
- Apprenticeship programs. Lower the hiring bar deliberately but invest in structured training programs — 3 to 6 months of mentored onboarding with clear milestones. This converts potential into performance in a way that a 45-minute interview cannot.
- Skills taxonomies. Precisely defining what “proficient in React” actually means with observable behaviors — can build a component from scratch, can debug a state management issue, can review a PR for performance anti-patterns — rather than relying on self-assessment ratings from 1 to 5. A rough sketch of what that can look like follows below.
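To make the last point concrete, here is a minimal sketch of how a team might encode such a taxonomy, assuming a simple in-house rubric format; the class names, behaviors, and evidence types are illustrative, not an established standard.

```python
# A minimal sketch of a skills taxonomy entry, assuming a simple in-house
# rubric format. The class names, behaviors, and evidence types are
# illustrative, not an established standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Criterion:
    behavior: str  # an observable behavior, phrased so an interviewer can verify it
    evidence: str  # how it gets assessed (work sample, pairing session, PR review)


@dataclass
class Skill:
    name: str
    criteria: List[Criterion] = field(default_factory=list)


react = Skill(
    name="React",
    criteria=[
        Criterion("Builds a component from scratch", "30-minute work sample"),
        Criterion("Debugs a state management issue", "guided debugging exercise"),
        Criterion("Reviews a PR for performance anti-patterns", "annotated PR review"),
    ],
)

# Interviewers mark each criterion as observed or not observed,
# rather than assigning a 1-to-5 self-assessment rating.
```

The point of the structure is not the code itself but the shift it forces: every skill claim has to be backed by a behavior someone can actually watch a candidate perform.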
The AI Talent Pipeline Question
One data point that caught my attention: skills-based hiring expanded the AI/ML candidate pool by 8.2x compared to degree-required postings. This suggests that degree requirements were the primary bottleneck for AI talent — a field where practical experience, open-source contributions, and self-directed research often matter more than formal credentials.
But without standardized skills assessment for AI/ML, how do you distinguish genuine competence from certificate collecting? I have interviewed candidates with six AI certifications who cannot explain the difference between supervised and unsupervised learning. The signal-to-noise ratio in AI credentials is even worse than in traditional software engineering.
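For readers who want that particular distinction spelled out: supervised learning fits a model to labeled examples, while unsupervised learning finds structure in data with no labels at all. The toy sketch below illustrates the difference using scikit-learn; the synthetic data and model choices are purely for illustration.

```python
# Toy illustration of the supervised/unsupervised distinction, using
# scikit-learn on synthetic data. Everything here is illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels exist only in the supervised setting

# Supervised: learn a mapping from features X to known labels y.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[4.0, 4.0]]))

# Unsupervised: no labels; the algorithm groups points by structure alone.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignment:", km.predict([[4.0, 4.0]]))
```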
The Question I Keep Coming Back To
Has your company actually changed how it evaluates candidates — the interview questions, the rubrics, the decision criteria, the interviewers themselves — or did you just change how you write job postings?
Because those are two very different things, and only one of them actually opens doors.