78% of Companies Say They Use Skills-First Hiring — But Are We Actually Evaluating Skills or Just Vibes?

Here is a stat that sounds like progress: 78% of companies now claim to use skills-based hiring, and only 28% of tech job postings require a four-year degree. Google, IBM, Microsoft, and Apple have all publicly removed degree requirements from large swaths of their job listings. On the surface, this is a sea change — the doors are theoretically open for bootcamp graduates, self-taught developers, career changers, and anyone else who was previously filtered out by a credential requirement they could not afford or did not pursue.

But I have been living inside this transition for 18 months, and I need to tell you: the reality is far messier than the headlines suggest.

The Performative Inclusion Problem

The Brookings Institution published a warning that has stuck with me — they called the trend of removing degree requirements without building rigorous skills assessment “performative inclusion.” Companies rewrite their job ads to look progressive, generate positive press, and expand their top-of-funnel applicant numbers. But the actual hiring process — the interviews, the evaluation rubrics, the decision-making criteria — remains unchanged. The visible gate comes down. The invisible gates stay firmly in place.

This is exactly what happened at my company. We removed the CS degree requirement 18 months ago. Applications from non-traditional backgrounds increased 340%. Hiring from that expanded pool? Increased only 12%.

Why the gap? Because our interview process was still designed by and for CS graduates. Algorithm puzzles on whiteboards. System design questions that assume familiarity with academic distributed systems theory. Cultural fit assessments that unconsciously reward candidates who “look like” existing team members — same schools, same vocabulary, same reference points. We took down the front door but left every interior door locked.

The Skills Assessment Gap

Here is the core problem nobody has solved: if you are not filtering by degree, what are you filtering by?

  • Portfolio reviews are subjective and favor people with free time to build side projects (which correlates with socioeconomic privilege, not skill).
  • Take-home projects are increasingly AI-generated, making it nearly impossible to distinguish genuine work from prompted output.
  • Live coding tests disadvantage people with test anxiety, neurodivergent candidates, and anyone who does not perform well under artificial pressure.
  • Certifications vary wildly in rigor — a Google Cloud Professional certification and a weekend Udemy course sit on the same resume with no way to distinguish quality.

The industry has removed the old signal (degrees) without establishing reliable new signals. We are in a measurement vacuum, and in that vacuum, bias fills the gap.

What Is Actually Working

I have been studying this both inside my organization and across my network, and a few approaches are showing real results:

  1. Structured work sample tests. Give candidates a realistic task drawn from the actual job — not an algorithmic brain teaser, but a genuine problem your team recently solved. Evaluate the process they follow, the questions they ask, and how they communicate their approach, not just whether the output compiles.

  2. Paid trial periods. Two-week paid engagements where the candidate works embedded with the team on real tasks. This is expensive and logistically complex, but it produces the most reliable hiring signal we have found. The candidate also gets to evaluate the company, which improves retention.

  3. Apprenticeship programs. Lower the hiring bar deliberately but invest in structured training programs — 3 to 6 months of mentored onboarding with clear milestones. This converts potential into performance in a way that a 45-minute interview cannot.

  4. Skills taxonomies. Precisely defining what “proficient in React” actually means with observable behaviors — can build a component from scratch, can debug a state management issue, can review a PR for performance anti-patterns — rather than relying on self-assessment ratings from 1 to 5.
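
To make that last point concrete, here is a minimal sketch of what a skills-taxonomy entry could look like as a structured rubric. The shape, the behavior names, and the evidence sources are illustrative assumptions, not a standard or our exact rubric.

```typescript
// Illustrative sketch of a skills-taxonomy entry: each skill is defined by
// observable behaviors an interviewer can mark as demonstrated or not,
// instead of a self-reported 1-5 rating. Names and fields are assumptions.

type Behavior = {
  id: string;
  description: string; // what the candidate is observed doing
  evidence: string;    // which exercise surfaces it
};

type SkillDefinition = {
  skill: string;
  behaviors: Behavior[];
};

const reactProficiency: SkillDefinition = {
  skill: "React",
  behaviors: [
    {
      id: "react-build-component",
      description: "Builds a working component from scratch, including props and local state",
      evidence: "pairing session",
    },
    {
      id: "react-debug-state",
      description: "Diagnoses and fixes a state management bug (stale state, unnecessary re-renders)",
      evidence: "work sample",
    },
    {
      id: "react-review-performance",
      description: "Flags performance anti-patterns in a PR (missing memoization, effects with wrong dependencies)",
      evidence: "code review exercise",
    },
  ],
};

// An interviewer's scorecard is then just the subset of behavior ids they
// actually observed, which is far easier to calibrate than a number.
type Scorecard = { candidateId: string; observed: string[] };
```
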

The AI Talent Pipeline Question

One data point that caught my attention: skills-based hiring expanded the AI/ML candidate pool by 8.2x compared to degree-required postings. This suggests that degree requirements were the primary bottleneck for AI talent — a field where practical experience, open-source contributions, and self-directed research often matter more than formal credentials.

But without standardized skills assessment for AI/ML, how do you distinguish genuine competence from certificate collecting? I have interviewed candidates with six AI certifications who cannot explain the difference between supervised and unsupervised learning. The signal-to-noise ratio in AI credentials is even worse than in traditional software engineering.

The Question I Keep Coming Back To

Has your company actually changed how it evaluates candidates — the interview questions, the rubrics, the decision criteria, the interviewers themselves — or did you just change how you write job postings?

Because those are two very different things, and only one of them actually opens doors.

The 340% increase in applications with only a 12% increase in hires is exactly our experience too. We saw nearly identical numbers when we dropped degree requirements in late 2024 — massive funnel expansion at the top, almost no change at the bottom.

We solved part of the problem by completely redesigning our interview loop. Here is what we replaced and why:

Old process → New process:

  • Algorithm whiteboard → Code review exercise. We give candidates a real pull request from our codebase (sanitized of proprietary details) and ask them to review it. We evaluate whether they catch bugs, identify performance issues, ask clarifying questions, and provide constructive feedback. This measures code reading, attention to detail, and communication — skills we actually need every day. A hypothetical example of the kind of snippet we hand out follows this list.

  • System design interview → “Debug this production issue” simulation. We present a scenario with logs, metrics, and error reports from a real incident (anonymized). The candidate walks us through their debugging approach. This tests reasoning under ambiguity, which is what system reliability actually requires — not the ability to draw boxes and arrows for a hypothetical distributed system.

  • Solo coding challenge → 30-minute pairing session. The candidate pairs with a team member on a real bug from our backlog. We evaluate collaboration, communication, and how they approach an unfamiliar codebase. Do they read the code first or start typing? Do they ask questions? Do they explain their thinking?
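
To give a feel for the code review exercise, here is a small, hypothetical stand-in for the kind of sanitized snippet we might hand a candidate. It is not from our codebase, and the seeded issues are deliberately ordinary rather than clever.

```typescript
// Hypothetical stand-in for a sanitized review exercise. The issues below are
// seeded on purpose; a strong reviewer tends to flag the quadratic lookup, the
// silent mutation of the input, and the ambiguous handling of empty accounts.

type Order = { accountId: string; total: number };
type Account = { id: string; name: string; lifetimeValue?: number };

// Annotates each account with the sum of its order totals.
function attachLifetimeValue(accounts: Account[], orders: Order[]): Account[] {
  for (const account of accounts) {
    // Seeded issue 1: O(accounts x orders) scan; a Map keyed by accountId
    // would make this linear.
    const accountOrders = orders.filter((o) => o.accountId === account.id);

    // Seeded issue 2: mutates the caller's objects instead of returning new
    // ones, which surprises callers who reuse `accounts` elsewhere.
    account.lifetimeValue = accountOrders.reduce((sum, o) => sum + o.total, 0);
  }
  // Seeded issue 3 (discussion prompt): accounts with no orders get 0; whether
  // that is intended is left ambiguous because we want candidates to ask.
  return accounts;
}

// What we score: did the candidate catch the issues, did they ask about the
// ambiguous requirement, and was the feedback specific and constructive?
```
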

These exercises measure the skills we actually need — code reading, debugging, communication, and collaborative problem-solving. None of them require a CS degree to perform well. All of them reward practical experience.

The result: our non-traditional background hire rate went from 8% to 31% within two quarters. And retention for those hires is actually higher than our overall average — 94% vs 87% at the one-year mark — which suggests the new process is identifying genuinely strong candidates, not lowering the bar.

But here is the part people underestimate: it required our interviewers to be completely retrained on evaluation criteria. That was its own 3-month project. We had to build new rubrics, run calibration sessions where multiple interviewers scored the same candidate independently and compared notes, and actively de-bias the cultural fit assessment (which we renamed “team collaboration fit” and gave specific behavioral criteria). Most companies skip this step, which is why removing the degree requirement alone does not change outcomes.
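
For what it is worth, the calibration check itself can be almost trivially simple. Here is a minimal sketch, assuming interviewers score the same candidate independently on shared rubric dimensions; the dimension names, the 1-to-4 scale, and the divergence threshold are illustrative assumptions, not our actual rubric.

```typescript
// Minimal sketch of a calibration check: several interviewers score the same
// candidate independently, and we flag any rubric dimension where the scores
// diverge enough to warrant a discussion. Values here are illustrative.

type RubricScores = Record<string, number>; // dimension -> score (e.g. 1-4)

function dimensionsNeedingDiscussion(
  scorecards: RubricScores[],
  maxSpread = 1,
): string[] {
  const dimensions = Object.keys(scorecards[0] ?? {});
  return dimensions.filter((dim) => {
    const scores = scorecards.map((card) => card[dim]);
    return Math.max(...scores) - Math.min(...scores) > maxSpread;
  });
}

// Example: three interviewers scored the same code review exercise.
const flagged = dimensionsNeedingDiscussion([
  { "catches defects": 3, "asks clarifying questions": 2, "constructive feedback": 4 },
  { "catches defects": 3, "asks clarifying questions": 4, "constructive feedback": 3 },
  { "catches defects": 2, "asks clarifying questions": 1, "constructive feedback": 4 },
]);
// flagged === ["asks clarifying questions"]; that dimension goes to a calibration session.
```
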

The microcredential landscape is wild right now, and I think it is the single biggest obstacle to making skills-based hiring actually work at scale.

Over the past year I have reviewed candidates with 15+ certificates from Coursera, Udemy, LinkedIn Learning, various bootcamps, and vendor-specific programs. Some of these candidates are excellent — genuinely skilled, deeply knowledgeable, able to articulate design decisions and trade-offs. Some of them cannot explain the basics of what they are certified in. I had a candidate with an “Advanced React” certificate who could not explain the difference between state and props. Another with a “Machine Learning Specialization” who did not know what overfitting means.

The certification market has been captured by credential mills that optimize for completion rates, not competence. The business model is selling certificates, not verifying skills. Platforms want high completion rates because it drives subscription renewals and positive reviews. So courses get easier, assessments get more forgiving, and the certificate becomes meaningless as a hiring signal.

This creates a brutal information asymmetry problem:

  • Candidates invest time and money in certifications believing they signal competence to employers.
  • Employers see a wall of certificates and have no way to distinguish rigorous training from checkbox completion.
  • Quality programs get drowned out by volume — a candidate with one genuinely rigorous certification looks less impressive on paper than a candidate with twelve easy ones.

Until there is a standardized, industry-recognized skills assessment — something with the rigor and portability of the bar exam, the CPA exam, or professional engineering licensure — skills-based hiring will remain fundamentally subjective. Every company ends up building their own assessment, which is expensive, inconsistent, and often just as biased as the degree requirement it replaced.

I am not saying we need occupational licensing for software developers. The bar exam analogy is imperfect because software engineering evolves too fast for a static test. But we need something more rigorous than “I completed a 6-week online course and here is my PDF certificate.” Some kind of practical, regularly updated, industry-governed assessment that actually tests whether someone can do the work.

Without that shared standard, we are each building our own measuring sticks and wondering why our measurements do not agree.

As someone who is self-taught and spent years wrestling with impostor syndrome about not having a CS degree, I care deeply about this conversation. I want to share the candidate-side perspective because I think it gets lost in the hiring process design discussions.

The degree removal was meaningful. I would not have applied to half my past jobs if the listing required a CS degree. Not because I thought I could not do the work — I knew I could — but because I assumed a degree requirement meant they wanted a specific type of person and I was not it. Removing that line from the job posting is not just a policy change; it is a psychological signal that says “we are open to people like you.” That signal matters, even if the rest of the process has not caught up yet.

But the interview process still felt designed for CS grads. I have done interviews where the first question was “describe the time complexity of quicksort.” I can sort data efficiently in production code, but I do not have the academic vocabulary to discuss algorithmic complexity the way someone who took a data structures class would. The knowledge is there; the language is different. And when your evaluation rubric rewards the academic language, you are still filtering for the degree even if you removed the requirement.

The best interview experience I ever had was a company that gave me access to their staging environment, pointed me to a bug report, and said “find and fix this bug.” No algorithms, no trick questions, no whiteboard, no LeetCode. Just real work in a real codebase.

I spent 90 minutes reading their code, understanding the architecture, tracing the bug to a race condition in their WebSocket handler, and writing a fix with a test. Then I spent 30 minutes explaining my approach to two engineers on the team — where I looked first, why I went down certain paths, what I tried that did not work, and how I verified the fix.
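
I cannot share their code, but here is a hypothetical illustration of the same class of bug: a message handler that reads shared state, awaits, then writes it back, so two messages arriving close together lose an update. The handler and store names are invented for the sketch; the fix shown (serializing updates per key) is one common approach, not necessarily the one I used there.

```typescript
// Hypothetical illustration of the bug class, not that company's actual code.

type Store = {
  get(key: string): Promise<number>;
  set(key: string, value: number): Promise<void>;
};

// Racy version: read-modify-write with an await in the middle. Two messages
// for the same key can both read the old value, and one update is lost.
async function handleMessageRacy(store: Store, key: string, delta: number) {
  const current = await store.get(key);
  await store.set(key, current + delta);
}

// One common fix: serialize updates per key so each read-modify-write runs
// only after the previous one for that key has finished. A production version
// would also evict settled entries from the map.
const pending = new Map<string, Promise<void>>();

function handleMessageSerialized(store: Store, key: string, delta: number): Promise<void> {
  const previous = pending.get(key) ?? Promise.resolve();
  const next = previous.then(async () => {
    const current = await store.get(key);
    await store.set(key, current + delta);
  });
  pending.set(key, next);
  return next;
}

// The accompanying test fires two concurrent updates for the same key and
// asserts that both are reflected in the stored value.
```
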

I got the offer. And that company also had the best engineering team I have ever worked on — turns out hiring for practical skills gets you practical engineers. The team was diverse in background (two bootcamp grads, one career changer from finance, one self-taught like me, and two CS grads), and the quality of work was exceptional because everyone was selected for their ability to do the job, not their ability to perform in an artificial testing environment.

The one thing I would add to Keisha’s list of what works: make the evaluation criteria transparent to candidates. Tell people what you are testing and how you are scoring it. The mystification of the interview process disproportionately hurts non-traditional candidates who do not have a network of CS grads to coach them on what companies “really” look for. Transparency is an accessibility feature.