We Pivoted to Skills-Based Hiring, But Are We Assessing the Wrong Skills for Remote Teams?

Two years ago, our edtech startup made a deliberate shift: we eliminated degree requirements and moved to skills-based hiring. It was transformative. We started finding incredible engineering talent from bootcamps, self-taught developers, and career-switchers who would have been filtered out by traditional resume screens.

The quality of our technical hires improved dramatically. Fewer false negatives. More diverse backgrounds. People who could actually ship code rather than just talk about algorithms they memorized for interviews.

But six months in, I started noticing a pattern that troubled me.

The Pattern Nobody Warned Me About

Some of our most technically brilliant engineers—people who aced our coding challenges, who demonstrated deep technical knowledge, who had impressive portfolios—were struggling. Not with the code. With remote work itself.

I’m not talking about Zoom fatigue or missing office snacks. I’m talking about fundamental work patterns:

  • Waiting for permission instead of moving forward: Engineers who could solve complex algorithmic problems but got blocked waiting for clarification on simple product questions that could be resolved with reasonable assumptions.

  • Synchronous dependency in an async world: Brilliant engineers who needed immediate feedback on every decision, turning our async-first culture into an all-day Slack conversation.

  • Inability to work through ambiguity: Strong technical contributors who excelled when given crisp requirements but floundered during discovery phases when we were figuring out what to build.

One example that crystallized this for me: We hired an engineer—let’s call him Alex—who crushed our technical assessment, scoring in the top 5% of all candidates we’d seen. Strong algorithmic thinking, clean code, great architectural instincts.

But Alex struggled in our remote environment. He needed near-constant check-ins. He’d send a Slack message, then wait hours for a response instead of making a judgment call. He’d get stuck on ambiguous product requirements rather than documenting assumptions and moving forward.

In an office, Alex would have been fine. He could have walked over to my desk or grabbed a product manager for a quick conversation. But in our distributed, async-heavy team, these collaboration patterns created bottlenecks.

We Were Evaluating Half the Picture

Here’s what I realized: We had optimized our hiring process to identify technical skills, but we hadn’t validated remote work competencies at all.

The skills that make someone successful remotely—self-direction, comfort with ambiguity, proactive communication, documentation discipline, autonomous decision-making—are completely orthogonal to technical ability.

They’re not “soft skills.” They’re critical work skills. And we weren’t assessing them.

According to recent data, 36% of job openings now include remote or hybrid options, and remote hiring is 29% faster for technical roles. The market has adapted to remote work. But I’m not convinced we’ve adapted our hiring criteria to match.

What I’m Trying Now

I’ve started building a dual evaluation framework:

  1. Technical proficiency (what we were already doing well)
  2. Remote work readiness (what we were missing)

For remote readiness, I’m looking at:

  • Past autonomous work: Have they built side projects? Contributed to open source? Worked independently before?
  • Communication patterns in interviews: Do they ask clarifying questions asynchronously during take-home projects? Or do they immediately jump on a call?
  • Comfort with ambiguity: During case studies, do they document assumptions and move forward? Or wait for perfect information?
  • Evidence of self-unblocking: Can they describe times they were stuck and figured it out themselves?

This isn’t about filtering out people who need support—good remote organizations should provide structure. But there’s a baseline level of self-direction that remote work requires, and I don’t think better onboarding alone can close that gap.
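To make the dual framework concrete, here’s a minimal sketch of how the two evaluations could be scored independently. The dimension names, weights, and pass bars are all illustrative assumptions on my part, not a validated instrument—the point is the structure: two separate bars, so a top-tier technical score can’t mask weak remote readiness.

```python
from dataclasses import dataclass

# Illustrative remote-readiness dimensions (assumed names, not a standard).
REMOTE_DIMENSIONS = [
    "autonomous_work",      # side projects, open source, prior independent work
    "async_communication",  # handles clarifying questions asynchronously
    "ambiguity_comfort",    # documents assumptions and moves forward
    "self_unblocking",      # resolves blockers without escalating every time
]

@dataclass
class Evaluation:
    technical: float          # 0-5, from the existing coding assessment
    remote: dict[str, float]  # 0-5 per remote-readiness dimension

    def remote_score(self) -> float:
        # Simple average across dimensions; every dimension counts,
        # so one strong signal can't hide sync-dependency elsewhere.
        return sum(self.remote[d] for d in REMOTE_DIMENSIONS) / len(REMOTE_DIMENSIONS)

    def passes(self, tech_bar: float = 3.5, remote_bar: float = 3.0) -> bool:
        # Two independent bars rather than one blended score: strong
        # technical results cannot compensate for low remote readiness.
        return self.technical >= tech_bar and self.remote_score() >= remote_bar

# A hypothetical "Alex"-shaped profile: excellent technically, weak remotely.
alex = Evaluation(
    technical=4.8,
    remote={"autonomous_work": 2.0, "async_communication": 2.5,
            "ambiguity_comfort": 2.0, "self_unblocking": 2.5},
)
print(alex.passes())  # False: clears the technical bar, not the remote one
```

Whether you gate on two bars or blend into one score is itself a design decision; I lean toward separate bars precisely because blending is how we ended up hiring for half the picture.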

The Question That Keeps Me Up

Are we selecting for technical skills while accidentally filtering against the collaboration patterns that make remote teams successful?

I’d love to hear from other engineering leaders: How do you evaluate remote work competencies during hiring? Have you seen similar patterns? And critically—how do you do this without it becoming a subjective “culture fit” filter that replicates bias?

Because the shift to skills-based hiring was a huge step forward for equity and access. I don’t want to undo that progress. But I also can’t ignore that some technically strong engineers struggle in remote environments, and I owe it to both them and the team to get this right.

What are you seeing in your organizations?



Keisha, this resonates so deeply with what we’ve experienced in financial services. We made the same shift to skills-based hiring about 18 months ago, and I’ve seen exactly this pattern.

What surprised me most: some of the engineers who thrived at Intel and Adobe—companies known for engineering excellence—struggled when they joined our distributed fintech team. Not because they weren’t brilliant. But because the collaboration model was fundamentally different.

The Decision-Making Gap

In financial services, we can’t afford to wait for daily standups to unblock critical decisions. When a payment processing issue occurs, or we need to implement a compliance change, engineers need to assess the situation, make informed decisions, and document their reasoning—often within hours, not days.

I’ve noticed that engineers who excel in co-located environments sometimes expect immediate access to subject matter experts. In an office, that works. You grab the senior architect for 10 minutes. Problem solved.

But in a distributed team across three time zones, that 10-minute conversation might require a 24-hour async exchange. Engineers who can’t tolerate that latency create bottlenecks.

What We’re Trying

I’ve started incorporating behavioral interview questions specifically focused on past remote or autonomous work experiences:

  • “Tell me about a time you were blocked on a technical decision and couldn’t get immediate guidance. How did you move forward?”
  • “Describe a project where requirements were ambiguous. How did you handle the uncertainty?”
  • “Walk me through how you approach documentation. Can you show me examples?”

The answers are incredibly revealing. Strong remote candidates talk about making assumptions, documenting them, and moving forward. They describe building small proofs-of-concept to validate approaches. They show you their README files and decision logs.

Weaker candidates focus on how they “escalated to get clarity” or “scheduled meetings with stakeholders.” Those aren’t wrong behaviors—but if that’s the only approach, it signals sync-dependency.

The Assessment Gap

One thing I’m still figuring out: how do we assess async communication skills during interviews?

We’ve experimented with take-home projects that include intentionally ambiguous requirements. We observe: Do candidates email clarifying questions and wait for responses? Or do they document assumptions in their README and proceed?

But I worry this favors people who’ve already worked remotely and penalizes those who just haven’t had the opportunity yet.

Question for the thread: Has anyone tried structured assessments that simulate remote work collaboration patterns? I’m thinking something like a multi-day async code review exercise, or a technical design document that requires back-and-forth in Slack or GitHub comments.

I want to identify the capability for self-direction, not just prior remote work experience.



Oh wow, this hits close to home. My failed startup had exactly this problem, and I didn’t realize it until it was too late.

The Startup Autopsy

We hired incredibly talented engineers—people who could build anything. But they expected product clarity that a pre-product-market-fit startup just doesn’t have. The ambiguity was part of the work, not a bug to be fixed.

I remember one engineer—brilliant coder—who would get frustrated every time we pivoted based on customer feedback. “But you said we were building X!” And I’d respond, “Yeah, but the customers told us they actually need Y.”

In retrospect, we needed people who could thrive in ambiguity, not just tolerate it. People who got energized by “we’re not sure yet, let’s figure it out together” rather than stressed by it.

The remote aspect made it worse. In person, I could sense the frustration and course-correct. Remotely, it festered until people quit.

What I Look for Now

At my current design systems role, I specifically look for signals of self-directed work:

Side projects are a huge green flag. If someone built something from scratch—even if it’s small—they’ve demonstrated:

  • Starting without perfect requirements
  • Making design decisions independently
  • Shipping something complete
  • Learning by doing

Open source contributions matter too. Not necessarily giant PRs, but meaningful async collaboration. Did they participate in GitHub discussions? Write clear issue descriptions? Respond thoughtfully to code review feedback?

These aren’t just “nice to haves”—they’re direct evidence of remote work skills.

The Question I Can’t Answer

Luis, your question about assessing capability vs. prior experience is the one that keeps me up.

I’m a bootcamp grad. I didn’t have open source contributions when I was applying for jobs because I didn’t know that world existed. I got my first break because someone saw my design portfolio and took a chance.

How do we screen for self-direction without accidentally screening out people who haven’t had access to the environments where they could develop those skills?

I don’t have a good answer. But I think about it constantly, especially as someone who benefited from someone giving me a shot despite gaps in my background.

Maybe the answer is being more explicit? Like, literally putting “This role requires comfort with ambiguity and autonomous decision-making” in job descriptions? And then asking candidates to self-assess and share examples—even if those examples aren’t from traditional tech work?

I’ve seen self-directed work ethic in someone who taught themselves guitar via YouTube, or who organized a community event from scratch. The skill transfers. But we have to ask the right questions to surface it.



From the product side, this conversation is fascinating because it directly impacts product velocity.

The Product Perspective

Engineers who need constant clarification slow down the entire discovery process. When we’re doing customer research and iterating rapidly, I need engineers who can:

  • Listen to a customer conversation and form their own hypotheses
  • Build quick prototypes based on incomplete information
  • Make reasonable assumptions about edge cases
  • Ask questions asynchronously rather than blocking on every decision

The difference in velocity between teams with high self-direction vs. low self-direction is staggering. I’ve seen it firsthand.

How We Assess This

At our fintech startup, we’ve started using case studies during the interview process that specifically test for comfort with ambiguity.

We give candidates a product scenario with intentionally incomplete information. Something like:

“Our enterprise customers are complaining about the onboarding flow taking too long. Here’s some basic data. Design a solution.”

Strong candidates will:

  • Document their assumptions clearly (“I’m assuming ‘too long’ means more than 10 minutes based on industry benchmarks”)
  • Ask 2-3 clarifying questions but then move forward anyway
  • Show their reasoning process
  • Present multiple options with tradeoffs

Weak candidates will:

  • Get stuck trying to gather perfect information before starting
  • Ask 15+ questions before proposing anything
  • Wait for explicit permission to make assumptions

This isn’t about penalizing thoughtful analysis. It’s about identifying people who can make progress in the face of uncertainty.

The Market Signal

Keisha mentioned that remote hiring is 29% faster for technical roles. I think that’s a market efficiency signal—companies and candidates who’ve adapted to remote-first evaluation processes are moving faster.

But the question is: are we selecting for hiring speed or for remote fitness?

If we’re just optimizing for “who can get through our process fastest,” we might be selecting for people who are good at interviewing remotely, not people who are good at working remotely.

One Concrete Suggestion

Maya’s point about being explicit in job descriptions resonates. What if we literally added a section like:

This role requires:

  • Comfort making technical decisions with incomplete information
  • Ability to work asynchronously across time zones
  • Self-direction and autonomous problem-solving
  • Strong written communication and documentation skills

And then we ask candidates to self-assess on each dimension and provide specific examples—from anywhere in their life, not just prior tech jobs.

Someone who taught themselves a musical instrument via online resources has demonstrated remote learning and self-direction. Someone who organized a community fundraiser has demonstrated autonomous project management. These skills transfer.

We just have to surface them during interviews.



This discussion highlights something I’ve been thinking about at the strategic level: we’re treating hiring as a purely screening problem when it’s actually an organizational design problem.

The Organizational Design Angle

Not all engineering roles require the same level of autonomy and self-direction. The question isn’t just “how do we hire for remote readiness”—it’s “what level of remote readiness does each role actually require?”

Consider:

Platform teams that build internal developer tools need extremely high autonomy. They’re often working ahead of product teams, making architectural decisions with incomplete requirements, and operating with minimal day-to-day oversight. If you can’t self-direct in that environment, you’ll struggle.

Feature teams with embedded product managers can provide significantly more structure. Daily standups, clear sprint goals, tight PM collaboration. Engineers in these roles still need some self-direction, but the organization provides more scaffolding.

The mistake I see companies make: treating “self-direction” as a universal hiring bar when it should be role-specific.

The Infrastructure Question

Here’s what concerns me about this entire conversation: we’re focusing on hiring for self-direction while ignoring that good remote organizations create systems that enable autonomy.

If engineers are blocked waiting for answers, that’s often an infrastructure problem, not just a hiring problem:

  • Do we have comprehensive documentation?
  • Do we have clear decision-making frameworks?
  • Do we have async communication rituals that work?
  • Do we have well-defined areas of ownership?
  • Do we have the right tooling for async collaboration?

I’ve seen companies blame “lack of self-direction” when the real issue was that the organization provided no clear way for people to unblock themselves. No docs. No decision logs. No clear ownership. No wonder people waited for permission.

Hiring for self-direction shouldn’t be an excuse for poor remote infrastructure.

A Framework I Use

When designing roles, I think about the autonomy-support spectrum:

High autonomy roles (principal engineers, platform architects, research teams):

  • Hire for demonstrated self-direction
  • Provide strategic guidance but minimal day-to-day structure
  • Expect people to define their own success metrics

Moderate autonomy roles (senior engineers, product-focused teams):

  • Hire for capability to self-direct with some support
  • Provide clear goals and checkpoints
  • Build systems that enable autonomous decision-making

Structured roles (junior engineers, highly specialized roles):

  • Don’t over-index on self-direction during hiring
  • Invest in mentorship, pairing, and structured onboarding
  • Gradually build autonomy over time

The Equity Dimension

Maya and Luis both touched on this, but I want to be explicit: if we make “demonstrated self-direction” a hiring requirement without nuance, we risk replicating the same access barriers that skills-based hiring was supposed to solve.

People who’ve had access to:

  • Remote work experience already
  • Open source communities
  • Side project time and resources
  • Bootcamps that teach async collaboration

…will have an advantage over people who haven’t had those opportunities, even if the underlying capability is the same.

That’s not hypothetical. It’s a real equity concern.

My Recommendation

  1. Be explicit about autonomy requirements in job descriptions—David’s suggestion is solid.

  2. Match hiring criteria to actual role needs—not every role needs the same level of self-direction.

  3. Invest in remote infrastructure—don’t use “they’re not self-directed enough” as a crutch for poor organizational systems.

  4. Look for transferable evidence of autonomy—Maya’s examples (self-taught skills, community organizing) are spot-on.

  5. Build systems that support growing autonomy—junior engineers aren’t self-directed on day one, and that’s okay if you have the infrastructure to support their development.

The shift to skills-based hiring was progress. Now we need to be thoughtful about which skills actually matter for which roles—and honest about whether we’re screening for capability or just for prior access to specific environments.

