Time to First Commit, First Feature, First On-Call: The Metrics That Actually Matter

If you want to improve onboarding, you need to measure it. But most companies track the wrong things.

What companies typically measure:

  • ✅ Completed compliance training
  • ✅ Signed all documents
  • ✅ Has badge access
  • ✅ Attended orientation

These are checkbox metrics. They tell you bureaucratic requirements were met, not whether someone is ramping successfully.

What actually matters:

The Three Milestones That Predict Success

1. Time to First Commit

Definition: Days from start date to first merged code (merged, not just a PR opened).

Target:

  • Senior engineers: Day 3-5
  • Junior engineers: Day 5-10

Why it matters:

First commit proves the system works. If someone can’t merge code in week 1, something is broken:

  • Environment issues
  • Access problems
  • No clear starter task
  • Absent mentor

In our own tracking, engineers who commit by day 5 have significantly better six-month outcomes than those who commit after day 10.

2. Time to First Feature

Definition: Days from start date to owning and shipping a feature end-to-end (owning it, not just implementing a ticket).

Target:

  • Senior engineers: 2-3 weeks
  • Junior engineers: 4-6 weeks

Why it matters:

First feature tests full-cycle capability:

  • Understanding requirements
  • Design decisions
  • Implementation
  • Testing
  • Deployment
  • Monitoring/iteration

This is the real test of becoming a contributing team member.

3. Time to First On-Call

Definition: Days from start date to completing first on-call rotation (with backup support).

Target:

  • Most roles: Day 60-90
  • Security/infra: Day 90-120

Why it matters:

On-call readiness indicates operational understanding:

  • Can diagnose issues independently (or know who to escalate to)
  • Understands system failure modes
  • Knows runbooks and incident processes
  • Trusted with production responsibility

Leading vs Lagging Indicators

| Metric Type | Examples | When to Track |
| --- | --- | --- |
| Leading | Questions asked, mentor sessions attended, documentation contributions | Daily in weeks 1-4 |
| Coincident | First commit, first PR, first feature | Weekly in months 1-2 |
| Lagging | On-call readiness, manager confidence, peer recognition | Monthly in months 2-4 |

How We Track This

Automated:

  • First commit: Git events to Slack notification
  • First feature: Jira ticket closed with “new hire” label
  • PR cycle time: Pulled from GitHub automatically
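The first-commit notification can be sketched as a small webhook handler. Everything below is illustrative rather than our production code: the `NEW_HIRES` roster, the payload shape (modeled on a GitHub push event), and the notification string are all assumptions.

```python
# Hypothetical roster of new-hire GitHub usernames the pipeline watches.
NEW_HIRES = {"asmith": {"first_commit_seen": False}}

def handle_push_event(payload: dict) -> list:
    """Return notification strings for a new hire's first pushed commit.

    `payload` mirrors the rough shape of a GitHub push webhook:
    {"pusher": {"name": ...}, "commits": [{"id": ...}, ...]}.
    """
    user = payload.get("pusher", {}).get("name")
    record = NEW_HIRES.get(user)
    if record and not record["first_commit_seen"] and payload.get("commits"):
        record["first_commit_seen"] = True
        sha = payload["commits"][0]["id"][:7]
        return [f"first commit by {user} ({sha})"]  # posted to Slack in the real pipeline
    return []

event = {"pusher": {"name": "asmith"},
         "commits": [{"id": "abc1234def567", "message": "Fix typo"}]}
print(handle_push_event(event))  # one notification on the first push
print(handle_push_event(event))  # [] on subsequent pushes
```

The handler is deliberately idempotent: the milestone fires once, so repeat pushes don't re-notify the channel.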

Manual:

  • Manager confidence score: 1-10 rating at day 30/60/90
  • New hire satisfaction: Survey at day 30/60/90
  • Peer assessment: 360 feedback at day 90

The Dashboard

We maintain an onboarding health dashboard that shows:

  • Average time-to-first-commit (current vs historical)
  • Distribution of onboarding trajectories
  • Red flags (anyone past day 7 without commit?)
  • Cohort comparisons (is this month’s cohort on track?)
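The median line and the day-7 red-flag check on that dashboard reduce to a few lines of logic. The `hires` records below are made-up stand-ins for what the pipeline stores; the field names are assumptions.

```python
from datetime import date
from statistics import median

# Hypothetical records: start date and date of first merged commit (None if none yet).
hires = [
    {"name": "asmith", "start": date(2024, 3, 4), "first_commit": date(2024, 3, 7)},
    {"name": "bjones", "start": date(2024, 3, 4), "first_commit": None},
]

def days_to_first_commit(h):
    """Days from start to first merged commit, or None if not yet committed."""
    return (h["first_commit"] - h["start"]).days if h["first_commit"] else None

def red_flags(hires, today, threshold_days=7):
    """New hires past the threshold with no merged commit yet."""
    return [h["name"] for h in hires
            if h["first_commit"] is None and (today - h["start"]).days > threshold_days]

committed = [days_to_first_commit(h) for h in hires if h["first_commit"]]
print("median days to first commit:", median(committed))
print("red flags:", red_flags(hires, today=date(2024, 3, 15)))  # ['bjones']
```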

What the Data Tells Us

After two years of tracking:

  • Our median first commit moved from day 8 to day 3
  • First feature moved from week 6 to week 3
  • Early attrition dropped from 18% to 9%
  • Manager satisfaction with onboarding went from 5.2/10 to 8.1/10

You can’t improve what you don’t measure. These three milestones give you a clear signal on whether your onboarding changes are working.

Here’s how we track these metrics at scale (80+ engineers, 40+ hires/year):

The Tooling Stack

Automated data collection:

  • GitHub webhooks → First commit, PR cycle times, review participation
  • Jira automation → First feature assigned, completed, time in each stage
  • PagerDuty → On-call rotation participation, incident response metrics
  • Slack events → Questions asked in help channels (engagement signal)

Manual data collection:

  • Lattice → 30/60/90 day check-in surveys
  • Manager forms → Confidence scores at milestones
  • Buddy forms → Qualitative feedback on onboarding experience

The aggregation layer:

We use a simple dashboard (Retool + PostgreSQL) that pulls from all sources and shows:

  1. Individual view: Where is this person in their onboarding journey?
  2. Cohort view: How is this month’s hiring cohort performing?
  3. Trend view: Are we getting better or worse over time?
  4. Alert view: Who needs intervention?
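The cohort view is essentially one GROUP BY. Here is a sketch using an in-memory SQLite database and an assumed `onboarding_events` schema in place of our Retool + PostgreSQL setup; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE onboarding_events (
        hire TEXT, cohort TEXT, days_to_first_commit INTEGER
    );
    INSERT INTO onboarding_events VALUES
        ('asmith', '2024-03', 3),
        ('bjones', '2024-03', 6),
        ('cchen',  '2024-02', 8);
""")

# Cohort view: average time-to-first-commit per monthly hiring cohort.
rows = conn.execute("""
    SELECT cohort, AVG(days_to_first_commit) AS avg_days, COUNT(*) AS hires
    FROM onboarding_events
    GROUP BY cohort
    ORDER BY cohort
""").fetchall()

for cohort, avg_days, n in rows:
    print(f"{cohort}: {avg_days:.1f} avg days to first commit ({n} hires)")
```

The trend view is the same query ordered over time; the individual view is a WHERE clause on one hire.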

Red Flag Triggers

We’ve codified when to escalate:

| Signal | Trigger | Action |
| --- | --- | --- |
| No commit by day 7 | Automatic | Buddy + manager notified |
| No PR merged by day 14 | Automatic | Engineering lead escalation |
| Manager confidence <5 at day 30 | Automatic | Skip-level 1:1 scheduled |
| New hire satisfaction <6 at day 30 | Automatic | HR partner + manager meeting |
| Questions declining after week 2 | Manual review | Could be disengagement or confusion |
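The automatic triggers amount to a small rule set. A sketch, where the field names and threshold semantics (e.g. "by day 7" meaning "strictly past day 7") are my reading of the table, not our exact implementation:

```python
def escalations(hire: dict) -> list:
    """Evaluate the automatic red-flag rules against one hire's metrics.

    `hire` keys (assumed): days_since_start, has_commit, has_merged_pr,
    manager_confidence (1-10 at day 30), satisfaction (1-10 at day 30).
    """
    actions = []
    if hire["days_since_start"] > 7 and not hire["has_commit"]:
        actions.append("notify buddy + manager")
    if hire["days_since_start"] > 14 and not hire["has_merged_pr"]:
        actions.append("escalate to engineering lead")
    if hire["days_since_start"] >= 30:
        if hire["manager_confidence"] < 5:
            actions.append("schedule skip-level 1:1")
        if hire["satisfaction"] < 6:
            actions.append("HR partner + manager meeting")
    return actions

hire = {"days_since_start": 15, "has_commit": True, "has_merged_pr": False,
        "manager_confidence": 7, "satisfaction": 8}
print(escalations(hire))  # ['escalate to engineering lead']
```

Codifying the rules this way keeps escalation consistent: nobody has to remember the thresholds, and "did we intervene?" becomes auditable.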

What We’ve Learned

1. The data reveals patterns you can’t see otherwise

We discovered one team consistently had slower onboarding. Investigation revealed their documentation was outdated and their mentor wasn’t available. Fixed it, numbers improved.

2. Comparing across teams creates healthy accountability

When Team A sees Team B onboards engineers 40% faster, they ask “what are you doing differently?” Knowledge sharing happens naturally.

3. Individual variance matters

Two engineers starting the same day can have wildly different trajectories. The metrics help you figure out why - is it the person, the team, the project, or the support structure?

4. Qualitative + quantitative together

The numbers tell you something is wrong. The surveys and 1:1s tell you what. You need both.

The distinction between leading and lagging indicators is crucial for actionable improvement.

Lagging indicators tell you what happened:

  • “New hire took 5 months to full productivity”
  • “Early attrition was 15% this year”
  • “Manager confidence at day 90 was 6.5/10”

Useful for retrospectives. Not useful for intervention.

Leading indicators tell you what’s happening:

  • “New hire hasn’t committed code by day 5”
  • “Questions per day dropped from 8 to 2 in week 2”
  • “Mentor canceled 3 of 5 scheduled sessions”

Actionable. You can intervene while it still matters.

Our Leading Indicator Framework

Week 1 signals:

| Signal | Healthy | Warning | Red Flag |
| --- | --- | --- | --- |
| Environment working | Day 1 | Day 2-3 | Day 4+ |
| First commit | Day 3-5 | Day 6-7 | Day 8+ |
| Mentor sessions | 4+ hours | 2-4 hours | <2 hours |
| Questions asked | 10+ | 5-10 | <5 |
| 1:1 with manager | Completed | Rescheduled | Canceled |

Week 2-4 signals:

| Signal | Healthy | Warning | Red Flag |
| --- | --- | --- | --- |
| PR merged | Week 1-2 | Week 2-3 | Week 3+ |
| Code review given | Week 2 | Week 3 | Week 4+ |
| Team meeting participation | Active | Listening | Absent |
| Documentation contribution | Week 3-4 | Week 5+ | None |

The intervention protocol:

  1. Warning signal → Buddy checks in (informal)
  2. Red flag signal → Manager escalation (formal)
  3. Multiple red flags → Skip-level + HR (support plan)
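The protocol can be sketched for a single signal. The band boundaries below follow the week-1 first-commit row (day 3-5 healthy, day 6-7 warning, day 8+ red flag); the intervention strings are illustrative.

```python
def classify_first_commit(day: int) -> str:
    """Band a first-commit day using the week-1 signal thresholds."""
    if day <= 5:
        return "healthy"
    if day <= 7:
        return "warning"
    return "red flag"

# Protocol step for each band ("multiple red flags" needs cross-signal state,
# so it is not modeled here).
INTERVENTION = {
    "healthy": "none",
    "warning": "buddy checks in (informal)",
    "red flag": "manager escalation (formal)",
}

for day in (3, 6, 9):
    band = classify_first_commit(day)
    print(f"day {day}: {band} -> {INTERVENTION[band]}")
```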

Why this matters:

A struggling new hire at day 7 can be helped. A struggling new hire at day 60 might be a lost cause. The leading indicators are your early warning system.

We’ve caught and corrected several potential bad outcomes by watching these signals. In each case, the problem was fixable - they just needed someone to notice and act.

The “Day 3 first commit” target is a forcing function that improves everything upstream.

Here’s what I mean:

When we set “first commit by day 3” as an explicit goal, it forced us to answer:

  • Is the dev environment actually ready on day 1?
  • Do we have appropriate starter tasks queued?
  • Is the buddy actually available and prepared?
  • Can someone go from “just joined” to “code merged” in 3 days?

If any of those answers was “no,” we couldn’t hit the target. So we had to fix them.

The ripple effects:

Pre-boarding improved because day 1 matters more when day 3 is a milestone.

Starter tasks got better because we needed well-scoped, low-risk work ready to go.

Golden paths got real investment because environment setup that blocked day 1 was now visibly blocking the day 3 milestone.

Buddy accountability increased because if the new hire doesn’t commit by day 3, the first question is “what happened in the buddy sessions?”

The metric became a diagnostic tool:

| If first commit is delayed by… | Then investigate… |
| --- | --- |
| Environment issues | Golden path, IT provisioning |
| No appropriate task | Task queue, planning |
| Unclear process | Documentation, buddy support |
| Code quality issues | Starter task scope, mentorship |
| PR review delays | Team responsiveness, reviewer availability |

The psychology:

There’s something powerful about a new hire shipping code on day 3. They feel productive. The team sees them contributing. Momentum builds.

Compare to day 10: “What have I even been doing here? Am I slow? Is something wrong with me?”

The day 3 target isn’t about being aggressive. It’s about setting up success early and building from there.