We just published our engineering career ladder publicly - here's what happened

Six months ago, my company made a controversial decision: we published our complete engineering career ladder publicly. Not just internally - we put it on our careers page for everyone to see. Detailed rubrics for each level from Engineer I through Principal, example evidence, anti-patterns, and yes, even compensation bands.

The leadership team fought me on this. The concerns were all the usual ones:

  • “Competitors will poach our people if they know the salaries”
  • “Employees will treat it like a checklist and game the system”
  • “We need flexibility for special cases”
  • “It’ll create more arguments about promotions, not fewer”

I pushed back hard because I’d seen the damage from opacity in my career. At Google and Slack, I watched talented engineers - especially women and people of color - get passed over for promotions because they didn’t know how to “play the game.” The rules were clear to insiders and mysterious to everyone else.

Here’s what actually happened after we went transparent:

The Good:

  • Promotion-related questions in 1-on-1s dropped by 40%. People could see exactly where they were and what was next.
  • Engineers started self-selecting appropriately. People stopped asking for promotions when they could see they weren’t ready yet.
  • Hiring got easier. Candidates loved the clarity and felt more confident accepting offers.
  • Cross-team calibration became more consistent. Managers had clear shared standards.
  • Trust increased measurably in our engagement surveys.

The Challenging:

  • Some engineers did start checking boxes rather than focusing on impact. We had to add language clarifying that meeting the rubric is necessary but not sufficient.
  • A few people realized they were underleveled compared to market and got frustrated.
  • The documentation burden increased - managers now need to explicitly justify every promotion decision with evidence.
  • We had some awkward conversations when people saw that peers at the same level had different comp (due to hiring market timing).

The Nuanced Part:
The rubric isn’t perfect. We still need judgment for edge cases. Someone can do something incredibly valuable that’s not in the rubric. Or someone can check all the boxes but not actually demonstrate impact.

We’ve had to clarify repeatedly: the rubric shows you what’s typically required, but promotion decisions still involve human judgment. It’s not a vending machine where you insert achievements and get a promotion.

The Results:
Would I do it again? Absolutely. The transparency has made our promotion process more fair, more consistent, and more trusted. Yes, it’s more work. But it’s good work. It forces us to be clear about what we value and hold ourselves accountable to those standards.

The complaints we’ve gotten are mostly from people who benefited from the old opaque system. The vast majority of engineers love the clarity.

Questions for the community:

  • How transparent is your company’s promotion process?
  • If you have published rubrics, what’s been your experience?
  • If you don’t, what’s holding you back?
  • Should all companies do this, or are there legitimate reasons to stay opaque?

Keisha, this is EXACTLY what I need at TechFlow right now.

I literally just posted about being frustrated with the opaque Staff promotion process, and reading this gives me hope that there’s a better way.

The 40% reduction in promotion questions alone tells me this works. Right now, I’m spending so much mental energy trying to decode the hidden rules. What does “cross-team impact” actually mean? How much is enough? Who decides?

I have so many questions about implementation:

How do you handle edge cases? You mentioned that sometimes people do incredible things that aren't in the rubric. Does that mean you update the rubric afterward? Or do you have a process for exceptions?

What about gaming the system? You said some people started checking boxes rather than focusing on impact. How do you screen for that? Can you tell the difference between someone genuinely operating at the next level vs someone who’s cleverly manufacturing evidence?

How often do you update the rubric? As the company and the technology evolve, do the expectations change? How do you handle people who were promoted under the old rubric when the new one is more rigorous?

The big one: Can you share an example Staff rubric? Even just the high-level bullets would be incredibly helpful. I want to see what “good enough for Staff” actually looks like in concrete terms.

I’m going to show this thread to my manager and make the case that we need this kind of transparency at TechFlow. The fact that it improved trust and reduced frustration is huge.

Keisha, this is great and I’m supportive of the direction, but I want to share some lessons from when we tried something similar at my Fortune 500 company.

We published internal career frameworks about two years ago - not as detailed as what you're describing, but clear rubrics and expectations for each level. The results were mixed, and there were some legitimate challenges. That doesn't mean you're doing it wrong, just that transparency isn't a silver bullet.

What worked for us:

  • Much more consistent promotions across 40+ engineers and multiple teams
  • Calibration sessions became way more efficient with shared language
  • Reduced favoritism - harder for managers to promote their buddies without justification
  • New managers especially appreciated having clear guidelines

What was harder than expected:

  • Some of our best promotions in the past were for unique contributions that wouldn’t fit any rubric. One engineer single-handedly fixed a critical security vulnerability that saved the company millions. Hard to put “save company millions” in a rubric.
  • Rigid adherence to the framework made it hard to correct market-based pay issues. Sometimes we just need to pay someone more to retain them, even if they haven’t hit the next level.
  • Documentation burden is real. We went from quarterly promotion cycles to twice a year because the evidence packages took so long to prepare.
  • Some managers are better at documentation than others, which created inequity. An amazing engineer with a mediocre manager might get passed over compared to a good engineer with a great advocate.

Our solution has been a hybrid approach:

  • Published framework that sets clear expectations (like yours)
  • Calibration committee that reviews all promotions and can make exceptions with justification
  • We build “case law” over time - when we make an exception, we document it so there’s precedent for similar situations

One question: How do you handle the “we need this person to stay” promotions? Sometimes someone gets a competing offer and we know they’re not quite at the next level yet, but losing them would be catastrophic. Do you give them the promotion? Adjust comp without title change? Let them go?

I think transparency is the right direction, but implementation details really matter. Sounds like you’re doing it well, but companies need to be ready for the complexity it introduces.

As someone on the design side, I’m reading this with SO much envy. Design career ladders are even murkier than engineering, and I desperately wish we had this kind of transparency.

At my current company, IC designers basically hit a ceiling at Senior. Want to advance beyond that? You have to go into management. This forces amazing designers who love craft into mediocre management roles because it’s the only path to more compensation and recognition.

The result? We have design managers who don’t want to manage, and we’ve lost some of our best IC contributors to other companies that have better IC tracks.

My big question: How did you define “impact” at each level for design-adjacent work?

Engineering impact is easier to quantify - you can measure system uptime, performance improvements, code review velocity, etc. But design impact is squishier:

  • How do you measure good design? User satisfaction? But good design sometimes intentionally adds friction.
  • What about the design work that prevents problems rather than solving them? Hard to prove impact for disasters that didn’t happen.
  • Peer review and feedback are important, but they can be subjective and political.

I’m also curious about cross-functional collaboration in your rubrics. Engineering promotions shouldn’t just be about technical skills - they should include how well engineers collaborate with design, product, and other functions. Do your Staff criteria include things like:

  • “Regularly incorporates design feedback and improves design-eng collaboration”
  • “Partners with product to validate technical feasibility early”
  • “Mentors engineers on user experience thinking”

Because in my experience, the engineers who get promoted purely on technical merit often make terrible cross-functional partners. And that hurts the whole product.

Would love to see engineering career frameworks that reward collaborative, cross-functional leadership as much as technical depth. That would make our design-eng partnerships so much better.

Data scientist here with thoughts on the metrics side of this.

What I love about published rubrics: you can actually measure whether they’re working. At Anthropic, we did something similar and I’ve been tracking the outcomes.

Here’s what we’ve measured:

Time-to-promotion by level:

  • Before rubrics: 2.5 years average from Mid to Senior (but with huge variance - some people 1 year, some 5 years)
  • After rubrics: 2.2 years average with much tighter distribution
  • Interpretation: more consistent and slightly faster promotions

Demographic equity:

  • Before: Women and URMs were promoted 15-20% slower than peers with similar tenure
  • After: Gap reduced to 5-7% (still not perfect, but better)
  • Interpretation: transparency surfaces and reduces bias

Manager variance:

  • Before: Some managers promoted people 2x faster than others
  • After: Much more consistent across managers
  • But: managers who are better at documentation/advocacy still have an edge

The challenge: what do you do with this data?

We found that published rubrics enable measurement, but they don’t automatically fix the problems. You need to:

  1. Actually review the data regularly
  2. Investigate outliers (why is this team promoting faster?)
  3. Train managers on documentation and advocacy
  4. Hold managers accountable for equity in promotions
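To make the "actually review the data" step concrete, here's a minimal sketch of the kind of script that could sit behind a quarterly review. Everything in it is illustrative: the promotions.csv file, the column names (date_eligible, demographic_group, etc.), and the outlier threshold are hypothetical stand-ins, not a description of our actual pipeline.

```python
# Minimal sketch of a quarterly promotion-equity review.
# Assumes a hypothetical CSV with one row per promotion:
#   engineer_id, level_from, level_to, manager, demographic_group,
#   date_eligible, date_promoted
import pandas as pd

promos = pd.read_csv("promotions.csv", parse_dates=["date_eligible", "date_promoted"])

# Time-to-promotion in years, from first eligibility to promotion.
promos["years_to_promo"] = (
    promos["date_promoted"] - promos["date_eligible"]
).dt.days / 365.25

# 1. Distribution per level transition: mean AND spread, since the
#    tighter variance (not just the faster mean) was the real win.
by_level = promos.groupby(["level_from", "level_to"])["years_to_promo"].agg(
    ["count", "mean", "std"]
)
print(by_level)

# 2. Demographic gap: compare medians across groups for the same transition.
by_group = (
    promos.groupby(["level_from", "level_to", "demographic_group"])["years_to_promo"]
    .median()
    .unstack("demographic_group")
)
print(by_group)

# 3. Manager variance: flag managers whose promotion pace is an outlier
#    (beyond 1.5 IQRs from the quartiles) for a closer look.
per_mgr = promos.groupby("manager")["years_to_promo"].median()
q1, q3 = per_mgr.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = per_mgr[(per_mgr < q1 - 1.5 * iqr) | (per_mgr > q3 + 1.5 * iqr)]
print("Managers to investigate:", list(outliers.index))
```

The outlier step is deliberately only a flag: a fast-promoting manager might be gaming the system, or might just have a strong team, which is exactly why the human investigation in step 2 above still matters.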

One finding that surprised us: After we published compensation bands, attrition among high performers actually increased slightly in the first 6 months. Turns out, some people realized they were underpaid relative to market and started looking. We had to do a big market correction.

Was this bad? No - those people probably would have left anyway once they discovered the gap. Transparency just surfaced the problem faster.

My recommendation: If you publish rubrics, also commit to measuring outcomes by demographics, team, and manager. Build dashboards. Review them quarterly. Otherwise you’re just adding process without accountability.

And be ready for some uncomfortable data. You’ll discover that your promotion process has biases you didn’t know about. But that’s how you fix them.