Okay, I am going to say something that might be controversial, especially given the excellent discussion happening in this forum about AI ROI measurement.
What if we are spending more time measuring AI productivity than actually using AI to build better products?
The Measurement Theater
At my last startup (which failed, so take this with appropriate salt), we spent three months building a comprehensive “AI productivity dashboard.” We tracked everything: time saved per developer, lines of code generated, review cycles accelerated, deployment frequency improvements.
You know what we did not do during those three months? Ship the features our customers were actually asking for.
We had daily standups where we discussed dashboard metrics. We had weekly reviews of AI adoption rates. We had monthly business reviews where engineering leaders presented productivity improvements to stakeholders.
And our product velocity actually slowed down because we were so busy measuring productivity that we forgot to be productive.
The Designer Perspective
Here is my bias up front: I am a designer, and the best design tools I have ever used resist quantification.
When Figma introduced auto-layout, did anyone measure the ROI? No, we just started using it because it made our work better. Same with component variants, same with design tokens, same with every tool that fundamentally changed how we work.
AI tools like Claude, Cursor, and v0 have genuinely changed how I prototype and explore ideas. But if you asked me to quantify exactly how much time I am saving or value I am creating, I honestly could not tell you. And I am not sure it matters.
Some of the best work happens in the unmeasurable spaces. The “AI helped me think differently about this problem” moments. The “I tried five variations in an hour instead of one” explorations. The “I learned a new approach from an AI suggestion” growth.
How do you measure that? Should you?
Acknowledging the Reality
Look, I get it. CFOs need numbers. Boards need ROI. Nobody is going to approve a $500K annual AI tool budget with “trust me, the vibes are good” as the business case.
Michelle, Luis, Keisha—you are all absolutely right that engineering leaders need to speak CFO language to protect AI investments. I am not arguing against that reality.
But I am worried about the distraction cost.
The Middle Ground?
Maybe what we need is lightweight tracking instead of comprehensive frameworks.
Instead of building AI productivity dashboards, maybe we just:
- Track adoption (are people using the tools?)
- Survey satisfaction (do people want to keep the tools?)
- Monitor quality (are outcomes getting better or worse?)
- Measure a few key business metrics (cycle time, incident rate, retention)
And then… we trust our engineers to use the tools that make them more effective?
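For what it is worth, the lightweight version really is small enough to fit in a script rather than a dashboard. A minimal sketch of what I mean (every name and number below is a hypothetical illustration, not data from any real team):

```python
# Minimal sketch of "lightweight tracking": a handful of numbers, no dashboard.
# All records below are hypothetical, e.g. from a two-question monthly survey.

from statistics import mean

# One record per engineer per month.
survey = [
    {"used_ai_tools": True,  "keep_tools": 5, "cycle_time_days": 3.0},
    {"used_ai_tools": True,  "keep_tools": 4, "cycle_time_days": 2.5},
    {"used_ai_tools": False, "keep_tools": 2, "cycle_time_days": 4.0},
    {"used_ai_tools": True,  "keep_tools": 5, "cycle_time_days": 2.0},
]

adoption = mean(r["used_ai_tools"] for r in survey)      # share of people using the tools
satisfaction = mean(r["keep_tools"] for r in survey)     # 1-5 "do you want to keep this?"
cycle_time = mean(r["cycle_time_days"] for r in survey)  # one key business metric

print(f"adoption: {adoption:.0%}, satisfaction: {satisfaction:.1f}/5, "
      f"cycle time: {cycle_time:.1f} days")
```

Three numbers a quarter, reviewed in an existing meeting. If those trend the wrong way, then you dig deeper.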
I do not know. Maybe I am being naive. Maybe this only works at small companies. Maybe it is a luxury you lose once you have a board and investors and fiduciary responsibilities.
The Provocation
But here is what really worries me: Are we building AI measurement systems instead of AI-enhanced products?
Are we having more meetings about AI ROI than meetings about customer problems? Are we training our engineers to optimize for metrics instead of outcomes?
At some point, does the measurement apparatus not become more expensive than the thing it is measuring?
The Question
I am genuinely curious how others balance this. How do you measure enough to justify investments without letting measurement become the work itself?
And maybe more importantly: When do we trust that good tools in the hands of good people will create value, even if we cannot measure every dimension of that value?
Unpopular opinion, I know. But someone had to say it.