OpenCode Has 112K GitHub Stars vs Claude Code's 71K—But Claude Code Makes 4% of All Public Commits. Stars Are Vanity, Commits Are Sanity?

Been thinking about this a lot lately as our team debates which AI coding CLI to standardize on.

The headline numbers tell an interesting story:

  • OpenCode: 112K+ GitHub stars (and climbing fast—hit 126K by March)
  • Claude Code: 71K stars

If you stopped there, OpenCode would win by a mile. Open source, MIT licensed, 75+ supported LLM providers, and a gorgeous TUI built in Go. The community is passionate and growing.

But then you look at the usage data and the picture flips:

  • Claude Code: ~4% of all public GitHub commits (135K/day as of February, 326K/day by mid-March)
  • OpenCode: …crickets on actual commit share data

That’s the gap that caught my eye. Stars measure interest. Commits measure adoption in real workflows. And those are very different things.

The “Stars Are Vanity” Argument

I’ve seen this pattern before in the design tools space. Figma had fewer “fans” than Sketch for years, but the actual usage numbers told a completely different story. People starred Sketch because they wanted to support it. People used Figma because it solved their problems.

OpenCode stars could be:

  • Developers bookmarking it for “someday”
  • People supporting open source on principle
  • Curiosity clicks from the HN/Reddit front page
  • Legitimate daily users

Meanwhile Claude Code’s commit numbers are hard to fake—every Co-authored-by: Claude tag is a real code change that went through a real workflow.
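
If you want to gut-check that on your own repos, the trailer is greppable. Here’s a quick sketch in Python, assuming git is installed and you run it inside a checkout; the published 4% figure comes from large-scale scans of public repos, not from a command like this.

```python
import subprocess

# Count commits whose message carries the opt-in Claude co-author
# trailer, then compare against the repo's total commit count.
# (-i makes the --grep match case-insensitive, to catch trailer
# capitalization variants.)
def commit_counts(repo_path="."):
    total = int(subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--count", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())
    tagged = subprocess.run(
        ["git", "-C", repo_path, "log", "-i",
         "--grep=Co-authored-by: Claude", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return len(tagged), total

tagged, total = commit_counts()
print(f"{tagged}/{total} commits ({tagged / max(total, 1):.1%}) carry the trailer")
```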

But Here’s What Makes It Complicated

The OAuth drama changes everything. In January 2026, Anthropic silently blocked OpenCode from using Claude models via consumer OAuth tokens. Then in February, they updated their ToS to explicitly ban third-party tools from using Pro/Max plan tokens.

This is where “stars are vanity, commits are sanity” gets uncomfortable. Claude Code’s commit dominance might partly reflect Anthropic closing the door on alternatives rather than winning on pure merit. If you can’t use Claude through OpenCode anymore, of course Claude Code’s numbers go up.

OpenCode responded by launching their own API gateways (Black and Zen), but the damage to the “use any model through any tool” dream was real.

What I’m Actually Wondering

For teams choosing right now:

  1. Is provider lock-in acceptable if the tool genuinely ships more code?
  2. Does OpenCode’s model flexibility actually matter when most developers end up using Claude or GPT-5 anyway?
  3. Are stars a leading or lagging indicator? Maybe OpenCode’s community enthusiasm translates to better tooling 12 months from now.
  4. How do you even measure “better” when Claude Code claims 46% “most loved” in the Pragmatic Engineer survey but OpenCode claims 2.5M monthly developers?

The SemiAnalysis projection that Claude Code will hit 20%+ of all daily GitHub commits by end of 2026 is wild if true. That’s not a coding assistant anymore—that’s infrastructure.

Curious what y’all are seeing on your teams. Are people self-selecting tools, or is there a top-down decision? And which metric actually matters for the decision—stars, commits, or something else entirely?

This hits close to home. I manage 40+ engineers across a Fortune 500 financial services org and the AI tool sprawl is the single biggest unplanned line item in my 2026 budget.

The Procurement Nightmare

Here’s what nobody talks about in the stars-vs-commits debate: who’s paying?

My team’s actual tool landscape right now:

  • 12 engineers on Claude Code Max ($200/month each = $2,400/month)
  • 8 engineers on Cursor Pro + Claude API ($60/month + ~$80 API = $1,120/month)
  • 6 engineers who discovered OpenCode and route through their personal API keys (cost unknown to us)
  • The rest on GitHub Copilot Enterprise (already covered by our GitHub contract)

That’s ~$50K/year in shadow AI tooling costs that didn’t exist 18 months ago. And it’s growing every quarter because engineers find new tools faster than procurement can evaluate them.
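
For what it’s worth, the traceable slice of that pencils out like this; the gap between the known spend and the ~$50K figure is the untracked personal API usage:

```python
# Known shadow-tooling spend from the breakdown above, annualized.
claude_max = 12 * 200             # $2,400/month
cursor_plus_api = 8 * (60 + 80)   # $1,120/month
known_monthly = claude_max + cursor_plus_api

print(f"${known_monthly:,}/month -> ${known_monthly * 12:,}/year")
# $3,520/month -> $42,240/year, before the untracked OpenCode API keys
```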

Stars Don’t Show Up in Invoices

From an engineering director’s perspective, the metric that matters isn’t stars OR commits—it’s cost per meaningful code change that passes review and reaches production.

We tried tracking this last quarter. Claude Code users generated roughly 2.3x more PRs than non-AI users, but their review-to-merge cycle was 40% longer because reviewers had to scrutinize AI-generated patterns more carefully. Net throughput increase: maybe 1.4x after accounting for review overhead.
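
For anyone who wants the back-of-the-envelope model behind that estimate: treating reviewer time as the bottleneck is our assumption, and the residual overhead is a fudge factor, not a measurement.

```python
# Measured ratios from last quarter's tracking.
pr_rate = 2.3      # AI-assisted engineers opened ~2.3x more PRs
review_time = 1.4  # each of their PRs took ~40% longer to merge

# With fixed reviewer capacity, merged throughput scales as PR volume
# divided by per-PR review cost, which gives an upper bound:
ceiling = pr_rate / review_time
print(f"theoretical ceiling: {ceiling:.2f}x")  # ~1.64x

# Reverts, rework, and coordination overhead ate the difference between
# that ceiling and the ~1.4x we actually observed.
```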

OpenCode’s flexibility is theoretically attractive from a cost optimization standpoint—if you can swap between models based on task complexity, you could save significantly. But “theoretically attractive” doesn’t survive contact with a compliance review. Our security team needs to know exactly which models touch our codebase, which APIs data flows through, and which providers have what access. OpenCode’s 75-provider flexibility is actually a liability in regulated environments.

The Real Question for Leaders

Maya’s right that this is becoming infrastructure. And that’s exactly the problem—when your coding tool is infrastructure, you need vendor stability, SLA guarantees, and a clear support escalation path. Open source community vibes don’t cut it when your trading platform has a P1 and the AI tool your team depends on just pushed a breaking update.

I’d rather pay Anthropic’s premium and know exactly what I’m getting than save 30% on API costs while introducing supply chain risk I can’t quantify.

Uncomfortable take, I know. But that’s the calculus in regulated industries.

Let me push back on the framing a bit. The stars-vs-commits comparison is interesting but misses the strategic picture.

The Platform Power Play

What Anthropic did with the OAuth block in January wasn’t just a business decision—it was a platform boundary assertion. They watched OpenCode become the most popular way to access Claude models and decided they’d rather own the developer experience end-to-end.

This is the same playbook every platform has run:

  • Apple blocking Flash
  • Google deprecating third-party cookies
  • Meta restricting API access after Cambridge Analytica

The pattern is always: let the ecosystem grow, let third parties prove the market, then consolidate control. Anthropic saw that Claude was the preferred model for coding tasks and decided that the interface layer was too valuable to cede to OpenCode.

OpenAI publicly welcoming third-party tools immediately after was brilliant counter-positioning, but let’s be honest—they’d do the same thing if they had Claude Code’s market position.

What the Numbers Actually Tell Us

The 4% commit share number is impressive but deserves scrutiny:

  1. It measures public repos only. Enterprise usage patterns may be completely different.
  2. It counts Co-authored-by tags, which are opt-in. Some developers remove them.
  3. Commit volume ≠ code quality. If AI tools generate 4% of commits but those commits have 1.7x more issues (per the CodeRabbit data), is that actually good?

Still, 326K commits a day by mid-March is a staggering number. But I’d want to know: how many of those commits survive 90 days without being reverted or significantly refactored?

My Actual Recommendation to Other CTOs

Don’t standardize on one tool. Standardize on a governance framework (a minimal sketch of the first two guardrails follows the list):

  • Approved model providers (not tools—providers)
  • Data classification rules for what code can touch which APIs
  • Cost allocation model so teams feel the spend
  • Quarterly review of tool landscape
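
To make the first two guardrails concrete, here’s a minimal sketch of the enforcement check. The provider names and data tiers are hypothetical; in practice this logic belongs in your egress proxy or gateway, not in application code.

```python
# Hypothetical allowlist: which model providers procurement has approved.
APPROVED_PROVIDERS = {"anthropic", "openai", "azure-openai"}

# Hypothetical clearance map: which data classifications each approved
# provider's APIs may receive.
PROVIDER_CLEARANCE = {
    "anthropic": {"public", "internal"},
    "openai": {"public", "internal"},
    "azure-openai": {"public", "internal", "confidential"},
}

def request_allowed(provider: str, data_class: str) -> bool:
    """Gate an AI-tool request on provider approval and data classification."""
    return (
        provider in APPROVED_PROVIDERS
        and data_class in PROVIDER_CLEARANCE.get(provider, set())
    )

assert request_allowed("azure-openai", "confidential")
assert not request_allowed("unvetted-gateway", "internal")
```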

The tool wars will keep shifting. The governance principles won’t. Whether your team picks Claude Code, OpenCode, or something that doesn’t exist yet, the guardrails should be the same.

And yes, that means accepting some inefficiency from tool diversity. The alternative—betting your entire engineering workflow on a single vendor’s goodwill—is a risk I’m not willing to take after watching what happened to OpenCode users in January.

As someone who thinks in product metrics all day, this thread is fascinating because it highlights a classic measurement trap.

You’re Comparing a Vanity Metric to a Different Vanity Metric

Stars are a vanity metric—we all agree on that. But raw commit count is also a vanity metric unless you tie it to outcomes.

Here’s how I’d frame this if I were building the competitive analysis deck:

  • GitHub Stars: measures developer awareness and interest, but not actual usage or satisfaction
  • Daily Commits: measures tool adoption in workflows, but not code quality, review burden, or business value
  • “Most Loved” Survey: measures sentiment among current users, but not why non-users aren’t using it
  • Monthly Active Devs: measures breadth of adoption, but not depth of usage per developer

The metric I’d actually want: time-to-production-value per developer hour invested. How quickly does code go from “AI generated it” to “it’s running in production and not causing incidents”? That’s the number that maps to business value.
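
If I had to operationalize that, it would look something like the sketch below. The field names and the seven-day incident-free window are assumptions I’m inventing for illustration, not an established formula.

```python
from datetime import datetime, timedelta

# Hypothetical definition: hours from the first AI-generated draft to a
# production deploy that survives an incident-free window.
INCIDENT_FREE_WINDOW = timedelta(days=7)

def time_to_production_value(draft_at, deployed_at, incident_times, now):
    """Return hours from AI draft to validated production, or None."""
    window_end = deployed_at + INCIDENT_FREE_WINDOW
    if now < window_end:
        return None  # still inside the observation window
    if any(deployed_at <= t <= window_end for t in incident_times):
        return None  # the change caused an incident; no value credited
    return (deployed_at - draft_at).total_seconds() / 3600

print(time_to_production_value(
    draft_at=datetime(2026, 3, 1, 9),
    deployed_at=datetime(2026, 3, 2, 17),
    incident_times=[],
    now=datetime(2026, 3, 12),
))  # -> 32.0 hours
```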

The Anthropic OAuth Move Was a GTM Decision

From a product strategy lens, blocking third-party OAuth wasn’t about technology—it was about controlling the funnel. Every developer using Claude through OpenCode was a developer Anthropic couldn’t upsell to Max, couldn’t collect usage telemetry from, and couldn’t build features around.

It’s the classic platform vs. aggregator distinction. OpenCode wanted to be the aggregator layer (access any model through us). Anthropic decided to be the platform (access Claude through us or not at all).

Both are valid strategies. But OpenCode’s strategy only works if no single model is dominant enough to matter. And right now, Claude is that dominant for coding tasks—46% “most loved” in a 15,000-developer survey is a moat.

What This Means for Tool Selection

Stop thinking about which tool is “better” and start thinking about which tool’s incentives align with yours.

  • Claude Code’s incentive: keep you on Anthropic’s ecosystem, maximize API revenue
  • OpenCode’s incentive: maximize developer adoption, build an API gateway business

Neither is wrong. But they’ll make very different product decisions over time. Anthropic will optimize Claude Code for Claude models. OpenCode will optimize for model portability.

Your choice depends on whether you believe in a single-model future or a multi-model future. If you think Claude stays dominant, lock in. If you think the leaderboard churns every 6 months, stay flexible.

Personally? I think the model layer commoditizes faster than most people expect.

All great strategic points, but I want to bring this back to the human side because that’s where the actual pain lives.

The Tool Wars Are Exhausting Your Team

I’m scaling from 25 to 80+ engineers right now. You know what I spend more time on than I’d like to admit? Mediating debates about AI coding tools. It’s become the new vim-vs-emacs, except it has real budget and security implications.

Here’s what I’ve observed across my org:

The self-selection problem is real. Senior engineers gravitate toward Claude Code because they have the budget authority to expense Max plans and the expertise to make it productive. Junior engineers lean toward OpenCode because it’s free to start with and the community documentation is excellent. Mid-level engineers use whatever their tech lead uses.

Result: inconsistent tooling across teams, no shared best practices, and harder code review, because reviewers can’t tell “AI-generated in Claude Code” patterns from “AI-generated in OpenCode with a different model” patterns.

Stars and Commits Miss the Team Dynamics

Neither metric captures what I actually care about:

  1. Onboarding time. How fast can a new hire become productive with the tool? Claude Code’s opinionated workflow actually helps here—there’s one way to do things. OpenCode’s flexibility is paralyzing for junior engineers.

  2. Knowledge transfer. When an engineer leaves, can someone else pick up their AI-assisted workflow? Claude Code’s CLAUDE.md convention is becoming a team knowledge artifact. OpenCode’s .opencode/agents/ directory achieves something similar but with more variability.

  3. Cognitive load on managers. Every tool I add to the approved list is another thing I need to understand, budget for, secure, and support. I’m already juggling Cursor, Copilot, and Claude Code. Adding OpenCode means I now need to understand 75 possible model configurations.

What I Actually Did

We standardized on Claude Code for production codebases and told engineers they can use whatever they want for personal projects and prototyping. If OpenCode users want to contribute to production, they can—but through our standard PR review process, same as everyone else.

Is it perfect? No. Does it reduce the cognitive and operational overhead enough to focus on shipping? Yes.

Michelle’s governance framework approach is the right long-term answer. But in the short term, sometimes you just need a decision so people can stop debating tools and start building things.

The stars-vs-commits debate is a fun metric comparison, but neither number tells you whether your team is actually shipping meaningful work. Focus on that.