Been thinking about this a lot lately as our team debates which AI coding CLI to standardize on.
The headline numbers tell an interesting story:
- OpenCode: 112K+ GitHub stars (and climbing fast—hit 126K by March)
- Claude Code: 71K stars
If you stopped there, OpenCode wins by a mile. Open source, MIT licensed, supports 75+ LLM providers, gorgeous TUI built in Go. The community is passionate and growing.
But then you look at the usage data and the picture flips:
- Claude Code: ~4% of all public GitHub commits (135K/day as of February, 326K/day by mid-March)
- OpenCode: …crickets on actual commit share data
That’s the gap that caught my eye. Stars measure interest. Commits measure adoption in real workflows. And those are very different things.
The “Stars Are Vanity” Argument
I’ve seen this pattern before in the design tools space. Figma had fewer “fans” than Sketch for years, but the actual usage numbers told a completely different story. People starred Sketch because they wanted to support it. People used Figma because it solved their problems.
OpenCode stars could be:
- Developers bookmarking it for “someday”
- People supporting open source on principle
- Curiosity clicks from the HN/Reddit front page
- Legitimate daily users
Meanwhile Claude Code’s commit numbers are hard to fake—every `Co-authored-by: Claude` trailer is a real code change that went through a real workflow.
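(For the curious: that commit-share stat is typically derived by scanning commit messages for the co-author trailer. Here's a minimal sketch of the idea—the function name and sample messages are mine, not from any official methodology; in practice you'd feed it real messages from `git log` or the GitHub API.)

```python
# Hypothetical sketch: estimate what share of commits carry a
# "Co-authored-by: Claude" trailer, given a list of commit messages.
# Sample data below is made up for illustration.

def claude_commit_share(messages):
    """Return the fraction of commit messages containing the Claude co-author trailer."""
    if not messages:
        return 0.0
    tagged = sum(
        1
        for msg in messages
        if any(
            line.strip().startswith("Co-authored-by: Claude")
            for line in msg.splitlines()
        )
    )
    return tagged / len(messages)

sample = [
    "Fix race in scheduler\n\nCo-authored-by: Claude <noreply@anthropic.com>",
    "Bump dependencies",
    "Refactor auth module\n\nCo-authored-by: Claude <noreply@anthropic.com>",
]
print(claude_commit_share(sample))  # 2 of 3 sample commits carry the trailer
```

Trailers can be stripped or added manually, of course, so it's a floor-ish estimate—but it's far noisier to game at scale than clicking a star.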
But Here’s What Makes It Complicated
The OAuth drama changes everything. In January 2026, Anthropic silently blocked OpenCode from using Claude models via consumer OAuth tokens. Then in February, they updated their ToS to explicitly ban third-party tools from using Pro/Max plan tokens.
This is where “stars are vanity, commits are sanity” gets uncomfortable. Claude Code’s commit dominance might partly reflect Anthropic closing the door on alternatives rather than winning on pure merit. If you can’t use Claude through OpenCode anymore, of course Claude Code’s numbers go up.
OpenCode responded by launching their own API gateways (Black and Zen), but the damage to the “use any model through any tool” dream was real.
What I’m Actually Wondering
For teams choosing right now:
- Is provider lock-in acceptable if the tool genuinely ships more code?
- Does OpenCode’s model flexibility actually matter when most developers end up using Claude or GPT-5 anyway?
- Are stars a leading or lagging indicator? Maybe OpenCode’s community enthusiasm translates to better tooling 12 months from now.
- How do you even measure “better” when Claude Code claims 46% “most loved” in the Pragmatic Engineer survey but OpenCode claims 2.5M monthly developers?
The SemiAnalysis projection that Claude Code will hit 20%+ of all daily GitHub commits by end of 2026 is wild if true. That’s not a coding assistant anymore—that’s infrastructure.
Curious what y’all are seeing on your teams. Are people self-selecting tools, or is there a top-down decision? And which metric actually matters for the decision—stars, commits, or something else entirely?