The open source community is experiencing one of its most significant philosophical schisms in recent memory, and it’s not about licensing models or governance structures — it’s about whether AI-generated code has any place in open source projects at all. The fault lines are deepening, and major projects are landing on opposite sides.
Gentoo and NetBSD: The Hard Line
Gentoo Linux drew the first major line in the sand by banning all AI tools from contributions. The reasoning is threefold: copyright uncertainty surrounding AI-generated output, quality concerns about code that hasn’t been deeply understood by a human contributor, and ethical objections to training data that may have been scraped from copyleft-licensed repositories without consent. The policy is unambiguous: contributors may not use AI code generators for any contribution to the project, and submissions that violate the policy are rejected.
NetBSD went even further, framing the issue in explicitly legal terms. Its policy presumes all AI-generated code is “tainted”: legally uncertain for inclusion in a BSD-licensed project. Contributors must obtain prior written approval before submitting any AI-assisted contribution, and the burden of proof falls on the contributor to demonstrate that the code is legally clean. It’s an unusually aggressive stance for a project that has historically favored permissive approaches.
LLVM and the Middle Path
The LLVM project chose a more pragmatic route: AI tool usage must be disclosed, but isn’t banned outright. Contributors using AI tools must take full responsibility for the output and verify it meets project coding standards, passes tests, and doesn’t introduce security vulnerabilities. It’s essentially a “use at your own risk with full transparency” approach, which acknowledges the reality that AI tools are already deeply embedded in many developers’ workflows.
Debian: Still Deciding
And then there’s Debian — one of the largest and most influential Linux distributions, the upstream source for Ubuntu and dozens of derivatives. Debian is still actively debating its AI contribution policy, and the internal discussion reflects the broader community split. Pragmatists argue that AI is just another productivity tool, no different in principle from autocomplete or code generation templates. Purists counter that AI code threatens the legal and philosophical foundations that make open source possible.
The Copyright Question at the Heart of Everything
Open source licenses — GPL, BSD, Apache, MIT — all operate on a fundamental assumption: human authorship with clear copyright ownership. A human writes code, holds copyright, and voluntarily licenses it under open source terms. The entire legal framework depends on this chain.
AI-generated code breaks that chain. The US Copyright Office has taken the position that purely AI-generated works are not copyrightable. AI-assisted works, in which a human makes substantial creative decisions and uses the AI as a tool, may still qualify for copyright protection. The boundary between “AI-generated” and “AI-assisted” is fuzzy at best.
This creates a cascade of unanswered legal questions. If AI-generated code isn’t copyrightable, can it carry a GPL license? If it can’t carry a license, can it legally be included in a licensed project? Does including unlicensed code in a GPL project “contaminate” the project’s licensing? Nobody has definitive answers, and the legal precedent is years away from being established. Conservative projects are banning AI code entirely because the downside risk of getting the legal question wrong is existential.
The Quality and Review Burden Problem
Legality is only part of the problem. The CNCF Security Slam 2026 was launched specifically to address security concerns in open source projects, and the OpenSSF has published best practices for handling AI-generated contributions. The concern isn’t just whether AI code works; it’s the review burden it creates. AI can generate large volumes of plausible-looking code that compiles, passes basic tests, and looks reasonable on a first read. But verifying that code for subtle bugs, security vulnerabilities, and architectural fit requires significant human effort.
Projects with limited maintainer bandwidth — which is most open source projects — simply can’t afford the review cost of evaluating high-volume AI-generated contributions. The fear is that AI lowers the barrier to submitting code so dramatically that maintainers get buried under an avalanche of “good enough” contributions that each require deep review.
Our Company’s Approach
At my company, we allow AI-assisted development internally but require full human review before anything ships. For open source contributions, we follow each project’s stated policy — no exceptions. It’s the only responsible approach when the legal landscape is this uncertain.
What concerns me most is the emerging two-tier ecosystem. Projects that accept AI contributions will move faster and ship more features, but they’ll accumulate quality debt and legal risk. Projects that ban AI will maintain higher quality and legal clarity, but they’ll evolve more slowly and potentially lose contributors who’ve integrated AI deeply into their workflows.
The question for the community: should open source projects ban AI-generated contributions? And more fundamentally — is the copyright question even solvable within our current legal frameworks, or do we need new legislation specifically addressing AI-generated code in open source contexts?