AI Contribution Policies

Starting a thread on AI Contribution policies – feel free to add additional ones you find.

Gitea AI Contribution Policy

AI Contribution Policy

Contributions made with the assistance of AI tools are welcome, but contributors must use them responsibly and disclose that use clearly.

  1. Review AI-generated code closely before marking a pull request ready for review.
  2. Manually test the changes and add appropriate automated tests where feasible.
  3. Only use AI to assist in contributions that you understand well enough to explain, defend, and revise yourself during review.
  4. Disclose AI-assisted content clearly.
  5. Do not use AI to reply to questions about your issue or pull request. The questions are for you, not an AI model.
  6. AI may be used to help draft issues and pull requests, but contributors remain responsible for the accuracy, completeness, and intent of what they submit.

Maintainers reserve the right to close pull requests and issues that do not disclose AI assistance, that appear to be low-quality AI-generated content, or where the contributor cannot explain or defend the proposed changes themselves.

We welcome new contributors, but cannot sustain the effort of supporting contributors who primarily defer to AI rather than engaging substantively with the review process.

I think the most important point is disclosing one’s use of AI. The second most important point is that the contributor is ultimately responsible for what the AI creates.

But, yeah, all projects should accept AI-assisted contributions. If they don’t, folks will just use AI anyway and pass it off as their own work.

I wonder how important this declaration will be in the near future, as so much code will be written by AI. It seems we need some other metric to judge code quality …

One approach is to submit your plans with your code. I find it much easier to review a plan than code. This exposes the thinking (perhaps much of it by AI) behind the code. If the plan is solid, I don’t worry too much about the AI implementation.

Perhaps every project should have detailed requirements in AGENTS.md for what plans should look like, where they are stored, and so on. And requiring a plan for every significant PR would improve human coding as well.
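A minimal sketch of what such an AGENTS.md section could look like (the headings, the docs/plans/ path, and the size threshold here are all made up for illustration; there is no standard for this yet):

```markdown
## Plans

- Any PR changing more than ~100 lines must link a plan committed
  under docs/plans/<short-title>.md.
- A plan covers: the problem, the chosen approach, alternatives
  considered and rejected, and how the change will be verified.
- Agents must read the linked plan before implementing, and must call
  out any deviation from it in the PR description.
```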

One thing I’m finding is that Claude generally does a better job than I do at the raw coding. The problems usually occur at the design or architectural level. So I’m less concerned about AI implementing low-level code than about it introducing architectural problems.

Very solid points, and I really like the idea of submitting the plans with the code. The plan has much more value in the world of AI agents because they can be prompted to verify that the plan was properly implemented.

That said, in my experience, even Claude Sonnet/Opus 4.6 doesn’t always get the implementation quite right. I’ve seen it make subtle but silly API decisions even when the plan is reasonably clear and I’ve prompted it to ask me clarifying questions.

Here’s a well-known AI-assisted change where the model silently omitted the __read_mostly macro, a Linux kernel annotation that groups rarely written data to improve cache performance. One could argue that a human could easily make the same error. In this particular case, though, the reviewer knew the contributor and made some assumptions about the code quality… and that lax review let the bug slip through and eventually cause a pretty serious performance regression. This is perhaps a case for declaring the use of AI assistance. It’s almost like saying, “hey, someone wrote this code on my behalf, so don’t let my reputation for quality lower your guard.”
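For context, a minimal sketch of what the annotation does (the variable name and value are hypothetical; __read_mostly itself is the real kernel macro):

```c
#include <linux/cache.h>   /* defines __read_mostly */

/*
 * A value written once at init and read constantly on hot paths.
 * __read_mostly places it in the .data..read_mostly section, grouping
 * it with other rarely written data so it does not share a cache line
 * with frequently written variables (avoiding cache-line bouncing on
 * SMP). Omitting the annotation changes nothing functionally, which
 * is exactly why the omission was silent: the code still works, only
 * cache behavior degrades under load.
 */
static unsigned int poll_interval_ms __read_mostly = 100;
```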

On the other hand, AI agents will probably not be getting any dumber in the future, and I concede that the disclosure will likely become irrelevant over time.

To your point about planning vs. implementation… I have found great success going back and forth a time or two with the AI agent in the planning phase. Only when I feel the agent has a deeper understanding of the intent of the change or feature do I cut it loose to implement it. But, like most humans, it still cannot handle very large changes gracefully; you have to break the problem down into smaller chunks.

Yes, it is powerful to have AI doing the research and collecting the details, while you do the thinking.

Have you tried the superpowers plugin yet? Really impressed with the brainstorming phase.

No. I’m not using Claude Code. Is it a Claude Code thing? I’ve been primarily using Copilot via GitHub. I use the Copilot CLI and other GitHub-based workflows.

Looks like superpowers is available for Copilot CLI: