Establish policy for AI code review tool evaluation #6234

@cblecker

Description

There has been growing interest in using AI-powered code review tools across Kubernetes GitHub organizations (see #5930). Several subprojects have expressed interest in piloting tools like CodeRabbit.

Currently, there is no documented policy governing how these tools should be evaluated, approved, or managed. This creates ambiguity around:

  • Privacy and security: What due diligence is required before enabling a third-party tool that processes repository code via external AI models?
  • Consistency: How do we ensure a consistent evaluation process across orgs and subprojects?
  • Opt-in controls: How do we ensure repos opt in deliberately rather than having tools enabled broadly?
  • Feedback and accountability: How do we structure pilots with clear timelines, feedback collection, and decision criteria?

I'd like to add a policy document to kubernetes/community under github-management/ that covers:

  1. How subproject leads can request a new AI code review tool
  2. The privacy and security assessment process (owned by the GitHub Administration Team)
  3. Pilot structure (90-day evaluation, per-repo opt-in, org-wide default config)
  4. Evaluation criteria and decision process
  5. Removal process

This will give us a clear framework to evaluate #5930 and future requests.

/sig contributor-experience
/area github-management

Metadata

Labels

  • area/github-management — Issues or PRs related to GitHub Management subproject
  • sig/contributor-experience — Categorizes an issue or PR as relevant to SIG Contributor Experience.
