---
layout: single
title: "Navigating LLMs in Open Source: pyOpenSci's New Peer Review Policy"
excerpt: "Generative AI tools are making it easier to generate large amounts of code, which in some cases is straining volunteer peer review programs like ours. Learn about pyOpenSci's policy on generative AI in peer review in this blog post."
author: "pyopensci"
permalink: /blog/generative-ai-peer-review-policy.html
header:
  overlay_image: images/headers/pyopensci-floral.png
categories:
  - blog-post
  - community
classes: wide
toc: true
comments: true
last_modified: 2025-09-16
---

authors: Leah Wasser, Mandy Moore

## Generative AI meets scientific open source

It has been suggested that, for some developers, AI tools can increase efficiency on certain tasks by as much as 55%. But in open source scientific software, speed isn't everything: transparency, quality, and community trust matter just as much. So do the ethical questions these tools raise.

## Why we need guidelines

At [pyOpenSci](https://www.pyopensci.org/), we’ve drafted a new policy for our peer review process to set clear expectations around disclosing the use of LLMs in scientific software packages.

This is not about banning AI tools; we recognize their value to some contributors. Instead, our goal is transparency: we want maintainers to **disclose when and how they’ve used LLMs** so editors and reviewers can fairly and efficiently evaluate submissions.

## Our approach: transparency and disclosure

We know that people will continue to use LLMs, and that these tools can meaningfully increase productivity and lower barriers to contribution for some. We also know that significant ethical, societal, and other challenges come with the development and use of LLMs.

Our community’s expectation is simple: **be open about it**.

* Disclose LLM use in your README and at the top of relevant modules (see the sketch below).
* Describe how the tools were used.
* Be clear about what human review you performed.

Transparency helps reviewers understand context, trace decisions, and focus their time where it matters most.
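
For example, a module-level disclosure might look something like the sketch below. This is a minimal illustration, not a required pyOpenSci format, and the package, module, and function names are hypothetical.

```python
"""Peak-detection helpers for the hypothetical ``spectra-tools`` package.

AI disclosure: the first draft of ``detect_peaks`` in this module was
generated with an LLM coding assistant. A maintainer reviewed, edited,
and tested all generated code before submission; see the README for a
full description of how the tools were used.
"""
```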

### Human oversight

LLM-assisted code must be **reviewed, edited, and tested by humans** before submission.

* Run tests and confirm correctness (see the test sketch below).
* Check for security and quality issues.
* Ensure consistent style, readability, and clear docstrings.
* Explain your review process in your software submission to pyOpenSci.

Please don’t offload vetting to volunteer reviewers. Arrive with human-reviewed code that you understand, have tested, and can maintain.
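
In practice, “run tests and confirm correctness” can be as simple as writing a test yourself for any function an assistant drafted. Below is a minimal sketch using pytest; the `normalize` helper and its module path are hypothetical.

```python
import numpy as np
import pytest

# Hypothetical LLM-assisted helper that a human maintainer is vetting.
from spectra_tools.utils import normalize


def test_normalize_maps_values_to_unit_range():
    """A human-written check that normalize() rescales data onto [0, 1]."""
    data = np.array([2.0, 4.0, 6.0])
    result = normalize(data)
    assert result.min() == pytest.approx(0.0)
    assert result.max() == pytest.approx(1.0)
```

Including tests like this in your submission also documents the human review you performed.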

### Licensing awareness

LLMs may be trained on mixed-license corpora. Outputs can create **license compatibility questions**, especially when your package uses a permissive license (MIT/BSD-3).

* Acknowledge potential license ambiguity in your disclosure.
* Avoid pasting verbatim outputs that resemble known copyrighted code.
* Prefer human-edited, transformative outputs you fully understand.

We can’t control upstream model training data, but we can be cautious, explicit, and critical about our own usage.

### Ethics and inclusion

LLM outputs can reflect and amplify bias in their training data. In documentation and tutorials, that bias can harm the very communities we want to support.

* Review AI-generated text for stereotypes or exclusionary language.
* Prefer plain, inclusive language.
* Invite feedback and review from diverse contributors.

Inclusion is part of quality. Treat AI-generated text with the same care as code.

## Supporting volunteer peer review

Peer review runs on **volunteer time**. Rapid, AI-assisted submissions can overwhelm reviewers, especially when code hasn’t been vetted.

* Submit smaller PRs with clear scopes.
* Summarize changes and provide test evidence.
* Flag AI-assisted sections so reviewers know where to look closely (see the sketch below).
* Be responsive to feedback, especially on AI-generated code.

These safeguards protect human capacity so high-quality packages can move through review efficiently.
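
One lightweight way to flag AI-assisted sections is a short comment above the relevant code, as in the sketch below. The marker convention and the function are hypothetical; use whatever style your project documents.

```python
# AI-assisted: the body of this function was drafted with an LLM, then
# reviewed, simplified, and tested by a maintainer. Reviewers, please
# look closely here.
def moving_average(values, window):
    """Return the simple moving average of ``values`` over ``window`` points."""
    if not 1 <= window <= len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```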

## Benefits and opportunities

LLMs are already helping developers:

* Explaining complex codebases
* Generating unit tests and docstrings
* In some cases, lowering language barriers for participants in open source around the world
* Speeding up everyday workflows

For some contributors, these tools make open source more accessible.

## Challenges we must address

### Overloaded peer review

Peer review relies on volunteers. LLMs can produce large volumes of code quickly, increasing the number of submissions that contain code no human has carefully reviewed before it reaches our review system.

### Ethical and legal complexities

LLMs are often trained on copyrighted or licensed material. Outputs may create conflicts when used in projects under different licenses. They can also reflect extractive practices, such as data colonialism, and disproportionately harm underserved communities.

### Bias and equity concerns

AI-generated text can perpetuate bias. When it appears in documentation or tutorials, it can alienate the very groups open source most needs to welcome.

### Environmental impacts

Training and running LLMs [requires massive energy consumption](https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/), raising sustainability concerns that sit uncomfortably alongside much of the scientific research our community supports.

### Impact on learning

Heavy reliance on LLMs risks producing developers who can prompt for code but not debug or maintain it, undermining long-term project sustainability and growth.

## What you can do now

* **Be transparent.** Disclose LLM use in your README and modules.
* **Be accountable.** Thoroughly review, test, and edit AI-assisted code.
* **Be license-aware.** Note uncertainties and avoid verbatim look-alikes.
* **Be inclusive.** Check AI-generated docs for bias and clarity.
* **Be considerate.** Respect volunteer reviewers’ time.

<div class="notice" markdown="1">
## Join the conversation

This policy is just the beginning. As AI continues to evolve, so will our practices. We invite you to:

👉 Read the full draft policy
👉 Share your feedback and help us shape how the scientific Python community approaches AI in open source.

The conversation is only starting, and your voice matters.
</div>