Recent developments in the AI research landscape suggest a (not so?) subtle but persistent form of gatekeeping. Several high-impact papers provide only partial methodological transparency: training data, model weights, or critical implementation details are withheld, often justified by competitive pressure or safety concerns.
Well-known examples include major large language models released without full reproducibility (such as GPT-4 or Gemini, where training data and architecture details remain undisclosed) and institutions that originally emphasized openness but later adopted closed release strategies (for example, OpenAI’s shift from the fully open GPT-2 era to its current proprietary stance). This trend risks slowing scientific progress and undermining trust, and it highlights the need for stronger norms around openness and reproducibility in AI research.
Maybe we should add a small section to the Mission statement (in the README?) about our stance on openness and reproducibility in AI research.