[ENH] our stance towards AI research #1

@SimonBlanke

Description

Recent developments in the AI research landscape suggest a (not so?) subtle but persistent form of gatekeeping. Several high-impact papers provide only partial methodological transparency: training data, model weights, or critical implementation details are withheld, often justified by competitive pressure or safety concerns.

Well-known examples include major large language models released without full reproducibility (such as GPT-4 or Gemini, where training data and architecture details remain undisclosed) and institutions that originally emphasized openness but later adopted closed release strategies (for example, OpenAI's shift from the fully open GPT-2 era to its current proprietary stance). This trend risks slowing scientific progress and undermining trust, and it highlights the need for stronger norms around openness and reproducibility in AI research.

Maybe we should add a small section (in the README?) stating our stance towards AI research as part of the Mission statement.
