
Consider using OpenAI Evals #80

@aparatext

Description

OpenAI Evals is an open-source, crowdsourced collection of tests designed to evaluate newly emergent capabilities in LLMs. It may be slightly GPT-4-centric, since tests that GPT-4 can already pass easily don't get merged, but it still remains a valuable tool for automatically benchmarking LLMs.

As a well-designed, standardized test suite, it could let us compare different open-access models against each other, or even against OpenAI's offerings, objectively and with little effort, which may give us deeper insight into what works and what doesn't.

For reference on testing non-OpenAI models with Evals, see OpenAssistant model evals.
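
To give an idea of what hooking up an open-access model might look like: below is a minimal sketch of a completion function wrapping a Hugging Face model so Evals can score it. The `get_completions()`-style interface follows the completion-fn protocol described in the Evals repo's docs, but the class names, the model name, and the prompt flattening here are illustrative placeholders, not part of Evals itself.

```python
# Sketch of a custom completion function for OpenAI Evals backed by a local
# Hugging Face model. Class names and the model checkpoint are placeholders.
from transformers import pipeline


class HFCompletionResult:
    """Holds raw generations so Evals can call get_completions() on them."""

    def __init__(self, completions: list[str]):
        self.completions = completions

    def get_completions(self) -> list[str]:
        return self.completions


class HFCompletionFn:
    """Callable that turns an eval prompt into completions from a local model."""

    def __init__(self, model_name: str = "openlm-research/open_llama_3b"):
        # text-generation pipeline handles tokenization and decoding for us
        self.generator = pipeline("text-generation", model=model_name)

    def __call__(self, prompt, **kwargs) -> HFCompletionResult:
        # Evals may pass either a plain string or a chat-style list of messages;
        # flattening the messages like this is a simplification.
        if isinstance(prompt, list):
            prompt = "\n".join(m.get("content", "") for m in prompt)
        outputs = self.generator(prompt, max_new_tokens=128, num_return_sequences=1)
        # The pipeline returns prompt + continuation; strip the prompt back off.
        return HFCompletionResult([o["generated_text"][len(prompt):] for o in outputs])
```

In practice this would be registered under a name in the Evals completion-fn registry (a YAML entry pointing at the class) and then run with roughly `oaieval <completion_fn> <eval>`, per the Evals documentation; the exact registration details may differ between versions of the repo.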
