This repository was archived by the owner on Jul 22, 2025. It is now read-only.

Conversation

@nattsw (Contributor) commented Feb 21, 2025

Eval configs were hard-coded in DiscourseAi::Evals::Llm. This PR moves them into a config file.

Evaluators can now also define their own local config in config/eval-llms.local.yml, and may set either an api_key attribute directly or an api_key_env attribute if an environment variable is preferred.

Example file:

llms:
  discourse_llm:
    display_name: Qwen/Qwen2.5-32B-Instruct
    name: Qwen/Qwen2.5-32B-Instruct
    tokenizer: DiscourseAi::Tokenizer::OpenAiTokenizer
    api_key: <FILL IN KEY HERE>
    provider: open_router
    url: <FILL IN URL HERE>
    max_prompt_tokens: 128000
    vision_enabled: false
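To illustrate the api_key / api_key_env choice described above, here is a minimal Ruby sketch of how such a config might be resolved. The helper name `resolve_llm_config` and the exact merge behavior are illustrative assumptions, not the actual DiscourseAi::Evals API:

```ruby
require "yaml"

# Hypothetical helper: parse an eval-llms YAML document and, for each LLM,
# prefer an explicit api_key, falling back to looking up api_key_env in the
# environment. (Illustrative only; not the real DiscourseAi implementation.)
def resolve_llm_config(yaml_text, env = ENV)
  config = YAML.safe_load(yaml_text)
  config.fetch("llms", {}).transform_values do |llm|
    llm = llm.dup
    if llm["api_key"].nil? && (env_name = llm["api_key_env"])
      llm["api_key"] = env[env_name]
    end
    llm
  end
end

sample = <<~YAML
  llms:
    discourse_llm:
      name: Qwen/Qwen2.5-32B-Instruct
      provider: open_router
      api_key_env: OPENROUTER_API_KEY
YAML

llms = resolve_llm_config(sample, { "OPENROUTER_API_KEY" => "sk-test" })
puts llms["discourse_llm"]["api_key"]
```

A local file like eval-llms.local.yml would typically be loaded the same way and merged over the shared defaults, so per-evaluator keys never need to be committed.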

@SamSaffron (Member) commented:

nice change 🤗

@SamSaffron merged commit 2486e0e into main on Feb 24, 2025 (6 checks passed) and deleted the eval-config branch on February 24, 2025 at 05:22.