
Commit 8bf5a41

chore: update models for integration tests of cerebras and fireworks (#822)
## Description

<!-- What does this PR do? -->

## PR Type

<!-- Delete the types that don't apply -->

- 🆕 New Feature
- 🐛 Bug Fix
- 💅 Refactor
- 📚 Documentation
- 🚦 Infrastructure

## Relevant issues

<!-- e.g. "Fixes #123" -->

## Checklist

<!-- If this checklist is deleted from the PR submission it will be immediately closed -->

- [ ] I understand the code I am submitting.
- [ ] I have added unit tests that prove my fix/feature works
- [ ] I have run this code locally and verified it fixes the issue.
- [ ] New and existing tests pass locally
- [ ] Documentation was updated where necessary
- [ ] I have read and followed the [contribution guidelines](https://github.com/mozilla-ai/any-llm/blob/main/CONTRIBUTING.md)
- [ ] **AI Usage:**
  - [ ] No AI was used.
  - [ ] AI was used for drafting/refactoring.
  - [ ] This is fully AI-generated.

## AI Usage Information

<!-- We welcome the use of AI to aid in contribution! Optional: We're interested in hearing about your setup. What LLM are you using (e.g. Opus 4.5, GPT-5, Minimax), and which tooling (Claude Code, VsCode, OpenCode, etc) -->

- AI Model used:
- AI Developer Tool used:
- Any other info you'd like to share:

When answering questions by the reviewer, please respond yourself; do not copy/paste the reviewer comments into an AI system and paste back its answer. We want to discuss with you, not your AI :)
1 parent 8e9719e commit 8bf5a41

File tree

1 file changed: +2 −2 lines changed

tests/conftest.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -66,7 +66,7 @@ def provider_model_map() -> dict[LLMProvider, str]:
         LLMProvider.LMSTUDIO: "google/gemma-3n-e4b",  # You must have LM Studio running and the server enabled
         LLMProvider.VLLM: "Qwen/Qwen2.5-0.5B-Instruct",
         LLMProvider.COHERE: "command-a-03-2025",
-        LLMProvider.CEREBRAS: "llama-3.3-70b",
+        LLMProvider.CEREBRAS: "llama3.1-8b",
         LLMProvider.HUGGINGFACE: "huggingface/tgi",  # This is the syntax used in `litellm` when using HF Inference Endpoints (https://docs.litellm.ai/docs/providers/huggingface#dedicated-inference-endpoints)
         LLMProvider.BEDROCK: "amazon.nova-lite-v1:0",
         LLMProvider.SAGEMAKER: "<sagemaker_endpoint_name>",
@@ -95,7 +95,7 @@ def provider_image_model_map(provider_model_map: dict[LLMProvider, str]) -> dict
         LLMProvider.NEBIUS: "openai/gpt-oss-20b",
         LLMProvider.OPENROUTER: "google/gemini-2.5-flash-lite",
         LLMProvider.OLLAMA: "llava-phi3",  # Fast vision model compatible with OpenAI format
-        LLMProvider.FIREWORKS: "accounts/fireworks/models/qwen2p5-vl-32b-instruct",
+        LLMProvider.FIREWORKS: "accounts/fireworks/models/kimi-k2p5",
         LLMProvider.BEDROCK: "anthropic.claude-3-haiku-20240307-v1:0",  # Claude 3 Haiku with vision support
     }
```
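For context, the fixtures being edited map each `LLMProvider` to a default model used by the integration tests. A minimal, self-contained sketch of that shape, using a hypothetical stand-in enum (the real `LLMProvider` enum lives in the any-llm package and covers many more providers):

```python
from enum import Enum


class LLMProvider(Enum):
    """Hypothetical stand-in for any-llm's LLMProvider enum (illustration only)."""
    CEREBRAS = "cerebras"
    FIREWORKS = "fireworks"


def provider_model_map() -> dict[LLMProvider, str]:
    """Mirror of the updated text-model mapping for the two providers this commit touches."""
    return {
        LLMProvider.CEREBRAS: "llama3.1-8b",
        LLMProvider.FIREWORKS: "accounts/fireworks/models/kimi-k2p5",
    }
```

Tests can then look up the model for the provider under test, so swapping a deprecated model (as this commit does) is a one-line change in the mapping rather than an edit across every test.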

0 commit comments