Replies: 2 comments
-
You can use a single model by just putting its weight to 1.0. It is currently not possible to use different api_base values, but you can use an existing proxy or router like OptiLLM - https://github.com/codelion/optillm - to route between models as needed behind the same URL.
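As a rough sketch of what that could look like in config.yml (the key names here are assumptions, not taken from the OpenEvolve docs - check openevolve/config.py for the real schema), a single model carrying all the weight can point at a local OptiLLM proxy that does the actual routing:

```yaml
# Sketch only: field names are assumed; verify against openevolve/config.py.
llm:
  # One shared OpenAI-compatible endpoint: a local OptiLLM proxy that
  # routes requests to the real backends behind the scenes.
  api_base: "http://localhost:8000/v1"
  api_key: "optillm"
  models:
    # Giving a single model weight 1.0 effectively disables model mixing.
    - name: "my-primary-model"
      weight: 1.0
```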
-
A different api_base should work for each model in the config.yml; just specify it per model. The top-level setting is only used as a fallback when a model does not specify its own (cf. the LLMModelConfig dataclass: openevolve/config.py line 18, openevolve/llm/openai.py line 40, and openevolve/config.py line 99, all at commit 5d09222).
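To make that concrete for the "one local, one online" case asked about in this thread, a sketch along these lines (model names, ports, and exact key names are assumptions based on the LLMModelConfig fields referenced above) gives each model its own endpoint, with the top-level values acting as the fallback:

```yaml
# Sketch only: assumes per-model api_base/api_key as in the LLMModelConfig dataclass.
llm:
  # Top-level values are used when a model does not override them.
  api_base: "https://api.openai.com/v1"
  api_key: "YOUR_OPENAI_KEY"
  models:
    # Local model served on this machine (e.g. by vLLM or llama.cpp).
    - name: "local-model"
      weight: 0.7
      api_base: "http://localhost:8000/v1"
      api_key: "not-needed"
    # Hosted model, falling back to the top-level api_base and api_key.
    - name: "gpt-4o-mini"
      weight: 0.3
```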
-
My current issue is that my GPU memory is only sufficient to deploy one model locally, but the config requires two. How should I configure it to use one model locally and one online? As far as I can tell, only one api_base can be set in the config. Or how can I use two models with different api_bases?