Different LLMs expose varying APIs. For example, OpenAI's o-series models require the "max_completion_tokens" parameter and do not support "temperature", but the handling for this is currently hard-coded. When I tested GPT-5, it followed the same conventions as the o-series, so we had to add a model-specific rule to accommodate it. If users rely on self-hosted models or on APIs from specialized providers, managing the LLM engines could become quite messy. I suggest that OpenEvolve integrate a unified LLM proxy such as LiteLLM, which can automatically translate requests into the format each provider expects.
therealpygon