I am experiencing a frustrating issue where trying to use OpenRouter models for metrics evaluation fails or falls back to OpenAI defaults (like `gpt-4.1`), even when configuring it via the CLI (`deepeval set-openrouter`) or environment variables.
Upon inspecting the source code (specifically the utility functions handling model initialization and validation), I noticed that OpenRouter is entirely missing from the core model routing logic. Specifically:
- `OpenRouterModel` is not imported from `deepeval.models`.
- There is no `should_use_openrouter()` function checking the environment/settings.
- `initialize_model()` does not include OpenRouter in its fallback/routing tree.
- Most importantly, `is_native_model()` does not include `isinstance(model, OpenRouterModel)`. Because of this, metrics don't recognize OpenRouter as a native model, setting `using_native_model = False` and triggering unexpected behaviors or OpenAI API key errors.
### Describe the solution you'd like
I would like `OpenRouterModel` to be fully integrated as a first-class citizen in the metrics utility code alongside Gemini, Anthropic, Ollama, etc.
Specifically, the following changes would resolve the issue:
- Import `OpenRouterModel` alongside the other models.
- Add a `should_use_openrouter()` helper checking for `USE_OPENROUTER_MODEL` flags.
- Update `initialize_model()` to return `OpenRouterModel(model=model), True`.
- Update `is_native_model()` to include `or isinstance(model, OpenRouterModel)`.
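For the second item, a minimal sketch of what `should_use_openrouter()` could look like, following the pattern the other providers use. This version only reads the environment; the exact truthy values and any settings-object integration deepeval expects are assumptions here:

```python
import os

def should_use_openrouter() -> bool:
    # Hypothetical helper mirroring the existing provider checks; the flag
    # name USE_OPENROUTER_MODEL comes from the proposal above, but the set
    # of accepted truthy values is an assumption.
    return os.getenv("USE_OPENROUTER_MODEL", "").strip().lower() in ("1", "true", "yes")
```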
### Describe alternatives you've considered
Currently, the only workaround is disguising the OpenRouter API key as an `OPENAI_API_KEY` and manually injecting a custom `GPTModel` instance into every single metric in Python code. This defeats the purpose of the CLI configuration and clutters the test files.
### Additional context
Based on the provided codebase, here is a quick look at where the gap occurs in `is_native_model`:
```python
def is_native_model(
    model: Optional[Union[str, DeepEvalBaseLLM]] = None,
) -> bool:
    if (
        isinstance(model, GPTModel)
        or isinstance(model, AnthropicModel)
        or isinstance(model, AzureOpenAIModel)
        or isinstance(model, OllamaModel)
        or isinstance(model, LocalModel)
        or isinstance(model, GeminiModel)
        or isinstance(model, AmazonBedrockModel)
        or isinstance(model, LiteLLMModel)
        or isinstance(model, KimiModel)
        or isinstance(model, GrokModel)
        or isinstance(model, DeepSeekModel)
        # Missing: or isinstance(model, OpenRouterModel)
    ):
        return True
    else:
        return False
```
Adding it here and in `initialize_model()` would make the OpenRouter integration work seamlessly!
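To illustrate the fix, here is a self-contained sketch of the corrected check. The empty stub classes below only stand in for the real imports from `deepeval.models` (the tuple is trimmed to two entries for brevity), and collapsing the long `or`-chain into one `isinstance` call with a tuple is a suggested cleanup, not how the current code is written:

```python
# Stubs standing in for the real deepeval.models imports.
class GPTModel: ...
class OpenRouterModel: ...
# ... the remaining native model classes are elided ...

# isinstance accepts a tuple of classes, so the long or-chain
# collapses into a single membership check.
NATIVE_MODEL_CLASSES = (GPTModel, OpenRouterModel)  # plus the others listed above

def is_native_model(model=None) -> bool:
    return isinstance(model, NATIVE_MODEL_CLASSES)
```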