
LiteLLM ignores LLMRequest.model in generate_content_async #3065

@markwaddle

Description


Describe the bug
google.adk.models.LiteLLM.generate_content_async ignores the model specified on the llm_request: LLMRequest argument and always uses self.model. This prevents scenarios where, for example, an agent callback decides to use a specific model and updates llm_request.model accordingly.

To Reproduce
Steps to reproduce the behavior:

  1. Declare an LLMAgent, specifying a LiteLLM instance as the model with a specific model name, e.g. LLMAgent(..., model=LiteLLM(model="openai/gpt5"), ...).
  2. In an agent callback, for example before_model_callback, set llm_request.model to a different value, e.g. llm_request.model = "openai/gpt-nano".
  3. Invoke the agent.

Expected behavior
LiteLLM should use the model provided by the llm_request. Instead, it is fixed to the model it was initialized with.

Model Information:

  • Are you using LiteLLM: Yes

Related line of code:

"model": self.model,

Metadata

Assignees: no one assigned
Labels: bot triaged ([Bot] This issue is triaged by ADK bot), models ([Component] Issues related to model support)
