QUESTION: It seems that LiteLLM does not handle the reasoning_content of the model. #1788

@BreezeTwinesJade

Description

When connecting to a model through LiteLLM, I noticed that it doesn't seem to support outputting the model's reasoning_content. Here is my code:

import os

from google.adk.agents import Agent
from google.adk.agents.run_config import RunConfig, StreamingMode
from google.adk.models.lite_llm import LiteLlm
from google.adk.planners import BuiltInPlanner
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
from google.genai.types import ThinkingConfig

llm = LiteLlm(
    model=os.getenv("BASE_MODEL"),
    api_base=os.getenv("BASE_URL"),
    api_key=os.getenv("API_KEY"),
)

base_planner = BuiltInPlanner(thinking_config=ThinkingConfig(include_thoughts=True))

agent = Agent(
    name="simple_agent",
    model=llm,
    instruction="You are a helpful Chinese-language assistant",
    description="Answers users' Python programming questions",
    planner=base_planner
)

session_service = InMemorySessionService()

session = session_service.create_session_sync(
    app_name="adk_project",
    user_id="1",
    session_id="1"
)

runner = Runner(
    app_name="adk_project",
    agent=agent,
    session_service=session_service,
)

if __name__ == '__main__':
    content = types.Content(role="user", parts=[
        types.Part(text="I want to call an external service over HTTP. What do you suggest? Start your answer with 'After the following reasoning:'")])
    print("Calling the model")

    for event in runner.run(user_id="1", session_id="1", new_message=content,
                            run_config=RunConfig(streaming_mode=StreamingMode.SSE)):
        print(event)
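When a model's thoughts are surfaced through ADK, each part of an event's content can carry a boolean flag marking it as reasoning text (mirroring the `thought` field on `google.genai` `Part`). The sketch below shows how one might separate thought parts from answer parts; `split_thoughts` is a hypothetical helper written for this issue, and the `SimpleNamespace` objects are stand-ins for real parts, not the confirmed ADK API:

```python
from types import SimpleNamespace

def split_thoughts(parts):
    """Split part objects into (thought_texts, answer_texts).

    A part is treated as reasoning when its `thought` attribute is truthy,
    matching the convention of google.genai's Part.thought flag.
    """
    thoughts, answers = [], []
    for p in parts:
        if getattr(p, "thought", False):
            thoughts.append(p.text)
        else:
            answers.append(p.text)
    return thoughts, answers

# Stand-in parts mimicking the shape of event.content.parts:
parts = [
    SimpleNamespace(text="First, compare HTTP client options.", thought=True),
    SimpleNamespace(text="Use the requests library.", thought=False),
]
print(split_thoughts(parts))
# → (['First, compare HTTP client options.'], ['Use the requests library.'])
```

If LiteLLM never maps the backend's reasoning into such thought-flagged parts, this filter would always return an empty `thoughts` list, which is the symptom described above.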

Reading the response-parsing code in the source, it seems that LiteLLM does not wrap the reasoning_content returned by the model. Is there a problem with my usage, or is it indeed not supported?
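For context, reasoning-capable, OpenAI-compatible backends typically return the reasoning text in a separate, non-standard `reasoning_content` field next to `content` in the chat completion message. The dict below is hand-written sample data illustrating that shape, not a recorded response; it shows why an integration that only reads `content` silently drops the reasoning:

```python
# Hand-written sample payload in the OpenAI-compatible shape used by
# reasoning models; values are illustrative only.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": "Use an HTTP client such as requests.",
            "reasoning_content": "The user wants to call an external service over HTTP...",
        }
    }]
}

message = response["choices"][0]["message"]
# A wrapper that only propagates `content` loses the reasoning text:
print(message["content"])
# The reasoning lives in a sibling field that must be read explicitly:
print(message.get("reasoning_content"))
```

If LiteLLM's response wrapping only copies `content` into the event parts, the `reasoning_content` field would be discarded exactly as observed.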

Metadata

Labels: bot triaged ([Bot] this issue is triaged by ADK bot), models ([Component] issues related to model support)