add unified tool call block #19947
Conversation
llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/responses.py
llama-index-integrations/llms/llama-index-llms-anthropic/pyproject.toml
```diff
-blocks=content,
+blocks=[
+    *content,
+    *_anthropic_tool_call_to_tool_call_block(
```
I guess this assumes that tool calls always come after content (I think this is true? just flagging)
Yeah that was my assumption: first there is the "explanation" of the tool call and then we have the tool call itself
Just to be safe, it might be best to just match the order that the content came in, rather than assuming 👍🏻
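The suggestion above (preserve arrival order instead of assuming text-then-tool-calls) can be sketched as follows. The block classes here are simplified stand-ins for llama-index's types, purely for illustration:

```python
from dataclasses import dataclass, field
from typing import Any

# Simplified stand-ins for llama-index's content block types (illustration only).
@dataclass
class TextBlock:
    text: str

@dataclass
class ToolCallBlock:
    tool_name: str
    tool_kwargs: dict = field(default_factory=dict)

def blocks_in_source_order(anthropic_content: list[dict]) -> list[Any]:
    """Convert Anthropic-style content items to blocks, keeping the order
    they arrived in rather than assuming tool calls always come last."""
    blocks: list[Any] = []
    for item in anthropic_content:
        if item["type"] == "text":
            blocks.append(TextBlock(text=item["text"]))
        elif item["type"] == "tool_use":
            blocks.append(
                ToolCallBlock(tool_name=item["name"], tool_kwargs=item.get("input", {}))
            )
    return blocks
```

This way a response that interleaves text and tool calls round-trips without reordering.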
...ntegrations/llms/llama-index-llms-bedrock-converse/llama_index/llms/bedrock_converse/base.py
llama-index-integrations/llms/llama-index-llms-ollama/llama_index/llms/ollama/base.py
@logan-markewich implemented all changes as you said. I tried to make OpenAI as backward compatible as possible to avoid breakages.
...-index-integrations/llms/llama-index-llms-google-genai/llama_index/llms/google_genai/base.py
I most probably messed something up with the merge 😬, will be fixing with the next commit
```python
                type="tool_use",
            )
        )
```
I'm going to add back this handling (in each LLM), just to ensure that old chat histories don't suddenly stop working. No harm in keeping this here
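A minimal sketch of the backward-compatible extraction described above: prefer new-style tool call blocks, but fall back to the legacy location so old chat histories keep working. `ChatMessage` here is a simplified stand-in, not the real class:

```python
from dataclasses import dataclass, field

# Simplified stand-in for llama-index's ChatMessage (illustration only).
@dataclass
class ChatMessage:
    blocks: list = field(default_factory=list)
    additional_kwargs: dict = field(default_factory=dict)

def extract_tool_calls(message: ChatMessage) -> list:
    """Prefer new-style ToolCallBlock entries in message.blocks; fall back
    to the legacy additional_kwargs['tool_calls'] for old chat histories."""
    calls = [b for b in message.blocks if getattr(b, "block_type", "") == "tool_call"]
    if not calls:
        calls = message.additional_kwargs.get("tool_calls", [])
    return calls
```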
bedrock is missing the code to take ToolCallBlock -> bedrock format
(when you add this, lets keep the old way too to avoid breaking old convo histories?)
Gemini seems a little broken.
Testing with an agent, I get this
Chat turn one:

```python
[
    ChatMessage(role=<MessageRole.USER: 'user'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Hello! I am a new user!')]),
    ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={'thought_signatures': []}, blocks=[ToolCallBlock(block_type='tool_call', tool_call_id=None, tool_name='get_the_greeting', tool_kwargs={}), TextBlock(block_type='text', text=''), ThinkingBlock(block_type='thinking', content='', num_tokens=None, additional_information={})]),
    ChatMessage(role=<MessageRole.TOOL: 'tool'>, additional_kwargs={'tool_call_id': ''}, blocks=[TextBlock(block_type='text', text='Good day sir, top of the morning to ye.')]),
]
```
Lots of empty blocks? And from here, it kind of goes into a tail-spin calling the same tool over and over again. Seems sus. This is using the agent script I linked in slack
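One way to address the empty blocks would be to filter out payload-less blocks before building the message. This is a hedged sketch with simplified stand-in classes, not a claim about where the actual fix belongs:

```python
from dataclasses import dataclass

# Simplified stand-ins for llama-index's block types (illustration only).
@dataclass
class TextBlock:
    text: str

@dataclass
class ThinkingBlock:
    content: str

@dataclass
class ToolCallBlock:
    tool_name: str

def drop_empty_blocks(blocks: list) -> list:
    """Remove blocks with no payload, e.g. the TextBlock(text='') and
    ThinkingBlock(content='') seen in the Gemini turn above."""
    kept = []
    for block in blocks:
        if isinstance(block, TextBlock) and not block.text:
            continue
        if isinstance(block, ThinkingBlock) and not block.content:
            continue
        kept.append(block)
    return kept
```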
```diff
-new_messages.append(AssistantMessage(content=chunks, tool_calls=tool_calls))
+new_messages.append(AssistantMessage(content=chunks))
```
Looking at the Mistral docs, I don't think this works: `content` can't include tool calls, they have to be attached separately.
Adding a tool call block and support for it in: