Replies: 3 comments 1 reply
-
I'm not 100% sure about the status of tool-calling support on Groq for Llama 3.1, since it has some minor prompt differences from Llama 3.
-
I'm having the same issue. When I tried the "Customer Support" example with a llama3.1-70b model deployed on a local vLLM instance, it doesn't call any tool, but a different model does. Given the prompt:

> Am I allowed to update my flight to something sooner? I want to leave later today.

the model just answers in plain text instead of calling a tool:

> I'm glad you're eager to get going. Let me see what I can do to help you with that. First, I need to confirm some details. Can you please tell me your departure and destination cities, or at least the route you're flying? This will help me check the availability of earlier flights. Also, I'll need to check our airline's rebooking policies to see if any changes can be made. According to our policy, we do allow changes to flights, but there might be some restrictions or fees applicable. Additionally, as your flight is scheduled to depart later today, I'll need to verify if there are any available seats on earlier flights. Please give me a moment to check on this. (Please hold for a moment while I check the flight schedules...)
-
Groq's llama-3.1-70b-versatile is broken on tool calling, but llama-3.1-8b-instant works. I hope 70b will work soon!
-
When I swap Llama 3 for Llama 3.1, the function calling breaks down:
```python
# Import relevant functionality
from langchain_groq import ChatGroq
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.prebuilt import create_react_agent
from dotenv import load_dotenv, find_dotenv

# Load environment variables
load_dotenv(find_dotenv())

# Create the agent
# llama3-70b-8192 works; llama-3.1-70b-versatile breaks
model = ChatGroq(model_name="llama-3.1-70b-versatile")
search = TavilySearchResults(max_results=2)
tools = [search]
agent_executor = create_react_agent(model, tools)

# Use the agent
config = {"configurable": {"thread_id": "abc1234"}}
for chunk in agent_executor.stream(
        {"messages": [HumanMessage(content="whats the weather in Den Haag, netherlands?")]}):
    print(chunk)
    print("----")
```
and get this error message:
```
BadRequestError                           Traceback (most recent call last)
Cell In[11], line 22
     18 # Use the agent
     19 config = {"configurable": {"thread_id": "abc1234"}}
---> 22 for chunk in agent_executor.stream(
     23     {"messages": [HumanMessage(content="whats the weather in Den Haag, netherlands?")]}):
     24     print(chunk)
     25     print("----")

File c:\Users\justa\OneDrive\Desktop\Developer\LLM-Enhanced-ATM\venv311\Lib\site-packages\langgraph\pregel\__init__.py:948, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug)
    945 del fut, task
    947 # panic on failure or timeout
--> 948 _panic_or_proceed(done, inflight, loop.step)
    949 # don't keep futures around in memory longer than needed
    950 del done, inflight, futures

File c:\Users\justa\OneDrive\Desktop\Developer\LLM-Enhanced-ATM\venv311\Lib\site-packages\langgraph\pregel\__init__.py:1349, in _panic_or_proceed(done, inflight, step, timeout_exc_cls)
   1347 inflight.pop().cancel()
   1348 # raise the exception
-> 1349 raise exc
   1351 if inflight:
   1352     # if we got here means we timed out
   1353     while inflight:
   1354         # cancel all pending tasks
...
(...)
    994 stream_cls=stream_cls,
    995 )

BadRequestError: Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '\n{\n "tool_call": {\n "id": "pending",\n "type": "function",\n "function": {\n "name": "weather"\n },\n "parameters": {\n "city": "Den Haag",\n "country": "Netherlands"\n }\n }\n}\n'}}
```
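The `failed_generation` payload hints at why Groq rejects it: the model wrapped everything under a `tool_call` key and put the parameters in a sibling `parameters` key, instead of the OpenAI-style shape where `function` carries an `arguments` field holding a JSON string. A rough validator sketch (the expected shape here is my reading of the OpenAI tool-call format, not anything Groq documents) shows the mismatch:

```python
import json

def is_valid_tool_call(obj: dict) -> bool:
    """Check an object against the OpenAI-style tool-call shape:
    {"id": ..., "type": "function",
     "function": {"name": ..., "arguments": "<JSON string>"}}."""
    fn = obj.get("function")
    if obj.get("type") != "function" or not isinstance(fn, dict):
        return False
    if not isinstance(fn.get("name"), str):
        return False
    try:
        json.loads(fn.get("arguments", ""))  # arguments must be a JSON string
    except (TypeError, json.JSONDecodeError):
        return False
    return True

# What llama-3.1-70b-versatile actually produced (from the error above):
failed = json.loads("""
{"tool_call": {"id": "pending", "type": "function",
 "function": {"name": "weather"},
 "parameters": {"city": "Den Haag", "country": "Netherlands"}}}
""")

# The inner object is malformed: "parameters" sits outside "function",
# and "function" has no "arguments" JSON string.
print(is_valid_tool_call(failed["tool_call"]))   # False

# A well-formed equivalent in the shape the API expects:
ok = {"id": "call_1", "type": "function",
      "function": {"name": "weather",
                   "arguments": '{"city": "Den Haag", "country": "Netherlands"}'}}
print(is_valid_tool_call(ok))   # True
```

So the 70b model seems to get the gist of tool calling right but emits a schema Groq's parser won't accept, which would explain why adjusting the prompt rarely helps.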