Description
When using newer/specific open-source models such as olmo or nemotron-3-nano via Ollama, the agents fail to respect the system prompt injected by CrewAI: they either format tool calls incorrectly or ignore the instructions entirely, leading to broken execution flows.
It may be that the prompt structure generated by CrewAI is not compatible with how these specific models expect system instructions, or that the models simply hallucinate instead of following the required format.
Steps to Reproduce
- Set up a simple CrewAI agent with a basic tool (a fuller sketch follows these steps)
- Use olmo or nemotron-3-nano (https://ollama.com/library/nemotron-3-nano)
```python
from crewai import Agent, LLM

llm = LLM(
    model="ollama/nemotron-3-nano:30b",
    base_url=AI_URL,  # or your local Ollama instance
    extra_body={"keep_alive": "1h"},
    temperature=0.5,
)

agent = Agent(
    role='Tester',
    goal='Use the tool',
    backstory='You are a testing agent.',
    tools=[my_simple_tool],
    llm=llm,
    verbose=True,
)
```
- Run the crew
- Observe the bug
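The steps reference a tool and a crew run that are not shown above. Below is a minimal sketch of those missing pieces, assuming a trivial echo tool; the body of `my_simple_tool`, the task description, and the `AI_URL` value are illustrative assumptions, not taken from the original setup:

```python
from crewai import Crew, Task
from crewai.tools import tool

AI_URL = "http://localhost:11434"  # assumed local Ollama endpoint

@tool("Echo tool")
def my_simple_tool(text: str) -> str:
    """Echo the input text back; a stand-in for any basic tool."""
    return text

# `agent` is the Agent built in the snippet above
task = Task(
    description="Call the echo tool with the text 'hello' and report its output.",
    expected_output="The echoed text.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()  # with olmo/nemotron-3-nano the tool-call format breaks here
```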
Expected behavior
The agent should follow the system prompt, recognize the available tools, and execute tool calls with the correct syntax, just as it does with standard models such as Llama 3 or Mistral.
Screenshots/Code snippets
When the agent then attempts a tool call, the model emits the call with the wrong usage/syntax.
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
1.7.1
crewAI Tools Version
1.7.1
Virtual Environment
Venv
Evidence
The run finishes with a `pydantic_core._pydantic_core.ValidationError` raised when validating the task's `output_pydantic` model.
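For context, the error surfaces where the task's structured output is validated. A minimal sketch, assuming a hypothetical `ToolResult` schema (the actual model from the run is not shown in the report):

```python
from pydantic import BaseModel
from crewai import Task

class ToolResult(BaseModel):
    answer: str  # hypothetical field; the real schema is not in the report

task = Task(
    description="Call the echo tool and report its output.",
    expected_output="A JSON object with an 'answer' field.",
    agent=agent,  # the agent defined in the reproduction snippet
    output_pydantic=ToolResult,
)
# When the model ignores the formatting instructions, its free-form final
# answer cannot be parsed into ToolResult and the ValidationError is raised.
```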
Possible Solution
None
Additional context
Nothing else to add, sorry.