
[BUG] Newer OSS models (Olmo, Nemotron-3-nano) ignore system prompts and fail tool calling #4117

@Killian-fal

Description


When using newer open-source models such as olmo or nemotron-3-nano via Ollama, agents fail to respect the system prompt injected by CrewAI. They either format tool calls incorrectly or ignore the instructions entirely, breaking the execution flow.
The prompt structure generated by CrewAI may be incompatible with how these specific models expect system instructions, or the models may simply hallucinate instead of following the required format.

Steps to Reproduce

  1. Set up a simple CrewAI agent with a basic tool
  2. Use olmo or nemotron-3-nano (https://ollama.com/library/nemotron-3-nano)
from crewai import LLM, Agent

llm = LLM(
    model="ollama/nemotron-3-nano:30b",
    base_url=AI_URL,  # or your local Ollama instance
    extra_body={"keep_alive": "1h"},
    temperature=0.5,
)

agent = Agent(
    role='Tester',
    goal='Use the tool',
    backstory='You are a testing agent.',
    tools=[my_simple_tool],
    llm=llm,
    verbose=True,
)
  3. Run the crew
  4. Observe the bug

Expected behavior

The agent should follow the system prompt, understand the available tools, and execute tool calls with the correct syntax, just as it does with standard models like Llama 3 or Mistral.

Screenshots/Code snippets

(screenshot omitted)

Then, when tool calling is attempted:

(screenshot omitted)

=> Wrong usage
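For context on what "wrong usage" looks like: the agent executor can only act on a structured tool call (a JSON object naming the tool and its arguments), while the failing models tend to emit free-form prose instead. A minimal sketch of that failure mode, assuming an illustrative JSON layout (the key names and the `query` argument are hypothetical, not CrewAI's exact internal format):

```python
import json

# A parseable tool call: a JSON object naming the tool and its arguments.
well_formed = '{"name": "my_simple_tool", "arguments": {"query": "test"}}'

# What the failing models tend to produce: prose around (or instead of)
# the structured call, which is not valid JSON at all.
malformed = 'Sure! I will now use my_simple_tool with query test.'

def try_parse(raw: str):
    """Return the parsed tool call, or None if the output is unusable."""
    try:
        call = json.loads(raw)
        return call if isinstance(call, dict) and "name" in call else None
    except json.JSONDecodeError:
        return None

print(try_parse(well_formed))  # the parsed dict, ready to dispatch
print(try_parse(malformed))    # None -> broken execution flow
```

When parsing yields nothing usable, the executor has no tool call to dispatch, which matches the broken flows described above.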

Operating System

Ubuntu 20.04

Python Version

3.10

crewAI Version

1.7.1

crewAI Tools Version

1.7.1

Virtual Environment

Venv

Evidence

The run finishes with a `pydantic_core._pydantic_core.ValidationError` raised while validating the `output_pydantic` result.
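That error is consistent with the model returning unstructured text where `output_pydantic` expects JSON matching a schema. A minimal sketch with a hypothetical one-field model (the `ToolResult` class and its `answer` field are placeholders, not the schema from the actual crew):

```python
from pydantic import BaseModel, ValidationError

class ToolResult(BaseModel):
    # Hypothetical schema standing in for the real output_pydantic model.
    answer: str

# A non-compliant model returns prose instead of JSON, which fails
# validation in the same way as the observed traceback.
try:
    ToolResult.model_validate_json('I used the tool and everything is fine!')
except ValidationError as exc:
    print(f"validation failed: {exc.error_count()} error(s)")
```

So the `ValidationError` here is likely a downstream symptom of the prompt-following problem, not a separate bug.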

Possible Solution

None

Additional context

Nothing, sorry.
