
Agent llama tools won't work properly #2973

@ixenion


Initial Checks

Description

I'm trying to make the local llama3.1:8b model use a weather tool to suggest clothes to wear.
In the response I'm getting something like:
I can only respond with formatted JSON as per AI model restrictions.
Or:
I'll respond with a JSON object for a function call that best answers your prompt
(though the initial params won't change)

And in (very) rare cases something relevant, like "waterproof shoes".

My setup:
Windows 10, NVIDIA 4070, virtual Ubuntu 24.04 (WSL 2)
I installed the llama LLM with $ ollama pull llama3.1:8b,
and commands like ollama run llama3.1:8b work fine, but the tool call does not. Here is an example:

Example Code

import asyncio
from typing import Any

from pydantic import BaseModel, Field
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ollama import OllamaProvider

class AgentOutput(BaseModel):
    response_text: str = Field(description="Message to the user")


# Initialize the Ollama provider
ollama_provider = OllamaProvider(
    base_url="http://localhost:11434/v1",
)

model = OpenAIChatModel(
    model_name="llama3.1:8b",
    provider=ollama_provider,
)

main_agent = Agent(
    model=model,
    output_type=AgentOutput,
    system_prompt=(
        "You are a smart home assistant. ",
        "Use the available tools when needed. ",
        "When asked about weather, always use the 'get_location_weather' tool."
    ),
    retries=5,
)


@main_agent.tool_plain
async def get_location_weather(
        location: str = Field(description="The location to get weather for"),
        # *args
    ) -> dict[str, Any]:
    """
    Get the current weather for a specific location.

    Args:
        location: Location to check weather
    """
    print("[DEBUG] Tool used.")
    print(f"[DEBUG] Location type: '{type(location)}'")
    print(f"[DEBUG] Location val: '{location}'")
    # In a real implementation, you would call a weather API here
    return {location: f"The weather in {location} is currently rainy with a temperature of 15°C."}


async def main():
    result = await main_agent.run(
        "Check the weather in Paris and suggest what am I need to wear?",
    )
    print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
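One detail worth noting about the snippet (possibly unrelated to the tool failure): the commas inside the system_prompt parentheses make it a tuple of three strings rather than one concatenated string. As far as I can tell, pydantic_ai's Agent accepts a sequence of system prompts, so both spellings are legal, but if a single string was intended, dropping the commas uses Python's adjacent-literal concatenation:

```python
# With commas: a 3-tuple of separate strings.
prompt_tuple = (
    "You are a smart home assistant. ",
    "Use the available tools when needed. ",
    "When asked about weather, always use the 'get_location_weather' tool."
)

# Without commas: adjacent string literals concatenate into one string.
prompt_str = (
    "You are a smart home assistant. "
    "Use the available tools when needed. "
    "When asked about weather, always use the 'get_location_weather' tool."
)

print(type(prompt_tuple).__name__)  # tuple
print(type(prompt_str).__name__)    # str
```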

Python, Pydantic AI & LLM client version

python==3.12.3
pydantic==2.11.9
pydantic-ai==1.0.9
pydantic-ai-slim==1.0.9
pydantic-evals==1.0.9
pydantic-graph==1.0.9
pydantic-settings==2.10.1
pydantic_core==2.33.2
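Not a fix, but one variable that may be worth isolating when reproducing: the tool returns a dict keyed by the location argument itself, so the result schema changes on every call. A hypothetical variant with fixed keys (my assumption being that a stable result shape is easier for a small model to work with; this is not something confirmed by the pydantic_ai docs):

```python
from typing import Any


def get_location_weather(location: str) -> dict[str, Any]:
    # Fixed "location"/"forecast" keys instead of keying the dict by the
    # location value, so every tool result has the same shape (hypothetical tweak).
    return {
        "location": location,
        "forecast": f"The weather in {location} is currently rainy with a temperature of 15°C.",
    }


result = get_location_weather("Paris")
print(result["forecast"])
```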
