[Bug] TXTSearchTool fails with ValidationError for OLLAMA_MODEL_NAME after recent update of crewai #466
Description
After updating to the latest version of crewai (0.201.0) and crewai-tools (0.74.1), initializing TXTSearchTool with a custom Ollama embedding model now fails with a pydantic.ValidationError. The tool is unable to correctly parse the Ollama configuration, even when it conforms to the library's own type definitions.
This appears to be a regression, as this functionality was working correctly in previous versions.
Steps to Reproduce
- Create a config dictionary specifying an Ollama embedding model.
- Initialize TXTSearchTool with this configuration.
from crewai_tools import TXTSearchTool

# Configuration for the embedding model.
# This structure with 'model_name' and 'url' aligns with the crewai source code definitions.
rag_config = {
    "llm": {
        "provider": "vllm",
        "config": {
            "model": "openai/gpt-oss-120b",
            "base_url": "https://<your-llm-url>",
        },
    },
    "embedding_model": {
        "provider": "ollama",
        "config": {
            "model_name": "nomic-embed-text",
            "url": "https://<your-embedding-model-url>",
        },
    },
}

# This line now throws a validation error.
# Assume 'dummy_file.txt' is any valid text file.
try:
    tool = TXTSearchTool(txt='dummy_file.txt', config=rag_config)
except Exception as e:
    print(e)

Expected Behavior
The TXTSearchTool should be initialized successfully without any validation errors, using the provided Ollama configuration for the embedding model.
Actual Behavior
The initialization fails with a Pydantic validation error. The traceback shows that the validator for the Ollama provider is receiving an empty dictionary, causing it to fail and report a missing required field.
1 validation error for TXTSearchTool
OLLAMA_MODEL_NAME
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.11/v/missing

Environment
crewai version: 0.201.0
crewai-tools version: 0.74.1
Python version: 3.13.7
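The error shape above can be reproduced in isolation. The following sketch (not crewai's actual code; `FakeOllamaProvider` is a hypothetical stand-in for the real provider model) shows that validating an empty dict against a Pydantic v2 model with a required `OLLAMA_MODEL_NAME` field produces exactly the reported message:

from pydantic import BaseModel, ValidationError

class FakeOllamaProvider(BaseModel):
    # Hypothetical stand-in for crewai's Ollama provider settings model.
    OLLAMA_MODEL_NAME: str

try:
    # Validate an empty dict, as the traceback suggests is happening internally.
    FakeOllamaProvider.model_validate({})
except ValidationError as e:
    err = e.errors()[0]
    print(err["type"], err["loc"], err["input"])  # missing ('OLLAMA_MODEL_NAME',) {}

This matches `Field required [type=missing, input_value={}, input_type=dict]` from the real traceback, which supports the theory that the provider never sees the user-supplied config at all.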
Analysis & Additional Context
- The error points to an issue with how the config dictionary is being passed from the crewai-tools layer to the core crewai embedding provider.
- The error message input_value={}, input_type=dict strongly suggests that an empty configuration is being passed to the OllamaProvider Pydantic model in the crewai library.
- The required configuration, as defined in crewai's source code (src/crewai/rag/embeddings/providers/ollama/types.py), expects the keys model_name and url.
- I have tried multiple configuration key variations (model/url, model_name/url, OLLAMA_MODEL_NAME/OLLAMA_BASE_URL) and all of them result in the same validation error.
- Since this was working before the latest release, it seems a change in how the configuration is processed or passed between the tool and the embedding factory has introduced this bug. The TXTSearchTool (via RagTool) no longer successfully hands off its embedding_model config to the underlying provider.
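As an illustration of the suspected failure mode (entirely hypothetical, not crewai's actual code): if the tool layer stores the settings under "embedding_model" but the factory looks them up under a different key, the provider receives an empty dict and Pydantic reports exactly the observed error. `FakeOllamaSettings` and `build_embedder` below are invented names for this sketch:

from pydantic import BaseModel, ValidationError

class FakeOllamaSettings(BaseModel):
    # Hypothetical stand-in for the real provider settings model.
    model_name: str
    url: str

def build_embedder(config: dict) -> FakeOllamaSettings:
    # Hypothetical buggy handoff: the user supplied "embedding_model",
    # but the factory reads a different key and silently falls back
    # to an empty dict.
    provider_cfg = config.get("embedder", {}).get("config", {})
    return FakeOllamaSettings.model_validate(provider_cfg)

rag_config = {
    "embedding_model": {
        "provider": "ollama",
        "config": {"model_name": "nomic-embed-text", "url": "http://localhost:11434"},
    },
}

try:
    build_embedder(rag_config)
except ValidationError as e:
    # Both required fields are reported missing on an empty input dict,
    # matching input_value={} in the real traceback.
    print(len(e.errors()), e.errors()[0]["input"])  # 2 {}

If the real regression follows this pattern, no spelling of the keys on the user side (model/url, model_name/url, OLLAMA_MODEL_NAME/OLLAMA_BASE_URL) can fix it, which is consistent with the variations tried above.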