Description
When a Task is configured with both output_pydantic and a guardrail, the guardrail_max_retries parameter does not work as expected. Specifically, if the Agent's first retry produces a result that fails Pydantic validation (e.g., malformed JSON), the system crashes immediately instead of moving on to the next retry attempt, so in failure scenarios the effective retry limit is always one.
Of course, the probability of the agent failing to satisfy output_pydantic on two consecutive executions is very low, but it should not be zero. This is just my humble opinion.
Steps to Reproduce
- Set up a Task with output_pydantic=SomeModel and guardrail_max_retries=3.
- Add a guardrail that validates the format.
- Provide an input that causes the Agent to consistently hallucinate or produce invalid JSON.
- Observe that the process terminates with a Pydantic error after the first retry, instead of attempting 3 retries as configured (a minimal reproduction sketch follows this list).
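A minimal reproduction sketch. The model fields, agent definition, prompt text, and guardrail below are illustrative assumptions rather than my real project code; the guardrail follows crewAI's documented (bool, value_or_message) return convention, and an LLM/API key is assumed to be configured in the environment.

from pydantic import BaseModel
from crewai import Agent, Crew, Task

class SomeModel(BaseModel):
    title: str
    summary: str

def format_guardrail(output):
    # Reject missing or too-short structured output so the agent is
    # forced into the retry path that triggers the bug.
    if output.pydantic is None or len(output.pydantic.summary) < 200:
        return (False, "Return valid JSON matching SomeModel with a summary of at least 200 characters.")
    return (True, output.pydantic)

agent = Agent(
    role="Summarizer",
    goal="Summarize documents as structured JSON",
    backstory="An assistant that occasionally emits malformed JSON.",
)

task = Task(
    description="Summarize the provided document as JSON.",
    expected_output="A JSON object with 'title' and 'summary' fields.",
    agent=agent,
    output_pydantic=SomeModel,
    guardrail=format_guardrail,
    guardrail_max_retries=3,
)

crew = Crew(agents=[agent], tasks=[task])
# If a retry produces malformed JSON, kickoff() raises a pydantic
# ValidationError instead of consuming the remaining retries.
result = crew.kickoff()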
Expected behavior
The task should catch any parsing/validation errors raised during the _export_output phase inside the loop, treat them as a failed attempt, and proceed to the next retry until guardrail_max_retries is reached.
Screenshots/Code snippets
The issue resides in the _invoke_guardrail_function method within crewai/task.py. Inside the retry loop, the code executes the following logic:
# File: crewai/task.py
# Method: _invoke_guardrail_function
for attempt in range(max_attempts):
    # ... (omitted guardrail check) ...

    # 1. The Agent is asked to regenerate the output
    result = agent.execute_task(
        task=self,
        context=context,
        tools=tools,
    )

    # 2. CRASH POINT: this call throws an unhandled exception if 'result' is invalid JSON
    pydantic_output, json_output = self._export_output(result)

    # 3. Execution never reaches here if step 2 fails
    task_output = TaskOutput(
        # ...
        pydantic=pydantic_output,
        json_dict=json_output,
        # ...
    )
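For context, the crash originates in Pydantic itself: _export_output hands the raw agent text to crewAI's converter, which ends up calling model_validate_json (see the converter.py and pydantic frames in the traceback under Evidence). A standalone illustration of why malformed JSON raises rather than quietly returning None, using a hypothetical single-field model:

from pydantic import BaseModel, ValidationError

class DocumentSummaryOutput(BaseModel):
    # Hypothetical field; the real model lives in my project.
    summary: str

malformed = '{"summary": "truncated by the LLM'  # missing closing quote and brace
try:
    DocumentSummaryOutput.model_validate_json(malformed)
except ValidationError as e:
    # The same ValidationError type escapes _invoke_guardrail_function
    # and aborts the remaining retries.
    print(e)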
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
1.6.0
crewAI Tools Version
None
Virtual Environment
Venv
Evidence
Traceback (most recent call last):
  File "F:\project\agent-service\utils\tools.py", line 564, in wrapper
    result = await func(*args, **kwargs)  # async function uses await
  File "F:\project\agent-service\services\crew_components\crews\seal_analysis_crew.py", line 117, in kickoff_async
    result = await self.crew.kickoff_async(inputs=inputs)
  File "C:\Users\lk.conda\envs\agent-service\lib\asyncio\tasks.py", line 304, in __wakeup
    future.result()
  File "C:\Users\lk.conda\envs\agent-service\lib\asyncio\futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Users\lk.conda\envs\agent-service\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "F:\project\agent-service\venv\lib\site-packages\crewai\crew.py", line 755, in kickoff
    result = self._run_sequential_process()
  File "F:\project\agent-service\venv\lib\site-packages\crewai\crew.py", line 995, in _run_sequential_process
    return self._execute_tasks(self.tasks)
  File "F:\project\agent-service\venv\lib\site-packages\crewai\crew.py", line 1103, in _execute_tasks
    task_output = task.execute_sync(
  File "F:\project\agent-service\venv\lib\site-packages\crewai\task.py", line 458, in execute_sync
    return self._execute_core(agent, context, tools)
  File "F:\project\agent-service\venv\lib\site-packages\crewai\task.py", line 591, in _execute_core
    raise e  # Re-raise the exception after emitting the event
  File "F:\project\agent-service\venv\lib\site-packages\crewai\task.py", line 557, in _execute_core
    task_output = self._invoke_guardrail_function(
  File "F:\project\agent-service\venv\lib\site-packages\crewai\task.py", line 943, in _invoke_guardrail_function
    pydantic_output, json_output = self._export_output(result)
  File "F:\project\agent-service\venv\lib\site-packages\crewai\task.py", line 768, in _export_output
    model_output = convert_to_model(
  File "F:\project\agent-service\venv\lib\site-packages\crewai\utilities\converter.py", line 191, in convert_to_model
    return handle_partial_json(
  File "F:\project\agent-service\venv\lib\site-packages\crewai\utilities\converter.py", line 257, in handle_partial_json
    exported_result = model.model_validate_json(match.group())
  File "F:\project\agent-service\venv\lib\site-packages\pydantic\main.py", line 746, in model_validate_json
    return cls.__pydantic_validator__.validate_json(
pydantic_core._pydantic_core.ValidationError: 1 validation error for DocumentSummaryOutput
Possible Solution
Maybe add a try-except around the export step, like this:
# File: crewai/task.py
# Method: _invoke_guardrail_function
for attempt in range(max_attempts):
    # ... (omitted guardrail check) ...

    # 1. The Agent is asked to regenerate the output
    result = agent.execute_task(
        task=self,
        context=context,
        tools=tools,
    )

    # 2. Former crash point: catch the parsing/validation error so the loop can continue
    try:
        pydantic_output, json_output = self._export_output(result)
    except Exception:
        pydantic_output, json_output = None, None

    # 3. Execution now always reaches this point, so further retries remain possible
    task_output = TaskOutput(
        # ...
        pydantic=pydantic_output,
        json_dict=json_output,
        # ...
    )
Of course, my patch is very basic, but with it the loop actually keeps running. A runnable sketch of the behavior I am after follows.
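To show the intended behavior in isolation (independent of crewAI internals, so every name here is illustrative): a small, self-contained loop that treats a Pydantic parsing failure as one failed attempt, keeps retrying up to the configured limit, and only re-raises the underlying error once the attempts are exhausted.

from pydantic import BaseModel, ValidationError

class SomeModel(BaseModel):
    summary: str

# Simulated agent outputs: two malformed attempts, then a valid one.
fake_outputs = iter(['{"summary": ', 'not json at all', '{"summary": "ok"}'])

def export_with_retries(max_attempts: int = 3) -> SomeModel:
    last_error = None
    for attempt in range(max_attempts):
        raw = next(fake_outputs)  # stands in for agent.execute_task(...)
        try:
            return SomeModel.model_validate_json(raw)
        except ValidationError as e:
            last_error = e  # count this as a failed attempt and retry
    # Retries exhausted: surface the underlying parsing error.
    raise last_error

print(export_with_retries())  # succeeds on the third attempt: summary='ok'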
Additional context
None