docs: enhance task guardrail documentation with LLM-based validation support (#3879)
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
docs/en/concepts/tasks.mdx
149 additions & 3 deletions
@@ -60,6 +60,7 @@ crew = Crew(
|**Output Pydantic**_(optional)_|`output_pydantic`|`Optional[Type[BaseModel]]`| A Pydantic model for task output. |
|**Callback**_(optional)_|`callback`|`Optional[Any]`| Function/object to be executed after task completion. |
|**Guardrail**_(optional)_|`guardrail`|`Optional[Callable]`| Function to validate task output before proceeding to the next task. |
|**Guardrails**_(optional)_|`guardrails`|`Optional[List[Callable] \| List[str]]`| List of guardrails to validate task output before proceeding to the next task. |
|**Guardrail Max Retries**_(optional)_|`guardrail_max_retries`|`Optional[int]`| Maximum number of retries when guardrail validation fails. Defaults to 3. |
@@ -341,7 +342,11 @@ Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.

CrewAI supports two types of guardrails:

1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
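A minimal sketch of a function-based guardrail, returning the `(bool, Any)` tuple described above. The function and variable names here are illustrative; in CrewAI the guardrail receives the task's output object, whose `raw` attribute holds the generated text:

```python
from typing import Any, Tuple

def validate_word_count(result) -> Tuple[bool, Any]:
    """Pass only outputs under 200 words.

    In CrewAI, `result` is the task's output object; its `raw`
    attribute holds the generated text.
    """
    try:
        word_count = len(result.raw.split())
        if word_count >= 200:
            return (False, f"Output is {word_count} words; it must be under 200")
        return (True, result.raw)
    except Exception:
        return (False, "Unexpected error during validation")

# Attach it to a task (sketch; `blog_agent` is assumed to exist):
# blog_task = Task(
#     description="Write a blog post about AI",
#     expected_output="A blog post under 200 words",
#     agent=blog_agent,
#     guardrail=validate_word_count,
# )
```

On failure, the returned feedback string is sent back to the agent, which retries up to `guardrail_max_retries` times.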
@@ -372,6 +377,147 @@ blog_task = Task(
### LLM-Based Guardrails (String Descriptions)
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.

**Requirements**:

- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
- Provide a clear, descriptive string explaining the validation criteria
```python Code
from crewai import Task

# Single LLM-based guardrail
blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail="The blog post must be under 200 words and contain no technical jargon"
)
```
LLM-based guardrails are particularly useful for:

- **Complex validation logic** that's difficult to express programmatically
- **Subjective criteria** like tone, style, or quality assessments
- **Natural language requirements** that are easier to describe than code

The LLM guardrail will:

1. Analyze the task output against your description
2. Return `(True, output)` if the output complies with the criteria
3. Return `(False, feedback)` with specific feedback if validation fails
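Conceptually, this check can be sketched as a function that asks an LLM to judge the output against the criteria. This is a simplified stand-in, not CrewAI's actual `LLMGuardrail` implementation; the `llm` callable and the PASS/FAIL reply protocol are assumptions made for illustration:

```python
from typing import Any, Callable, Tuple

def make_llm_guardrail(criteria: str, llm: Callable[[str], str]):
    """Build a guardrail that asks `llm` whether an output meets `criteria`.

    The model is prompted to answer "PASS" or "FAIL: <feedback>".
    """
    def guardrail(output: str) -> Tuple[bool, Any]:
        prompt = (
            f"Validation criteria: {criteria}\n"
            f"Output to check: {output}\n"
            'Reply "PASS" if it complies, otherwise "FAIL: <feedback>".'
        )
        verdict = llm(prompt).strip()
        if verdict.startswith("PASS"):
            return (True, output)
        return (False, verdict.removeprefix("FAIL:").strip())
    return guardrail
```

The failure branch returns the model's feedback, which mirrors how `(False, feedback)` is fed back to the agent for a retry.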
**Example with detailed validation criteria**:

```python Code
research_task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=researcher_agent,
    guardrail="""
    The research report must:
    - Be at least 1000 words long
    - Include at least 5 credible sources
    - Cover both technical and practical applications
    - Be written in a professional, academic tone
    - Avoid speculation or unverified claims
    """
)
```
### Multiple Guardrails

You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.

The `guardrails` parameter accepts:

- A list of guardrail functions or string descriptions
- A single guardrail function or string (same as `guardrail`)

**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
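The sequential semantics described above can be sketched in plain Python. This is an illustrative model of the chaining behavior, not CrewAI's internal code; the helper and the two example guardrails are hypothetical:

```python
from typing import Any, Callable, List, Tuple

Guardrail = Callable[[Any], Tuple[bool, Any]]

def run_guardrails(output: Any, guardrails: List[Guardrail]) -> Tuple[bool, Any]:
    """Run guardrails in order; each receives the previous one's output.

    Stops at the first failure and returns its feedback.
    """
    for guardrail in guardrails:
        ok, output = guardrail(output)
        if not ok:
            return (False, output)  # `output` now holds the failure feedback
    return (True, output)

# Example chain: normalize whitespace, then enforce a length limit.
strip_ws = lambda text: (True, " ".join(text.split()))
limit_50 = lambda text: (True, text) if len(text) <= 50 else (False, "Output exceeds 50 characters")
```

In a real task you would pass the same chain via `guardrails=[strip_ws, limit_50]` (optionally mixing in string descriptions), and CrewAI would apply the guardrails in list order.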