feat: enhance human feedback support in flows
- Updated the `@human_feedback` decorator to use the `message` parameter instead of `request` for clarity.
- Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
- Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
- Implemented SQLite persistence for managing pending feedback context.
- Expanded documentation to include examples of async human feedback usage and best practices.
```diff
@@ -37,8 +37,8 @@ from crewai.flow.flow import Flow, start, listen
 from crewai.flow.human_feedback import human_feedback
 
 class SimpleReviewFlow(Flow):
-    @human_feedback(request="Please review this content:")
     @start()
+    @human_feedback(message="Please review this content:")
     def generate_content(self):
         return "This is AI-generated content that needs review."
```
````diff
@@ -63,19 +63,20 @@ When this flow runs, it will:
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `request` | `str` | Yes | The message shown to the human alongside the method output |
+| `message` | `str` | Yes | The message shown to the human alongside the method output |
 | `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
 | `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
 | `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
 | `metadata` | `dict` | No | Additional data for enterprise integrations |
+| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
 
 ### Basic Usage (No Routing)
 
 When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
 
 ```python Code
-@human_feedback(request="What do you think of this analysis?")
 @start()
+@human_feedback(message="What do you think of this analysis?")
 def analyze_data(self):
     return "Analysis results: Revenue up 15%, costs down 8%"
````
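In the no-`emit` case, the next listener receives the collected feedback alongside the method's output rather than a routed outcome. A toy sketch of that hand-off; the `FeedbackResult` class and its field names below are illustrative assumptions, not the documented `HumanFeedbackResult` schema:

```python
from dataclasses import dataclass

@dataclass
class FeedbackResult:
    """Toy stand-in for HumanFeedbackResult; field names are assumed."""
    output: str    # what the decorated method returned
    feedback: str  # what the human typed

def next_listener(result: FeedbackResult) -> str:
    # A listener can inspect both the method output and the feedback.
    return f"{result.output} (reviewer said: {result.feedback})"

print(next_listener(FeedbackResult("Analysis results", "looks right")))
# Analysis results (reviewer said: looks right)
```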
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:

````diff
 ```python Code
+@start()
 @human_feedback(
-    request="Do you approve this content for publication?",
+    message="Do you approve this content for publication?",
     emit=["approved", "rejected", "needs_revision"],
     llm="gpt-4o-mini",
     default_outcome="needs_revision",
 )
-@start()
 def review_content(self):
     return "Draft blog post content here..."
````
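Conceptually, the router collapses free-form feedback into one of the `emit` outcomes, falling back to `default_outcome` when no feedback is given or nothing matches. A toy, LLM-free sketch of that collapsing step, using keyword matching as a stand-in for the configured LLM; all names here are illustrative, not part of the crewai API:

```python
def collapse_feedback(feedback: str, outcomes: list[str], default_outcome: str) -> str:
    """Toy stand-in for the LLM step: map free-form feedback to one
    of the allowed outcomes, falling back to the default."""
    text = (feedback or "").strip().lower()
    if not text:
        return default_outcome  # user pressed Enter without typing
    for outcome in outcomes:
        if outcome in text or outcome.replace("_", " ") in text:
            return outcome
    return default_outcome  # nothing matched; use the safe default

print(collapse_feedback(
    "Looks good, approved!",
    ["approved", "rejected", "needs_revision"],
    "needs_revision",
))
# approved
```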
```diff
@@ -212,13 +213,13 @@ class ContentApprovalFlow(Flow[ContentState]):
         self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
         return self.state.draft
 
+    @listen(generate_draft)
     @human_feedback(
-        request="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
+        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
```
````diff
-The `@human_feedback` decorator works with other flow decorators. The order matters:
+The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):
 
 ```python Code
-# Correct: @human_feedback wraps the flow decorator
-@human_feedback(request="Review this:")
+# Correct: @human_feedback is innermost (closest to the function)
 @start()
+@human_feedback(message="Review this:")
 def my_start_method(self):
     return "content"
 
-@human_feedback(request="Review this too:")
 @listen(other_method)
+@human_feedback(message="Review this too:")
 def my_listener(self, data):
     return f"processed: {data}"
 ```
 
 <Tip>
-Place `@human_feedback` as the outermost decorator (first/top) so it runs after the method completes and can capture the return value.
+Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing to the flow system.
 </Tip>
````
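The ordering rule follows from how Python applies stacked decorators: the one closest to the function wraps it first, so the innermost decorator sees the raw return value. A small, self-contained illustration with no crewai involved:

```python
def wrap(tag):
    """Return a decorator that tags the wrapped function's result."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            return f"{tag}({fn(*args, **kwargs)})"
        return wrapper
    return decorator

@wrap("outer")
@wrap("inner")
def greet():
    return "hi"

# "inner" wraps greet first and sees its raw return value,
# just as an innermost @human_feedback sees the method's output.
print(greet())  # outer(inner(hi))
```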
## Best Practices
````diff
@@ -305,10 +306,10 @@ The `request` parameter is what the human sees. Make it actionable:
 
 ```python Code
 # ✅ Good - clear and actionable
-@human_feedback(request="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
+@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
 
 # ❌ Bad - vague
-@human_feedback(request="Review this:")
+@human_feedback(message="Review this:")
 ```
````

### 2. Choose Meaningful Outcomes
````diff
@@ -329,7 +330,7 @@ Use `default_outcome` to handle cases where users press Enter without typing:
 
 ```python Code
 @human_feedback(
-    request="Approve? (press Enter to request revision)",
+    message="Approve? (press Enter to request revision)",
     emit=["approved", "needs_revision"],
     llm="gpt-4o-mini",
     default_outcome="needs_revision",  # Safe default
````
```diff
@@ -365,9 +366,202 @@ When designing flows, consider whether you need routing:
 | Approval gates with approve/reject/revise | Use `emit` |
 | Collecting comments for logging only | No `emit` |
 
```
## Async Human Feedback (Non-Blocking)

By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.

### The Provider Abstraction

Use the `provider` parameter to specify a custom feedback collection strategy:

```python Code
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext

class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses flow and waits for webhook callback."""
```

<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception; no manual persistence calls are required.
</Tip>
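The pause-and-persist handshake described in the tip can be illustrated in a few lines of plain Python; the class and function names below are toy stand-ins, not the crewai API:

```python
class FeedbackPending(Exception):
    """Toy stand-in for HumanFeedbackPending: signals a pause."""
    def __init__(self, context: dict):
        super().__init__("human feedback pending")
        self.context = context

persisted = []  # stand-in for the framework's persistence layer

def run_step(provider, method_output):
    """Toy framework loop: call the provider, and if it signals a
    pause, persist the context and hand the pause object back."""
    try:
        return provider(method_output)
    except FeedbackPending as pending:
        persisted.append(pending.context)  # automatic persistence
        return pending

def webhook_provider(output):
    # Notify the external system here (e.g. POST to a webhook),
    # then raise to pause; no manual persistence call needed.
    raise FeedbackPending({"output": output})

result = run_step(webhook_provider, "draft post")
```

This is why `kickoff()` can hand back a pending object even though the provider raises: the framework catches the signal, persists the context, and converts it into a return value for the caller.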
### Handling Paused Flows

When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:

```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # Flow is paused, state is automatically persisted
    print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```
### Resuming a Paused Flow

When feedback arrives (e.g., via webhook), resume the flow: