
Commit c73b36a

Adding HITL for Flows (#4143)
* feat: introduce human feedback events and decorator for flow methods
  - Added HumanFeedbackRequestedEvent and HumanFeedbackReceivedEvent classes to handle human feedback interactions within flows.
  - Implemented the @human_feedback decorator to facilitate human-in-the-loop workflows, allowing for feedback collection and routing based on responses.
  - Enhanced the Flow class to store human feedback history and manage feedback outcomes.
  - Updated flow wrappers to preserve attributes from methods decorated with @human_feedback.
  - Added integration and unit tests for the new human feedback functionality, ensuring proper validation and routing behavior.
* adding deployment docs
* New docs
* fix printer
* wrong change
* Adding Async Support

  feat: enhance human feedback support in flows
  - Updated the @human_feedback decorator to use the 'message' parameter instead of 'request' for clarity.
  - Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
  - Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
  - Implemented SQLite persistence for managing pending feedback context.
  - Expanded documentation to include examples of async human feedback usage and best practices.
* linter
* fix
* migrating off printer
* updating docs
* new tests
* doc update
1 parent 0c02099 · commit c73b36a

24 files changed: +5708 -60 lines

docs/docs.json

Lines changed: 3 additions & 0 deletions
```diff
@@ -308,6 +308,7 @@
     "en/learn/hierarchical-process",
     "en/learn/human-input-on-execution",
     "en/learn/human-in-the-loop",
+    "en/learn/human-feedback-in-flows",
     "en/learn/kickoff-async",
     "en/learn/kickoff-for-each",
     "en/learn/llm-connections",
@@ -735,6 +736,7 @@
     "pt-BR/learn/hierarchical-process",
     "pt-BR/learn/human-input-on-execution",
     "pt-BR/learn/human-in-the-loop",
+    "pt-BR/learn/human-feedback-in-flows",
     "pt-BR/learn/kickoff-async",
     "pt-BR/learn/kickoff-for-each",
     "pt-BR/learn/llm-connections",
@@ -1171,6 +1173,7 @@
     "ko/learn/hierarchical-process",
     "ko/learn/human-input-on-execution",
     "ko/learn/human-in-the-loop",
+    "ko/learn/human-feedback-in-flows",
     "ko/learn/kickoff-async",
     "ko/learn/kickoff-for-each",
     "ko/learn/llm-connections",
```

docs/en/concepts/flows.mdx

Lines changed: 49 additions & 0 deletions
All 49 changed lines are additions (hunk `@@ -572,6 +572,55 @@`). They insert a new section immediately after the existing paragraph "When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.":

### Human in the Loop (human feedback)

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` method.
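A listener for the third outcome follows the same pattern. Here is a minimal sketch extending the `ReviewFlow` example above (the method name is illustrative; since `needs_revision` is also the `default_outcome` in that example, this listener presumably also catches feedback that does not clearly map to approval or rejection):

```python Code
    @listen("needs_revision")
    def on_revision_request(self, result: HumanFeedbackResult):
        # result.output still carries the original return value of
        # generate_content, so it can be revised using result.feedback.
        print(f"Revision requested: {result.feedback}")
```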
You can also use `@human_feedback` without routing, simply to collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"


@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access the feedback via result.feedback
    # Access the original output via result.output
    pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (the most recent result) or `self.human_feedback_history` (all feedback, as a list).
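Both attributes should also be inspectable once a run completes. A minimal sketch, assuming the `ReviewFlow` class above, that `kickoff()` blocks until the console feedback is provided, and that each history entry is a `HumanFeedbackResult`:

```python Code
flow = ReviewFlow()
flow.kickoff()

# Most recent feedback collected during the run
print(flow.last_human_feedback)

# Every piece of feedback collected, in order
for item in flow.human_feedback_history:
    print(f"{item.output!r} -> {item.feedback!r}")
```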
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
The new section sits immediately above the existing "Adding Agents to Flows" section, which continues unchanged:

## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
