fix(api): defer streaming response until referenced variables are updated #152
Review Summary by Qodo

Defer streaming response until conversation variables are updated
Walkthrough

Description
• Add `blocks_variable_output()` method to defer streaming until variable assignment
• Fix result input value calculation for CLEAR write mode
• Add test case validating v1 Variable Assigner blocks streaming correctly
• Create fixture for streaming conversation variables v1 overwrite scenario

Diagram

```mermaid
flowchart LR
    A["Variable Assigner Node"] -->|blocks_variable_output| B["Check if node updates requested variables"]
    B -->|True| C["Defer streaming response"]
    C -->|After assignment| D["Stream variable updates"]
    A -->|_run method| E["Calculate result input value"]
    E -->|CLEAR mode| F["Use updated variable value"]
    E -->|Other modes| G["Use income value"]
```
File Changes
1. api/core/workflow/nodes/variable_assigner/v1/node.py
Code Review by Qodo
1. Set/Tuple annotations used

```python
        graph_runtime_state=graph_runtime_state,
    )


def blocks_variable_output(self, variable_selectors: Set[Tuple[str, ...]]) -> bool:
```
1. Set/Tuple annotations used 📘 Rule violation ✓ Correctness
New type annotations use legacy typing.Set/typing.Tuple instead of Python 3.12+ built-in generics. This violates the modern typing syntax requirement and may trigger lint/type-check rules.
Agent Prompt
## Issue description
New annotations use legacy `typing.Set`/`typing.Tuple` instead of Python 3.12+ built-in generics.
## Issue Context
The repository requires modern typing syntax (e.g., `set[tuple[str, ...]]`).
## Fix Focus Areas
- api/core/workflow/nodes/variable_assigner/v1/node.py[1-3]
- api/core/workflow/nodes/variable_assigner/v1/node.py[36-43]
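A minimal runnable sketch of the annotation fix the reviewer is asking for. The `VariableAssignerNode` below is a stand-in, not the real node class from the PR; the point is only the switch from legacy `typing.Set`/`typing.Tuple` aliases to the built-in generics that Python 3.9+ (and the repo's 3.12 baseline) support:

```python
# Stand-in class illustrating the typing fix; the real node lives in
# api/core/workflow/nodes/variable_assigner/v1/node.py.
class VariableAssignerNode:
    def __init__(self, assigned_variable_selector: tuple[str, ...]) -> None:
        self.assigned_variable_selector = assigned_variable_selector

    # Before: variable_selectors: Set[Tuple[str, ...]]  (legacy typing aliases)
    # After:  variable_selectors: set[tuple[str, ...]]  (built-in generics)
    def blocks_variable_output(self, variable_selectors: set[tuple[str, ...]]) -> bool:
        # True when this node assigns one of the requested variables.
        return self.assigned_variable_selector in variable_selectors


node = VariableAssignerNode(("conversation", "conv_var"))
print(node.blocks_variable_output({("conversation", "conv_var")}))  # True
```

No `from typing import Set, Tuple` is needed once the built-ins are used, so the legacy import can be dropped entirely.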
```python
    assert result.success, f"Test failed: {result.error}"


def test_streaming_conversation_variables_v1_overwrite_waits_for_assignment():
```
2. Test missing return annotation 📘 Rule violation ✓ Correctness
The newly added pytest function has no return type annotation. This violates the requirement that all Python functions include parameter and return type annotations.
Agent Prompt
## Issue description
The new pytest test function is missing a return type annotation.
## Issue Context
The compliance checklist requires type annotations for all function parameters and return values.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/test_streaming_conversation_variables.py[50-50]
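The fix itself is a one-token change: annotate the test as returning `None`, as pytest test functions conventionally do. A small self-contained illustration (`test_example` is a placeholder, not the real test):

```python
import inspect

# Before: def test_example():          (no return annotation)
# After:  def test_example() -> None:  (explicit, satisfies the checklist)
def test_example() -> None:
    assert 1 + 1 == 2


# The annotation is now visible to type checkers and inspection tools.
print(inspect.signature(test_example).return_annotation)  # None
```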
```python
def test_streaming_conversation_variables_v1_overwrite_waits_for_assignment():
    fixture_name = "test_streaming_conversation_variables_v1_overwrite"
    input_query = "overwrite-value"

    case = WorkflowTestCase(
        fixture_path=fixture_name,
        use_auto_mock=False,
        mock_config=MockConfigBuilder().build(),
        query=input_query,
        inputs={},
        expected_outputs={"answer": f"Current Value Of `conv_var` is:{input_query}"},
    )

    runner = TableTestRunner()
    result = runner.run_test_case(case)
    assert result.success, f"Test failed: {result.error}"

    events = result.events
    conv_var_chunk_events = [
        event
        for event in events
        if isinstance(event, NodeRunStreamChunkEvent) and event.selector == ["conversation", "conv_var"]
    ]

    assert conv_var_chunk_events, "Expected conversation variable chunk events to be emitted"
    assert all(event.chunk == input_query for event in conv_var_chunk_events), (
        "Expected streamed conversation variable value to match the input query"
    )
```
3. Test lacks AAA structure 📘 Rule violation ⛯ Reliability
The new test mixes setup, execution, and assertions without clearly separated Arrange/Act/Assert sections. This reduces readability and violates the AAA-pattern requirement for pytest tests.
Agent Prompt
## Issue description
The new pytest test does not clearly separate Arrange, Act, and Assert phases.
## Issue Context
The compliance checklist requires tests to follow the AAA pattern for readability and maintainability.
## Fix Focus Areas
- api/tests/unit_tests/core/workflow/graph_engine/test_streaming_conversation_variables.py[50-77]
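A runnable sketch of the Arrange/Act/Assert layout the reviewer wants. `Result` and `run_workflow` are stand-ins here; the real test would keep using `WorkflowTestCase` and `TableTestRunner` from the repository's test helpers, only regrouped under the three comment markers:

```python
from dataclasses import dataclass


# Stand-ins so the AAA shape is runnable outside the repo.
@dataclass
class Result:
    success: bool
    outputs: dict[str, str]


def run_workflow(query: str) -> Result:
    # Stand-in for TableTestRunner().run_test_case(case).
    return Result(success=True, outputs={"answer": f"Current Value Of `conv_var` is:{query}"})


def test_overwrite_waits_for_assignment() -> None:
    # Arrange: fixture inputs and expected output live together.
    input_query = "overwrite-value"
    expected_answer = f"Current Value Of `conv_var` is:{input_query}"

    # Act: run the workflow exactly once.
    result = run_workflow(input_query)

    # Assert: all verification happens in one block at the end.
    assert result.success
    assert result.outputs["answer"] == expected_answer


test_overwrite_waits_for_assignment()
```

Grouping the event-filtering and chunk assertions into the same final Assert block would resolve the finding without changing what the test verifies.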
```python
def blocks_variable_output(self, variable_selectors: Set[Tuple[str, ...]]) -> bool:
    """
    Check if this Variable Assigner node blocks the output of specific variables.

    Returns True if this node updates any of the requested conversation variables.
    """
    assigned_selector = self.node_data.assigned_variable_selector
    return assigned_selector in variable_selectors
```
4. Unhashable selector check 🐞 Bug ✓ Correctness
VariableAssignerNode(v1).blocks_variable_output checks a list-based assigned_variable_selector for membership in a set of tuple selectors, which raises TypeError (unhashable list) during ResponseStreamCoordinator path analysis and can break streaming execution.
Agent Prompt
### Issue description
`VariableAssignerNode` v1 implements `blocks_variable_output()` by checking a list selector (`assigned_variable_selector`) for membership in `variable_selectors`, which is a `set[tuple[str, ...]]`. This causes a runtime `TypeError: unhashable type: 'list'` when the response coordinator analyzes paths for streaming.
### Issue Context
`ResponseStreamCoordinator` builds `variable_selectors` as tuples (`tuple(segment.selector[:2])`) and calls `source_node.blocks_variable_output(variable_selectors)` for nodes on paths to response nodes.
### Fix Focus Areas
- api/core/workflow/nodes/variable_assigner/v1/node.py[36-43]
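A minimal reproduction of the reported bug, plus the obvious fix sketch: the coordinator passes a `set` of tuples, but the v1 node stores its assigned selector as a list, and hashing a list raises `TypeError`. Normalizing the selector to a tuple before the membership test resolves it (the variable names below mirror the PR but this is an illustration, not the actual node code):

```python
# What ResponseStreamCoordinator passes: a set of tuple selectors.
variable_selectors: set[tuple[str, ...]] = {("conversation", "conv_var")}

# What v1 node data holds: a list-typed selector.
assigned_selector = ["conversation", "conv_var"]

# Buggy membership test: hashing the list raises TypeError.
try:
    assigned_selector in variable_selectors
except TypeError as exc:
    print(exc)  # unhashable type: 'list'

# Fix sketch: convert to a tuple before the membership check.
blocks = tuple(assigned_selector) in variable_selectors
print(blocks)  # True
```

Only the first two elements of the selector are compared by the coordinator (`tuple(segment.selector[:2])`), so the fix should truncate the assigned selector the same way if it can carry extra path segments.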
Benchmark PR from agentic-review-benchmarks#1