Conversation
Review skipped: auto reviews are disabled on base/target branches other than the default branch. Check the settings in the CodeRabbit UI.
Reviewer's Guide

Implements a Stage C "intent bridge" in q_core_agent that subscribes to v1 intent messages over NATS, generates deterministic stub proposals without OpenAI using a new Orion-Qiki protocol schema, and publishes proposals back over new proposals subjects, guarded by config and covered by unit tests.

Sequence diagram for the NATS intent bridge stub proposals flow:

```mermaid
sequenceDiagram
    actor Operator
    participant ExternalClient
    participant NATS
    participant QCoreAgentMain
    participant IntentBridge
    Operator ->> ExternalClient: Perform action that generates intent
    ExternalClient ->> NATS: Publish IntentV1 on QIKI_INTENT_V1
    QCoreAgentMain ->> QCoreAgentMain: Read QCORE_ENABLE_INTENT_BRIDGE
    alt bridge enabled
        QCoreAgentMain ->> QCoreAgentMain: start_intent_bridge_in_thread()
        QCoreAgentMain ->> IntentBridge: Run serve_intents(servers)
        IntentBridge ->> NATS: Connect and subscribe QIKI_INTENT_V1
    else bridge disabled
        QCoreAgentMain ->> QCoreAgentMain: Do not start intent bridge
    end
    NATS -->> IntentBridge: Deliver IntentV1 message
    IntentBridge ->> IntentBridge: Parse JSON and validate as IntentV1
    alt valid intent
        IntentBridge ->> IntentBridge: build_stub_proposals(intent)
    else invalid intent
        IntentBridge ->> IntentBridge: build_invalid_intent_proposals(error)
    end
    IntentBridge ->> NATS: Publish ProposalsBatchV1 on QIKI_PROPOSALS_V1
    IntentBridge ->> IntentBridge: Log proposals hash and intent_ts or error
```
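The valid/invalid branch in the diagram can be sketched as a pure dispatch function. This is a schematic stand-in only: the real bridge validates with `IntentV1.model_validate` from pydantic, and `route_intent` together with its required-field check are illustrative assumptions, not code from the PR.

```python
import json

# Hypothetical stand-in for the bridge's message-handler branch: decode
# the NATS payload, do a rough shape check, and pick the builder to call.
REQUIRED_FIELDS = ("text", "screen", "ts")

def route_intent(data: bytes) -> str:
    """Return the name of the builder the bridge would invoke."""
    try:
        payload = json.loads(data.decode("utf-8"))
        missing = [f for f in REQUIRED_FIELDS if f not in payload]
        if missing:
            raise ValueError(f"missing required intent fields: {missing}")
        return "build_stub_proposals"
    except (UnicodeDecodeError, ValueError):
        # json.JSONDecodeError is a ValueError subclass, so one clause
        # covers both bad JSON and failed shape checks.
        return "build_invalid_intent_proposals"
```

In the real handler, the chosen builder's `ProposalsBatchV1` would then be serialized and published on `QIKI_PROPOSALS_V1`.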
Class diagram for the Orion-Qiki protocol v1 models:

```mermaid
classDiagram
    class _StrictModel {
        <<BaseModel>>
    }
    class LangHint {
        <<enum>>
        AUTO
        RU
        EN
    }
    class EnvironmentMode {
        <<enum>>
        FACTORY
        MISSION
    }
    class SelectionV1 {
        <<_StrictModel>>
        +kind : Literal[event|incident|track|snapshot|none] = "none"
        +id : str?
    }
    class IntentV1 {
        <<_StrictModel>>
        +version : Literal[1] = 1
        +text : str
        +lang_hint : LangHint = LangHint.AUTO
        +screen : str
        +selection : SelectionV1
        +ts : int
        +environment_mode : EnvironmentMode = EnvironmentMode.FACTORY
        +snapshot_min : dict[str, Any]
    }
    class ProposalV1 {
        <<_StrictModel>>
        +proposal_id : str
        +title : str
        +justification : str
        +priority : int (0..100)
        +confidence : float (0.0..1.0)
        +proposed_actions : list[Any]
        +_must_be_empty_in_stage_a(v: list[Any]) : list[Any]
    }
    class ProposalsBatchV1 {
        <<_StrictModel>>
        +version : Literal[1] = 1
        +ts : int
        +proposals : list[ProposalV1]
        +metadata : dict[str, Any]
    }
    _StrictModel <|-- SelectionV1
    _StrictModel <|-- IntentV1
    _StrictModel <|-- ProposalV1
    _StrictModel <|-- ProposalsBatchV1
    LangHint <.. IntentV1
    EnvironmentMode <.. IntentV1
    IntentV1 o-- SelectionV1
    ProposalsBatchV1 "1" o-- "many" ProposalV1
```
Hey - I've found 3 issues, and left some high-level feedback:

- The `QCORE_ENABLE_INTENT_BRIDGE` check in `main.py` is hard to read and effectively always defaults to enabled (`(env or "true")`); consider simplifying it to an explicit boolean helper (e.g., `is_truthy(os.getenv(..., "true"))`) so that empty strings or misconfigurations don't silently enable the bridge.
- In `ProposalV1._must_be_empty_in_stage_a` the error message still refers to "Stage A" while the new bridge is described as "Stage C" elsewhere; aligning the terminology will make it clearer to operators and log readers why `proposed_actions` are rejected.
- The `except (json.JSONDecodeError, ValidationError, Exception)` block in `serve_intents` effectively behaves like `except Exception`; you can simplify this to a single `except Exception` (or split into two specific handlers) to reduce redundancy and clarify the intended error handling.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `QCORE_ENABLE_INTENT_BRIDGE` check in `main.py` is hard to read and effectively always defaults to enabled (`(env or "true")`); consider simplifying it to an explicit boolean helper (e.g., `is_truthy(os.getenv(..., "true"))`) so that empty strings or misconfigurations don’t silently enable the bridge.
- In `ProposalV1._must_be_empty_in_stage_a` the error message still refers to "Stage A" while the new bridge is described as "Stage C" elsewhere; aligning the terminology will make it clearer to operators and log readers why proposed_actions are rejected.
- The `except (json.JSONDecodeError, ValidationError, Exception)` block in `serve_intents` effectively behaves like `except Exception`; you can simplify this to a single `except Exception` (or split into two specific handlers) to reduce redundancy and clarify the intended error handling.
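A minimal sketch of the suggested env-flag helper; `is_truthy` and `intent_bridge_enabled` are the reviewer's suggestion, not existing code in the repo:

```python
import os

# Hypothetical helper sketching the reviewer's suggestion: only a known
# truthy spelling enables the bridge; anything else (including "") disables it.
_TRUTHY = {"1", "true", "yes", "on"}

def is_truthy(value: str) -> bool:
    """Interpret common boolean-ish env values; anything else is False."""
    return value.strip().lower() in _TRUTHY

def intent_bridge_enabled() -> bool:
    # Default "true" preserves current behavior when the variable is unset,
    # but an empty or garbled value now disables instead of silently enabling.
    return is_truthy(os.getenv("QCORE_ENABLE_INTENT_BRIDGE", "true"))
```

Compared with `(env or "true")`, this makes `QCORE_ENABLE_INTENT_BRIDGE=""` read as disabled rather than falling through to the default.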
## Individual Comments
### Comment 1
<location> `src/qiki/services/q_core_agent/intent_bridge.py:63-64` </location>
<code_context>
+ return ProposalsBatchV1(
+ ts=int(time.time() * 1000),
+ proposals=proposals[:3],
+ metadata={
+ "source": "q_core_agent",
+ "intent_ts": intent.ts,
+ "screen": intent.screen,
</code_context>
<issue_to_address>
**issue (bug_risk):** Use enum `.value` instead of `str(enum)` for environment_mode in metadata.
In `build_stub_proposals`, `environment_mode` is currently emitted as `str(intent.environment_mode)`, which for Enums yields values like `'EnvironmentMode.FACTORY'` instead of `'FACTORY'`. To keep the wire format stable and independent of the Python enum name, emit the raw enum value instead:
```python
"environment_mode": intent.environment_mode.value,
```
</issue_to_address>
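The difference is easy to demonstrate with a plain stand-in enum (this mirrors, but is not, the schema's `EnvironmentMode`):

```python
from enum import Enum

class EnvironmentMode(Enum):  # stand-in with the same member names
    FACTORY = "FACTORY"
    MISSION = "MISSION"

# str() renders the member's qualified name, not the wire value:
assert str(EnvironmentMode.FACTORY) == "EnvironmentMode.FACTORY"
# .value is the stable string the protocol should carry:
assert EnvironmentMode.FACTORY.value == "FACTORY"
```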
### Comment 2
<location> `src/qiki/services/q_core_agent/tests/test_intent_bridge.py:10-19` </location>
<code_context>
+from qiki.shared.models.orion_qiki_protocol import EnvironmentMode, IntentV1, LangHint, SelectionV1
+
+
+def test_build_stub_proposals_actions_empty() -> None:
+ intent = IntentV1(
+ text="scan 360",
+ lang_hint=LangHint.EN,
+ screen="Events/События",
+ selection=SelectionV1(kind="incident", id="INC|x"),
+ ts=1700000000000,
+ environment_mode=EnvironmentMode.FACTORY,
+ snapshot_min={"incidents_top": [{"incident_id": "i1"}]},
+ )
+ batch = build_stub_proposals(intent)
+ assert batch.version == 1
+ assert 1 <= len(batch.proposals) <= 3
+ assert batch.metadata["screen"] == "Events/События"
+ assert batch.metadata["intent_ts"] == 1700000000000
+ for p in batch.proposals:
+ assert p.proposed_actions == []
+
</code_context>
<issue_to_address>
**suggestion (testing):** Tighten expectations in `test_build_stub_proposals_actions_empty` and add coverage for the no-incidents branch.
With a non-empty `incidents_top`, `build_stub_proposals` currently always returns three proposals (`review-incidents`, `clarify-goal`, `status-summary`), so the `1 <= len(batch.proposals) <= 3` assertion is too loose and may miss regressions. Consider asserting the exact proposal count and/or the specific `proposal_id`s. Also, please add a test for the `has_incidents == False` path (e.g., missing or empty `incidents_top`) to verify the expected non-incident proposals and metadata are still produced.
Suggested implementation:
```python
def test_build_stub_proposals_actions_empty() -> None:
    intent = IntentV1(
        text="scan 360",
        lang_hint=LangHint.EN,
        screen="Events/События",
        selection=SelectionV1(kind="incident", id="INC|x"),
        ts=1700000000000,
        environment_mode=EnvironmentMode.FACTORY,
        snapshot_min={"incidents_top": [{"incident_id": "i1"}]},
    )
    batch = build_stub_proposals(intent)
    assert batch.version == 1
    assert batch.metadata["screen"] == "Events/События"
    assert batch.metadata["intent_ts"] == 1700000000000
    proposal_ids = [p.proposal_id for p in batch.proposals]
    # When incidents are present, we expect three specific proposals.
    assert proposal_ids == ["review-incidents", "clarify-goal", "status-summary"]
    for p in batch.proposals:
        assert p.proposed_actions == []


def test_build_stub_proposals_no_incidents() -> None:
    intent = IntentV1(
        text="scan 360",
        lang_hint=LangHint.EN,
        screen="Events/События",
        selection=SelectionV1(kind="incident", id="INC|x"),
        ts=1700000000000,
        environment_mode=EnvironmentMode.FACTORY,
        snapshot_min={},  # no incidents_top -> has_incidents == False
    )
    batch = build_stub_proposals(intent)
    assert batch.version == 1
    assert batch.metadata["screen"] == "Events/События"
    assert batch.metadata["intent_ts"] == 1700000000000
    proposal_ids = [p.proposal_id for p in batch.proposals]
    # In the no-incidents branch, incident-specific proposals must not be present.
    assert "review-incidents" not in proposal_ids
    # We still expect non-incident proposals to be produced.
    assert "clarify-goal" in proposal_ids
    assert "status-summary" in proposal_ids
    for p in batch.proposals:
        assert p.proposed_actions == []
```
The exact set and order of `proposal_id`s in both branches should be aligned with the actual implementation of `build_stub_proposals`. If the function currently returns a different ordering or a different set of proposal IDs, update the expected lists/contains-assertions in these tests accordingly so they match the real behavior while still keeping the tests strict and regression-resistant.
</issue_to_address>
### Comment 3
<location> `src/qiki/services/q_core_agent/tests/test_intent_bridge.py:29-34` </location>
<code_context>
+ assert p.proposed_actions == []
+
+
+def test_build_invalid_intent_proposals() -> None:
+ batch = build_invalid_intent_proposals(ValueError("bad payload"))
+ assert batch.version == 1
+ assert len(batch.proposals) == 1
+ assert batch.proposals[0].proposal_id == "invalid-intent"
+ assert batch.proposals[0].proposed_actions == []
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Extend `test_build_invalid_intent_proposals` to cover truncation and metadata behavior.
Please add a dedicated test that passes a very long error message and asserts both the 240‑character truncation with trailing ellipsis and the exact metadata contract (e.g. `batch.metadata == {"source": "q_core_agent", "error": "invalid_intent"}`). This will lock in the function’s contract and prevent regressions in error handling and metadata shape.
Suggested implementation:
```python
def test_build_invalid_intent_proposals() -> None:
    batch = build_invalid_intent_proposals(ValueError("bad payload"))
    assert batch.version == 1
    assert len(batch.proposals) == 1
    assert batch.proposals[0].proposal_id == "invalid-intent"
    assert batch.proposals[0].proposed_actions == []


def test_build_invalid_intent_proposals_truncation_and_metadata() -> None:
    long_msg = "x" * 300
    batch = build_invalid_intent_proposals(ValueError(long_msg))
    # metadata contract
    assert batch.metadata == {"source": "q_core_agent", "error": "invalid_intent"}
    # error message truncation contract: 240 chars + "..."
    error_message = batch.proposals[0].metadata["error_message"]
    assert len(error_message) == 243
    assert error_message.endswith("...")
    assert error_message[:240] == "x" * 240
```
If the invalid-intent error string is stored in a different field than `batch.proposals[0].metadata["error_message"]` (for example, `description`, `title`, or another metadata key), update the `error_message = ...` line in the new test to reference the actual field used by `build_invalid_intent_proposals`. The rest of the assertions (length 243, 240 leading characters, trailing ellipsis) should remain unchanged.
</issue_to_address>
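For reference, the truncation contract the suggested test pins down can be sketched as follows; the helper name and the 240-character limit come from the review, not from the PR's source:

```python
# Hypothetical sketch of the error-message truncation contract that
# build_invalid_intent_proposals is expected to honor.
def truncate_error_message(message: str, limit: int = 240) -> str:
    """Cap the stored error text at `limit` chars, appending an ellipsis."""
    if len(message) <= limit:
        return message
    return message[:limit] + "..."
```

Under this contract a 300-character message becomes 243 characters: 240 kept plus the `"..."` suffix, which is exactly what the suggested test asserts.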
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: dd4d58b4f2
```python
try:
    payload = json.loads(msg.data.decode("utf-8"))
    intent = IntentV1.model_validate(payload)
    batch = build_stub_proposals(intent)
```
Accept legacy intent payloads before strict validate

Because `IntentV1` requires `screen` and `ts` and `_StrictModel` forbids extra fields, the current operator console publisher (`src/qiki/services/operator_console/main_orion.py`) sending only `{text, source, ts_epoch}` will always raise `ValidationError` here, so every real intent turns into an invalid-intent proposal instead of stub proposals. Either accept the legacy shape in the bridge (e.g., map `ts_epoch` and default `screen`/`environment_mode`) or update the publisher to emit the full v1 schema.
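The first option could be sketched as a small pre-validation adapter. Field names and defaults below are assumptions built from the legacy shape described in the finding, not code from the PR:

```python
import time

# Hypothetical adapter: map the legacy {text, source, ts_epoch} payload
# onto a v1-shaped dict before strict IntentV1 validation.
def coerce_legacy_intent(payload: dict) -> dict:
    if "ts" in payload and "screen" in payload:
        return payload  # already v1-shaped; validate unchanged
    return {
        "version": 1,
        "text": payload.get("text", ""),
        "screen": "unknown",            # legacy publisher carries no screen
        "selection": {"kind": "none"},
        # Assume ts_epoch is seconds; IntentV1.ts looks millisecond-based.
        "ts": int(payload.get("ts_epoch", time.time()) * 1000),
        "snapshot_min": {},
        # The legacy "source" key is intentionally dropped, since
        # _StrictModel forbids extra fields.
    }
```

The bridge would call this before `IntentV1.model_validate`, so legacy publishers keep working while new clients can emit the full v1 schema directly.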
Adds QCore intent stub proposals (Stage C) for the Stage 0 plan.
Summary by Sourcery
Introduce a stub intent-to-proposals bridge in q_core_agent and formalize the Orion-Qiki v1 intent/proposal protocol.