chore: type fixes #74
Conversation
📝 Walkthrough
Mass migration from dataclasses to Pydantic BaseModel across domain, repo, service, and API layers; introduces a comprehensive typed DomainEvent system (discriminated union and TypeAdapter); removes the payload wrapper in DB documents in favor of top-level event fields; switches many timestamps from ISO strings to datetime and uses enum members instead of .value.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
4 issues found across 15 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/app/events/event_store.py">
<violation number="1" location="backend/app/events/event_store.py:155">
P1: Query change may break retrieval of existing events. Old documents have `execution_id` stored under `payload.execution_id`, but the query now looks at root level `execution_id`. Consider using a query that checks both paths for backwards compatibility: `{"$or": [{"execution_id": execution_id}, {"payload.execution_id": execution_id}, {"aggregate_id": execution_id}]}`</violation>
</file>
<file name="backend/app/services/admin/admin_events_service.py">
<violation number="1" location="backend/app/services/admin/admin_events_service.py:239">
P1: Using `dict(vars(event))` instead of `asdict(event)` won't recursively convert nested dataclasses like `metadata: EventMetadata` to dicts. This will cause the JSON export to contain string representations of objects instead of properly nested JSON structures. Consider using Pydantic's `model_dump()` method for Pydantic dataclasses, or keep `asdict()` if it was working correctly.</violation>
</file>
<file name="backend/app/domain/user/settings_models.py">
<violation number="1" location="backend/app/domain/user/settings_models.py:78">
P2: The `theme` field uses `str | None` instead of `Theme | None`, which is inconsistent with `DomainUserSettings` and `DomainUserSettingsUpdate` in the same file. If this is a "well-typed domain event", consider using `Theme | None` for type consistency.</violation>
</file>
<file name="backend/app/api/routes/events.py">
<violation number="1" location="backend/app/api/routes/events.py:350">
P1: This change alters the replay semantics: `event.model_extra` returns ALL extra fields, not just the `payload`. If the Event has other extra attributes stored, they would all be included in the published payload. Consider using `getattr(event, 'payload', {})` to access the specific payload attribute while satisfying type checkers.</violation>
</file>
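For illustration, a minimal sketch of the difference flagged in the admin_events_service violation above, using simplified stand-in models rather than the project's actual Event/EventMetadata classes: `dict(vars(...))` keeps nested models as objects, while `model_dump(mode="json")` converts them recursively.

```python
from datetime import datetime, timezone

from pydantic import BaseModel


class EventMetadata(BaseModel):
    service_name: str
    correlation_id: str | None = None


class Event(BaseModel):
    event_id: str
    metadata: EventMetadata
    timestamp: datetime


event = Event(
    event_id="evt-1",
    metadata=EventMetadata(service_name="api"),
    timestamp=datetime.now(timezone.utc),
)

# dict(vars(...)) is shallow: nested models stay as objects, so json.dumps
# would fall back to their repr (or need default=str).
shallow = dict(vars(event))
print(type(shallow["metadata"]).__name__)  # EventMetadata

# model_dump(mode="json") recursively converts nested models and datetimes
# into JSON-safe primitives.
deep = event.model_dump(mode="json")
print(deep["metadata"])  # {'service_name': 'api', 'correlation_id': None}
```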
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/app/db/docs/event.py (1)
2-2: Fix CI: remove unused import (`Any`) flagged by Ruff. This currently fails GitHub Actions (F401).
Proposed diff
```diff
-from typing import Any
```

backend/app/events/event_store.py (1)
117-121: Exclude Beanie internal fields before deserializing event data. When retrieving events from MongoDB, `doc.model_dump()` includes Beanie's internal fields (`id`, `revision_id`) added by state management. These are silently ignored during deserialization because event models rely on Pydantic's default `extra='ignore'` behavior. This creates a brittle contract: if any event model is later configured with `extra='forbid'`, deserialization will fail. Explicitly exclude these fields at the boundary.
Suggested fix (applies to 7 locations)
```diff
- event = self.schema_registry.deserialize_json(doc.model_dump())
+ event = self.schema_registry.deserialize_json(doc.model_dump(exclude={"id", "revision_id"}))
```
```diff
- events = [self.schema_registry.deserialize_json(doc.model_dump()) for doc in docs]
+ events = [
+     self.schema_registry.deserialize_json(doc.model_dump(exclude={"id", "revision_id"}))
+     for doc in docs
+ ]
```
Also applies to: 143-147, 160-165, 182-187, 203-207, 216-220, 240-243
🤖 Fix all issues with AI agents
In @backend/app/api/routes/events.py:
- Line 350: The code uses event.model_extra (payload=event.model_extra or {}),
but Event is a Pydantic dataclass and stores extras in __pydantic_extra__, so
replace the direct attribute access with a robust getter that first tries
event.model_extra, then event.__pydantic_extra__, and falls back to an empty
dict; for example, compute the payload via getattr checks on the event object
(checking "model_extra" then "__pydantic_extra__") before passing it into the
payload parameter.
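A small sketch of the robust getter described above, assuming the `extra="allow"` Pydantic-dataclass behavior the review describes; the Event shape and field names here are illustrative, not the project's actual class.

```python
from typing import Any

from pydantic import ConfigDict
from pydantic.dataclasses import dataclass


@dataclass(config=ConfigDict(extra="allow"))
class Event:
    event_id: str
    event_type: str


def event_extras(event: object) -> dict[str, Any]:
    """Return extra fields whether they live on model_extra or __pydantic_extra__."""
    for attr in ("model_extra", "__pydantic_extra__"):
        extras = getattr(event, attr, None)
        if extras:
            return dict(extras)
    return {}


event = Event(event_id="e1", event_type="execution_completed", status="succeeded")
print(event_extras(event))  # {'status': 'succeeded'}
```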
In @backend/app/db/repositories/admin/admin_events_repository.py:
- Around line 194-197: The archival call uses **vars(event) which omits Pydantic
extra fields; update the dict merge to include event.__pydantic_extra__ as well
(e.g., replace **vars(event) with **{**vars(event), **(event.__pydantic_extra__
or {})}) so any extra fields on the Pydantic dataclass are preserved when
constructing the archived record (look for this pattern in
admin_events_repository where deleted_at and deleted_by are set).
- Line 273: The code reads extra Pydantic fields via event.model_extra which
raises AttributeError for Pydantic dataclasses; replace that access with the
pydantic internal extras (use getattr(event, "__pydantic_extra__", {}) or
equivalent) so exec_id = (event.model_extra or {}).get("execution_id") or
event.aggregate_id becomes safe; update the same pattern in
admin_events_repository.py (the line computing exec_id from event) to use
__pydantic_extra__ (or getattr) to avoid AttributeError, mirroring the fix
applied in events.py.
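A hedged sketch of that merge for the archival path; the event shape and the archive_payload helper are hypothetical, but the pattern (declared fields plus `__pydantic_extra__`, then the archival overrides) follows the suggestion above.

```python
from datetime import datetime, timezone
from typing import Any

from pydantic import ConfigDict
from pydantic.dataclasses import dataclass


@dataclass(config=ConfigDict(extra="allow"))
class Event:
    event_id: str
    event_type: str
    aggregate_id: str | None = None


def archive_payload(event: Event, deleted_by: str) -> dict[str, Any]:
    """Combine declared fields with Pydantic extras before building the archive record."""
    # vars() on a pydantic dataclass also exposes __pydantic_extra__ itself,
    # so keep only the public field names.
    declared = {k: v for k, v in vars(event).items() if not k.startswith("_")}
    extras = getattr(event, "__pydantic_extra__", None) or {}
    return {
        **declared,
        **extras,
        "deleted_at": datetime.now(timezone.utc),
        "deleted_by": deleted_by,
    }


event = Event(event_id="e1", event_type="execution_completed", execution_id="exec-42")
print(sorted(archive_payload(event, deleted_by="admin")))
```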
In @backend/app/events/event_store.py:
- Around line 51-53: When constructing EventDocument from the Pydantic model,
call model_dump with exclude_none=True to avoid storing explicit nulls that
defeat MongoDB sparse indexes; update the two sites where EventDocument is
created (the EventDocument(**event.model_dump(), stored_at=now,
ttl_expires_at=ttl) occurrences) to use event.model_dump(exclude_none=True) so
fields like execution_id and pod_name are omitted when None before inserting.
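A minimal illustration of why `exclude_none=True` matters for the sparse indexes, using a simplified stand-in for the stored event (not the real EventDocument).

```python
from pydantic import BaseModel


class StoredEvent(BaseModel):
    event_id: str
    event_type: str
    execution_id: str | None = None
    pod_name: str | None = None


event = StoredEvent(event_id="e1", event_type="execution_requested")

# Default dump writes explicit nulls; a sparse index on execution_id still
# indexes a document whose execution_id is literally null.
print(event.model_dump())
# {'event_id': 'e1', 'event_type': 'execution_requested', 'execution_id': None, 'pod_name': None}

# exclude_none=True drops the unset optional fields, so the inserted document
# has no execution_id/pod_name keys and stays out of the sparse indexes.
print(event.model_dump(exclude_none=True))
# {'event_id': 'e1', 'event_type': 'execution_requested'}
```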
In @backend/app/services/admin/admin_events_service.py:
- Line 239: The event JSON export uses event_dict = dict(vars(event)) which
drops Pydantic extra fields stored in event.model_extra (so fields like
execution_id and pod_name are lost); update the assignment of event_dict in the
export logic (the variable named event_dict in admin_events_service.py) to merge
vars(event) with event.model_extra when present (i.e., combine the two dicts so
extra fields are included), following the same model_extra usage you saw in
admin_events_repository.py.
🧹 Nitpick comments (1)
backend/app/domain/user/settings_models.py (1)
69-86: Make DomainUserSettingsChangedEvent align with emitted changes (Theme type + custom_settings). If these events are the typed source of truth for replay/history, consider:
- `theme: Theme | None` (not `str | None`) for consistent typing/validation.
- adding `custom_settings: dict[str, Any] | None = None` if it can be emitted (it is, via DomainUserSettingsUpdate).
Proposed diff
```diff
 class DomainUserSettingsChangedEvent:
@@
-    theme: str | None = None
+    theme: Theme | None = None
@@
     reason: str | None = None
     correlation_id: str | None = None
+    custom_settings: dict[str, Any] | None = None
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- backend/app/api/routes/events.py
- backend/app/db/docs/event.py
- backend/app/db/repositories/admin/admin_events_repository.py
- backend/app/db/repositories/event_repository.py
- backend/app/db/repositories/replay_repository.py
- backend/app/db/repositories/user_settings_repository.py
- backend/app/dlq/manager.py
- backend/app/domain/events/event_models.py
- backend/app/domain/replay/models.py
- backend/app/domain/user/__init__.py
- backend/app/domain/user/settings_models.py
- backend/app/events/event_store.py
- backend/app/services/admin/admin_events_service.py
- backend/app/services/kafka_event_service.py
- backend/app/services/user_settings_service.py
🧰 Additional context used
🧬 Code graph analysis (8)
- backend/app/domain/replay/models.py (1)
  - backend/app/services/coordinator/queue_manager.py (1): `execution_id` (30-31)
- backend/app/db/repositories/user_settings_repository.py (3)
  - backend/app/domain/user/settings_models.py (2): `DomainUserSettings` (35-46), `DomainUserSettingsChangedEvent` (70-85)
  - backend/app/domain/events/query_builders.py (1): `limit` (22-24)
  - backend/app/domain/events/event_models.py (1): `correlation_id` (60-61)
- backend/app/dlq/manager.py (2)
  - backend/app/dlq/models.py (1): `DLQMessageStatus` (10-16)
  - backend/app/db/repositories/dlq_repository.py (1): `_doc_to_message` (26-31)
- backend/app/events/event_store.py (1)
  - backend/app/events/schema/schema_registry.py (1): `deserialize_json` (178-193)
- backend/app/domain/user/__init__.py (1)
  - backend/app/domain/user/settings_models.py (1): `DomainUserSettingsChangedEvent` (70-85)
- backend/app/services/user_settings_service.py (2)
  - backend/app/domain/user/settings_models.py (3): `DomainUserSettingsChangedEvent` (70-85), `DomainUserSettings` (35-46), `DomainSettingsHistoryEntry` (89-96)
  - backend/app/db/repositories/user_settings_repository.py (1): `get_settings_events` (31-59)
- backend/app/services/admin/admin_events_service.py (1)
  - backend/tests/unit/services/idempotency/test_middleware.py (1): `event` (32-36)
- backend/app/domain/user/settings_models.py (2)
  - backend/app/services/coordinator/queue_manager.py (1): `user_id` (34-35)
  - backend/app/domain/events/event_models.py (1): `correlation_id` (60-61)
🪛 GitHub Actions: MyPy Type Checking
backend/app/api/routes/events.py
[error] 350-350: mypy check failed: 'Event' has no attribute 'model_extra' [attr-defined]. Command: 'uv run mypy --config-file pyproject.toml --strict .'
🪛 GitHub Actions: Ruff Linting
backend/app/db/docs/event.py
[error] 2-2: F401 'Any' imported but unused. Remove unused import. Found 1 error. Command failed: 'uv run ruff check . --config pyproject.toml'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Frontend
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (14)
backend/app/domain/events/event_models.py (1)
44-57: LGTM: Event model properly configured for extra fields. The addition of the `extra="allow"` configuration and explicit `status`/`error` fields aligns with the payload flattening strategy. The Pydantic dataclass will now store event-specific fields at the top level.
Note: extra fields in Pydantic dataclasses are stored in `__pydantic_extra__`, not `model_extra` (which is only available on BaseModel). Code accessing extra fields must use `event.__pydantic_extra__`. How do you access extra fields in Pydantic v2 dataclasses with `extra="allow"`?

backend/app/domain/replay/models.py (1)
56-56: execution_id is properly indexed at the document root level. The query path `execution_id` correctly targets the root-level field. EventDocument (backend/app/db/docs/event.py) defines a sparse ASCENDING index on `execution_id` (line 59) and includes it in the text search index (line 79). The model's `extra="allow"` configuration enables flexible storage of event-specific fields at the document root, supporting the payload flattening design.

backend/app/db/repositories/replay_repository.py (1)
94-94: LGTM: Simplified batch construction aligns with flattened event storage. The change to use `model_dump(exclude=...)` directly is cleaner and aligns with the PR's goal of eliminating nested payload fields in favor of top-level extra fields.

backend/app/services/kafka_event_service.py (2)
244-252: Consistent with `publish_event` - payload spread as kwargs. The change mirrors the pattern in `publish_event` at line 93. The payload construction (lines 239-241) explicitly excludes base fields, which reduces the risk of field name conflicts when spreading with `**payload`.
86-94: Event model is correctly configured to support `**payload` unpacking. The `Event` model in backend/app/domain/events/event_models.py already has `extra="allow"` configured via `@dataclass(config=ConfigDict(extra="allow"))`, which properly supports the unpacking of event-specific payload fields. Actual usage shows payload fields are kept separate from base Event fields, preventing field name collisions.

backend/app/dlq/manager.py (2)
327-327: LGTM: Explicit type annotation improves clarity. Replacing the cast with a direct type annotation makes the code more explicit and type-safe.
491-513: LGTM: New manual discard method follows established patterns. The new `discard_message_manually` method:
- Mirrors the structure of `retry_message_manually` (lines 447-460), maintaining consistency
- Includes appropriate state guards against terminal states (DISCARDED, RETRIED)
- Has proper error handling and logging

backend/app/db/repositories/event_repository.py (4)
39-40: LGTM: Improved event storage with None filtering and timestamp defaults. The changes improve robustness:
- `vars(event)` with None filtering prevents storing null values
- `setdefault` ensures `stored_at` is always populated
59-60: Consistent batch storage implementation. The same pattern from `store_event` is applied here, with the optimization of using a single timestamp for the entire batch.
332-337: LGTM: Consistent use of `model_dump` for archival. The archival process now uses `model_dump(exclude=...)` to copy event data, consistent with the PR's flattening approach.
147-151: Execution ID field and index verified. The `execution_id` is properly indexed as a top-level field in `EventDocument` with a sparse index (idx_execution_id), supporting the query change. The field is stored via the `extra="allow"` configuration and included in both ascending and text search indexes.

backend/app/domain/user/__init__.py (1)
17-17: All references to `DomainSettingsEvent` have been updated successfully. The breaking change from `DomainSettingsEvent` to `DomainUserSettingsChangedEvent` is complete. The old name has been fully removed from the codebase, and the new name is used consistently across all files that depend on it: user_settings_service.py, user_settings_repository.py, and the domain layer exports.

backend/app/events/event_store.py (1)
155-161: Review the event storage design for legacy `payload.*` fields. The current `get_execution_events()` query searches for top-level `execution_id` and `aggregate_id` fields. If the database contains legacy events with `payload.execution_id` (from an earlier schema version), the query will miss them.
The EventDocument schema indicates a flat structure (comment: "no payload wrapper needed"), and no instances of `payload.execution_id` appear in the codebase. However, the MongoDB aggregation queries in query_builders.py still reference `$payload.duration_seconds` and `$payload.status`, suggesting the possibility of legacy or mixed-format data. Before relying on the current query, verify:
- Whether legacy events with `payload.*` fields exist in the production database
- If yes, either add a temporary `{"payload.execution_id": execution_id}` clause to the `$or` query or run a migration script to flatten existing documents

backend/app/db/repositories/user_settings_repository.py (1)
34-59: Both concerns in this review can be safely dismissed:
- `str(et)` matching: EventType is a StringEnum, so `str(et)` correctly returns the stored enum value (e.g., "user_settings_updated"). The filter works as intended.
- Payload structure: payload fields are flattened to the top level in EventDocument via `**payload` in KafkaEventService.publish_event (lines 93-94). The `user_id` and `changed_fields` are stored as direct document fields, not nested. DomainUserSettingsChangedEvent will receive all required fields correctly.
The implementation is sound.
Likely an incorrect or invalid review comment.
…ersions, compatibility with beanie, etc + no manual datetime -> str conversion + single point of definition for class EventMetadata(BaseModel)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
backend/app/db/docs/event.py (1)
2-2: Remove unused import flagged by linter. The pipeline failure indicates `Any` from `typing` is imported but unused.
🔧 Proposed fix
```diff
-from typing import Any
```

backend/app/events/core/consumer.py (1)
219-231: Fix ConsumerStatus.state type definition to accept the ConsumerState enum. The `ConsumerStatus.state` field is typed as `str` in backend/app/events/core/types.py (line 168), but the code now passes `self._state`, which is a `ConsumerState` enum object. Update the type annotation to `state: ConsumerState`.
The datetime changes to `ConsumerMetricsSnapshot.last_message_time` and `last_updated` are correct: they are already properly typed as `datetime | None` and match the datetime objects being passed.

backend/app/domain/replay/models.py (1)
31-45: Fix `ReplayFilter.is_empty()` ignoring `exclude_event_types`. `exclude_event_types` isn't considered, so a filter with only exclusions will be treated as "empty" (risking unintended broad replays/browses depending on caller behavior).
Proposed fix
```diff
     def is_empty(self) -> bool:
         return not any(
             [
                 self.event_ids,
                 self.execution_id,
                 self.correlation_id,
                 self.aggregate_id,
                 self.event_types,
+                self.exclude_event_types,
                 self.start_time,
                 self.end_time,
                 self.user_id,
                 self.service_name,
                 self.custom_query,
             ]
         )
```

backend/app/services/user_settings_service.py (1)
152-161: Custom settings updates won't survive replay (the event model drops `custom_settings`, apply ignores it). `update_custom_setting()` emits `changed_fields=["custom_settings", ...]`, but `DomainUserSettingsChangedEvent` doesn't include `custom_settings` and is `extra="ignore"`, so the value is lost; `_apply_event()` also filters it out. After cache expiry or restart, `get_user_settings_fresh()` will reconstruct without those changes.
Minimum fixes (pick one consistent approach):
- Event-source it: add `custom_settings: dict[str, Any] | None` to `DomainUserSettingsChangedEvent`, include it in `_settings_fields`, and apply it.
- If intentionally not event-sourced: on `custom_settings` updates, write a snapshot immediately (or store custom_settings elsewhere), and don't rely on replay.
Partial fix in this file (still requires updating DomainUserSettingsChangedEvent)
```diff
- _settings_fields = {"theme", "timezone", "date_format", "time_format", "notifications", "editor"}
+ _settings_fields = {"theme", "timezone", "date_format", "time_format", "notifications", "editor", "custom_settings"}
```
Also applies to: 206-219
backend/app/db/repositories/user_settings_repository.py (1)
30-57: Use `.value` for consistency with event_repository.py. Both `str(et)` and `et.value` produce identical results because `StringEnum.__str__()` returns `self.value`. However, event_repository.py (line 100) uses `.value` consistently. Adopt the same pattern here for code clarity:
Fix
```diff
- In(EventDocument.event_type, [str(et) for et in event_types]),
+ In(EventDocument.event_type, [et.value for et in event_types]),
```
backend/app/db/repositories/event_repository.py (1)
39-53: Use `event.execution_id` for the EXECUTION_ID span attribute, not `aggregate_id`. The current implementation sets the span attribute from `event.aggregate_id`, but most domain events have `execution_id` as their primary execution identifier. The `get_execution_events` method confirms these are distinct concepts by querying both fields separately. This mislabels execution traces for observability.
The same issue exists in backend/app/events/event_store.py. Use `getattr(event, "execution_id", "")` to handle edge cases like `UserSettingsUpdatedEvent`, which lack an execution context.
Proposed fix
```diff
 add_span_attributes(
     **{
         str(EventAttributes.EVENT_TYPE): str(event.event_type),
         str(EventAttributes.EVENT_ID): event.event_id,
-        str(EventAttributes.EXECUTION_ID): event.aggregate_id or "",
+        str(EventAttributes.EXECUTION_ID): getattr(event, "execution_id", "") or "",
     }
 )
```
🤖 Fix all issues with AI agents
In @backend/app/db/repositories/admin/admin_events_repository.py:
- Around line 178-185: The archive_event method creates EventArchiveDocument by
spreading event.model_dump() while also passing deleted_at and deleted_by
explicitly, causing duplicate keyword arguments; fix by calling
EventArchiveDocument with event.model_dump(exclude={'deleted_at','deleted_by'})
and then supplying deleted_at=datetime.now(timezone.utc) and
deleted_by=deleted_by, or alternatively remove the explicit
deleted_at/deleted_by args and rely on the values from event.model_dump();
update the archive_event function and reference EventArchiveDocument,
DomainEvent, and event.model_dump() accordingly.
- Around line 252-255: The loop over original_events currently reads
execution_id from event.model_extra (as in the snippet using exec_id =
(event.model_extra or {}).get("execution_id") or event.aggregate_id) which is
correct because EventDocument uses extra="allow"; to make presence/type safety
optional, either keep this access but add a short comment referencing
EventDocument's extra="allow", or validate/deserialize the raw event before
reading execution_id by calling domain_event_adapter.validate_python(event,
from_attributes=True) to get a typed domain event where execution_id is an
explicit field, then read execution_id from that object instead of model_extra.
In @backend/app/domain/events/typed.py:
- Around line 240-264: The DomainEvent discriminated union only lists 19 event
classes but the EventType enum contains ~54 values, so TypeAdapter(DomainEvent)
will fail for unmapped event_type values; update the DomainEvent union (and the
TypeAdapter) to include corresponding event classes for all EventType members
(e.g., add classes for USER_REGISTERED, USER_LOGIN/LOGGED_IN/LOGGED_OUT,
USER_UPDATED/DELETED, all NOTIFICATION_*, SCRIPT_*, SAGA_*, *_COMMAND,
security/resource/system events, etc.) or prune unused EventType entries so
every EventType has a matching event class; ensure each added class uses the
same discriminator key "event_type" and is imported/defined before the
DomainEvent union so that domain_event_adapter: TypeAdapter[DomainEvent] can
successfully deserialize all possible event_type values.
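For reference, a minimal sketch of the `event_type`-discriminated union plus TypeAdapter pattern this comment relies on, with two stand-in event classes instead of the full set; an `event_type` value with no matching class makes `validate_python` raise, which is the failure mode described above.

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter


class BaseEvent(BaseModel):
    event_id: str


class ExecutionCompletedEvent(BaseEvent):
    event_type: Literal["execution_completed"]
    exit_code: int = 0


class UserSettingsUpdatedEvent(BaseEvent):
    event_type: Literal["user_settings_updated"]
    changed_fields: list[str] = Field(default_factory=list)


# Every event_type value that can appear in storage must map to exactly one
# union member; an unmapped value makes validate_python raise a ValidationError.
DomainEvent = Annotated[
    Union[ExecutionCompletedEvent, UserSettingsUpdatedEvent],
    Field(discriminator="event_type"),
]

domain_event_adapter: TypeAdapter[DomainEvent] = TypeAdapter(DomainEvent)

event = domain_event_adapter.validate_python(
    {"event_id": "e1", "event_type": "execution_completed", "exit_code": 0}
)
print(type(event).__name__)  # ExecutionCompletedEvent
```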
In @backend/app/domain/saga/models.py:
- Line 5: Remove the unused computed_field import from the top-level import list
in models.py; update the import line that currently reads "from pydantic import
BaseModel, ConfigDict, Field, computed_field" to omit computed_field since
SagaListResult uses a standard @property for has_more, leaving the other imports
unchanged.
In @backend/app/services/admin/admin_events_service.py:
- Line 246: The runtime AttributeError occurs because EventFilter is a dataclass
(EventFilter) and event_filter.model_dump() is invalid; replace the model_dump
call with dataclasses.asdict(event_filter) (import asdict from dataclasses) to
serialize the dataclass, or alternatively change EventFilter to a Pydantic
BaseModel in event_models so model_dump remains valid; update the serialization
in the code path that constructs "filters_applied" (where event_filter is
referenced) accordingly.
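A tiny sketch of the `asdict` option, assuming EventFilter stays a plain dataclass; the field names are illustrative.

```python
from dataclasses import asdict, dataclass, field


@dataclass
class EventFilter:
    event_types: list[str] = field(default_factory=list)
    user_id: str | None = None


event_filter = EventFilter(event_types=["execution_completed"], user_id="u1")

# Plain dataclasses have no model_dump(); asdict() does the recursive conversion.
filters_applied = asdict(event_filter)
print(filters_applied)  # {'event_types': ['execution_completed'], 'user_id': 'u1'}
```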
🧹 Nitpick comments (8)
backend/app/domain/saga/models.py (1)
66-69: Consider using `computed_field` for serialization. Using `@property` works for attribute access, but the `has_more` value won't be included when the model is serialized via `model_dump()`. If this field should be present in API responses, use `@computed_field` instead.
Optional: use computed_field for serialization
```diff
-    @property
+    @computed_field
+    @property
     def has_more(self) -> bool:
         """Calculate has_more."""
         return (self.skip + len(self.sagas)) < self.total
```
This would also justify keeping the `computed_field` import.

backend/app/api/routes/events.py (1)
349-354: LGTM! Payload extraction aligns with event flattening. The logic correctly extracts event-specific fields by excluding BaseEvent base fields and the event_type discriminator. This implements the PR's objective of removing the payload wrapper and working with top-level fields.
One optional consideration: `event.model_dump()` includes None values by default. If you want to exclude optional fields that are None from the replayed payload, consider using `event.model_dump(exclude_none=True)` on line 351. However, the current behavior may be intentional if None values carry semantic meaning.

backend/app/services/event_bus.py (1)
139-139: Prefer `model_dump()` over `vars()` for Pydantic models. Using `vars(event)` directly accesses the instance dictionary, bypassing Pydantic's serialization logic. This can miss custom serializers, validators, or computed fields.
♻️ Proposed refactor
```diff
- value = json.dumps(vars(event)).encode("utf-8")
+ value = json.dumps(event.model_dump(mode="json")).encode("utf-8")
```
This ensures proper serialization of datetime fields and respects any custom Pydantic configuration.
backend/app/domain/replay/models.py (1)
47-88: Execution-id query flattening looks consistent; consider dropping the redundant `str()` cast. `execution_id` is already typed as `str | None`, so `str(self.execution_id)` is likely unnecessary unless callers pass non-strings. If you want the coercion, consider typing it as `str | UUID | None` instead.

backend/app/domain/saved_script/models.py (1)
31-39: Consider making `DomainSavedScriptUpdate.updated_at` optional (set it in the repo/service instead). As written, constructing `DomainSavedScriptUpdate()` will still advance `updated_at`, which can turn "no changes" into a write.

backend/app/db/repositories/user_settings_repository.py (1)
51-57: Guard `e.metadata` access if it can be null. If `EventDocument.metadata` is optional in the schema, `e.metadata.correlation_id` can throw; consider `getattr(e.metadata, "correlation_id", None)`.

backend/app/services/user_settings_service.py (1)
163-180: Consider using `event.model_dump(exclude_none=True)` for the history value lookup. This avoids "present-but-None" keys and makes lookups cleaner, especially if the model config changes around extras. (Pydantic v2 behavior detail.)
backend/app/domain/user/user_models.py (1)
106-127: Consider Pydantic-native validation for email/password (instead of `is_valid`). E.g., `EmailStr` for the email field.
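A minimal sketch of what Pydantic-native validation could look like here; the model below is illustrative, not the project's DomainUserCreate, and `EmailStr` needs the optional email-validator dependency.

```python
from pydantic import BaseModel, EmailStr, Field


class UserCreate(BaseModel):
    email: EmailStr                       # requires: pip install "pydantic[email]"
    password: str = Field(min_length=8)   # rejects short passwords at construction


UserCreate(email="alice@example.com", password="s3cret-pass")
```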
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
- backend/app/api/routes/events.py
- backend/app/api/routes/health.py
- backend/app/db/docs/event.py
- backend/app/db/repositories/admin/admin_events_repository.py
- backend/app/db/repositories/admin/admin_user_repository.py
- backend/app/db/repositories/event_repository.py
- backend/app/db/repositories/execution_repository.py
- backend/app/db/repositories/notification_repository.py
- backend/app/db/repositories/replay_repository.py
- backend/app/db/repositories/resource_allocation_repository.py
- backend/app/db/repositories/saga_repository.py
- backend/app/db/repositories/saved_script_repository.py
- backend/app/db/repositories/sse_repository.py
- backend/app/db/repositories/user_repository.py
- backend/app/db/repositories/user_settings_repository.py
- backend/app/domain/admin/overview_models.py
- backend/app/domain/admin/replay_updates.py
- backend/app/domain/events/__init__.py
- backend/app/domain/events/event_metadata.py
- backend/app/domain/events/event_models.py
- backend/app/domain/events/typed.py
- backend/app/domain/execution/models.py
- backend/app/domain/notification/models.py
- backend/app/domain/replay/models.py
- backend/app/domain/saga/models.py
- backend/app/domain/saved_script/models.py
- backend/app/domain/sse/models.py
- backend/app/domain/user/settings_models.py
- backend/app/domain/user/user_models.py
- backend/app/events/core/consumer.py
- backend/app/events/core/types.py
- backend/app/schemas_pydantic/sse.py
- backend/app/services/admin/admin_events_service.py
- backend/app/services/event_bus.py
- backend/app/services/event_service.py
- backend/app/services/kafka_event_service.py
- backend/app/services/notification_service.py
- backend/app/services/sse/sse_service.py
- backend/app/services/user_settings_service.py
- backend/tests/unit/services/pod_monitor/test_monitor.py
- backend/tests/unit/services/sse/test_sse_service.py
💤 Files with no reviewable changes (1)
- backend/app/domain/events/event_metadata.py
🧰 Additional context used
🧬 Code graph analysis (24)
- backend/app/db/repositories/admin/admin_user_repository.py (4)
  - backend/app/domain/user/user_models.py (3): `User` (44-57), `UserListResult` (83-91), `UserUpdate` (60-80)
  - backend/app/db/repositories/user_repository.py (2): `get_user_by_id` (22-24), `update_user` (53-62)
  - backend/app/services/coordinator/queue_manager.py (1): `user_id` (34-35)
  - backend/app/core/security.py (1): `get_password_hash` (35-36)
- backend/app/events/core/consumer.py (3)
  - backend/app/services/notification_service.py (1): `state` (163-164)
  - backend/app/events/core/producer.py (1): `state` (56-57)
  - backend/app/services/pod_monitor/monitor.py (1): `state` (140-142)
- backend/app/services/event_service.py (3)
  - backend/app/domain/events/typed.py (1): `ArchivedEvent` (219-235)
  - backend/app/db/repositories/event_repository.py (1): `get_event` (68-72)
  - backend/app/services/coordinator/queue_manager.py (1): `user_id` (34-35)
- backend/app/domain/admin/overview_models.py (1)
  - backend/app/domain/events/event_models.py (1): `EventStatistics` (125-136)
- backend/app/domain/replay/models.py (1)
  - backend/app/services/coordinator/queue_manager.py (1): `execution_id` (30-31)
- backend/app/services/admin/admin_events_service.py (2)
  - backend/tests/unit/services/idempotency/test_middleware.py (1): `event` (32-36)
  - backend/app/domain/events/query_builders.py (1): `limit` (22-24)
- backend/app/services/user_settings_service.py (3)
  - backend/app/domain/user/settings_models.py (2): `DomainUserSettingsChangedEvent` (72-89), `DomainUserSettings` (34-47)
  - backend/app/db/repositories/user_settings_repository.py (1): `get_settings_events` (30-57)
  - backend/app/domain/events/query_builders.py (1): `limit` (22-24)
- backend/app/api/routes/events.py (2)
  - backend/app/domain/events/typed.py (1): `BaseEvent` (27-38)
  - backend/app/infrastructure/kafka/events/base.py (1): `BaseEvent` (13-37)
- backend/app/db/repositories/notification_repository.py (1)
  - backend/app/domain/notification/models.py (4): `DomainNotification` (16-45), `DomainNotificationUpdate` (96-108), `DomainNotificationSubscription` (48-67), `DomainSubscriptionUpdate` (111-126)
- backend/app/db/repositories/saga_repository.py (1)
  - backend/app/domain/saga/models.py (1): `Saga` (10-27)
- backend/tests/unit/services/pod_monitor/test_monitor.py (1)
  - backend/app/db/repositories/event_repository.py (1): `store_event` (39-52)
- backend/app/db/repositories/admin/admin_events_repository.py (3)
  - backend/app/domain/replay/models.py (1): `ReplayFilter` (11-88)
  - backend/app/db/repositories/replay_repository.py (1): `update_replay_session` (71-79)
  - backend/app/domain/admin/replay_updates.py (1): `ReplaySessionUpdate` (8-23)
- backend/app/db/repositories/user_settings_repository.py (1)
  - backend/app/domain/user/settings_models.py (2): `DomainUserSettings` (34-47), `DomainUserSettingsChangedEvent` (72-89)
- backend/app/db/repositories/resource_allocation_repository.py (1)
  - backend/app/domain/saga/models.py (1): `DomainResourceAllocation` (133-147)
- backend/app/domain/events/event_models.py (2)
  - backend/app/core/utils.py (1): `StringEnum` (6-31)
  - backend/tests/unit/services/idempotency/test_middleware.py (1): `event` (32-36)
- backend/app/db/docs/event.py (1)
  - backend/app/domain/events/typed.py (1): `EventMetadata` (13-24)
- backend/app/db/repositories/sse_repository.py (2)
  - backend/app/db/repositories/execution_repository.py (1): `get_execution` (27-35)
  - backend/app/domain/execution/models.py (1): `DomainExecution` (22-37)
- backend/app/db/repositories/execution_repository.py (1)
  - backend/app/domain/execution/models.py (2): `DomainExecution` (22-37), `DomainExecutionUpdate` (88-98)
- backend/app/services/kafka_event_service.py (3)
  - backend/app/db/repositories/event_repository.py (1): `store_event` (39-52)
  - backend/app/events/event_store.py (1): `store_event` (46-72)
  - backend/app/infrastructure/kafka/mappings.py (1): `get_event_class_for_type` (72-138)
- backend/app/db/repositories/replay_repository.py (1)
  - backend/app/domain/replay/models.py (1): `ReplaySessionState` (119-146)
- backend/tests/unit/services/sse/test_sse_service.py (1)
  - backend/app/domain/execution/models.py (1): `ResourceUsageDomain` (13-19)
- backend/app/db/repositories/user_repository.py (3)
  - backend/app/domain/user/user_models.py (2): `User` (44-57), `DomainUserCreate` (129-139)
  - backend/app/db/repositories/admin/admin_user_repository.py (2): `create_user` (25-28), `get_user_by_id` (53-55)
  - backend/app/services/coordinator/queue_manager.py (1): `user_id` (34-35)
- backend/app/domain/events/typed.py (5)
  - backend/app/domain/enums/common.py (1): `Environment` (27-33)
  - backend/app/domain/enums/storage.py (1): `StorageType` (16-22)
  - backend/app/domain/execution/models.py (1): `ResourceUsageDomain` (13-19)
  - backend/app/services/coordinator/queue_manager.py (2): `user_id` (34-35), `execution_id` (30-31)
  - backend/app/schemas_pydantic/admin_events.py (1): `progress_percentage` (104-105)
- backend/app/domain/events/__init__.py (2)
  - backend/app/domain/events/event_models.py (13): `EventBrowseResult` (94-100), `EventDetail` (104-109), `EventExportRow` (183-196), `EventFilter` (54-65), `EventListResult` (83-90), `EventProjection` (140-149), `EventQuery` (69-79), `EventReplayInfo` (153-160), `EventSortOrder` (15-17), `EventStatistics` (125-136), `EventSummary` (44-50), `HourlyEventCount` (113-115), `UserEventCount` (119-121)
  - backend/app/domain/events/typed.py (20): `BaseEvent` (27-38), `EventMetadata` (13-24), `ExecutionAcceptedEvent` (61-66), `ExecutionCompletedEvent` (91-97), `ExecutionFailedEvent` (100-108), `ExecutionQueuedEvent` (69-73), `ExecutionRequestedEvent` (44-58), `ExecutionRunningEvent` (84-88), `ExecutionStartedEvent` (76-81), `ExecutionTimeoutEvent` (111-117), `PodCreatedEvent` (131-135), `PodDeletedEvent` (181-185), `PodFailedEvent` (161-169), `PodRunningEvent` (145-149), `PodScheduledEvent` (138-142), `PodSucceededEvent` (152-158), `PodTerminatedEvent` (172-178), `ResultFailedEvent` (199-203), `ResultStoredEvent` (191-196), `UserSettingsUpdatedEvent` (209-213)
🪛 GitHub Actions: MyPy Type Checking
backend/app/services/admin/admin_events_service.py
[error] 246-246: "EventFilter" has no attribute "model_dump" [attr-defined]
🪛 GitHub Actions: Ruff Linting
backend/app/domain/saga/models.py
[error] 5-5: F401: 'pydantic.computed_field' imported but unused. Remove unused import.
backend/app/db/docs/event.py
[error] 2-2: F401: 'typing.Any' imported but unused. Remove unused import.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Scan Backend
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (93)
backend/app/db/repositories/saga_repository.py (5)
32-32: LGTM! Correct migration from `asdict(saga)` to `saga.model_dump()` for Pydantic v2 serialization when constructing the Beanie document.
43-47: LGTM! The use of `Saga.model_validate(doc, from_attributes=True)` is the correct Pydantic v2 pattern for constructing domain models from Beanie documents. Since `Saga` has `ConfigDict(from_attributes=True)`, this properly reads attributes directly from the document instance.
62-62: LGTM! Consistent application of `model_validate` in the list comprehension matches the pattern used elsewhere in the repository.
74-74: LGTM! Consistent with the other retrieval methods in this repository.
118-118: LGTM! Completes the consistent migration to `model_validate` across all saga retrieval paths.

backend/app/domain/saga/models.py (2)
10-27: LGTM! Clean migration to Pydantic BaseModel with proper use of `ConfigDict(from_attributes=True)` and `Field(default_factory=...)` for mutable defaults. The lambda-based datetime factories correctly generate UTC timestamps.
30-41: LGTM! All remaining model classes are consistently migrated to Pydantic BaseModel with proper configuration. The use of `ConfigDict(from_attributes=True)` enables attribute-based construction from ORM/document objects across the codebase.
Also applies to: 44-53, 72-79, 82-93, 96-110, 113-130, 133-147, 150-160

backend/app/db/repositories/resource_allocation_repository.py (1)
12-18: LGTM! Correct migration to Pydantic v2 patterns:
- `create_data.model_dump()` properly serializes the Pydantic model for document construction
- `model_validate(doc, from_attributes=True)` correctly constructs the domain model from the Beanie document
This aligns with the same pattern used in saga_repository.py.

backend/app/db/repositories/execution_repository.py (5)
20-25: LGTM! Clean migration to Pydantic v2 patterns. The use of `model_dump()` and `model_validate(..., from_attributes=True)` is the correct approach for converting between domain models and Beanie documents. The `from_attributes=True` parameter properly handles attribute access from the document object.
27-35: LGTM! Consistent Pydantic v2 conversion pattern. The retrieval and conversion logic correctly uses `model_validate(..., from_attributes=True)` to construct the domain model from the Beanie document.
37-46: LGTM! Proper use of exclude_none for partial updates. Using `model_dump(exclude_none=True)` is the correct approach for partial updates, ensuring only explicitly set fields are updated in the database.
67-78: LGTM! Consistent bulk conversion pattern. The list comprehension with `model_validate(..., from_attributes=True)` correctly converts multiple Beanie documents to domain models, maintaining consistency with the single-object retrieval methods.
48-65: LGTM. The resource_usage handling correctly uses `model_dump()` with a ternary operator to handle the optional field. `ResourceUsageDomain` is defined as a Pydantic `BaseModel` and supports the `model_dump()` method.

backend/app/api/routes/health.py (1)
18-18: Type safety improvement is backward compatible. The migration from string to native `datetime` is clean and improves type safety. Pydantic v2 automatically serializes the timezone-aware datetime to ISO 8601 format (e.g., "2025-06-01T12:13:14+00:00"), which matches the previous `.isoformat()` output exactly. No changes needed for API consumers.
Also applies to: 34-34
backend/app/db/repositories/replay_repository.py (4)
18-23: LGTM! Correct migration to Pydantic model_dump().The switch from
asdict(session)tosession.model_dump()correctly aligns with the Pydantic BaseModel-basedReplaySessionState.
25-29: LGTM! Proper use of model_validate with from_attributes.Using
model_validate(doc, from_attributes=True)is the correct Pydantic v2 pattern for constructing a model from a Beanie Document's attributes.
71-79: LGTM! Clean partial update handling.
model_dump(exclude_none=True)correctly excludes None values for partial updates, which is the idiomatic Pydantic v2 approach.
91-96: LGTM! Simplified event batching aligns with payload flattening. The removal of payload-merging logic and direct use of `doc.model_dump(exclude=...)` correctly reflects the flattened event structure where event-specific fields are now at the document level.

backend/app/events/core/types.py (2)
149-161: LGTM! Clean migration to Pydantic BaseModel.The conversion from dataclass to
BaseModelwithConfigDict(from_attributes=True)is correct. The timestamp field type change fromstr | Nonetodatetime | Noneimproves type safety and aligns with the PR's goal of tightening typing.
163-172: LGTM! Consistent with ConsumerMetricsSnapshot migration.
ConsumerStatuscorrectly follows the same Pydantic BaseModel pattern withfrom_attributes=Trueconfiguration.backend/app/schemas_pydantic/sse.py (3)
31-31: LGTM! Timestamp type tightened to datetime.Changing from
str | Nonetodatetime | Noneis consistent with the PR's typing improvements. Pydantic will serialize these to ISO 8601 strings in JSON responses, maintaining compatibility with SSE consumers.
77-78: LGTM! Consistent timestamp typing in notification events.Both
timestampandcreated_atfields are now properly typed asdatetime | None, aligning with the execution event schema changes.Also applies to: 88-88
101-101: LGTM! Non-optional datetime for RedisNotificationMessage.
created_atis now a requireddatetimefield without a default, which is appropriate for Redis messages that should always have a creation timestamp.backend/app/domain/sse/models.py (3)
10-20: LGTM! ShutdownStatus migrated to BaseModel.The model correctly includes
from_attributes=Trueconfiguration and properly defines all shutdown status fields.
23-33: LGTM! SSEHealthDomain correctly structured.The health domain model properly aggregates shutdown status and other health metrics with appropriate typing.
36-48: LGTM! SSEExecutionStatusDomain and SSEEventDomain properly defined.Both models follow the established Pydantic BaseModel pattern with
from_attributes=True. Thetimestampfield correctly usesdatetimetype.backend/app/db/docs/event.py (4)
14-30: LGTM! EventDocument correctly updated for the flattened event structure. The `extra="allow"` configuration properly enables flexible event-specific fields at the document level, and `from_attributes=True` enables attribute-based model population. The docstring accurately describes the new storage pattern.
42-44: LGTM! Sparse indexes for event-specific fields. Using `sparse=True` is correct for `execution_id` and `pod_name` since these fields only exist on certain event types. This avoids indexing documents where these fields are absent.
58-68: LGTM! Text search index updated for the flattened structure. The text search index now correctly references `execution_id` at the top level instead of `payload.execution_id`.
72-93: LGTM! EventArchiveDocument mirrors the EventDocument changes. The archive document correctly adopts the same `extra="allow"` pattern and removes the payload field, maintaining consistency with the active events collection.

backend/app/domain/events/__init__.py (2)
1-43: LGTM! Clean reorganization of event module exports.The imports are well-organized, separating query/filter/result types from typed event classes. This provides a clear public API surface for the events domain.
45-87: LGTM! Comprehensive all with clear categorization.The
__all__list is properly organized with comments distinguishing "Query/filter/result types" from "Typed events", making the module's public API discoverable and well-documented.backend/app/domain/execution/models.py (5)
13-19: LGTM! ResourceUsageDomain correctly migrated.The model properly uses
ConfigDict(from_attributes=True)and defines appropriate default values for resource metrics.
22-37: LGTM! DomainExecution with proper Field defaults.The use of
Field(default_factory=lambda: str(uuid4()))forexecution_idandField(default_factory=lambda: datetime.now(timezone.utc))for timestamps correctly ensures unique IDs and UTC-aware timestamps on instantiation.
40-51: LGTM! ExecutionResultDomain properly structured.The model correctly combines required fields (
execution_id,status,exit_code,stdout,stderr) with optional fields and appropriate defaults.
54-73: LGTM! LanguageInfoDomain and ResourceLimitsDomain properly defined.Both models follow the established pattern with
from_attributes=Trueconfiguration and appropriate field types.
76-98: LGTM! Create and Update models properly defined.
DomainExecutionCreatecorrectly defines required fields for execution creation, andDomainExecutionUpdateusesOptionaltypes for all fields to support partial updates.backend/app/domain/user/settings_models.py (3)
46-47: Verify the timestamp default behavior. Using `Field(default_factory=lambda: datetime.now(timezone.utc))` means a new timestamp is generated each time the field's default is evaluated during model creation. This is likely the intended behavior for `created_at` and `updated_at` fields.
However, ensure that when updating existing settings, these timestamps are explicitly set rather than relying on defaults, as Pydantic will call the default_factory for any missing field during model instantiation.
72-89: Event model design looks good. The `DomainUserSettingsChangedEvent` with `extra="ignore"` allows forward compatibility when deserializing events that may contain additional fields not defined in the current schema. This is a good practice for event-driven architectures. The explicit field definitions provide strong typing while maintaining flexibility.
42-44: LGTM: Proper use of Field defaults. Using `Field(default_factory=...)` for mutable default values (lists, dicts, and complex objects) is the correct pattern to avoid shared mutable state across instances.

backend/app/domain/admin/overview_models.py (1)
8-8: LGTM: Consistent type migration.The migration from
EventtoDomainEventaligns with the PR's objective to standardize event typing across the codebase. The import and field type are updated consistently.Also applies to: 34-34
backend/tests/unit/services/pod_monitor/test_monitor.py (1)
11-11: LGTM: Test doubles aligned with domain changes.The test infrastructure correctly reflects the migration to
DomainEvent. TheFakeEventRepositorynow matches the actualEventRepositorysignature (as shown in the relevant code snippet at backend/app/db/repositories/event_repository.py:38-51).Also applies to: 53-57
backend/app/services/notification_service.py (1)
803-803: This concern is unfounded; the code change is correct. `RedisNotificationMessage` has `created_at` typed as `datetime` (line 101 in backend/app/schemas_pydantic/sse.py), and the message is serialized via `model_dump_json()` (line 81 in backend/app/services/sse/redis_bus.py). Pydantic v2 natively handles datetime serialization to ISO 8601 format, so passing the raw datetime object is the correct approach and aligns with Pydantic best practices.
Likely an incorrect or invalid review comment.
backend/app/db/repositories/sse_repository.py (1)
16-16: LGTM! Timestamp and model validation improvements.Both changes align with the PR's migration to typed domain models:
- Line 16: Using native
datetimeobjects instead of ISO strings provides better type safety and allows downstream consumers to work with properly typed temporal data.- Line 23: Using
model_validate(doc, from_attributes=True)is the correct Pydantic v2 pattern for constructing domain models from Beanie documents, consistent with the pattern used inexecution_repository.py.Also applies to: 23-23
backend/app/services/sse/sse_service.py (1)
61-61: LGTM! Consistent timestamp type migration.The systematic replacement of
.isoformat()calls with nativedatetimeobjects across all SSE event constructions (CONNECTED, SUBSCRIBED, ERROR, SHUTDOWN, HEARTBEAT, NOTIFICATION events) ensures type consistency throughout the SSE flow. This aligns with the Pydantic models' expectations and the repository changes that now return datetime objects.Also applies to: 75-75, 90-90, 135-135, 148-148, 207-207, 220-220, 234-234
backend/app/api/routes/events.py (1)
300-300: LGTM! Safe metadata access.The conditional access to
result.metadata.correlation_idwith the null check prevents AttributeError when metadata is not present.backend/tests/unit/services/sse/test_sse_service.py (2)
72-72: LGTM! Test aligns with repository timestamp change.The mock now returns a native
datetimeobject, matching the productionSSERepository.get_execution_statusbehavior updated in this PR.
172-172: LGTM! Keyword arguments improve clarity.Using explicit keyword arguments for
ResourceUsageDomainconstruction enhances readability and prevents errors from positional argument reordering. This aligns with the Pydantic model definition inbackend/app/domain/execution/models.py.backend/app/db/repositories/saved_script_repository.py (1)
9-9: LGTM! Complete Pydantic v2 migration.The repository now consistently uses Pydantic v2 patterns throughout:
model_dump()replacesasdict()for serializing domain models to dictionariesmodel_validate(doc, from_attributes=True)replaces manual dict unpacking for constructing domain models from Beanie documentsmodel_dump(exclude_none=True)simplifies update logic by automatically filtering None valuesThis aligns with the broader codebase migration to Pydantic-based domain models and follows the same pattern used in other repositories throughout this PR.
Also applies to: 11-11, 18-18, 33-33, 35-35, 49-49
backend/app/services/admin/admin_events_service.py (3)
26-40: LGTM! The use of `model_dump(mode="json")` correctly serializes the Pydantic `EventExportRow` model with automatic datetime-to-ISO-string conversion. The dict construction with display-friendly keys is appropriate for CSV export.
239-240: LGTM! Using `model_dump(mode="json")` for events correctly serializes `DomainEvent` instances with automatic datetime conversion, making the removed `default=str` parameter on line 251 unnecessary.
271-277: The defensive metadata check is necessary and correct. The `metadata` field is typed as `EventMetadata | None` with a default value of `None` across all `DomainEvent` variants (inherited from `BaseEvent`). The null check prevents `AttributeError` when accessing `correlation_id` on a potentially None metadata object, making this defensive pattern essential rather than optional.

backend/app/domain/events/event_models.py (4)
5-9: LGTM!Import changes correctly support the migration to Pydantic-based domain events.
86-86: Consistent migration to DomainEvent.All event-containing result types now consistently use
DomainEventinstead of the oldEventdataclass, aligning with the PR's goal of tightening typing across the event system.Also applies to: 97-97, 107-107, 156-156, 167-167
183-196: LGTM!The conversion of
EventExportRowfrom a dataclass to a PydanticBaseModelwithfrom_attributes=Truecorrectly supports attribute-based instantiation and automatic serialization viamodel_dump(mode="json"). Thedatetimetype fortimestampis appropriate sincemode="json"handles ISO string conversion.
171-178: Clarify whether events with missing metadata should be filtered out on line 178. The filter `if e.metadata and not e.metadata.service_name.startswith("system-")` will exclude events where `metadata` is None. If `metadata` is expected to always be present, this guard is redundant. If it can be None, confirm whether such events should be silently excluded or handled differently.

backend/app/services/event_bus.py (3)
20-28: LGTM!Converting
EventBusEventto a PydanticBaseModelwithdatetimeinstead ofstrfor the timestamp improves type safety and aligns with the PR's goal of tightening typing.
157-164: LGTM!Creating the event with a
datetimetimestamp directly is correct. Pydantic will automatically handle serialization to ISO string format when needed (e.g., for JSON).
290-292: LGTM!Using
EventBusEvent.model_validate(event_dict)correctly deserializes the Kafka message, with Pydantic automatically parsing ISO timestamp strings back todatetimeobjects.backend/app/domain/admin/replay_updates.py (1)
3-23: LGTM!The conversion of
ReplaySessionUpdatefrom a dataclass to a PydanticBaseModelwithfrom_attributes=Trueis clean and consistent with the broader Pydantic v2 migration across the codebase.backend/app/db/repositories/admin/admin_user_repository.py (4)
25-28: LGTM!The use of
model_dump()for serialization andmodel_validate(doc, from_attributes=True)for deserialization correctly implements the Pydantic v2 pattern for Beanie document handling.
30-51: LGTM!The list comprehension using
model_validate(doc, from_attributes=True)correctly converts Beanie documents to domain models, consistent with the repository pattern.
53-55: LGTM!Correct use of
model_validatewith properNonehandling.
57-70: LGTM!The update method correctly uses
model_dump(exclude_none=True)to only update fields that were set, preserves password hashing logic, and properly returns the updated domain model viamodel_validate.backend/app/services/event_service.py (2)
8-15: LGTM!The updated imports align with the PR's domain event typing strategy, replacing generic Event types with DomainEvent and ArchivedEvent.
179-191: Good defensive access to metadata.The conditional check
event.metadata.user_id if event.metadata else Noneguards against missing metadata, preventing potential AttributeErrors.backend/app/db/repositories/notification_repository.py (3)
24-27: LGTM!Correctly migrated to Pydantic v2:
model_dump()for serialization andmodel_validate(from_attributes=True)for domain model construction from Beanie documents.
29-38: Proper partial update pattern.Using
model_dump(exclude_none=True)ensures only provided fields are updated, preventing None values from overwriting existing data.
66-91: Consistent model conversion in list operations.The list comprehension correctly applies
model_validate(from_attributes=True)to each document, maintaining consistency with the repository's conversion pattern.backend/app/db/repositories/user_repository.py (2)
13-20: LGTM!The repository correctly adopts Pydantic v2 patterns for both user retrieval and creation, using
model_dump()for document construction andmodel_validate(from_attributes=True)for domain model conversion.
53-62: Proper update handling with Pydantic v2.The update flow correctly uses
model_dump(exclude_none=True)to extract only provided fields and updates theupdated_attimestamp before persisting.backend/app/services/kafka_event_service.py (3)
105-116: LGTM! The Kafka event construction properly mirrors the domain event structure while using the Avro metadata for Kafka-specific publishing.
243-256: Consistent domain event construction in publish_base_event. The method correctly extracts the payload by excluding base fields and reconstructs a typed domain event via `domain_event_adapter`, maintaining consistency with the primary publish_event flow.
81-97: No actionable issue found. The code explicitly excludes base fields (`event_id`, `event_type`, `event_version`, `timestamp`, `aggregate_id`, `metadata`) when extracting the payload (line 244), preventing any field conflicts when spreading `**payload` into `event_data`. The event classes do not override base fields, and the discriminated union properly validates the combined structure.
Likely an incorrect or invalid review comment.
backend/app/db/repositories/admin/admin_events_repository.py (2)
47-62: LGTM!The browse_events method correctly uses
domain_event_adapter.validate_python(from_attributes=True)to convert Beanie documents to typed domain events, aligning with the PR's domain event typing strategy.
187-196: LGTM!The replay session persistence and retrieval correctly use
model_dump()andmodel_validate(from_attributes=True), maintaining consistency with Pydantic v2 patterns.backend/app/domain/replay/models.py (2)
91-117: PrivateAttr progress callback approach is clean.
119-161: Pydantic BaseModel migration + UTC defaults look good.backend/app/domain/saved_script/models.py (1)
8-39: Migration to BaseModel + Field factories looks good.backend/app/db/repositories/user_settings_repository.py (1)
16-28: Snapshot serialize/deserialize viamodel_validate/model_dumpis aligned with the PR goals.backend/app/services/user_settings_service.py (1)
68-86: Rebuild-from-snapshot+typed-events flow looks consistent.backend/app/domain/user/user_models.py (1)
35-151: BaseModel conversion is straightforward and consistent.backend/app/domain/notification/models.py (1)
16-126: Pydantic migration + default factories look good (no shared-mutable defaults).backend/app/db/repositories/event_repository.py (2)
54-67: DomainEvent adapter usage across reads/writes is consistent and simplifies typing.
Also applies to: 68-73, 74-144, 256-304, 339-367
145-155: Remove this comment; `EventDocument` does not declare an `execution_id` field. EventDocument stores `execution_id` dynamically via the `extra="allow"` configuration and only defines it as a sparse index, not as a field. Since it's not a declared field, the raw dict clause `{"execution_id": execution_id}` is the correct and only approach to query it.
Likely an incorrect or invalid review comment.
backend/app/domain/events/typed.py (8)
13-24: LGTM!
EventMetadatais well-structured with appropriate defaults and optional fields. Thecorrelation_iddefault factory ensures unique IDs, andfrom_attributes=Trueenables ORM compatibility.
27-38: LGTM!
BaseEventcorrectly uses Pydantic v2 patterns with appropriate default factories for UUIDs and timezone-aware datetimes. The 30-day TTL default is reasonable for event retention.
44-126: LGTM!The execution event classes are well-designed with:
- Proper
Literaldiscriminators for each event type- Appropriate optional vs required fields
- Consistent use of
ResourceUsageDomainfor resource tracking- Sensible defaults for output fields (
stdout,stderr)
131-186: LGTM! Pod lifecycle events are consistently structured with appropriate defaults. The container_statuses field as a string (line 149) works for simple serialization, though if richer status data is needed later, consider making it a structured type.
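If that richer representation is ever wanted, a hypothetical structured variant might look like this; the model and field names are invented purely to illustrate the suggestion, not taken from the codebase.

```python
from pydantic import BaseModel, Field


class ContainerStatusInfo(BaseModel):
    """Hypothetical structured replacement for the plain-string field."""
    name: str
    ready: bool = False
    restart_count: int = 0
    reason: str | None = None


class PodStatusSnapshot(BaseModel):
    # Instead of container_statuses: str, keep a typed list whose items can
    # be validated and queried field by field.
    container_statuses: list[ContainerStatusInfo] = Field(default_factory=list)
```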
191-204: LGTM! Result events correctly capture storage outcomes with appropriate optional fields.
209-214: LGTM! UserSettingsUpdatedEvent correctly separates the subject user_id (whose settings changed) from metadata.user_id (who triggered the change). The changed_fields list is a good pattern for audit trails.
219-236: LGTM! ArchivedEvent intentionally extends BaseModel directly rather than BaseEvent, preserving original event values without applying defaults. The archive-specific fields (deleted_at, deleted_by, deletion_reason) provide good audit context.
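A rough sketch of that archive-wrapper idea, inferred from the description above rather than copied from the real ArchivedEvent class:

```python
from datetime import datetime
from pydantic import BaseModel


class SketchArchivedEvent(BaseModel):
    """Extends BaseModel, not the event base class, so stored values are
    kept verbatim and no id/timestamp defaults are re-applied on load."""
    event_id: str
    event_type: str
    timestamp: datetime
    # Archive-specific audit fields.
    deleted_at: datetime
    deleted_by: str
    deletion_reason: str | None = None
```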
1-10: LGTM! Imports are well-organized and all appear to be used. The Pydantic v2 imports (ConfigDict, Discriminator, TypeAdapter) are the correct modern API.
4 issues found across 41 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/app/services/event_bus.py">
<violation number="1" location="backend/app/services/event_bus.py:27">
P1: Changing `timestamp` from `str` to `datetime` breaks JSON serialization in `publish()`. The existing `json.dumps(vars(event))` at line 139 cannot serialize `datetime` objects. Replace `vars(event)` with `event.model_dump(mode='json')` which properly converts datetime to ISO string, or use `event.model_dump_json()` directly.</violation>
</file>
<file name="backend/app/domain/events/event_models.py">
<violation number="1" location="backend/app/domain/events/event_models.py:178">
P2: Events with `metadata=None` are silently filtered out when `include_system_events=False`. Consider explicitly handling the None case - perhaps events without metadata should be included since they're not definitively system events.</violation>
</file>
<file name="backend/app/domain/saga/models.py">
<violation number="1" location="backend/app/domain/saga/models.py:66">
P2: `computed_field` is imported but unused. The `@property` decorator on `has_more` won't include this field in serialization (`model_dump()`). If `has_more` needs to be serialized (as the original dataclass behavior suggests), use `@computed_field` instead of `@property`.</violation>
</file>
<file name="backend/app/db/repositories/event_repository.py">
<violation number="1" location="backend/app/db/repositories/event_repository.py:334">
P2: The refactored code no longer excludes `id` and `revision_id` when creating the archived document. The old code explicitly did `doc.model_dump(exclude={"id", "revision_id"})`, ensuring a new MongoDB `_id` would be generated on insert. The new `model_validate(doc, from_attributes=True)` copies the original document's `id`, causing the archive to be inserted with the same `_id` as the source document. While this works (different collection), it's a behavioral change that may not be intentional.</violation>
</file>
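Regarding the event_bus.py violation above, a minimal sketch of the suggested serialization fix follows; the event model is a stand-in and the real BusEvent fields may differ.

```python
import json
from datetime import datetime, timezone
from pydantic import BaseModel, Field


class SketchBusEvent(BaseModel):
    event_type: str
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


event = SketchBusEvent(event_type="execution_started")

# json.dumps(vars(event)) would raise TypeError on the datetime field;
# mode="json" converts it to an ISO-8601 string first.
payload = json.dumps(event.model_dump(mode="json"))

# Equivalent single call:
payload = event.model_dump_json()
```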
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
4 issues found across 32 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/tests/integration/dlq/test_dlq_discard.py">
<violation number="1" location="backend/tests/integration/dlq/test_dlq_discard.py:151">
P2: `stats_before.by_status` stores counts with string keys (e.g., "discarded"), but this lookup now passes the enum object instead of its string value, so it never finds the existing count and the test can pass even if stats do not change.</violation>
<violation number="2" location="backend/tests/integration/dlq/test_dlq_discard.py:160">
P2: `stats_after.by_status` also uses string keys, so looking up with the enum object never reflects the updated discard count.</violation>
</file>
<file name="backend/app/events/core/dispatcher.py">
<violation number="1" location="backend/app/events/core/dispatcher.py:171">
P2: Return type annotation mismatch: the method signature declares `dict[str, dict[str, int]]` but now returns `dict[EventType, dict[str, int]]`. Update the return type annotation to match the actual return type.</violation>
</file>
<file name="backend/tests/conftest.py">
<violation number="1" location="backend/tests/conftest.py:138">
P1: The fixture now closes the Redis client in its finally block, but the comment explicitly says "Don't close here - Dishka's RedisProvider handles cleanup when scope exits". The `RedisProvider` in `backend/app/core/providers.py` already calls `await client.aclose()` in its teardown. This will cause a double-close scenario which can lead to errors or unexpected behavior. Either remove the `aclose()` call (keeping the original behavior), or update/remove the misleading comment if manual cleanup is now intentionally required.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
backend/app/api/routes/admin/events.py (1)
196-214: Replace asdict(session) with session.model_dump() and remove the asdict import — ReplaySessionState is a Pydantic model, not a dataclass. The current code calls asdict() on ReplaySessionState, which is a Pydantic BaseModel (not a dataclass). This will crash at runtime with TypeError. Use session.model_dump() instead, which is the standard Pydantic method used throughout the codebase for similar conversions. The asdict import from dataclasses can be removed since it's no longer needed in this file.

Proposed fix

```diff
-from dataclasses import asdict
 from datetime import datetime
 from typing import Annotated
@@ -209,7 +208,7 @@ async def get_replay_status(session_id: str, service: FromDishka[AdminEventsService]) -> EventReplayStatusResponse:
     return EventReplayStatusResponse(
         **{
-            **asdict(session),
+            **session.model_dump(),
             "status": session.status,
             "estimated_completion": estimated_completion,
             "execution_results": execution_results,
         }
```

backend/tests/integration/test_notifications_routes.py (1)
239-239: Fix enum-to-string comparison bug. When Pydantic v2 deserializes enum fields, accessing them returns enum members, not string values. Comparing updated_subscription.channel == "in_app" will fail because updated_subscription.channel is NotificationChannel.IN_APP (an enum member), not the string "in_app".

🐛 Proposed fix to use enum members

```diff
- assert updated_subscription.channel == "in_app"
+ assert updated_subscription.channel == NotificationChannel.IN_APP
- assert updated_subscription.channel == "webhook"
+ assert updated_subscription.channel == NotificationChannel.WEBHOOK
- assert updated_subscription.channel == "slack"
+ assert updated_subscription.channel == NotificationChannel.SLACK
```

Also applies to: 277-277, 304-304
backend/tests/unit/services/pod_monitor/test_monitor.py (1)
161-166: Fix mypy "non-overlapping equality" by avoiding narrowing on .state properties. Mypy treats assert pm.state == MonitorState.RUNNING as permanently narrowing the property; then STOPPED comparisons error. Snapshot to a local variable (or add a targeted ignore).

Proposed fix

```diff
@@
     await pm.__aenter__()
-    assert pm.state == MonitorState.RUNNING
+    state = pm.state
+    assert state == MonitorState.RUNNING
     await pm.aclose()
     assert pm.state == MonitorState.STOPPED
@@
     async with create_pod_monitor(cfg, service, _test_logger) as monitor:
-        assert monitor.state == MonitorState.RUNNING
+        state = monitor.state
+        assert state == MonitorState.RUNNING
     assert monitor.state == MonitorState.STOPPED
@@
     async with create_pod_monitor(
         cfg, service, _test_logger, k8s_clients=mock_k8s_clients
     ) as monitor:
-        assert monitor.state == MonitorState.RUNNING
+        state = monitor.state
+        assert state == MonitorState.RUNNING
         assert monitor._clients is mock_k8s_clients
         assert monitor._v1 is mock_v1
     assert monitor.state == MonitorState.STOPPED
```

Also applies to: 559-563, 583-591
backend/app/services/rate_limit_service.py (1)
258-278: Address inconsistent algorithm label values in metrics. The algorithm label now uses enum objects (lines 264, 270, 275), but other code paths use string literals like "disabled", "bypassed", and "no_limit" (lines 210, 234, 247). This inconsistency could cause issues:

- If RateLimitAlgorithm enum values are uppercase (e.g., "SLIDING_WINDOW"), they won't match the lowercase string literals.
- Metrics aggregation and dashboards may fail to correlate related measurements.

🔧 Recommended fix: standardize algorithm label values

Either define enum members for all algorithm states:

```python
# In app/domain/rate_limit.py
class RateLimitAlgorithm(StrEnum):
    SLIDING_WINDOW = "sliding_window"
    TOKEN_BUCKET = "token_bucket"
    DISABLED = "disabled"
    BYPASSED = "bypassed"
    NO_LIMIT = "no_limit"
```

Or consistently use .value or str() when passing enums to metrics to ensure predictable string representations.

backend/tests/integration/test_replay_routes.py (2)
187-187: Use enum member instead of string literal. This line uses the string literal "paused" while the rest of the file consistently uses enum members. This is inconsistent with the PR's enum standardization goals and may cause comparison failures if pause_result.status is an enum member.

🔧 Proposed fix

```diff
- if pause_result.status == "paused":
+ if pause_result.status == ReplayStatus.PAUSED:
```

364-364: Replace string literals with enum members. This assertion uses string literals ["created", "pending"] instead of enum members, which is inconsistent with the rest of the file and the PR's enum standardization effort. Note that "pending" does not appear in the ReplayStatus enum based on the values used elsewhere in this file (CREATED, RUNNING, COMPLETED, FAILED, CANCELLED). Verify whether "pending" is a valid status or if this should be ReplayStatus.CREATED.

🔧 Proposed fix (if "pending" is not a valid status)

```diff
- assert replay_response.status in ["created", "pending"]
+ assert replay_response.status == ReplayStatus.CREATED
```

Or, if both statuses are valid, compare against a list of enum members instead.
🤖 Fix all issues with AI agents
In @backend/app/db/repositories/admin/admin_events_repository.py:
- Line 253: EventDocument currently relies on undeclared fields in model_extra
for execution_id; add an explicit field declaration to the EventDocument model:
add execution_id: str | None = None to the class (analogous to aggregate_id) so
code can reference event.execution_id directly instead of (event.model_extra or
{}).get(...), and ensure any Pydantic config (e.g., extra="allow") remains
unchanged.
In @backend/app/dlq/manager.py:
- Around line 491-514: discard_message_manually currently returns bool but can
raise exceptions and has a TOCTOU race between reading doc.status and performing
side effects; update it to perform an atomic conditional DB update on
DLQMessageDocument (e.g., conditional update where event_id matches and status
NOT IN {DISCARDED, RETRIED, maybe SCHEDULED}) to set a transient state or mark
as DISCARD_IN_PROGRESS, then reload/convert via _doc_to_message and call
_discard_message; also either wrap the whole flow in try/except to log errors
and return False on failure (preserving the bool contract) or explicitly
document that exceptions propagate and remove the implied False-on-error
behavior—reference functions/classes: discard_message_manually,
DLQMessageDocument, DLQMessageStatus, _doc_to_message, _discard_message, and
_monitor_dlq.
- Around line 149-153: The tracing span is recording the unprefixed
self.dlq_topic while the consumer actually subscribes to topic_name =
f"{self.settings.KAFKA_TOPIC_PREFIX}{self.dlq_topic}", causing mismatch; change
the code that sets EventAttributes.KAFKA_TOPIC (where the span is created for
consumed messages) to use msg.topic() (the actual consumed topic) instead of
self.dlq_topic so traces reflect the real topic name. Ensure any other usages in
the message handling path that set Kafka topic attributes also use msg.topic()
for consistency.
In @backend/app/events/core/dispatcher.py:
- Around line 169-171: The return type annotation of get_metrics is incorrect:
it currently states dict[str, dict[str, int]] but the method returns
self._event_metrics which uses EventType keys; update the annotation on
get_metrics to dict[EventType, dict[str, int]] (import or reference the
EventType enum used by self._event_metrics) so the signature matches the actual
return value and type checkers no longer flag a mismatch.
In @backend/tests/conftest.py:
- Around line 133-138: Update the outdated comment above the redis client
fixture to state that the explicit await client.aclose() is intentional
defensive redundancy even though Dishka's RedisProvider also cleans up on scope
exit; reference the redis.Redis client obtained via await scope.get(redis.Redis)
and the use of client.aclose() so readers understand both cleanup paths are
present and idempotent, and optionally apply the same explicit cleanup pattern
to the db fixture (which currently relies solely on Dishka) for consistency.
In @backend/tests/integration/result_processor/test_result_processor.py:
- Line 120: The test is comparing a raw MongoDB field string to an enum member
(doc.get("status") vs ExecutionStatus.COMPLETED) which will fail; update the
assertion to either compare the string value to the enum's value (e.g.,
doc.get("status") == ExecutionStatus.COMPLETED.value) or deserialize the MongoDB
document into your Pydantic/domain model (e.g., build the model from doc and
assert model.status == ExecutionStatus.COMPLETED) so the types match; change the
assertion around doc.get("status") accordingly.
🧹 Nitpick comments (11)
backend/tests/e2e/test_execution_routes.py (1)
94-94: Optional: Simplify enum membership check. The list() call is unnecessary since Python enums are iterable and support the in operator directly.

♻️ Simplified assertion

```diff
- assert execution_result.status in list(ExecutionStatusEnum)
+ assert execution_result.status in ExecutionStatusEnum
```

backend/tests/integration/events/test_consumer_group_monitor_real.py (1)
28-28: Good change; consider using is for consistency. The update correctly compares to the enum member instead of its string value, aligning with the PR's goal of enum usage consistency. However, for enum comparisons, prefer is over == (enum members are singletons). The rest of the file consistently uses is for enum comparisons (lines 26, 47, 52, 58, 64, 69, 75, 81, 90).

♻️ Suggested refactor for consistency

```diff
- assert summary["group_id"] == gid and summary["health"] == ConsumerGroupHealth.UNHEALTHY
+ assert summary["group_id"] == gid and summary["health"] is ConsumerGroupHealth.UNHEALTHY
```

backend/app/services/pod_monitor/monitor.py (2)
199-203: Be explicit about metric label types (string vs enum). If the metrics layer expects str labels, pass .value (or otherwise ensure StringEnum is a str subtype) to avoid surprising exporter behavior / attribute cardinality changes.

Proposed fix (if metrics want plain strings)

```diff
@@
- self._metrics.record_pod_monitor_watch_error(ErrorType.RESOURCE_VERSION_EXPIRED)
+ self._metrics.record_pod_monitor_watch_error(ErrorType.RESOURCE_VERSION_EXPIRED.value)
@@
- self._metrics.record_pod_monitor_watch_error(ErrorType.API_ERROR)
+ self._metrics.record_pod_monitor_watch_error(ErrorType.API_ERROR.value)
@@
- self._metrics.record_pod_monitor_watch_error(ErrorType.UNEXPECTED)
+ self._metrics.record_pod_monitor_watch_error(ErrorType.UNEXPECTED.value)
@@
- self._metrics.record_pod_monitor_watch_error(ErrorType.PROCESSING_ERROR)
+ self._metrics.record_pod_monitor_watch_error(ErrorType.PROCESSING_ERROR.value)
@@
- self._metrics.record_pod_monitor_event_processing_duration(duration, event.event_type)
+ self._metrics.record_pod_monitor_event_processing_duration(duration, event.event_type.value)
@@
- self._metrics.record_pod_monitor_watch_error(ErrorType.PROCESSING_ERROR)
+ self._metrics.record_pod_monitor_watch_error(ErrorType.PROCESSING_ERROR.value)
@@
- self._metrics.record_pod_monitor_event_published(event.event_type, phase)
+ self._metrics.record_pod_monitor_event_published(str(event.event_type), phase)
```

Also applies to: 276-277, 320-325, 339-340

445-457: get_status()["state"] should stay trivially JSON-serializable. Returning the enum instance is fine if it behaves like a string everywhere this dict is consumed; otherwise consider returning .value.

backend/app/db/repositories/admin/admin_events_repository.py (1)
263-263: Simplify redundant conditional. The conditional exec_doc.status if exec_doc.status else None is redundant. If exec_doc.status is None or any falsy value, it already evaluates to that value.

♻️ Proposed simplification

```diff
- "status": exec_doc.status if exec_doc.status else None,
+ "status": exec_doc.status,
```

Or, if the intent is to explicitly convert empty strings to None:

```diff
- "status": exec_doc.status if exec_doc.status else None,
+ "status": exec_doc.status or None,
```

backend/app/db/repositories/event_repository.py (3)
39-52: Prefer immutable pattern over mutating model_dump() result. Lines 40-41 mutate the dictionary returned by model_dump() with setdefault(). While this works (Pydantic v2 returns a new dict), mutating return values is a code smell and could lead to subtle bugs if the implementation changes.

♻️ Proposed fix

```diff
- data = event.model_dump(exclude_none=True)
- data.setdefault("stored_at", datetime.now(timezone.utc))
- doc = EventDocument(**data)
+ data = event.model_dump(exclude_none=True)
+ doc = EventDocument(stored_at=datetime.now(timezone.utc), **data)
```

Or use dict merging for clarity:

```diff
- data = event.model_dump(exclude_none=True)
- data.setdefault("stored_at", datetime.now(timezone.utc))
- doc = EventDocument(**data)
+ data = {"stored_at": datetime.now(timezone.utc), **event.model_dump(exclude_none=True)}
+ doc = EventDocument(**data)
```
54-66: Same mutation issue in batch operation. Lines 60-61 have the same dictionary mutation pattern as store_event. Apply the same refactoring for consistency.

♻️ Proposed fix

```diff
 docs = []
 for event in events:
-    data = event.model_dump(exclude_none=True)
-    data.setdefault("stored_at", now)
-    docs.append(EventDocument(**data))
+    data = {"stored_at": now, **event.model_dump(exclude_none=True)}
+    docs.append(EventDocument(**data))
```
100-100: Remove redundant list() conversion. The list() call is unnecessary since event_types is already typed as list[EventType] | None. This redundant conversion adds no value and could mask type issues.

♻️ Proposed fix

```diff
- In(EventDocument.event_type, list(event_types)) if event_types else None,
+ In(EventDocument.event_type, event_types) if event_types else None,
```

backend/app/dlq/manager.py (3)
220-223: Ensure event_type is a string-like value for metrics attributes. You now pass message.event_type directly into DLQ metrics (lines 222, 353, 372). If event_type is ever a non-str Enum (not str/StrEnum), OpenTelemetry metrics attribute encoding can break or produce unexpected labels. Recommendation: either guarantee DLQMessage.event_type is str/StringEnum at the model boundary, or normalize once (e.g., .value) before recording metrics.

Also applies to: 352-354, 370-373
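A small, hedged sketch of that normalization idea; the helper name and the commented metrics call are stand-ins, not the real DLQ manager API.

```python
from enum import Enum


def label_value(value: str | Enum) -> str:
    """Coerce an enum (or plain string) to a stable string label for metrics."""
    return str(value.value) if isinstance(value, Enum) else value


# Usage sketch: normalize once before recording so the exporter always
# receives a plain string regardless of the enum type.
# metrics.record_dlq_message_received(label_value(message.event_type))
```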
231-241: OpenTelemetry span attributes must be primitives; verify enum types. Using EventAttributes.* keys is good, but please confirm both self.dlq_topic and dlq_message.event_type are str subclasses (not plain Enums), since span attributes must be primitives / sequences of primitives.

447-460: Log status serialization: confirm it's JSON-safe in your logging pipeline. extra={"status": doc.status} (line 455) is fine if DLQMessageStatus is str-like; if not, structured logging JSON encoders often fail on Enums. Consider doc.status.value if you see serialization issues.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- backend/app/api/routes/admin/events.py
- backend/app/db/repositories/admin/admin_events_repository.py
- backend/app/db/repositories/event_repository.py
- backend/app/db/repositories/notification_repository.py
- backend/app/dlq/manager.py
- backend/app/events/consumer_group_monitor.py
- backend/app/events/core/dispatcher.py
- backend/app/events/core/producer.py
- backend/app/services/execution_service.py
- backend/app/services/kafka_event_service.py
- backend/app/services/notification_service.py
- backend/app/services/pod_monitor/monitor.py
- backend/app/services/rate_limit_service.py
- backend/app/services/result_processor/processor.py
- backend/app/services/saga/saga_orchestrator.py
- backend/scripts/create_topics.py
- backend/tests/conftest.py
- backend/tests/e2e/test_execution_routes.py
- backend/tests/integration/dlq/test_dlq_discard.py
- backend/tests/integration/dlq/test_dlq_retry.py
- backend/tests/integration/events/test_consumer_group_monitor_real.py
- backend/tests/integration/result_processor/test_result_processor.py
- backend/tests/integration/services/admin/test_admin_user_service.py
- backend/tests/integration/test_events_routes.py
- backend/tests/integration/test_notifications_routes.py
- backend/tests/integration/test_replay_routes.py
- backend/tests/integration/test_saga_routes.py
- backend/tests/unit/core/test_security.py
- backend/tests/unit/events/test_event_dispatcher.py
- backend/tests/unit/services/pod_monitor/test_event_mapper.py
- backend/tests/unit/services/pod_monitor/test_monitor.py
- backend/tests/unit/services/sse/test_sse_service.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/services/notification_service.py
🧰 Additional context used
🧬 Code graph analysis (12)
backend/tests/integration/events/test_consumer_group_monitor_real.py (1)
backend/app/events/consumer_group_monitor.py (1)
ConsumerGroupHealth(15-21)
backend/app/dlq/manager.py (4)
backend/app/core/metrics/dlq.py (3)
record_dlq_message_received(49-50)record_dlq_message_retried(52-55)record_dlq_message_discarded(57-60)backend/app/core/tracing/models.py (1)
EventAttributes(9-27)backend/app/dlq/models.py (1)
DLQMessageStatus(10-16)backend/app/db/repositories/dlq_repository.py (1)
_doc_to_message(26-31)
backend/tests/conftest.py (2)
backend/tests/unit/services/pod_monitor/test_monitor.py (1)
aclose(73-74)backend/app/core/lifecycle.py (1)
aclose(33-42)
backend/app/services/saga/saga_orchestrator.py (4)
backend/app/events/core/producer.py (1)
state(56-57)backend/app/services/notification_service.py (1)
state(163-164)backend/app/services/pod_monitor/monitor.py (1)
state(140-142)backend/app/events/core/consumer.py (1)
state(202-203)
backend/tests/unit/services/pod_monitor/test_monitor.py (6)
backend/tests/unit/conftest.py (1)
app(68-69)backend/app/db/repositories/event_repository.py (1)
store_event(39-52)backend/app/events/event_store.py (1)
store_event(46-72)backend/app/events/core/producer.py (1)
state(56-57)backend/app/services/pod_monitor/monitor.py (2)
state(140-142)MonitorState(42-48)backend/app/events/core/consumer.py (1)
state(202-203)
backend/tests/unit/core/test_security.py (1)
backend/app/core/security.py (1)
SecurityService(23-122)
backend/tests/integration/dlq/test_dlq_discard.py (1)
backend/app/dlq/models.py (1)
DLQMessageStatus(10-16)
backend/tests/integration/dlq/test_dlq_retry.py (1)
backend/app/dlq/models.py (1)
DLQMessageStatus(10-16)
backend/app/events/core/dispatcher.py (2)
backend/app/events/core/producer.py (1)
metrics(60-61)backend/app/events/core/consumer.py (1)
metrics(206-207)
backend/app/services/pod_monitor/monitor.py (1)
backend/app/core/metrics/kubernetes.py (3)
record_pod_monitor_watch_error(146-147)record_pod_monitor_event_processing_duration(137-138)record_pod_monitor_event_published(140-141)
backend/app/services/kafka_event_service.py (3)
backend/app/db/repositories/event_repository.py (1)
store_event(39-52)backend/app/events/event_store.py (1)
store_event(46-72)backend/app/infrastructure/kafka/mappings.py (1)
get_event_class_for_type(72-138)
backend/tests/unit/services/sse/test_sse_service.py (1)
backend/app/domain/execution/models.py (1)
ResourceUsageDomain(13-19)
🪛 GitHub Actions: MyPy Type Checking
backend/tests/unit/services/pod_monitor/test_monitor.py
[error] 165-165: Non-overlapping equality check (left operand type: "Literal[MonitorState.RUNNING]", right operand type: "Literal[MonitorState.STOPPED]")
[error] 562-562: Non-overlapping equality check (left operand type: "Literal[MonitorState.RUNNING]", right operand type: "Literal[MonitorState.STOPPED]")
[error] 590-590: Non-overlapping equality check (left operand type: "Literal[MonitorState.RUNNING]", right operand type: "Literal[MonitorState.STOPPED]")
backend/app/api/routes/admin/events.py
[error] 209-209: No overload variant of "asdict" matches argument type "ReplaySessionState"
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (43)
backend/tests/integration/dlq/test_dlq_retry.py (1)
125-137: Good alignment: use enum member keys for by_status. Switching to stats_*.by_status.get(DLQMessageStatus.RETRIED, 0) is consistent with enum-typed status usage and avoids .value drift. Only thing to sanity-check: DLQRepository.get_dlq_stats() should return a by_status mapping keyed by DLQMessageStatus (vs str) after any (de)serialization steps.

backend/tests/integration/dlq/test_dlq_discard.py (1)

149-161: Good alignment: use enum member keys for by_status. Using DLQMessageStatus.DISCARDED as the lookup key matches the direction of the PR. Please just confirm get_dlq_stats().by_status is actually keyed by DLQMessageStatus in the returned stats model (not str keys coming from persistence/JSON).

backend/tests/integration/services/admin/test_admin_user_service.py (1)

21-21: No changes needed. The change from UserRole.USER.value to UserRole.USER is correct and safe. Since UserRole is a StringEnum (which inherits from StrEnum), enum members are strings at the Python level. Motor correctly serializes them to the database as string values. This usage is consistent throughout the codebase in unit tests, Pydantic schemas, and services. Likely an incorrect or invalid review comment.
backend/app/services/kafka_event_service.py (6)
15-15: LGTM! The import of domain_event_adapter aligns with the PR's objective to use a discriminated union adapter for creating typed domain events.

86-97: Well-structured domain event creation via discriminated union adapter. The use of domain_event_adapter.validate_python() ensures type-safe event creation based on the event_type discriminator. The approach correctly separates domain event storage from Kafka event construction.

Note that **payload is spread last, so any key in payload matching a base field name (e.g., event_id) would override it. This appears intentional for event-specific field flattening, but callers should ensure payload doesn't accidentally include base field names.
kafka_event_datacorrectly sourcesevent_idfromdomain_event.event_id, ensuring the stored domain event and published Kafka event share the same identifier. The separation ofavro_metadatafor Kafka anddomain_metadatafor storage is appropriate.
118-123: LGTM!The
correlation_idis correctly sourced fromdomain_metadata, which inherits it from the enrichedavro_metadata. The empty string fallback handles theNonecase appropriately for header values.
241-256: LGTM!The domain event creation pattern in
publish_base_eventis consistent withpublish_event. Thebase_fieldsset correctly filters out non-payload fields, and the discriminated union adapter ensures proper typed event instantiation.
232-232: EventType is StringEnum (StrEnum-based), confirming str() removal is correct.The removal of
str()casting forevent.event_typeat lines 232, 260, and 288 is valid.EventTypeextendsStringEnum, a custom wrapper around Python'sStrEnum, which means enum values are inherently strings. This is compatible with:
- Line 232: OpenTelemetry
span.set_attribute()accepts enum values- Line 260:
headersdict typed asDict[str, str]- Line 288: Logging
extradict expectationsNo type issues arise from this change.
backend/app/services/result_processor/processor.py (1)
303-308: No issues found - the change is correct and safe.The
ProcessingStateenum is aStringEnum(inheriting from Python'sStrEnum), which means enum members ARE actualstrinstances. Returningself._state(the enum object) instead ofself._state.valueis functionally equivalent because the enum member itself is a string. TheStringEnumclass already handles string serialization correctly through its__str__and__repr__methods, and all current usages—logging in workers and string method calls in tests—work without issues.backend/tests/integration/test_notifications_routes.py (4)
56-56: Verify enum member comparison consistency.The assertions now compare against
list(NotificationChannel)andlist(NotificationStatus), which contain enum members. This works correctly when the PydanticNotificationListResponsemodel deserializes JSON strings to enum members.However, note the inconsistency with lines 239, 277, 304 where channel comparisons use string literals. Ensure the comparison approach is uniform across the test suite.
Also applies to: 59-59
69-82: Enum usage is correct.Using enum members directly in the status list and URL parameters aligns with the PR's goal of enum usage consistency. The StrEnum will serialize correctly in the URL, and the assertion properly validates filtered results.
105-105: LGTM: Enum member in query parameter.Using
NotificationStatus.DELIVEREDdirectly in the f-string works correctly for StrEnum, which will use its string value in the URL.
204-204: Enum member comparison is correct.The pattern
subscription.channel in list(NotificationChannel)properly validates that the Pydantic-parsed channel is a valid enum member.backend/app/db/repositories/notification_repository.py (4)
25-27: LGTM: Pydantic migration pattern is correct. The conversion from dict-based operations to model_dump()/model_validate() follows Pydantic v2 best practices:

- model_dump() serializes domain models to dicts for MongoDB operations
- model_validate(..., from_attributes=True) deserializes documents to typed domain models
- exclude_none=True prevents overwriting existing fields with None during updates

Also applies to: 35-35, 44-44
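For context, the round-trip these bullets describe looks roughly like the sketch below; DomainNotification's real fields differ and the collection handling is simplified away.

```python
from datetime import datetime, timezone
from pydantic import BaseModel


class DomainNotificationSketch(BaseModel):
    notification_id: str
    subject: str | None = None
    read: bool = False
    updated_at: datetime | None = None


def to_document(model: DomainNotificationSketch) -> dict:
    # Serialize the domain model into a plain dict for MongoDB inserts.
    return model.model_dump()


def to_update(partial: DomainNotificationSketch) -> dict:
    # exclude_none=True keeps unset fields out of the update payload so
    # they don't overwrite existing values with None.
    update = partial.model_dump(exclude_none=True)
    update["updated_at"] = datetime.now(timezone.utc)
    return update


def from_document(doc: object) -> DomainNotificationSketch:
    # from_attributes=True lets validation read attributes off a Beanie
    # document (or any object), not just a plain dict.
    return DomainNotificationSketch.model_validate(doc, from_attributes=True)
```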
91-91: LGTM: Consistent model_validate usage in list operations.All list comprehensions properly convert MongoDB documents to
DomainNotificationinstances usingmodel_validate(doc, from_attributes=True).Also applies to: 131-131, 144-144
162-162: LGTM: Subscription methods migrated correctly.The subscription CRUD operations properly use:
model_dump(exclude_none=True)for building update payloadsmodel_validate(..., from_attributes=True)for converting documents toDomainNotificationSubscriptionThe pattern maintains the existing
updated_attimestamp handling while leveraging Pydantic's serialization.Also applies to: 168-168, 173-173, 181-181, 188-188
200-200: LGTM: Simplified enum logging.Logging
list(roles)directly is cleaner than manually extracting.valueattributes. For StrEnum, the string representation will display correctly in logs.backend/tests/unit/services/sse/test_sse_service.py (3)
68-73: LGTM: Timestamp standardization in mock.The mock now returns a
datetimeobject instead of an ISO string, aligning with the PR's timestamp standardization effort. This is appropriate for test mocking.
172-172: LGTM: Explicit keyword arguments for Pydantic model.Using keyword arguments for
ResourceUsageDomainis best practice with Pydantic models and improves readability.
154-154: EventType is correctly implemented as StrEnum for direct string comparisons. EventType extends the custom StringEnum class, which is a subclass of the standard library's StrEnum. This allows the JSON-deserialized string values from _decode() to be compared directly with EventType enum members. The assertions at lines 154 and 189 are correct.
11-12: DomainEvent migration in test doubles is consistent.
FakeEventRepository now matches the repo/service contract usingDomainEventandevent.event_id.Also applies to: 53-57
backend/scripts/create_topics.py (1)
48-55: No changes needed — the code is correct as-is.The
KafkaTopicenum is aStringEnum(not a plainEnum), and the customStringEnumclass explicitly overrides__format__to return the string value directly. When you usef"{topic}", it returns the enum's.value(e.g.,"execution_events"), not a representation likeClassName.MEMBER. Additionally,get_topic_configs()is keyed byKafkaTopicenum objects, and the lookup correctly uses the same enum object as the key. Both the topic naming and config lookup are already consistent and safe.Likely an incorrect or invalid review comment.
backend/app/services/rate_limit_service.py (3)
300-307: No action needed.status.algorithmis aStringEnum(which extends Python'sStrEnum), and sinceStrEnumis a string subclass, it is natively compatible with OpenTelemetry span attributes. The code correctly passes the enum value directly tospan.set_attribute()without conversion, and no type checking issues will occur.
155-165: No action needed. The enum-based labels are fully compatible with the metrics library.The labels use
StringEnum, a custom class inheriting fromStrEnum. SinceStringEnumis a proper string type (inheriting fromstr), it works seamlessly with OpenTelemetry's attribute handling without any type conversion or compatibility issues.
26-36: The enum serialization implementation is correct and requires no changes. Both EndpointGroup and RateLimitAlgorithm inherit from StringEnum (a custom StrEnum subclass in app.core.utils), which inherits from str. This allows json.dumps() to serialize enum objects directly as their string values without raising a TypeError. The deserialization in _rule_from_dict() correctly reconstructs enums from string values using the enum constructors. The round-trip serialization cycle (enum → JSON string → enum) works correctly, and integration tests confirm this behavior is functioning as intended.
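To see why that works, here is a self-contained sketch of the StrEnum behavior being relied on, using the standard library StrEnum (Python 3.11+) rather than the project's StringEnum wrapper.

```python
import json
from enum import StrEnum


class Algorithm(StrEnum):
    SLIDING_WINDOW = "sliding_window"
    TOKEN_BUCKET = "token_bucket"


rule = {"algorithm": Algorithm.SLIDING_WINDOW, "limit": 100}

# Members are real str instances, so json.dumps emits the plain value:
# {"algorithm": "sliding_window", "limit": 100}
payload = json.dumps(rule)

# The enum constructor rebuilds the member from that stored string,
# and equality with the raw value also holds.
loaded = json.loads(payload)
assert Algorithm(loaded["algorithm"]) is Algorithm.SLIDING_WINDOW
assert Algorithm.SLIDING_WINDOW == "sliding_window"
```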
60-60: LGTM!The use of
domain_event_adapter.validate_pythonwithfrom_attributes=Truecorrectly converts Beanie documents to typedDomainEventinstances.
167-167: LGTM!The change from
doc.timestamp.isoformat()todoc.timestampaligns with the PR objective to standardize timestamps to datetime objects across the codebase.
178-185: LGTM!The signature update to accept
DomainEventand the use ofevent.model_dump()for archival document construction correctly aligns with the Pydantic migration.
188-188: LGTM!The replay session persistence correctly uses Pydantic model methods:
model_dump()for creating documentsmodel_validate(..., from_attributes=True)for converting documents to domain modelsexclude_none=Truefor partial updatesAlso applies to: 196-196, 199-199
backend/app/db/repositories/event_repository.py (3)
149-152: LGTM!The plain dict query
{"execution_id": execution_id}at line 150 correctly queries the top-levelexecution_idfield after payload flattening. TheOrcondition appropriately handles both the flattened field and theaggregate_idfallback.
333-337: LGTM!The use of
model_validate(..., from_attributes=True).model_copy(update=archive_fields)correctly converts documents and adds archival metadata without mutating the original.
360-360: LGTM!Correctly omits
from_attributes=Truewhen validating aggregation pipeline results, which are already plain dicts rather than Beanie documents.backend/app/dlq/manager.py (1)
321-338: The code is type-safe as-is.inject_trace_context()has an explicit return type ofdict[str, str], and OpenTelemetry's text-format propagators inject only US-ASCII strings. No defensive encoding is needed.Likely an incorrect or invalid review comment.
backend/app/events/consumer_group_monitor.py (1)
419-419: No changes needed. The code at line 419 correctly returns theConsumerGroupHealthenum directly.
StringEnumextends Python'sStrEnum, which means enum instances inherit fromstrand are natively serializable to JSON without special handling. The test suite confirms this is the intended behavior (assert summary["health"] == ConsumerGroupHealth.UNHEALTHY). The timestamp conversion on line 426 is for datetime formatting, not enum value extraction, so there is no inconsistency—they are different data types requiring different transformations.Likely an incorrect or invalid review comment.
backend/tests/integration/test_replay_routes.py (1)
236-254: The code change is correct and requires no action.ReplayStatusextendsStringEnum, which in turn extends Python'sStrEnum. TheStrEnumclass automatically serializes to its string value when used in string contexts like f-strings, so the enum at line 243 will correctly serialize to"created","running", etc. without needing the.valueattribute. The removal of.valuealigns with properStrEnumusage.backend/tests/unit/services/pod_monitor/test_event_mapper.py (1)
6-6: LGTM: Enum-based comparisons properly updated.The test correctly migrated from string-based comparisons (
e.event_type.value == "pod_scheduled") to enum-based comparisons (e.event_type == EventType.POD_SCHEDULED). This aligns with the event mapper returning EventType enum members and improves type safety.Also applies to: 53-106, 216-216
backend/tests/integration/test_events_routes.py (1)
301-301: Same enum serialization concern for event publishing payloads.Similar to the query endpoint, these publish requests send
EventType.SYSTEM_ERRORdirectly in the JSON payload. Verify that the enum serializes correctly to a string value when sent over HTTP.Also applies to: 320-320
backend/app/services/saga/saga_orchestrator.py (1)
418-421: LGTM - Consistent enum usage in structured logging.The change to log the enum object directly instead of its
.valueis consistent with the PR's goal of standardizing enum usage. Python's logging framework will properly handle the enum via its string representation.backend/tests/unit/events/test_event_dispatcher.py (1)
59-60: LGTM - Test correctly updated to match API change.The test assertions now use
EventTypeenum members as dictionary keys, correctly matching the updatedEventDispatcher.get_metrics()implementation that returns enum-keyed dictionaries.backend/app/events/core/dispatcher.py (1)
63-63: LGTM - Enum usage in logging.Using the enum directly in log messages is safe and improves readability. Python will automatically convert the enum to its string representation.
backend/app/events/core/producer.py (1)
128-147: Original review comment is incorrect —ProducerStateis aStringEnum, not a standardEnum, so JSON serialization is not an issue.The
StringEnumimplementation (which extends Python'sStrEnum) is explicitly designed to behave as a string in all contexts. Instances inherit fromstrand serialize directly as string values withjson.dumps(). Test code confirms this works—assertions likest["state"] == "running"and calls to string methods like.lower()all pass, demonstrating the enum functions as a native string.All actual usage in the codebase (workers, tests) treats the status dict values as strings without any serialization errors.
Likely an incorrect or invalid review comment.
backend/tests/integration/test_saga_routes.py (1)
58-58: LGTM! Enum usage is now consistent. The migration from .value to direct enum member usage improves type safety and aligns with the broader Pydantic migration. All changes are consistent:
- Query parameters now pass enum members directly (httpx correctly serializes StrEnum members)
- Assertions compare against enum members rather than string values
- Response validation expects enum members (handled by Pydantic deserialization)
Also applies to: 94-94, 185-185, 219-219, 241-241
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/tests/unit/services/pod_monitor/test_event_mapper.py (1)
1-239: Fix missing type annotations on inline class attributes. The MyPy failure originates from this file. The Cond classes defined inline in test_pending_running_and_succeeded_mapping() and test_scheduled_requires_condition() lack type annotations on instance attributes (self.type and self.status), violating the disallow_incomplete_defs = true setting.

Add type annotations to instance attributes:

```python
class Cond:
    type: str
    status: str

    def __init__(self, t: str, s: str) -> None:
        self.type = t
        self.status = s
```

backend/tests/unit/services/idempotency/test_middleware.py (1)
118-122: Remove empty test class shells. These empty test classes serve no purpose. According to the AI summary, idempotent_handler and IdempotentConsumerWrapper were removed from the import surface. If these components no longer exist or are only tested at integration level, remove these placeholder classes.

🧹 Proposed fix

```diff
-class TestIdempotentHandlerDecorator:
-    pass
-
-class TestIdempotentConsumerWrapper:
-    pass
```

backend/app/api/routes/admin/events.py (1)
206-213: Remove redundant status field override. The response construction spreads session.model_dump(), which already includes a status field, then explicitly overrides it with session.status on line 209. This is unnecessary and creates code clutter. While ReplaySessionState.model_dump() returns the status as a StringEnum object (and EventReplayStatusResponse expects str), Pydantic's validation handles this seamlessly since StringEnum inherits from str. The explicit override has no functional effect and should be removed for clarity:

```diff
     return EventReplayStatusResponse(
-        **{
-            **session.model_dump(),
-            "status": session.status,
-            "estimated_completion": estimated_completion,
-            "execution_results": execution_results,
-        }
+        **session.model_dump(),
+        estimated_completion=estimated_completion,
+        execution_results=execution_results,
     )
```
🤖 Fix all issues with AI agents
In
@backend/tests/integration/services/user_settings/test_user_settings_service.py:
- Line 11: Reorder imports in the test module so third-party imports (e.g.,
AsyncContainer from dishka and pytest) come before local app.* imports and after
standard library imports; specifically ensure the standard library imports
(datetime, timezone) remain first, then import pytest and from dishka import
AsyncContainer, and only then import app.domain.enums.Theme,
app.domain.user.settings_models (DomainEditorSettings,
DomainNotificationSettings, DomainUserSettingsUpdate) and
app.services.user_settings_service.UserSettingsService so import ordering
follows PEP 8 conventions.
In @backend/tests/integration/test_admin_routes.py:
- Around line 64-67: Remove the pointless call to original_response.json() in
the test: either delete the line if you don't need the parsed body, or capture
it into a variable (e.g., original_data = original_response.json()) if you
intend to validate or compare the original settings later; locate the call made
after the test_admin.get("/api/v1/admin/settings/") response and update
accordingly.
In @backend/tests/load/plot_report.py:
- Around line 115-118: The list comprehension computing successes uses zip(...,
strict=False) unnecessarily; remove the explicit strict=False since default
behavior suffices. Update the line defining successes (currently successes = [t
- e for t, e in zip(total, errors, strict=False)]) to call zip(total, errors)
instead, keeping the rest of the comprehension unchanged.
In @backend/tests/unit/services/pod_monitor/test_event_mapper.py:
- Line 131: The expression `_ctx(p)` is unused and its result is discarded;
remove this dead call or assign it to a variable if the PodContext is meant to
be used (e.g., store into a variable like `ctx = _ctx(p)`), ensuring any
subsequent assertions or uses reference that variable; specifically update the
test in test_event_mapper.py to either delete the `_ctx(p)` line or replace it
with an assignment to `ctx` (and update references to use `ctx`) so the
PodContext is not silently created and thrown away.
🧹 Nitpick comments (7)
backend/app/api/routes/replay.py (1)
66-66: Avoid duplicate model_dump() calls for better performance. The expression s.model_dump() is called twice in the same line, which is wasteful. Pydantic's model_dump() can be expensive for complex models with nested structures.

⚡ Proposed optimization

```diff
- SessionSummary.model_validate({**s.model_dump(), **s.model_dump()["config"]})
+ SessionSummary.model_validate({**(dump := s.model_dump()), **dump["config"]})
```

Or for better readability:

```diff
- return [
-     SessionSummary.model_validate({**s.model_dump(), **s.model_dump()["config"]})
-     for s in service.list_sessions(status=status, limit=limit)
- ]
+ sessions = []
+ for s in service.list_sessions(status=status, limit=limit):
+     dump = s.model_dump()
+     sessions.append(SessionSummary.model_validate({**dump, **dump["config"]}))
+ return sessions
```

Additionally, verify that this change resolves the MyPy type checking failure reported in the pipeline. The dict spreading pattern may be contributing to type inference issues. If the error persists, consider adding explicit type hints or using a more structured approach to flatten the config fields.
backend/tests/e2e/test_resource_cleaner_orphan.py (1)
40-47: Simplify the dry-run assertion and avoid.get(...)if the return type is a TypedDict (possible mypy error source).If
cleanup_orphaned_resources()returns a TypedDict (or otherwise tightly typed mapping),res.get("configmaps", [])can trip--strict. Alsoname in ...is clearer.Proposed diff
@@ - # Force as orphaned by using a large cutoff + # Treat even just-created resources as eligible by using a zero-hour age threshold await cleaner.cleanup_orphaned_resources(namespace=ns, max_age_hours=0, dry_run=True) @@ # If cleaner is non-deterministic across runs, re-invoke to reflect current state res = await cleaner.cleanup_orphaned_resources(namespace=ns, max_age_hours=0, dry_run=True) - assert any(name == cm for cm in res.get("configmaps", [])) + assert name in res["configmaps"]Also worth sanity-checking e2e flakiness: with
max_age_hours=0, this relies on localdatetime.now()vs apiservercreation_timestampordering (clock skew can make the resource “not old enough” forever).backend/tests/integration/services/user_settings/test_user_settings_service.py (1)
1-11: Consider following PEP 8 import ordering.The
dishkaimport (third-party library) should come before theapp.*imports (local application imports) per PEP 8 conventions.📋 Suggested import ordering
from datetime import datetime, timezone import pytest +from dishka import AsyncContainer + from app.domain.enums import Theme from app.domain.user.settings_models import ( DomainEditorSettings, DomainNotificationSettings, DomainUserSettingsUpdate, ) from app.services.user_settings_service import UserSettingsService -from dishka import AsyncContainerbackend/tests/e2e/test_execution_routes.py (1)
116-116: Consider removing or clarifying unused expression.The expression
exec_response.json()["execution_id"]is evaluated but the result is discarded. If this is intended as an implicit validation that the key exists, consider making it explicit:assert "execution_id" in exec_response.json()If it's not needed, consider removing it entirely.
♻️ Proposed cleanup
For Line 116 (test_execute_with_error):
- exec_response.json()["execution_id"] + # Execution accepted - error will be processed asynchronouslyFor Line 272 (test_execution_with_timeout):
- exec_response.json()["execution_id"] + # Execution accepted - will run until timeoutAlso applies to: 272-272
backend/tests/integration/services/saga/test_saga_service.py (1)
1-7: Import ordering deviates from convention.Moving
dishka.AsyncContainer(a third-party import) after local app imports contradicts PEP 8 and typical isort conventions, which place third-party imports before local imports. Consider keeping it grouped withpytest:from datetime import datetime, timezone import pytest +from dishka import AsyncContainer from app.domain.enums.user import UserRole from app.schemas_pydantic.user import User from app.services.saga.saga_service import SagaService -from dishka import AsyncContainerThe MyPy pipeline failure reported at line 1 appears unrelated to this file's changes—verify the actual MyPy output to identify the root cause.
backend/app/domain/saga/models.py (1)
150-160: Consider removing from_attributes=True from input DTOs. DomainResourceAllocationCreate is an input/creation DTO, not mapped from ORM objects. The from_attributes=True config is unnecessary here and could be removed to clarify intent:

```diff
 class DomainResourceAllocationCreate(BaseModel):
     """Data for creating a resource allocation."""

-    model_config = ConfigDict(from_attributes=True)
-
     execution_id: str
```

This is a minor consistency nit—the code works correctly either way.
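On the earlier saga/models.py note about has_more: if that flag must appear in model_dump() output, Pydantic's computed_field (rather than a bare @property) is the mechanism. A hedged sketch follows; the model and field names are illustrative, not the project's actual paging model.

```python
from pydantic import BaseModel, computed_field


class PageSketch(BaseModel):
    total: int
    offset: int
    limit: int

    @computed_field  # included in model_dump() / model_dump_json()
    @property
    def has_more(self) -> bool:
        return self.offset + self.limit < self.total


page = PageSketch(total=100, offset=0, limit=20)
assert page.model_dump()["has_more"] is True
```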
backend/tests/integration/notifications/test_notification_sse.py (1)
28-35: Consider capturing the notification for stronger assertions.While removing the unused variable assignment is correct, the test could be more robust by verifying that the
notification_idreceived via SSE matches the one from the created notification object.📋 Optional enhancement for test robustness
- await svc.create_notification( + created = await svc.create_notification( user_id=user_id, subject="Hello", body="World", tags=["test"], severity=NotificationSeverity.MEDIUM, channel=NotificationChannel.IN_APP, )Then add an assertion after line 46:
assert msg.notification_id == str(created.notification_id)This would verify the end-to-end flow integrity between notification creation and SSE delivery.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (93)
backend/app/api/routes/admin/events.pybackend/app/api/routes/replay.pybackend/app/db/docs/event.pybackend/app/domain/events/event_models.pybackend/app/domain/saga/models.pybackend/tests/e2e/conftest.pybackend/tests/e2e/test_execution_routes.pybackend/tests/e2e/test_resource_cleaner_k8s.pybackend/tests/e2e/test_resource_cleaner_orphan.pybackend/tests/helpers/cleanup.pybackend/tests/helpers/k8s_fakes.pybackend/tests/helpers/kafka.pybackend/tests/integration/app/test_main_app.pybackend/tests/integration/conftest.pybackend/tests/integration/core/test_container.pybackend/tests/integration/db/repositories/test_admin_settings_repository.pybackend/tests/integration/db/repositories/test_saved_script_repository.pybackend/tests/integration/dlq/test_dlq_discard.pybackend/tests/integration/dlq/test_dlq_manager.pybackend/tests/integration/events/test_consume_roundtrip.pybackend/tests/integration/events/test_consumer_lifecycle.pybackend/tests/integration/events/test_dlq_handler.pybackend/tests/integration/events/test_event_dispatcher.pybackend/tests/integration/events/test_event_store.pybackend/tests/integration/events/test_schema_registry_roundtrip.pybackend/tests/integration/idempotency/test_consumer_idempotent.pybackend/tests/integration/idempotency/test_decorator_idempotent.pybackend/tests/integration/idempotency/test_idempotency.pybackend/tests/integration/idempotency/test_idempotent_handler.pybackend/tests/integration/notifications/test_notification_sse.pybackend/tests/integration/result_processor/test_result_processor.pybackend/tests/integration/services/admin/test_admin_user_service.pybackend/tests/integration/services/coordinator/test_execution_coordinator.pybackend/tests/integration/services/events/test_event_bus.pybackend/tests/integration/services/events/test_kafka_event_service.pybackend/tests/integration/services/execution/test_execution_service.pybackend/tests/integration/services/idempotency/test_redis_repository.pybackend/tests/integration/services/notifications/test_notification_service.pybackend/tests/integration/services/rate_limit/test_rate_limit_service.pybackend/tests/integration/services/replay/test_replay_service.pybackend/tests/integration/services/saga/test_saga_service.pybackend/tests/integration/services/saved_script/test_saved_script_service.pybackend/tests/integration/services/sse/test_partitioned_event_router.pybackend/tests/integration/services/sse/test_redis_bus.pybackend/tests/integration/services/user_settings/test_user_settings_service.pybackend/tests/integration/test_admin_routes.pybackend/tests/integration/test_alertmanager.pybackend/tests/integration/test_auth_routes.pybackend/tests/integration/test_dlq_routes.pybackend/tests/integration/test_user_settings_routes.pybackend/tests/load/cli.pybackend/tests/load/config.pybackend/tests/load/monkey_runner.pybackend/tests/load/plot_report.pybackend/tests/load/strategies.pybackend/tests/load/user_runner.pybackend/tests/unit/conftest.pybackend/tests/unit/core/metrics/test_base_metrics.pybackend/tests/unit/core/metrics/test_connections_and_coordinator_metrics.pybackend/tests/unit/core/metrics/test_database_and_dlq_metrics.pybackend/tests/unit/core/metrics/test_execution_and_events_metrics.pybackend/tests/unit/core/metrics/test_health_and_rate_limit_metrics.pybackend/tests/unit/core/metrics/test_kubernetes_and_notifications_metrics.pybackend/tests/unit/core/metrics/test_metrics_classes.pybackend/tests/unit/core/metrics/test_metrics_context.pybackend/tests/unit/core/metrics/test_replay_and_security_metrics.pybackend/tests/unit/
core/test_adaptive_sampling.pybackend/tests/unit/core/test_csrf.pybackend/tests/unit/core/test_security.pybackend/tests/unit/core/test_utils.pybackend/tests/unit/events/core/test_consumer_config.pybackend/tests/unit/events/test_event_dispatcher.pybackend/tests/unit/events/test_mappings_and_types.pybackend/tests/unit/events/test_schema_registry_manager.pybackend/tests/unit/schemas_pydantic/test_events_schemas.pybackend/tests/unit/schemas_pydantic/test_execution_schemas.pybackend/tests/unit/schemas_pydantic/test_notification_schemas.pybackend/tests/unit/services/coordinator/test_queue_manager.pybackend/tests/unit/services/coordinator/test_resource_manager.pybackend/tests/unit/services/idempotency/__init__.pybackend/tests/unit/services/idempotency/test_idempotency_manager.pybackend/tests/unit/services/idempotency/test_middleware.pybackend/tests/unit/services/pod_monitor/test_config_and_init.pybackend/tests/unit/services/pod_monitor/test_event_mapper.pybackend/tests/unit/services/pod_monitor/test_monitor.pybackend/tests/unit/services/result_processor/__init__.pybackend/tests/unit/services/saga/test_execution_saga_steps.pybackend/tests/unit/services/saga/test_saga_comprehensive.pybackend/tests/unit/services/saga/test_saga_orchestrator_unit.pybackend/tests/unit/services/sse/test_shutdown_manager.pybackend/tests/unit/services/sse/test_sse_service.pybackend/tests/unit/services/sse/test_sse_shutdown_manager.pybackend/tests/unit/services/test_pod_builder.py
💤 Files with no reviewable changes (20)
- backend/tests/unit/services/idempotency/init.py
- backend/tests/unit/core/test_adaptive_sampling.py
- backend/tests/e2e/test_resource_cleaner_k8s.py
- backend/tests/load/config.py
- backend/tests/integration/services/sse/test_redis_bus.py
- backend/tests/unit/core/metrics/test_health_and_rate_limit_metrics.py
- backend/tests/unit/schemas_pydantic/test_notification_schemas.py
- backend/tests/unit/conftest.py
- backend/tests/unit/services/result_processor/init.py
- backend/tests/unit/core/metrics/test_connections_and_coordinator_metrics.py
- backend/tests/helpers/cleanup.py
- backend/tests/unit/events/core/test_consumer_config.py
- backend/tests/unit/schemas_pydantic/test_execution_schemas.py
- backend/tests/unit/events/test_schema_registry_manager.py
- backend/tests/unit/services/pod_monitor/test_config_and_init.py
- backend/tests/unit/core/metrics/test_database_and_dlq_metrics.py
- backend/tests/unit/core/metrics/test_metrics_context.py
- backend/tests/unit/services/coordinator/test_resource_manager.py
- backend/tests/unit/core/metrics/test_base_metrics.py
- backend/tests/load/strategies.py
✅ Files skipped from review due to trivial changes (21)
- backend/tests/unit/services/sse/test_sse_shutdown_manager.py
- backend/tests/unit/services/sse/test_shutdown_manager.py
- backend/tests/integration/test_dlq_routes.py
- backend/tests/helpers/kafka.py
- backend/tests/integration/events/test_consume_roundtrip.py
- backend/tests/integration/services/events/test_kafka_event_service.py
- backend/tests/unit/core/metrics/test_kubernetes_and_notifications_metrics.py
- backend/tests/integration/db/repositories/test_saved_script_repository.py
- backend/tests/integration/events/test_dlq_handler.py
- backend/tests/unit/core/metrics/test_replay_and_security_metrics.py
- backend/tests/helpers/k8s_fakes.py
- backend/tests/integration/test_auth_routes.py
- backend/tests/unit/events/test_mappings_and_types.py
- backend/tests/integration/test_user_settings_routes.py
- backend/tests/integration/events/test_consumer_lifecycle.py
- backend/tests/integration/db/repositories/test_admin_settings_repository.py
- backend/tests/unit/services/idempotency/test_idempotency_manager.py
- backend/tests/integration/services/coordinator/test_execution_coordinator.py
- backend/tests/integration/services/execution/test_execution_service.py
- backend/tests/unit/services/saga/test_saga_comprehensive.py
- backend/tests/integration/test_alertmanager.py
🚧 Files skipped from review as they are similar to previous changes (5)
- backend/tests/unit/events/test_event_dispatcher.py
- backend/tests/unit/services/sse/test_sse_service.py
- backend/tests/integration/result_processor/test_result_processor.py
- backend/tests/integration/dlq/test_dlq_discard.py
- backend/tests/unit/core/test_security.py
🧰 Additional context used
🧬 Code graph analysis (18)
backend/tests/unit/services/pod_monitor/test_event_mapper.py (3)
backend/tests/helpers/k8s_fakes.py (6)
ContainerStatus(59-64)FakeApi(141-146)Pod(83-105)State(47-56)Terminated(34-38)Waiting(41-44)backend/app/services/pod_monitor/event_mapper.py (1)
PodContext(31-38)backend/app/infrastructure/kafka/events/metadata.py (1)
AvroEventMetadata(9-31)
backend/tests/unit/core/metrics/test_execution_and_events_metrics.py (2)
backend/tests/unit/conftest.py (1)
app(62-63)backend/app/core/metrics/execution.py (1)
ExecutionMetrics(5-108)
backend/app/api/routes/replay.py (2)
backend/app/schemas_pydantic/replay.py (1)
SessionSummary(38-67)backend/tests/integration/services/sse/test_redis_bus.py (1)
model_dump(26-27)
backend/tests/unit/services/test_pod_builder.py (1)
backend/tests/unit/conftest.py (1)
client(57-58)
backend/tests/integration/notifications/test_notification_sse.py (1)
backend/app/services/notification_service.py (1)
create_notification(258-332)
backend/tests/unit/services/saga/test_saga_orchestrator_unit.py (4)
backend/app/events/schema/schema_registry.py (1)
SchemaRegistryManager(53-229)backend/tests/unit/services/pod_monitor/test_monitor.py (2)
produce(69-72)produce(416-419)backend/tests/unit/services/saga/test_execution_saga_steps.py (1)
produce(119-121)backend/app/events/core/producer.py (1)
produce(175-206)
backend/app/api/routes/admin/events.py (1)
backend/tests/integration/services/sse/test_redis_bus.py (1)
model_dump(26-27)
backend/tests/unit/core/metrics/test_metrics_classes.py (1)
backend/tests/unit/conftest.py (1)
app(62-63)
backend/tests/integration/services/notifications/test_notification_service.py (2)
backend/app/domain/notification/models.py (1)
DomainNotificationCreate(78-93)backend/app/services/coordinator/queue_manager.py (1)
user_id(34-35)
backend/tests/integration/services/saved_script/test_saved_script_service.py (1)
backend/app/domain/saved_script/models.py (1)
DomainSavedScriptUpdate(31-39)
backend/app/domain/events/event_models.py (4)
backend/tests/unit/conftest.py (1)
app(62-63)backend/app/core/utils.py (1)
StringEnum(6-31)backend/app/schemas_pydantic/admin_events.py (1)
EventFilter(20-30)backend/tests/unit/services/idempotency/test_middleware.py (1)
event(30-34)
backend/tests/unit/schemas_pydantic/test_events_schemas.py (1)
backend/app/schemas_pydantic/events.py (1)
EventFilterRequest(65-87)
backend/tests/integration/services/rate_limit/test_rate_limit_service.py (1)
backend/app/domain/rate_limit/rate_limit_models.py (1)
RateLimitConfig(54-127)
backend/app/db/docs/event.py (1)
backend/app/domain/events/typed.py (1)
EventMetadata(13-24)
backend/tests/unit/services/idempotency/test_middleware.py (1)
backend/app/services/idempotency/middleware.py (1)
IdempotentEventHandler(14-90)
backend/tests/integration/app/test_main_app.py (1)
backend/app/db/docs/event.py (2)
Settings(31-68)Settings(94-99)
backend/tests/unit/services/pod_monitor/test_monitor.py (3)
backend/app/db/repositories/event_repository.py (1)
store_event(39-52)backend/app/events/core/producer.py (1)
state(56-57)backend/app/services/pod_monitor/monitor.py (2)
state(140-142)MonitorState(42-48)
backend/tests/e2e/test_resource_cleaner_orphan.py (1)
backend/app/services/result_processor/resource_cleaner.py (2)
ResourceCleaner(18-286)cleanup_orphaned_resources(149-173)
🪛 GitHub Actions: MyPy Type Checking
backend/tests/unit/services/pod_monitor/test_event_mapper.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/core/metrics/test_execution_and_events_metrics.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/events/test_event_bus.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/core/test_container.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/sse/test_partitioned_event_router.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/idempotency/test_consumer_idempotent.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/load/cli.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/test_admin_routes.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/core/test_csrf.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/dlq/test_dlq_manager.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/saga/test_execution_saga_steps.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/saga/test_saga_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/events/test_schema_registry_roundtrip.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/app/api/routes/replay.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/test_pod_builder.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/replay/test_replay_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/notifications/test_notification_sse.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/events/test_event_dispatcher.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/idempotency/test_idempotent_handler.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/load/plot_report.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/saga/test_saga_orchestrator_unit.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/app/api/routes/admin/events.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/core/metrics/test_metrics_classes.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/user_settings/test_user_settings_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/conftest.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/notifications/test_notification_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/saved_script/test_saved_script_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/app/domain/saga/models.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/app/domain/events/event_models.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/events/test_event_store.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/load/monkey_runner.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/e2e/conftest.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/schemas_pydantic/test_events_schemas.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/rate_limit/test_rate_limit_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/coordinator/test_queue_manager.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/idempotency/test_redis_repository.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/app/db/docs/event.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/e2e/test_execution_routes.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/idempotency/test_middleware.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/app/test_main_app.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/services/pod_monitor/test_monitor.py
[error] 166-166: Mypy: Non-overlapping equality check (left operand type: Literal[MonitorState.RUNNING], right operand type: Literal[MonitorState.STOPPED]).
[error] 563-563: Mypy: Non-overlapping equality check (left operand type: Literal[MonitorState.RUNNING], right operand type: Literal[MonitorState.STOPPED]).
[error] 591-591: Mypy: Non-overlapping equality check (left operand type: Literal[MonitorState.RUNNING], right operand type: Literal[MonitorState.STOPPED]).
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/unit/core/test_utils.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/e2e/test_resource_cleaner_orphan.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/idempotency/test_idempotency.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/services/admin/test_admin_user_service.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/load/user_runner.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
backend/tests/integration/idempotency/test_decorator_idempotent.py
[error] 1-1: Mypy/pipeline failure: Command 'uv run mypy --config-file pyproject.toml --strict .' exited with code 1.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (71)
backend/tests/unit/core/test_csrf.py (2)
32-33: LGTM! Improved exception specificity. Using `CSRFValidationError` instead of a generic `Exception` makes the test more precise and maintainable. This correctly validates that the security service raises the appropriate domain exception when CSRF validation fails for authenticated requests.
3-3: No issues found with the import or exception definition. The import of `CSRFValidationError` from `app.domain.user.exceptions` is correctly implemented. The exception class is properly defined with full type hints and is correctly exported from the module. The test usage with `pytest.raises(CSRFValidationError)` is appropriate.
backend/tests/load/user_runner.py (1)
5-5: LGTM! Import organization improved. Consolidating the `time` import at the top with other standard library imports (line 5, alongside `asyncio` and `random`) follows PEP 8 best practices and removes duplication. The import is properly placed and the file structure is clean.
backend/tests/unit/core/metrics/test_execution_and_events_metrics.py (1)
3-3: Import reordering is correct and properly ordered. Both imports are valid, properly alphabetized (`EventMetrics` before `ExecutionMetrics`), and actively used throughout the test file. No functional changes; this is purely cosmetic.
Regarding the pipeline MyPy failure: this file only contains import reordering and cannot be the source of the type error. The failure likely originates from other files with substantive type changes. To identify the actual issue, run MyPy to see the specific error output rather than assuming the source.
backend/tests/integration/services/notifications/test_notification_service.py (3)
6-6: LGTM: Import reorganization improves structure. The relocation of the `AsyncContainer` import aligns with import organization best practices.
17-24: LGTM: Multi-line formatting enhances readability. The reformatted instantiation improves code readability and follows Python best practices (trailing comma for multi-line calls). All required fields are correctly provided and match the `DomainNotificationCreate` model definition.
1-1: Verify MyPy failure is addressed. The pipeline shows a MyPy type checking failure. While this file contains only formatting changes (unlikely to cause type errors), ensure the repository-wide MyPy errors are resolved before merging.
backend/tests/integration/idempotency/test_decorator_idempotent.py (2)
7-8: LGTM: Import reorganization is cosmetic. The relocation of the `AsyncContainer` import is a stylistic change with no impact on functionality.
1-1: MyPy type checking verification inconclusive. The original review flagged a MyPy failure, but the sandbox environment cannot execute the type checker to confirm or identify specific errors. Manual verification is required by running MyPy locally to determine if type issues actually exist in this file.
backend/tests/integration/events/test_schema_registry_roundtrip.py (1)
6-7: Import reordering looks good. The import organization is correct and follows Python conventions. Recent commits have already addressed MyPy type checking, and the file has proper type annotations throughout.
backend/tests/unit/core/metrics/test_metrics_classes.py (2)
14-14: LGTM! Import correctly added for enum usage. The import of `ExecutionStatus` is properly placed and necessary for the test on line 65, aligning with the PR's objective to standardize enum usage.
65-65: No action needed. The usage of `ExecutionStatus.QUEUED` on line 65 is correct and properly typed. The enum member exists, the method signature matches the call, and the code aligns with the PR's enum standardization goal of passing enum members directly without `.value`.
backend/tests/unit/core/test_utils.py (1)
2-2: The import of `Request` from `starlette.requests` on line 2 is necessary and properly used. It's referenced as the return type in the `make_request()` function on line 16 and instantiated on line 25. The function has complete type annotations that align with the strict MyPy configuration. No type-related issues are apparent in this file. Likely an incorrect or invalid review comment.
backend/tests/unit/services/saga/test_saga_orchestrator_unit.py (3)
15-15: Imports are correctly present and used. The imports on lines 15 and 23 are both necessary and properly utilized in the test code. Note that the AI summary incorrectly states "Removed an unused import: Field from pydantic," but `Field` is clearly imported on line 23 and actively used on lines 40 and 43 for configuring Pydantic model fields in `_FakeEvent`.
67-69: LGTM: Formatting improvement. The method signature reformatting improves readability without altering the interface or behavior. The signature correctly matches the parent `UnifiedProducer.produce` method.
1-170: Manual MyPy verification required; repository access unavailable. Unable to verify the MyPy type checking failure due to repository clone failure. The original review comment requests investigation of a reported MyPy exit code 1, but without access to the codebase, configuration files, and broader migration context, the specific errors cannot be identified.
To proceed, manually run MyPy on the backend directory with strict mode enabled:
cd backend && mypy --config-file pyproject.toml --strict .
Then identify whether errors originate from this test file or from broader Pydantic v2 migration changes in related domain/event models and repositories.
backend/tests/load/cli.py (3)
21-22: LGTM: Improved readability. The f-string split improves code readability without changing functionality.
50-53: LGTM: Improved readability. The f-string split improves code readability without changing functionality.
93-93: No issues found. The `generate_plots` field is explicitly defined in `LoadConfig` (backend/tests/load/config.py, line 17) as `generate_plots: bool = False`. The direct assignment `cfg.generate_plots = bool(args.plots)` is valid and will pass MyPy type checking since the field is properly declared on the Pydantic BaseSettings model.
backend/tests/e2e/test_resource_cleaner_orphan.py (1)
2-8: Imports cleanup looks good and follows the repo's standard pattern. No issues found. The kubernetes imports in lines 6–7 match the pattern used consistently throughout the codebase (pod_builder.py, worker.py, monitor.py, etc.). `ResourceDict` is a simple type alias (`dict[str, list[str]]`), so typed access via `.get()` has no typing issues. No mypy strict configuration was found in the repo, so the speculative concern about untyped kubernetes imports under `--strict` does not apply here.
7-8: LGTM: Import additions are correct. The `time` module import is properly added to support the new deadline-based execution bounds.
87-87: LGTM: Time-bounded loop is correctly implemented. The deadline check properly bounds the swarm execution duration. Using `time.time()` is acceptable here as it's non-blocking.
70-70: The deadline calculation is correct and properly typed. The `duration_seconds` field in `LoadConfig` is explicitly annotated as `int`, and `time.time() + max(1, cfg.duration_seconds)` correctly produces a `float`. No type issues or MyPy failures would occur with this code. If you want to improve readability, you may optionally add an explicit type annotation for `deadline: float = ...`, but this is not required.
backend/tests/integration/conftest.py (2)
10-19: Verify fixture type compatibility. The fixture implementation looks correct, and the documented cleanup strategy (pre-test only to avoid event loop issues) is well-justified.
However, ensure that:
- The `db` fixture (defined elsewhere, likely in a parent conftest.py) returns a `Database` type
- The `redis_client` fixture returns `redis.Redis` (or `redis.asyncio.Redis`)
- The `cleanup_db_and_redis` function signature matches these parameter types
5-6: Code changes look correct and properly typed. The `Database` type is correctly defined as a type alias in app/core/database_context.py (line 11: `type Database = AsyncDatabase[MongoDocument]`) and is properly imported and used for type annotations in the fixture. The function signature in `_cleanup` matches `cleanup_db_and_redis`, which also uses the same `Database` type parameter.
backend/tests/e2e/conftest.py (2)
10-18: Verify type compatibility between fixture parameters and cleanup function. The fixture correctly uses `pytest_asyncio.fixture` for async cleanup, and the logic (pre-test cleanup only, no post-test) is well-documented. However, ensure the type annotations `Database` and `redis.Redis` match what `cleanup_db_and_redis` expects. This verification depends on the MyPy script above. If the cleanup function expects different types (e.g., `redis.asyncio.Redis` explicitly, or a different Database type), the annotations here must be adjusted accordingly.
6-7: No issues found with the import or function usage. The imported `cleanup_db_and_redis` function signature in backend/tests/helpers/cleanup.py matches perfectly with how it's called in the fixture:
- Function: `async def cleanup_db_and_redis(db: Database, redis_client: redis.Redis) -> None:`
- Call: `await cleanup_db_and_redis(db, redis_client)`
Both files consistently use `import redis.asyncio as redis`, so the `redis.Redis` type annotation refers to the same type in both locations. The async/await usage is correct, and the import path resolves properly.
backend/tests/unit/schemas_pydantic/test_events_schemas.py (2)
3-3: LGTM: Import ordering follows conventions. The reordering of imports to place `SortOrder` before `EventFilterRequest` follows alphabetical module ordering conventions and is logically sound given that `EventFilterRequest` depends on `SortOrder`.
1-1: Investigate the MyPy type checking failure. The pipeline reports that MyPy exited with code 1, but the detailed error output is not provided. Given that this PR involves a major migration to Pydantic v2 with extensive type changes, it's critical to resolve all type checking errors before merging.
Please provide the full MyPy error output to identify which files or type annotations are causing the failure.
backend/tests/integration/services/admin/test_admin_user_service.py (1)
20-20: > Likely an incorrect or invalid review comment.
backend/tests/e2e/test_execution_routes.py (3)
93-93: No issues found. The `ExecutionResult` schema has `status` correctly typed as `ExecutionStatus` (imported as `ExecutionStatusEnum` in the test), which is a `StringEnum`. Pydantic automatically deserializes the JSON response to enum members, so the assertion comparing against `list(ExecutionStatusEnum)` works correctly.
11-11: No issues found. The `AsyncClient` import from `httpx` is correct and fully consistent with the fixture definitions in backend/tests/conftest.py, which yield `httpx.AsyncClient` objects. All test methods properly annotate their parameters with the imported type.
1-1: Manual verification is required to confirm the MyPy failure. Without access to the actual error output, the type annotations in this file appear correct: `ExecutionResult.status` is properly typed as `ExecutionStatus`, matching the enum imported and used in the test assertions.
backend/tests/unit/services/pod_monitor/test_event_mapper.py (7)
5-5: LGTM! Import of the `EventType` enum aligns with the PR objective to standardize enum usage across the codebase.
30-37: LGTM! Multiline formatting improves readability of the `PodContext` construction.
40-52: LGTM! Well-structured test data setup with a complete JSON payload for executor output, enabling proper end-to-end testing of the log parsing path.
66-90: LGTM! Event type assertions properly migrated to use `EventType` enum comparisons. The debug output at lines 74-75 is helpful for troubleshooting test failures.
97-113: LGTM! Pod construction reformatted for readability, and event type assertions correctly use `EventType` enums.
120-122: LGTM! Assertion correctly uses the `EventType.EXECUTION_COMPLETED` enum, with helpful comment documenting the expected behavior for DELETED events.
232-232: LGTM! Assertion correctly uses the `EventType.EXECUTION_COMPLETED` enum.
backend/tests/integration/services/events/test_event_bus.py (1)
1-3: The code is properly typed and all imports are valid. No MyPy failures are evident in this file or its direct dependencies:
- `EventBusEvent` and `EventBusManager` are properly typed in event_bus.py
- `AsyncContainer` from `dishka` is a valid, typed import used consistently across the test suite
- The `eventually` helper has proper `TypeVar` and `Awaitable` type hints
- The test function has complete type annotations
The import reordering at line 3 is a benign, non-functional change. If MyPy failures exist in the PR, they originate from other modules, not this file.
Likely an incorrect or invalid review comment.
backend/tests/integration/services/idempotency/test_redis_repository.py (3)
13-13: LGTM! Import reorganization improves structure. The `DuplicateKeyError` import is properly placed and correctly used in the duplicate insert test (line 106).
143-152: LGTM! Refactoring improves test readability. Extracting the statuses tuple to a named variable and using it in the loop makes the test intent clearer while preserving the original behavior.
1-1: Verify MyPy failure by running type checking locally. This file appears structurally sound: the `pymongo.errors.DuplicateKeyError` import is used at line 106, `json` is used at line 117, and `IdempotencyRecord` is properly defined using `pydantic.dataclasses`. However, without access to the actual MyPy error output, the root cause cannot be determined. The failure could stem from pymongo type stub issues or errors elsewhere in the codebase from the extensive type migrations in this PR. Run the following command locally to identify the specific type errors:
cd backend && python3 -m mypy --config-file ../pyproject.toml --strict .
backend/tests/integration/services/replay/test_replay_service.py (1)
5-6: Potential mypy hotspot: `svc: ReplayService = await scope.get(ReplayService)` if `AsyncContainer.get` isn't typed generically. Import change itself is fine, but if mypy is failing with an "incompatible types in assignment" around `scope.get(...)`, you may need a small typing adapter (e.g., a Protocol for `get()` or a typed helper that `cast()`s the result).
backend/tests/integration/core/test_container.py (1)
4-4: Import is correct; `dishka==1.6.0` exports `AsyncContainer` but may lack type stubs.
`AsyncContainer` is properly part of the public API in `dishka==1.6.0`. However, the package does not appear to include `py.typed` in published releases, so mypy-strict support is not guaranteed. If your CI mypy failure is related to this import, verify that your environment has type stubs for dishka or consider disabling strict type checking for dishka imports.
backend/tests/integration/idempotency/test_consumer_idempotent.py (1)
15-16: The import is safe; no Dishka typing concerns apply given mypy configuration. The import move itself is behavior-neutral and the `scope: AsyncContainer` fixture is properly typed throughout the codebase. More importantly, the mypy configuration in backend/pyproject.toml explicitly disables import-untyped errors via `disable_error_code = ["import-untyped", "import-not-found"]`, so even if Dishka lacked complete stubs, it would not cause a mypy-strict failure. All usage patterns (`await scope.get(Type)`) are correctly annotated at call sites. No typing shim is needed. Likely an incorrect or invalid review comment.
backend/tests/integration/services/sse/test_partitioned_event_router.py (1)
13-14: Import is correct and properly used; no issues. The addition of `from tests.helpers import make_execution_requested_event` at line 13 is correct and properly utilized at line 45. The import is cleanly exported from tests.helpers.__init__.py with no type-checking issues. The MyPy errors present in the file relate to missing stubs for the `redis` library (line 5), which are unrelated to this import change.
backend/tests/unit/services/coordinator/test_queue_manager.py (1)
6-7: LGTM. The import addition is correct and is properly used by the `ev()` helper function at line 15. The import resolves from backend/tests/helpers/__init__.py, which exports `make_execution_requested_event` as a factory function for test events.
backend/tests/integration/services/saved_script/test_saved_script_service.py (3)
22-24: LGTM: Formatting improvement. The multi-line formatting of the `update_saved_script` call improves readability without changing any logic.
1-4: Verify the import ordering aligns with project conventions. The `AsyncContainer` import from `dishka` (third-party) is now placed after the local `app.*` imports. Confirm this matches the project's import sorting configuration, as it deviates from standard PEP 8 ordering (stdlib → third-party → local).
1-32: The code in this file is properly typed and includes all necessary type annotations. The recent commit message ("ruff and mypy fixes") indicates type checking issues have already been addressed. No type errors are present in the test file, imports, or related domain models.
backend/app/domain/saga/models.py (2)
10-28: LGTM! Clean migration to Pydantic `BaseModel` with appropriate use of `Field(default_factory=...)` for mutable defaults (lists, dicts) and timestamps. The `ConfigDict(from_attributes=True)` enables seamless conversion from ORM objects.
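As a quick reference for this migration pattern, here is a minimal sketch (not the actual DomainSaga definitions; field names are illustrative) showing `Field(default_factory=...)` for mutable and timestamp defaults, `ConfigDict(from_attributes=True)` for ORM-style hydration, and a read-only `@property` of the kind discussed in the next comment:

```python
from datetime import datetime, timezone

from pydantic import BaseModel, ConfigDict, Field


class SagaRecord(BaseModel):
    """Illustrative model only; mirrors the reviewed patterns, not the real code."""

    model_config = ConfigDict(from_attributes=True)  # allows SagaRecord.model_validate(orm_obj)

    saga_id: str
    completed_steps: list[str] = Field(default_factory=list)  # per-instance mutable default
    context: dict[str, str] = Field(default_factory=dict)
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))  # tz-aware


class SagaListResult(BaseModel):
    items: list[SagaRecord] = Field(default_factory=list)
    total: int = 0
    skip: int = 0
    limit: int = 10

    @property
    def has_more(self) -> bool:
        # Derived, read-only value; not included in model_dump() output.
        return self.skip + len(self.items) < self.total
```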
56-70: LGTM! The `has_more` logic is correctly implemented as a read-only `@property`, which is the idiomatic Pydantic approach for computed fields that shouldn't be serialized by default.
backend/tests/unit/services/saga/test_execution_saga_steps.py (3)
206-207: LGTM! Direct attribute access `cps.publish_commands is True` is cleaner and more readable than `getattr(cps, "publish_commands") is True` when the attribute is guaranteed to exist on the typed object.
46-71: LGTM! The `_FakeAllocRepo` correctly implements the repository interface and instantiates `DomainResourceAllocation` with the new Pydantic model signature. The test properly exercises allocation creation and release paths.
1-21: Verify the actual MyPy error source. All imports in the test file are resolvable: `make_execution_requested_event` is properly exported from backend/tests/helpers/, and all saga, domain, and infrastructure classes exist with correct type annotations. If MyPy is still reporting errors, run `mypy --strict backend/tests/unit/services/saga/test_execution_saga_steps.py` to identify whether the issue originates from type mismatches in the imported modules rather than this file's import statements.
backend/tests/integration/notifications/test_notification_sse.py (2)
8-9: LGTM! Clean import organization. The addition of `AsyncContainer` for the type annotation on line 16 and the removal of unused imports align well with the type-focused improvements in this PR.
1-1: The import and test fixture are type-safe; the MyPy failure is likely elsewhere. The `uuid4` import from the standard library is properly typed, and the `scope` parameter is correctly annotated as `AsyncContainer` in the test function signature. The fixture is defined in conftest.py (line 120-122) with proper type annotations: `async def scope(app_container: AsyncContainer) -> AsyncGenerator[AsyncContainer, None]`.
If the pipeline MyPy failure persists, the issue is not in this file. Check other files in the backend or verify that all type stub dependencies (listed in pyproject.toml dev dependencies) are installed correctly.
backend/tests/integration/services/rate_limit/test_rate_limit_service.py (3)
AsyncContainerimport from dishka is correctly placed and properly used throughout the test functions for dependency injection.
157-161: LGTM!The multi-line formatting of
RateLimitConfigconstructions improves readability and maintains consistency across the test file, especially for the rule with multiple parameters (lines 173-183).Also applies to: 173-183
1-1: Manual verification of MyPy failure required through CI/CD pipeline.Code inspection shows proper type annotations throughout the test file and correct usage of RateLimitConfig, RateLimitRule, and AsyncContainer. However, without executing MyPy directly, the specific type errors cannot be confirmed or dismissed. Run MyPy in the CI/CD pipeline to identify the exact errors preventing merge.
backend/tests/integration/idempotency/test_idempotent_handler.py (1)
7-8: The import change is correct and follows established patterns. The reordering of `from dishka import AsyncContainer` is syntactically correct and matches the import pattern used consistently throughout the codebase (34+ test files). The import follows PEP 8 standards for grouping third-party imports.
If a MyPy failure was reported in the pipeline, it is likely unrelated to this import change or caused by a pre-existing type information issue with the dishka dependency (which lacks type stubs).
backend/tests/unit/services/idempotency/test_middleware.py (3)
4-5: LGTM: Import additions are correct. The `pytest` import was missing (it's used throughout the file), and the `IdempotencyStatus` import aligns with the PR's migration to domain-based types.
37-39: LGTM: Formatting improvements. Multi-line parameter lists improve readability with no functional changes.
Also applies to: 50-52, 85-91
1-1: No MyPy failure detected in this code. The test file has proper type annotations throughout (AsyncMock, MagicMock, return type None, IdempotentEventHandler), and all imports are correctly structured. IdempotencyResult is properly defined with typed fields, and the middleware module uses appropriate type hints including Callable, Awaitable, and union types. Without specific MyPy error messages or evidence of an actual pipeline failure, this concern cannot be substantiated. Likely an incorrect or invalid review comment.
backend/tests/unit/services/pod_monitor/test_monitor.py (1)
11-11: LGTM: DomainEvent migration implemented correctly. The migration from `Event` to `DomainEvent` in the fake repository aligns with the PR's domain model refactoring objectives and maintains consistency with the updated `EventRepository` interface. Also applies to: 54-56
backend/app/db/docs/event.py (1)
1-99: LGTM! Event storage restructuring is well-designed. The migration from payload-wrapped to flattened event storage is implemented correctly:
- Sparse indexes (Lines 42-43) appropriately handle event-specific fields that only exist on certain event types.
- The `extra="allow"` configuration (Lines 29, 92) enables flexible storage of event-specific fields without defining them explicitly.
- Index updates correctly reference top-level fields (`execution_id` instead of `payload.execution_id`).
- EventMetadata import properly sourced from the centralized domain types.
The structure aligns well with Pydantic v2 patterns and the broader DomainEvent migration.
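For readers following along, a minimal sketch of the flattened document shape, assuming a plain Pydantic v2 model; index declarations and the ODM base class are omitted, and `EventMetadata` is stubbed as a dict here:

```python
from datetime import datetime, timezone

from pydantic import BaseModel, ConfigDict, Field


class EventDocumentSketch(BaseModel):
    """Flattened event document: event fields live at the top level, no payload wrapper."""

    model_config = ConfigDict(extra="allow")  # undeclared, event-specific fields are kept

    event_id: str
    event_type: str
    aggregate_id: str | None = None
    execution_id: str | None = None  # sparse-indexed in the real document; only some events set it
    metadata: dict | None = None     # EventMetadata in the real model
    stored_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


# Extra fields such as exit_code survive round-trips thanks to extra="allow":
doc = EventDocumentSketch(event_id="e1", event_type="execution.completed", exit_code=0)
assert doc.model_dump()["exit_code"] == 0
```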
backend/app/domain/events/event_models.py (1)
172-181: LGTM! Good defensive programming with metadata check. The migration to `DomainEvent` is implemented correctly across all result classes. The guard on Line 179 (`if e.metadata and not e.metadata.service_name.startswith("system-")`) is a good defensive check that prevents a potential `AttributeError` if metadata is `None`. The overall migration from dataclasses to Pydantic `BaseModel` (Lines 53-56, 184-187) aligns well with the PR's standardization objectives.
backend/tests/integration/events/test_event_store.py (1)
1-153: LGTM! Import reordering only. This change only reorders imports (Line 10) without affecting any test logic or functionality. The tests remain comprehensive and well-structured.
backend/tests/integration/events/test_event_dispatcher.py (1)
14-14: No issues with the AsyncContainer import from dishka 1.6.0. The import reordering is a cosmetic change that doesn't affect type safety. dishka 1.6.0 includes in-source type hints and is compatible with mypy. If there's a mypy pipeline failure, it is likely unrelated to this file or the import reordering; verify the full mypy output from the pipeline to identify the actual source of the error.
4 issues found across 10 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/app/domain/events/typed.py">
<violation number="1" location="backend/app/domain/events/typed.py:280">
P2: Inconsistent timestamp type: `sent_at` should use `datetime` instead of `str` to match the PR's stated goal of "unified timestamps to datetime" and align with other timestamp fields like `cancelled_at`, `stored_at`, and `deleted_at`.</violation>
<violation number="2" location="backend/app/domain/events/typed.py:288">
P2: Inconsistent timestamp type: `delivered_at` should use `datetime` instead of `str` to match the PR's stated goal of "unified timestamps to datetime" and align with other timestamp fields.</violation>
<violation number="3" location="backend/app/domain/events/typed.py:304">
P2: Inconsistent timestamp type: `read_at` should use `datetime` instead of `str` to match the PR's stated goal of "unified timestamps to datetime" and align with other timestamp fields.</violation>
<violation number="4" location="backend/app/domain/events/typed.py:311">
P2: Inconsistent timestamp type: `clicked_at` should use `datetime` instead of `str` to match the PR's stated goal of "unified timestamps to datetime" and align with other timestamp fields.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @backend/app/db/repositories/admin/admin_events_repository.py:
- Around line 178-185: archive_event currently passes event.model_dump() which
can include metadata=None and fails Pydantic validation because
EventArchiveDocument.metadata is required; update async def archive_event so it
guards against missing metadata (check if event.metadata is None) and either
raise a clear error (e.g., ValueError) or populate a default/normalized metadata
value before building EventArchiveDocument, then construct the
EventArchiveDocument without a None metadata (or use
event.model_dump(exclude_none=True) after ensuring metadata is present) and
proceed with insert().
- Around line 59-62: The loop mapping DB docs to domain events can crash the
whole response if domain_event_adapter.validate_python raises a ValidationError;
update the mapping in the browse path to process each doc in a try/except,
catching the ValidationError (or generic Exception if ValidationError type isn't
imported), log a warning with the document id and error, and skip/continue so a
partial list is returned to build EventBrowseResult(events=..., total=...,
skip=..., limit=...); apply the same defensive pattern to the
single-event/detail path that calls domain_event_adapter.validate_python (lines
referenced near 69-70), returning either a lightweight fallback object or a
controlled error response for that single item while ensuring the endpoint does
not crash for all events.
In @backend/app/domain/events/typed.py:
- Around line 29-41: BaseEvent.metadata is optional but EventDocument.metadata
and EventArchiveDocument.metadata are required, causing Pydantic validation
failures when archiving events with metadata=None; update the persistence models
to match domain semantics by making EventDocument.metadata and
EventArchiveDocument.metadata optional (EventMetadata | None), and ensure
ArchivedEvent's metadata type aligns as well, plus adjust archive_event() to
handle None metadata when constructing those documents.
- Around line 15-41: The timestamp fields on the notification events are
incorrectly typed as str; update NotificationSentEvent.sent_at,
NotificationDeliveredEvent.delivered_at, NotificationReadEvent.read_at, and
NotificationClickedEvent.clicked_at to be datetime and give them timezone-aware
defaults consistent with BaseEvent (use Field(default_factory=lambda:
datetime.now(timezone.utc)) or equivalent), ensuring you import datetime and
timezone if needed and match the existing BaseEvent timestamp pattern.
In @backend/app/infrastructure/kafka/events/notification.py:
- Around line 66-70: The Kafka-layer NotificationPreferencesUpdatedEvent
duplicates the domain event with an inconsistent required changed_fields; either
consolidate by importing and reusing the domain
NotificationPreferencesUpdatedEvent type instead of redefining it here, or make
the Kafka definition match the domain by giving
NotificationPreferencesUpdatedEvent.changed_fields a default (use
Field(default_factory=list)) so both layers have the same optional default and
avoid runtime errors and maintenance drift.
In @backend/tests/unit/domain/events/test_event_schema_coverage.py:
- Around line 165-170: The test_event_type_count_sanity currently enforces a
hard upper bound on len(EventType) (<= 100) which can cause churn as event types
grow; remove or relax that upper bound by either deleting the upper-bound
assertion or replacing it with a configurable guard (e.g., use a MAX_EVENT_TYPES
constant or an environment-driven limit) and update the assertion to only
enforce the lower bound (assert count >= 50) or assert count <= MAX_EVENT_TYPES
with MAX_EVENT_TYPES clearly documented so future growth is intentional; locate
the test_event_type_count_sanity function and modify the assertions accordingly.
🧹 Nitpick comments (4)
unlike.md (1)
1-90: Fix markdown formatting issues. The documentation content is valuable for tracking technical debt, but multiple markdown linting issues affect readability:
- Unordered list items should have 0 spaces of indentation (currently 2)
- Spaces inside emphasis markers on lines 65, 67
📝 Proposed markdown formatting fixes
 Architecture & Design
-4. Optional aggregate_id on BaseEvent (app/infrastructure/kafka/events/base.py:20)
-  aggregate_id: str | None = None
-  - For proper event sourcing, aggregate_id should be required for domain events
-  - Currently requires manual setting everywhere (easy to forget, as test helper showed)
+4. Optional aggregate_id on BaseEvent (app/infrastructure/kafka/events/base.py:20)
+aggregate_id: str | None = None
+- For proper event sourcing, aggregate_id should be required for domain events
+- Currently requires manual setting everywhere (easy to forget, as test helper showed)
Apply similar fixes to all list items throughout the file.
backend/tests/unit/domain/events/test_event_schema_coverage.py (2)
21-52: Tests are valuable, but make adapter-coverage assertion resilient (avoid parsing exception strings).
Relying on `str(e).lower()` substring checks is likely to break on Pydantic upgrades. Prefer catching `pydantic.ValidationError` and checking `e.errors()` for discriminator/union-tag error types. Also applies to: 89-105
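A sketch of the more robust assertion style suggested here, assuming pytest, Pydantic v2, and the `domain_event_adapter` export discussed elsewhere in this review; the exact error `type` strings Pydantic emits for tagged unions may need adjusting:

```python
import pytest
from pydantic import ValidationError

from app.domain.events import domain_event_adapter  # export path assumed from this review


def test_unknown_event_type_is_rejected() -> None:
    with pytest.raises(ValidationError) as exc_info:
        domain_event_adapter.validate_python({"event_type": "not.a.real.event"})

    # Inspect structured error data instead of substring-matching str(e).
    error_types = {err["type"] for err in exc_info.value.errors()}
    assert error_types & {"union_tag_invalid", "union_tag_not_found"}
```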
107-140: Orphan class scans should be recursive (not just direct `__subclasses__()`).
If you ever introduce intermediate base classes, these tests will miss them. Consider a small `_all_subclasses(cls)` helper and use it for both DomainBaseEvent and KafkaBaseEvent scans.
backend/app/domain/events/__init__.py (1)
1-89: Watch import-cost/circular-dep risk from re-exporting every concrete event at package import time.
If `app.domain.events` is imported in hot paths, consider keeping it lean (export only `DomainEvent`, `BaseEvent`, `domain_event_adapter`, plus query/result models) and importing concrete event classes directly from `app.domain.events.typed` where needed, or adding a lazy import pattern. Also applies to: 91-178
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/app/db/docs/event.py
- backend/app/db/repositories/admin/admin_events_repository.py
- backend/app/domain/events/__init__.py
- backend/app/domain/events/typed.py
- backend/app/events/event_store.py
- backend/app/infrastructure/kafka/events/__init__.py
- backend/app/infrastructure/kafka/events/notification.py
- backend/app/infrastructure/kafka/events/user.py
- backend/tests/unit/domain/events/test_event_schema_coverage.py
- unlike.md
🧰 Additional context used
🧬 Code graph analysis (8)
backend/app/infrastructure/kafka/events/user.py (3)
backend/app/domain/events/typed.py (2)
UserLoginEvent(225-230)BaseEvent(29-40)backend/app/domain/enums/kafka.py (1)
KafkaTopic(7-53)backend/app/domain/enums/auth.py (1)
LoginMethod(4-10)
backend/app/db/docs/event.py (1)
backend/app/domain/events/typed.py (1)
EventMetadata(15-26)
backend/app/events/event_store.py (2)
backend/app/db/docs/event.py (1)
EventDocument(13-71)backend/app/events/schema/schema_registry.py (1)
deserialize_json(178-193)
backend/app/infrastructure/kafka/events/notification.py (2)
backend/app/domain/events/typed.py (2)
NotificationPreferencesUpdatedEvent(315-318)BaseEvent(29-40)backend/app/domain/enums/kafka.py (1)
KafkaTopic(7-53)
backend/app/domain/events/__init__.py (5)
backend/app/domain/events/event_models.py (7)
EventBrowseResult(95-101)EventDetail(105-110)EventExportRow(184-197)EventFilter(53-66)EventProjection(141-150)EventQuery(70-80)EventSummary(44-50)backend/app/schemas_pydantic/admin_events.py (1)
EventFilter(20-30)backend/app/schemas_pydantic/events.py (2)
EventProjection(241-271)EventQuery(183-208)backend/app/domain/events/typed.py (8)
AllocateResourcesCommandEvent(407-411)ArchivedEvent(520-536)AuthFailedEvent(466-470)BaseEvent(29-40)CreatePodCommandEvent(380-395)DeletePodCommandEvent(398-404)EventMetadata(15-26)ResourceLimitExceededEvent(476-481)backend/app/infrastructure/kafka/events/base.py (1)
BaseEvent(13-37)
backend/app/infrastructure/kafka/events/__init__.py (3)
backend/app/domain/events/typed.py (2)
NotificationPreferencesUpdatedEvent(315-318)UserLoginEvent(225-230)backend/app/infrastructure/kafka/events/notification.py (1)
NotificationPreferencesUpdatedEvent(66-70)backend/app/infrastructure/kafka/events/user.py (1)
UserLoginEvent(40-46)
backend/tests/unit/domain/events/test_event_schema_coverage.py (2)
backend/app/domain/events/typed.py (1)
BaseEvent(29-40)backend/app/events/schema/schema_registry.py (1)
_get_event_type_to_class_mapping(41-50)
backend/app/domain/events/typed.py (4)
backend/app/domain/enums/auth.py (1)
LoginMethod(4-10)backend/app/domain/enums/common.py (1)
Environment(27-33)backend/app/domain/enums/storage.py (1)
StorageType(16-22)backend/app/domain/execution/models.py (1)
ResourceUsageDomain(13-19)
🪛 LanguageTool
unlike.md
[style] ~1-~1: Consider a different adjective to strengthen your wording.
Context: ... on everything I've observed, here's my deep analysis of codebase issues: Archite...
(DEEP_PROFOUND)
🪛 markdownlint-cli2 (0.18.1)
unlike.md
7-7: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
8-8: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
13-13: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
14-14: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
15-15: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
19-19: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
20-20: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
28-28: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
29-29: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
30-30: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
33-33: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
34-34: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
39-39: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
41-41: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
42-42: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
47-47: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
48-48: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
49-49: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
55-55: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
56-56: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
59-59: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
60-60: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
65-65: Spaces inside emphasis markers
(MD037, no-space-in-emphasis)
67-67: Spaces inside emphasis markers
(MD037, no-space-in-emphasis)
68-68: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
69-69: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
74-74: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
75-75: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
79-79: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
80-80: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
85-85: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
86-86: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
89-89: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
90-90: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (15)
backend/app/infrastructure/kafka/events/__init__.py (1)
1-137: LGTM! The export updates for `NotificationPreferencesUpdatedEvent` and `UserLoginEvent` follow the existing module pattern and are properly organized.
backend/app/events/event_store.py (9)
46-72: LGTM! The refactoring to use `event.model_dump(exclude_none=True)` simplifies event storage and aligns with the PR's goal of flattening event data to top-level fields.
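A rough sketch of that storage path (the helper name and TTL handling are invented for illustration); dropping `None` values and excluding keys that are passed explicitly keeps the dump clean and avoids duplicate-keyword errors when unpacking into the document model:

```python
from datetime import datetime, timedelta, timezone

from pydantic import BaseModel


def to_document_kwargs(event: BaseModel, ttl_days: int = 90) -> dict:
    """Flatten a Pydantic event into top-level document fields (illustrative only)."""
    now = datetime.now(timezone.utc)
    data = event.model_dump(
        exclude_none=True,
        exclude={"stored_at", "ttl_expires_at"},  # supplied explicitly below, so keep them out of the dump
    )
    return {**data, "stored_at": now, "ttl_expires_at": now + timedelta(days=ttl_days)}


# EventDocument(**to_document_kwargs(event)) would then hold the event's fields at the top level.
```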
74-109: LGTM! Batch storage correctly applies the same `model_dump(exclude_none=True)` pattern for consistency.
111-121: LGTM! Deserialization using `doc.model_dump()` is consistent with the new event structure.
123-147: LGTM! Deserialization pattern correctly applied in list comprehension.
149-164: LGTM! The query correctly searches for `execution_id` at the top level (instead of `payload.execution_id`), aligning with the flattened event structure. The `aggregate_id` fallback ensures backward compatibility.
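A small illustration of that lookup, assuming a PyMongo/Motor-style filter document (the real query may differ in detail):

```python
def execution_events_filter(execution_id: str) -> dict:
    # Match events that carry execution_id at the top level, with aggregate_id as a
    # fallback for events that only set the aggregate reference.
    return {"$or": [{"execution_id": execution_id}, {"aggregate_id": execution_id}]}
```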
166-186: LGTM! Deserialization correctly applied.
188-207: LGTM! Deserialization correctly applied.
209-220: LGTM! Deserialization correctly applied.
222-251: LGTM! Deserialization correctly applied in the replay flow.
backend/app/infrastructure/kafka/events/user.py (1)
40-47: Verify the architectural intent behind domain vs. infrastructure event separation.
`UserLoginEvent` is defined in both backend/app/domain/events/typed.py (line 225) and backend/app/infrastructure/kafka/events/user.py (line 40), but they are not identical; they extend different base classes and serve different purposes:
- Domain version: extends the Pydantic `BaseEvent`, used for MongoDB persistence and event querying via `domain_event_adapter`
- Infrastructure version: extends `AvroBase` and includes `topic: ClassVar[KafkaTopic]` for Kafka serialization
While architectural separation (domain vs. infrastructure) appears intentional, the core event fields (`user_id`, `login_method`, `ip_address`, `user_agent`) are duplicated. Consider whether this duplication could be reduced through shared field definitions or composition without coupling the domain and infrastructure layers.
`UserLoggedInEvent` (lines 49-55) has identical fields to `UserLoginEvent` but distinct `event_type` values; confirm this dual-event pattern is necessary for your business logic. Likely an incorrect or invalid review comment.
backend/app/db/repositories/admin/admin_events_repository.py (2)
187-206: ReplaySession model_dump/model_validate migration looks consistent.
Using `model_dump()` for persistence and `model_validate(..., from_attributes=True)` for hydration is a clean, consistent move. Also applies to: 271-272
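A compact round-trip sketch of that persistence pattern, with a hypothetical model standing in for ReplaySession: `model_dump()` for writes, `model_validate(..., from_attributes=True)` for hydration, and `mode="json"` when ISO-formatted timestamps are needed (for example in the CSV export discussed next):

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field


class ReplaySessionSketch(BaseModel):
    session_id: str
    status: str = "created"
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


session = ReplaySessionSketch(session_id="abc")

record = session.model_dump()                                               # native datetimes, BSON-friendly
loaded = ReplaySessionSketch.model_validate(record, from_attributes=True)   # hydrate dicts or attribute objects
row = session.model_dump(mode="json")                                       # datetimes become ISO 8601 strings

assert isinstance(row["created_at"], str)
```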
163-176: CSV datetime serialization is properly handled. The `timestamp` field is serialized using Pydantic's `model_dump(mode="json")`, which automatically converts `datetime` objects to ISO 8601 format (e.g., 2024-01-15T10:30:45.123456+00:00). This approach is consistent and correct; the code does not use Python's default `str(datetime)` formatting.
backend/app/db/docs/event.py (1)
74-96: Archive docs: extra="allow" matches the “store event-specific fields at top-level” approach.
This aligns well with the `EventArchiveDocument(**event.model_dump(), ...)` usage (once metadata-nullability is resolved).
backend/app/domain/events/typed.py (1)
541-610: Discriminated union + TypeAdapter approach is solid; verify discriminator behavior on enums for Pydantic 2.9.2.
The overall `DomainEvent = Annotated[..., Discriminator("event_type")]` + `TypeAdapter(DomainEvent)` design is good; just ensure the discriminator accepts `EventType` values as stored/serialized (enum vs string) in all ingestion paths.
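A condensed sketch of that design, assuming Pydantic v2: Literal-typed `event_type` fields act as the discriminator, a `TypeAdapter` turns stored documents back into concrete classes, and a per-document guard keeps one bad record from failing a whole batch. Class names, event-type strings, and the logging are illustrative, not the project's actual definitions.

```python
import logging
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter, ValidationError

logger = logging.getLogger(__name__)


class ExecutionRequested(BaseModel):
    event_type: Literal["execution.requested"] = "execution.requested"
    event_id: str
    execution_id: str


class ExecutionCompleted(BaseModel):
    event_type: Literal["execution.completed"] = "execution.completed"
    event_id: str
    execution_id: str
    exit_code: int = 0


DomainEventSketch = Annotated[
    Union[ExecutionRequested, ExecutionCompleted],
    Field(discriminator="event_type"),
]
adapter = TypeAdapter(DomainEventSketch)


def parse_events(docs: list[dict]) -> list[BaseModel]:
    events: list[BaseModel] = []
    for doc in docs:
        try:
            events.append(adapter.validate_python(doc))  # dispatches on event_type
        except ValidationError as exc:
            # Skip unparseable documents instead of failing the whole listing.
            logger.warning("Skipping event %s: %s", doc.get("event_id"), exc)
    return events
```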
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/events/event_store.py (1)
54-60: Span attribute likely wrong after flattening:EventAttributes.EXECUTION_IDusesaggregate_id.
Given the move to top-levelexecution_id, consider preferringevent.execution_idwhen present (fallback toaggregate_id).Proposed fix
add_span_attributes( **{ str(EventAttributes.EVENT_TYPE): str(event.event_type), str(EventAttributes.EVENT_ID): event.event_id, - str(EventAttributes.EXECUTION_ID): event.aggregate_id or "", + str(EventAttributes.EXECUTION_ID): getattr(event, "execution_id", None) or event.aggregate_id or "", } )
🤖 Fix all issues with AI agents
In @backend/app/domain/events/typed.py:
- Around line 264-318: The timestamp fields in notification event models use str
but must be standardized to datetime: update the type annotations for sent_at in
NotificationSentEvent, delivered_at in NotificationDeliveredEvent, read_at in
NotificationReadEvent, and clicked_at in NotificationClickedEvent from str to
datetime (import datetime from the datetime module) and adjust any related
(de)serialization or pydantic validators elsewhere if present to accept/produce
datetime objects instead of ISO strings so these event classes align with the
standardized timestamp convention.
In @backend/app/events/event_store.py:
- Around line 80-84: The EventDocument construction in store_batch is raising a
double-kwarg TypeError because e.model_dump(exclude_none=True) includes
stored_at and ttl_expires_at which you also pass explicitly; update the dump to
exclude those keys (e.g., exclude stored_at and ttl_expires_at from e.model_dump
or remove them from the dict before unpacking) so you only pass stored_at=now
and ttl_expires_at=ttl when creating EventDocument instances.
In @backend/app/infrastructure/kafka/events/user.py:
- Around line 40-47: UserLoginEvent is defined but not mapped or used
(EventType.USER_LOGIN appears in event_store.py) which will break
deserialization; either remove the unused UserLoginEvent class and the
EventType.USER_LOGIN enum reference in event_store.py (and ensure only
UserLoggedInEvent remains), or add the schema registry mapping by adding
EventType.USER_LOGIN: UserLoginEvent to the mappings dictionary in
infrastructure/kafka/mappings.py so the deserializer can instantiate it; also
confirm the PII fields ip_address and user_agent retention settings (7 days in
Kafka, 90 days in event store) meet security/privacy requirements and update
retention or field handling accordingly.
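For the mapping option above, a minimal sketch; the import paths are assumptions based on the review text, not verified against the repo:

```python
# Sketch only: the deserializer can rebuild an event iff its EventType is in the map.
from app.domain.enums.events import EventType                    # assumed path
from app.infrastructure.kafka.events.user import UserLoginEvent  # per the review text

event_map: dict[EventType, type] = {
    # ... existing entries ...
    EventType.USER_LOGIN: UserLoginEvent,
}
```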
In @unlike.md:
- Around line 1-90: The markdown in unlike.md violates markdownlint rules
(MD007/MD037); fix by converting the scratch notes into proper Markdown
structure (use top-level headings, consistent list indentation, blank lines
around fenced/code blocks and around headings, and correct emphasis spacing) or
add unlike.md to the markdownlint ignore list in CI config; reference the
offending rules MD007 and MD037 when updating lint config or commit message so
CI stops failing.
🧹 Nitpick comments (4)
backend/app/events/event_store.py (1)
117-117: Filter document-specific fields before deserializing events.
`doc.model_dump()` passes EventDocument's internal fields (`stored_at`, `ttl_expires_at`, `execution_id`, `_id`) to `deserialize_json`, which passes them to `model_validate()`. While event models silently ignore these extra fields (no `extra="forbid"`), it's cleaner to exclude them explicitly:

```python
event = self.schema_registry.deserialize_json(
    doc.model_dump(exclude={"stored_at", "ttl_expires_at", "execution_id", "_id"})
)
```

This prevents unnecessary fields from being processed and makes the intent clearer.
Applies to lines 117, 143, 160, 182, 203, 216, 240.
backend/app/infrastructure/kafka/events/notification.py (1)
66-70: Align `changed_fields` default with domain typed event (avoid required field drift).
In `backend/app/domain/events/typed.py`, `NotificationPreferencesUpdatedEvent.changed_fields` uses `Field(default_factory=list)`, making it optional with an empty default. The Kafka version here requires the field. Adding a matching default improves consistency and simplifies instantiation in tests and producers.
Proposed change

```diff
 class NotificationPreferencesUpdatedEvent(BaseEvent):
     event_type: Literal[EventType.NOTIFICATION_PREFERENCES_UPDATED] = EventType.NOTIFICATION_PREFERENCES_UPDATED
     topic: ClassVar[KafkaTopic] = KafkaTopic.NOTIFICATION_EVENTS
     user_id: str
-    changed_fields: list[str]
+    changed_fields: list[str] = []
```

backend/tests/unit/domain/events/test_event_schema_coverage.py (2)
21-51: Suggest simplifying union extraction logic.
The function `get_domain_event_classes()` has complex nested logic to extract union types with multiple fallbacks (lines 26-34). While defensive, this complexity may indicate fragility in how the DomainEvent union is defined or accessed.
Consider:
- Documenting why the nested extraction is necessary (Python version differences?)
- Testing this logic across Python 3.10+ to ensure it works consistently
- Potentially simplifying by relying solely on the `DomainBaseEvent.__subclasses__()` fallback if the union extraction proves unreliable

📝 Consider adding documentation

```diff
 def get_domain_event_classes() -> dict[EventType, type]:
-    """Extract EventType -> class mapping from DomainEvent union."""
+    """Extract EventType -> class mapping from DomainEvent union.
+
+    Uses multiple extraction strategies due to Python version differences:
+    1. Try to extract from DomainEvent union args (Python 3.10+ syntax)
+    2. Fall back to iterating all DomainBaseEvent subclasses
+
+    This ensures compatibility across Python versions and handles edge cases.
+    """
     mapping: dict[EventType, type] = {}
```
171-175: Update hardcoded EventType count range if needed.
The test expects 50-100 EventTypes. Based on the PR adding 53+ event schemas, this range appears reasonable now, but may need adjustment as the system grows.
💡 Consider making the range configurable
```diff
+# Expected EventType count range (update as system grows)
+MIN_EVENT_TYPES = 50
+MAX_EVENT_TYPES = 100
+
 def test_event_type_count_sanity(self) -> None:
     """Sanity check: we should have a reasonable number of event types."""
     count = len(EventType)
-    assert count >= 50, f"Expected at least 50 EventTypes, got {count}"
-    assert count <= 100, f"Expected at most 100 EventTypes, got {count} - is this intentional?"
+    assert count >= MIN_EVENT_TYPES, f"Expected at least {MIN_EVENT_TYPES} EventTypes, got {count}"
+    assert count <= MAX_EVENT_TYPES, f"Expected at most {MAX_EVENT_TYPES} EventTypes, got {count} - is this intentional?"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/app/db/docs/event.py
- backend/app/db/repositories/admin/admin_events_repository.py
- backend/app/domain/events/__init__.py
- backend/app/domain/events/typed.py
- backend/app/events/event_store.py
- backend/app/infrastructure/kafka/events/__init__.py
- backend/app/infrastructure/kafka/events/notification.py
- backend/app/infrastructure/kafka/events/user.py
- backend/tests/unit/domain/events/test_event_schema_coverage.py
- unlike.md
🧰 Additional context used
🧬 Code graph analysis (7)
backend/tests/unit/domain/events/test_event_schema_coverage.py (2)
backend/app/domain/events/typed.py (1)
BaseEvent(29-40)backend/app/events/schema/schema_registry.py (1)
_get_event_type_to_class_mapping(41-50)
backend/app/infrastructure/kafka/events/user.py (3)
backend/app/domain/events/typed.py (2)
UserLoginEvent(225-230)BaseEvent(29-40)backend/app/domain/enums/kafka.py (1)
KafkaTopic(7-53)backend/app/domain/enums/auth.py (1)
LoginMethod(4-10)
backend/app/infrastructure/kafka/events/notification.py (2)
backend/app/domain/events/typed.py (2)
NotificationPreferencesUpdatedEvent(315-318)BaseEvent(29-40)backend/app/domain/enums/kafka.py (1)
KafkaTopic(7-53)
backend/app/db/docs/event.py (2)
backend/app/domain/events/typed.py (1)
EventMetadata(15-26)backend/app/services/coordinator/queue_manager.py (1)
execution_id(30-31)
backend/app/events/event_store.py (3)
backend/app/db/docs/event.py (1)
EventDocument(13-71)backend/app/events/schema/schema_registry.py (1)
deserialize_json(178-193)backend/app/domain/events/event_models.py (1)
SortDirection(20-22)
backend/app/infrastructure/kafka/events/__init__.py (3)
backend/app/domain/events/typed.py (2)
NotificationPreferencesUpdatedEvent(315-318)UserLoginEvent(225-230)backend/app/infrastructure/kafka/events/notification.py (1)
NotificationPreferencesUpdatedEvent(66-70)backend/app/infrastructure/kafka/events/user.py (1)
UserLoginEvent(40-46)
backend/app/domain/events/typed.py (4)
backend/app/domain/enums/auth.py (1)
LoginMethod(4-10)backend/app/domain/enums/common.py (1)
Environment(27-33)backend/app/domain/enums/storage.py (1)
StorageType(16-22)backend/app/domain/execution/models.py (1)
ResourceUsageDomain(13-19)
🪛 LanguageTool
unlike.md
[style] ~1-~1: Consider a different adjective to strengthen your wording.
Context: ... on everything I've observed, here's my deep analysis of codebase issues: Archite...
(DEEP_PROFOUND)
🪛 markdownlint-cli2 (0.18.1)
unlike.md
7-7: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
8-8: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
13-13: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
14-14: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
15-15: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
19-19: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
20-20: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
28-28: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
29-29: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
30-30: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
33-33: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
34-34: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
39-39: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
41-41: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
42-42: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
47-47: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
48-48: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
49-49: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
55-55: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
56-56: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
59-59: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
60-60: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
65-65: Spaces inside emphasis markers
(MD037, no-space-in-emphasis)
67-67: Spaces inside emphasis markers
(MD037, no-space-in-emphasis)
68-68: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
69-69: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
74-74: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
75-75: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
79-79: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
80-80: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
85-85: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
86-86: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
89-89: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
90-90: Unordered list indentation
Expected: 0; Actual: 2
(MD007, ul-indent)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (32)
backend/app/infrastructure/kafka/events/__init__.py (1)
14-22: Exports look consistent (imports + `__all__`).
Adding `UserLoginEvent` and `NotificationPreferencesUpdatedEvent` to both the import surface and `__all__` is coherent.
Also applies to: 59-67, 92-107
backend/app/events/event_store.py (2)
155-155: `$or` query for execution retrieval: good for backward-compat, but confirm index coverage.
This keeps older `aggregate_id == execution_id` data discoverable while supporting the new top-level `execution_id`. Just make sure Mongo has indexes that keep this query from devolving into scans under load.
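One way to keep both branches index-backed, sketched with pymongo-style calls (connection string, index options, and collection names are illustrative):

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
events = client["app"]["events"]

# Both $or branches should be index-backed; sparse keeps documents without the
# new top-level execution_id field out of that index.
events.create_index([("execution_id", ASCENDING)], sparse=True)
events.create_index([("aggregate_id", ASCENDING)])

execution_id = "exec-123"
cursor = events.find(
    {"$or": [{"execution_id": execution_id}, {"aggregate_id": execution_id}]}
)
```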
49-52: No fix needed: BaseEvent does not include `stored_at` or `ttl_expires_at` fields. The code is safe as-is.
`event.model_dump(exclude_none=True)` only returns BaseEvent's own fields (event_id, event_type, event_version, timestamp, aggregate_id, metadata), so explicitly passing stored_at and ttl_expires_at as kwargs does not create duplicate arguments.
Likely an incorrect or invalid review comment.
backend/app/db/repositories/admin/admin_events_repository.py (10)
60-60: LGTM! Correctly uses domain_event_adapter for event deserialization.The use of
domain_event_adapter.validate_python(d, from_attributes=True)properly deserializes EventDocument instances into typed DomainEvent variants, leveraging Pydantic's discriminated union based onevent_type.
69-69: LGTM! Consistent domain_event_adapter usage.The adapter correctly deserializes the EventDocument into a typed DomainEvent for event detail retrieval.
178-185: LGTM! Archive flow correctly uses DomainEvent and model_dump.The signature change from
EventtoDomainEventand the use ofevent.model_dump()align with the Pydantic migration. The archive document creation correctly unpacks all event fields and adds deletion metadata.
188-190: LGTM! Replay session creation uses Pydantic serialization.The use of
session.model_dump()correctly serializes the ReplaySessionState for document insertion.
196-196: LGTM! Replay session retrieval uses Pydantic deserialization.The use of
ReplaySessionState.model_validate(doc, from_attributes=True)correctly deserializes the document into a domain model.
199-199: LGTM! Update logic uses Pydantic serialization with exclude_none.The use of
updates.model_dump(exclude_none=True)correctly serializes only non-None fields for partial updates.
271-271: LGTM! Session conversion uses Pydantic deserialization.The use of
ReplaySessionState.model_validate(doc, from_attributes=True)correctly converts the document to the domain model.
251-251: No action required: execution_id is properly defined as a top-level field on all execution-related events.
Verification confirms all Execution*Event classes in `typed.py` (ExecutionRequestedEvent, ExecutionAcceptedEvent, ExecutionQueuedEvent, ExecutionStartedEvent, ExecutionRunningEvent, ExecutionCompletedEvent, ExecutionFailedEvent, ExecutionTimeoutEvent, ExecutionCancelledEvent, and others) define `execution_id` as a top-level field. The code change at line 251 correctly accesses `event.execution_id`, and no `payload.execution_id` references remain in the codebase.
167-167: EventExportRow.timestamp expects datetime objects: verified correct.
EventExportRow is defined with `timestamp: datetime` in event_models.py. EventDocument also stores timestamp as `datetime`, so passing `doc.timestamp` directly is appropriate. CSV serialization uses Pydantic's `model_dump(mode="json")`, which automatically converts datetime to ISO string format. No changes needed.
259-259: ExecutionDocument.status field is properly defined as ExecutionStatus enum.
The `status` field exists in ExecutionDocument (backend/app/db/docs/execution.py, line 31) and is typed as `ExecutionStatus`, a StringEnum with defined values. The field defaults to `ExecutionStatus.QUEUED`, so the conditional check at line 259 (`if exec_doc.status else None`) is redundant since the field always has a value.
backend/tests/unit/domain/events/test_event_schema_coverage.py (2)
89-105: LGTM! Adapter coverage test correctly validates discriminator recognition.
The test properly distinguishes between validation errors (acceptable, means the type IS recognized) and discriminator errors (failure, means the type is NOT in the union). The error string checks (`"no match"`, `"unable to extract"`) are appropriate for Pydantic's discriminated union error messages.
177-186: LGTM! Naming convention test ensures consistency.The test correctly enforces lowercase snake_case for EventType values and prohibits spaces/hyphens, which aligns with Python naming conventions and ensures consistency across the codebase.
backend/app/db/docs/event.py (5)
10-10: LGTM! EventMetadata import aligns with typed event system.Importing EventMetadata from
app.domain.events.typedconsolidates metadata definitions in the new typed event module, ensuring consistency across domain and storage layers.
14-32: LGTM! EventDocument schema correctly flattens event data.
The removal of the `payload` field and use of `extra="allow"` enables event-specific fields to be stored at the top level, simplifying queries. The addition of `event_type`, `event_version`, and `execution_id` as explicit fields provides better type safety and indexing support.
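A toy illustration of the flattened shape using plain Pydantic rather than the project's Beanie document (field names are assumptions drawn from the review text):

```python
from datetime import datetime, timezone
from pydantic import BaseModel, ConfigDict


class FlatEventDoc(BaseModel):
    # extra="allow" keeps event-specific fields (exit_code, pod_name, ...) at the
    # top level instead of nesting them under a "payload" wrapper.
    model_config = ConfigDict(extra="allow")

    event_id: str
    event_type: str
    event_version: str = "1.0"
    timestamp: datetime
    execution_id: str | None = None


doc = FlatEventDoc(
    event_id="evt-1",
    event_type="execution_completed",
    timestamp=datetime.now(timezone.utc),
    execution_id="exec-123",
    exit_code=0,  # extra field, stored top-level and directly queryable
)
assert doc.model_dump()["exit_code"] == 0
```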
44-46: LGTM! Sparse indexes on execution_id and pod_name are appropriate.Since these fields only exist on specific event types (execution and pod events), sparse indexes correctly avoid indexing null values, optimizing storage and query performance.
60-70: LGTM! Text search index updated to include execution_id.Adding
execution_idto the text search index enables full-text search on execution identifiers, which is useful for admin/debugging scenarios.
75-95: The archiving process handles migration correctly via `extra="allow"`.
`EventArchiveDocument` uses `extra="allow"` (same as `EventDocument`), which provides backward compatibility: any additional fields on archived events are stored and retrieved without error. The archiving mechanism (lines 334-337 in `event_repository.py`) correctly copies all fields from `EventDocument` to `EventArchiveDocument`, so no migration is needed.
However, there is an unrelated architectural concern: the query builders in `query_builders.py` still reference `payload.duration_seconds` (lines 117, 120), but the current event storage uses a flattened structure (via `extra="allow"`) rather than a `payload` wrapper. This query pattern mismatch should be addressed separately to ensure analytics and dashboard queries work correctly.
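A hedged sketch of what the flattened filter might look like; the helper names are made up for illustration, and the legacy branch is only needed if pre-migration documents must still match:

```python
def duration_filter_flat(min_seconds: float) -> dict:
    # With events stored flat, the field is queried at the top level.
    return {"duration_seconds": {"$gte": min_seconds}}


def duration_filter_legacy(min_seconds: float) -> dict:
    # Old payload-wrapped shape, kept only for pre-migration documents.
    return {"payload.duration_seconds": {"$gte": min_seconds}}


# A transitional query could OR the two shapes together:
query = {"$or": [duration_filter_flat(1.5), duration_filter_legacy(1.5)]}
```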
backend/app/domain/events/typed.py (11)
15-26: LGTM! EventMetadata provides comprehensive event context.
The metadata model includes all necessary fields for event tracking (service info, correlation, user context, environment) with sensible defaults for correlation_id and environment.
29-40: LGTM! BaseEvent establishes consistent event structure.The base model defines common fields across all events with appropriate defaults (UUIDs for IDs, UTC timestamps, 30-day TTL). The use of
from_attributes=Trueenables compatibility with ORM documents.
46-127: LGTM! Execution event models are comprehensive and well-typed.The execution lifecycle events (Requested, Accepted, Queued, Started, Running, Completed, Failed, Timeout, Cancelled) cover all necessary states with appropriate fields (execution_id, queue metrics, pod info, exit codes, resource usage, stdout/stderr).
133-187: LGTM! Pod event models cover the full lifecycle.The pod events (Created, Scheduled, Running, Succeeded, Failed, Terminated, Deleted) properly model Kubernetes pod states with relevant fields (pod_name, node_name, exit codes, statuses, reasons).
193-205: LGTM! Result event models handle storage outcomes.ResultStoredEvent and ResultFailedEvent appropriately model result persistence with storage metadata (type, path, size) and error information.
211-258: LGTM! User event models cover authentication and lifecycle.The user events (Settings, Registered, Login, LoggedIn, LoggedOut, Updated, Deleted) appropriately model user actions with relevant context (login methods, changed fields, reasons).
324-418: LGTM! Saga and command event models are well-structured.The saga events (Started, Completed, Failed, Cancelled, Compensating, Compensated) and command events (CreatePod, DeletePod, AllocateResources, ReleaseResources) appropriately model orchestration workflows with execution context, step tracking, and resource specifications.
424-444: LGTM! Script event models capture sharing and lifecycle.The script events (Saved, Deleted, Shared) appropriately model script management with user context, metadata, and permissions.
450-514: LGTM! Security, resource, and system event models are comprehensive.The security events (SecurityViolation, RateLimitExceeded, AuthFailed), resource events (ResourceLimitExceeded, QuotaExceeded), and system events (SystemError, ServiceUnhealthy, ServiceRecovered) provide adequate monitoring and alerting coverage with relevant context.
520-536: LGTM! ArchivedEvent wrapper adds deletion metadata.The ArchivedEvent model appropriately wraps event data with archive-specific fields (deleted_at, deleted_by, deletion_reason) for audit trails.
541-610: LGTM! DomainEvent union enables type-safe event dispatch.
The discriminated union on `event_type` with the TypeAdapter provides runtime validation and polymorphic deserialization. This is a well-designed type system for event handling.
Verify that all EventType enum values have corresponding event classes in the union.
The test file `backend/tests/unit/domain/events/test_event_schema_coverage.py` already validates this coverage, so the verification is built into the test suite.
backend/app/domain/events/__init__.py (1)
1-178: LGTM! Public API reorganization consolidates typed events with correct imports across the codebase.
The restructuring moves all typed events to `app.domain.events.typed` and updates `__all__` with clear categorization. Verification confirms all 6 files importing from `app.domain.events` use valid exports, with no stale imports of outdated paths. This improves discoverability and aligns with the migration from the legacy Event to DomainEvent.
2 issues found across 3 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="mkdocs.yml">
<violation number="1" location="mkdocs.yml:121">
P2: The new navigation item points to architecture/event-system-design.md, but that document does not exist in the repo, so MkDocs will fail link validation.</violation>
</file>
<file name="docs/architecture/event-system-design.md">
<violation number="1" location="docs/architecture/event-system-design.md:78">
P3: The `DomainEvent` example is missing the comma that separates the union expression from `Discriminator("event_type")`, so the documented snippet is invalid Python and can’t be copied as-is.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 0
🧹 Nitpick comments (1)
docs/architecture/event-system-design.md (1)
36-38: Reduce repetition in sentence starters.Three consecutive sentences beginning with "They" reduce readability. Consider rewriting one or more to vary the structure.
Domain events are internal to the bounded context. They carry whatever information the domain needs, including storage-related fields like `stored_at` and `ttl_expires_at`. These events get stored in MongoDB and replayed during event sourcing operations. Integration events cross bounded context boundaries. They flow through Kafka to other services or workers. These must be serializable to Avro, which means they can't contain arbitrary Python objects. Finally, they carry routing information like the `topic` ClassVar.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- docs/architecture/event-system-design.md
- mkdocs.yml
✅ Files skipped from review due to trivial changes (1)
- mkdocs.yml
🧰 Additional context used
🪛 LanguageTool
docs/architecture/event-system-design.md
[style] ~38-~38: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...can't contain arbitrary Python objects. They carry routing information like the `top...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (2)
docs/architecture/event-system-design.md (2)
1-165: Comprehensive and well-structured architecture documentation.
The documentation clearly explains the three-layer event system design, articulates the rationale for separating domain and Kafka events through DDD concepts, and provides concrete examples of discriminated unions and schema coverage. The mermaid diagrams aid understanding, and the workflow guidance for adding new event types will be valuable for contributors. The table of key files and related docs provides good navigation.
112-112: The documentation build system is already properly configured. The `--8<--` syntax at lines 112 and 132 is handled by `pymdownx.snippets` (which is configured in mkdocs.yml), not by the include-markdown plugin. The includes will render correctly as configured.
Likely an incorrect or invalid review comment.
+ timestamp fixes (all are datetime now) + openapi spec generation fix
Actionable comments posted: 0
🧹 Nitpick comments (5)
backend/app/services/event_bus.py (1)
23-23: Consider whether `from_attributes=True` is needed.
The `from_attributes` configuration is typically used when constructing models from ORM objects or other attribute-bearing instances. Since `EventBusEvent` appears to be constructed only via explicit keyword arguments (line 159), this configuration may be unnecessary.
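For context, a small sketch of the behavior `from_attributes` enables, validating from an attribute-bearing object rather than keyword arguments (the classes are illustrative):

```python
from dataclasses import dataclass
from pydantic import BaseModel, ConfigDict


@dataclass
class LegacyRecord:  # stand-in for an ORM/document object
    name: str
    attempts: int


class RecordModel(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    name: str
    attempts: int


# from_attributes lets model_validate read attributes off an object...
m = RecordModel.model_validate(LegacyRecord(name="job", attempts=3))
# ...whereas plain keyword construction works with or without that setting.
m2 = RecordModel(name="job", attempts=3)
assert m == m2
```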
backend/app/domain/events/typed.py (2)
29-41: Consider clarifying the semantic difference between `timestamp` and `stored_at`.
Both fields default to `datetime.now(timezone.utc)`. If `timestamp` represents when the event occurred and `stored_at` represents when it was persisted, the current defaults may obscure this distinction when events are created but stored later.
225-239: Clarify the distinction between `UserLoginEvent` and `UserLoggedInEvent`.
These two events have identical fields but different event types. If `USER_LOGIN` represents the login attempt and `USER_LOGGED_IN` represents successful completion, consider adding a docstring to clarify this semantic difference.
backend/app/infrastructure/kafka/events/notification.py (1)
67-71: Consider adding a default for `changed_fields` to match the domain model.
The domain's `NotificationPreferencesUpdatedEvent` (in `typed.py`) defines `changed_fields: list[str] = Field(default_factory=list)`, but this Kafka event class requires the field. This inconsistency could cause validation errors when constructing events. Consider whether the field should be required (remove default from domain) or optional (add default here).
♻️ Suggested fix to add default

```diff
 class NotificationPreferencesUpdatedEvent(BaseEvent):
     event_type: Literal[EventType.NOTIFICATION_PREFERENCES_UPDATED] = EventType.NOTIFICATION_PREFERENCES_UPDATED
     topic: ClassVar[KafkaTopic] = KafkaTopic.NOTIFICATION_EVENTS
     user_id: str
-    changed_fields: list[str]
+    changed_fields: list[str] = []
```

deploy.sh (1)
341-359: Make OpenAPI output writing more robust (create parent dir; avoid `python -c` quoting fragility).
Right now `> "../$OUTPUT"` will fail if `docs/reference/` (or a custom output dir) doesn't exist, and the multiline `python -c "..."` is easy to break with quoting changes.
Proposed diff

```diff
 cmd_openapi() {
     print_header "Generating OpenAPI Spec"
     local OUTPUT="${1:-docs/reference/openapi.json}"
     cd backend
     print_info "Extracting schema from FastAPI app..."
-    uv run python -c "
+    mkdir -p "../$(dirname "$OUTPUT")"
+
+    uv run python - <<'PY'
 import json
 from app.main import create_app
 app = create_app()
 schema = app.openapi()
 print(json.dumps(schema, indent=2))
-" > "../$OUTPUT"
+PY
+    > "../$OUTPUT"
     cd ..
     print_success "OpenAPI spec written to $OUTPUT"
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/app/api/routes/events.py
- backend/app/domain/events/event_models.py
- backend/app/domain/events/typed.py
- backend/app/infrastructure/kafka/events/notification.py
- backend/app/infrastructure/kafka/mappings.py
- backend/app/services/admin/admin_events_service.py
- backend/app/services/event_bus.py
- backend/app/services/event_service.py
- backend/app/services/saga/execution_saga.py
- deploy.sh
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/app/api/routes/events.py
- backend/app/services/admin/admin_events_service.py
🧰 Additional context used
🧬 Code graph analysis (4)
backend/app/infrastructure/kafka/mappings.py (2)
backend/app/domain/events/typed.py (3)
ExecutionAcceptedEvent(63-68)NotificationPreferencesUpdatedEvent(315-318)UserLoginEvent(225-230)backend/app/infrastructure/kafka/events/notification.py (1)
NotificationPreferencesUpdatedEvent(67-71)
backend/app/services/saga/execution_saga.py (2)
backend/app/services/coordinator/queue_manager.py (1)
user_id(34-35)backend/tests/unit/services/idempotency/test_middleware.py (1)
event(30-34)
backend/app/infrastructure/kafka/events/notification.py (2)
backend/app/domain/events/typed.py (3)
NotificationClickedEvent(307-312)BaseEvent(29-40)NotificationPreferencesUpdatedEvent(315-318)backend/app/domain/enums/kafka.py (1)
KafkaTopic(7-53)
backend/app/domain/events/typed.py (1)
backend/app/domain/enums/auth.py (1)
LoginMethod(4-10)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: E2E Tests
- GitHub Check: Integration Tests
🔇 Additional comments (21)
backend/app/services/event_bus.py (3)
12-12: LGTM! Clean migration to Pydantic.The conversion of
EventBusEventfrom a dataclass to a PydanticBaseModelis well-executed. Thetimestampfield correctly usesdatetimeinstead ofstr, which provides better type safety and eliminates manual ISO string handling.Also applies to: 20-29
139-139: Excellent use of Pydantic serialization and timezone-aware timestamps.The serialization using
model_dump_json()(line 139) follows Pydantic v2 best practices, and creating timestamps withdatetime.now(timezone.utc)(line 162) ensures consistent timezone-aware datetime handling.Also applies to: 162-162
290-292: Deserialization correctly uses Pydantic validation.The use of
model_validate()properly handles the conversion of the timestamp ISO string back to adatetimeobject. The round-trip serialization (datetime → ISO string → datetime) via Pydantic is handled transparently and correctly.backend/app/services/event_service.py (4)
7-15: LGTM!The updated imports correctly reference the new
DomainEventandArchivedEventtypes from the migrated domain events module, aligning with the Pydantic-based event system.
79-82: LGTM!The direct access to
e.metadata.user_idis safe sincemetadatais a required field onBaseEvent. The null check on line 80 correctly handles the case whereuser_idisNone.
153-161: Verify filtering behavior for events with null user_id.
Events where `e.metadata.user_id` is `None` will be filtered out by this comparison. If there are system-generated events without a user_id that should be visible to the requesting user, they would be excluded.
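A small sketch of the distinction being raised (the event shape is a stand-in, not the project's DomainEvent):

```python
from pydantic import BaseModel


class Meta(BaseModel):
    user_id: str | None = None


class Evt(BaseModel):
    name: str
    metadata: Meta


events = [
    Evt(name="user-action", metadata=Meta(user_id="u-1")),
    Evt(name="system-cleanup", metadata=Meta(user_id=None)),
]

# Strict ownership filter: drops system events whose user_id is None.
owned_only = [e for e in events if e.metadata.user_id == "u-1"]

# Variant that also surfaces system-generated events to the requesting user.
owned_or_system = [e for e in events if e.metadata.user_id in ("u-1", None)]
assert len(owned_only) == 1 and len(owned_or_system) == 2
```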
179-191: LGTM!The return type annotations have been correctly updated to use
DomainEventandArchivedEventfrom the new typed event system, maintaining consistency with the repository layer changes.Also applies to: 217-227, 232-242
backend/app/domain/events/event_models.py (4)
53-66: LGTM!The migration of
EventFilterfrom dataclass to PydanticBaseModelwithConfigDict(from_attributes=True)enables ORM-style instantiation and provides built-in validation. The field definitions are clean and appropriately typed.
83-91: LGTM!The result types (
EventListResult,EventBrowseResult,EventDetail,EventReplayInfo,ExecutionEventsResult) now correctly useDomainEventfor their event fields, providing proper type discrimination and validation through the Pydantic-based union type.Also applies to: 94-101, 104-110, 153-161, 164-181
172-181: LGTM!The
get_filtered_eventsmethod correctly accessese.metadata.service_namewithout null checks since bothmetadataandservice_nameare required fields in the new domain model.
184-197: LGTM!The
EventExportRowmodel has been properly migrated to PydanticBaseModelwithConfigDict(from_attributes=True). Thetimestampfield is now correctly typed asdatetime, aligning with the PR's standardization of timestamp handling.backend/app/domain/events/typed.py (4)
15-27: LGTM!The
EventMetadatamodel is well-designed with appropriate defaults. Thecorrelation_idauto-generates a UUID, and optional fields likeuser_id,ip_address, anduser_agentcorrectly default toNone.
46-128: LGTM!The execution lifecycle events are comprehensive and well-structured, covering the full execution flow from request through completion/failure/timeout/cancellation. Each event captures the appropriate context for its lifecycle stage.
520-537: LGTM!The
ArchivedEventmodel correctly wraps deletion metadata while preserving the original event's type information viaevent_type: EventType. Keeping it separate from theDomainEventunion is appropriate since archived events have a different lifecycle.
541-610: LGTM!The
DomainEventdiscriminated union withDiscriminator("event_type")provides excellent type safety for event deserialization. Thedomain_event_adapterTypeAdapter enables polymorphic validation of raw dictionaries to the correct typed event class.backend/app/services/saga/execution_saga.py (2)
164-192: LGTM!The
CreatePodCommandEventconstruction correctly propagates execution context from the triggering event. Theuser_idfallback to"system"is appropriate for saga orchestration events.
179-184: The code change is safe. Kafka'sBaseEvent(line 21 ofbackend/app/infrastructure/kafka/events/base.py) definesmetadata: AvroEventMetadataas a required field with no default value, guaranteeing it is neverNone. The guard checkif event.metadatain the original code was unnecessary. The new patternevent.metadata.user_id or "system"correctly assumesmetadataexists and falls back whenuser_idisNone(as defined inAvroEventMetadataat line 15 ofbackend/app/infrastructure/kafka/events/metadata.py).backend/app/infrastructure/kafka/mappings.py (2)
7-17: LGTM!The new imports for
ExecutionAcceptedEvent,NotificationPreferencesUpdatedEvent, andUserLoginEventare correctly added from their respective Kafka event modules.Also applies to: 18-26, 63-71
77-111: LGTM!The
event_mapis correctly extended with mappings for the new event types (EXECUTION_ACCEPTED,USER_LOGIN,NOTIFICATION_PREFERENCES_UPDATED). These align with the event classes defined in the Kafka events modules and the domainEventTypeenum.backend/app/infrastructure/kafka/events/notification.py (1)
1-1: LGTM!The timestamp fields (
sent_at,delivered_at,read_at,clicked_at) have been correctly migrated fromstrtodatetime, aligning with the PR's standardization of timestamp handling across the codebase.Also applies to: 22-28, 31-37, 50-55, 58-64
deploy.sh (1)
349-355: Factory pattern for OpenAPI is safe—no blocking initialization increate_app().The switch from a global
apptoapp = create_app()is a good change. Dishka uses lazy initialization: the database, Redis, and Kafka providers only create connections when explicitly requested, not duringcreate_app(). The lifespan handlers (which do async initialization) are not invoked when generating the schema—only the synchronous app setup runs. The only potential failure point is ifSettings()cannot loadSECRET_KEYfrom environment or.envfile, but that's a deployment configuration issue rather than a code problem. No refactoring needed.
Actionable comments posted: 0
🧹 Nitpick comments (8)
backend/tests/unit/domain/events/test_event_schema_coverage.py (4)
17-17: Consider exposing a public function for the Kafka event mapping.Importing the private function
_get_event_type_to_class_mappingcouples this test to internal implementation details. If the schema registry is refactored, this test may break unexpectedly. Consider exposing a public API for this mapping or documenting that this private function is intentionally used by tests.
26-43: Redundant check and fragile union introspection logic.
Two issues with this helper:
Lines 32-34: The expression `get_args(inner) or [inner]` can never be falsy (it returns `[inner]` at minimum), so lines 33-34 are unreachable dead code.
Lines 41-43: `__subclasses__()` only retrieves direct subclasses. If your class hierarchy has intermediate base classes (e.g., `DomainBaseEvent` → `CategoryEvent` → `ConcreteEvent`), those concrete events won't be found.
Proposed fix

```diff
-    # Python 3.10+ union syntax
-    event_classes = get_args(inner) or [inner]
-    if not event_classes:
-        event_classes = list(union_types[:-1])  # Exclude Discriminator
+    # Python 3.10+ union syntax
+    event_classes = get_args(inner) or list(union_types[:-1]) or [inner]
```

For the subclass traversal, consider a recursive approach:

```python
def _get_all_subclasses(cls: type) -> list[type]:
    """Recursively get all subclasses."""
    result = []
    for subclass in cls.__subclasses__():
        result.append(subclass)
        result.extend(_get_all_subclasses(subclass))
    return result
```
98-104: String matching on exception messages is fragile.
Parsing error message strings to detect discriminator failures could break if Pydantic changes its error wording in future versions. Consider checking the exception type or structured error details instead.
More robust alternative

```python
from pydantic import ValidationError

for et in EventType:
    try:
        domain_event_adapter.validate_python({"event_type": et})
    except ValidationError as e:
        # Check if the error is a discriminator mismatch vs missing fields
        for error in e.errors():
            if error.get("type") == "union_tag_invalid":
                errors.append(f"  - {et.value}: not in DomainEvent union")
                break
    except Exception as e:
        errors.append(f"  - {et.value}: unexpected error: {e}")
```
171-186: Minor: Magic number and incomplete snake_case validation.
Line 174: The threshold `50` is a magic number that may need updates. Consider deriving it from a known baseline or adding a comment explaining why 50.
Lines 181-184: The snake_case check only validates lowercase and absence of spaces/hyphens, but doesn't catch issues like double underscores (`foo__bar`) or leading/trailing underscores. A regex like `^[a-z][a-z0-9]*(_[a-z0-9]+)*$` would be more thorough.
Optional: stricter snake_case validation

```python
import re

SNAKE_CASE_PATTERN = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")


def test_all_event_types_are_lowercase_snake_case(self) -> None:
    violations: list[str] = []
    for et in EventType:
        if not SNAKE_CASE_PATTERN.match(et.value):
            violations.append(f"  - {et.name}: '{et.value}' is not valid snake_case")
    assert not violations, "EventType naming violations:\n" + "\n".join(violations)
```
88-88: Same priming pattern - ensure consistency.This line mirrors the subscription priming added to
open_subscription(line 77). The same considerations apply:
- Consider adding a comment explaining the priming step for consistency
- Verify the 1-second timeout is appropriate
If this pattern is repeated elsewhere or grows more complex, consider extracting a shared helper method:
```python
async def _prime_subscription(self, pubsub: redis.client.PubSub, channel: str) -> None:
    """Consume the initial subscription confirmation message."""
    await pubsub.subscribe(channel)
    await pubsub.get_message(timeout=1.0)
```

However, given the simplicity of the current implementation, this refactoring can be deferred.
77-77: Add documentation for the subscription priming step and align the timeout value with message polling.The
get_messagecall on line 77 (and line 88) consumes the Redis subscription confirmation message to establish a clean subscription state. This is valid but needs clarification:
- The call uses the default
ignore_subscribe_messages=False, which intentionally returns subscription messages (unlike line 25's message polling which passesignore_subscribe_messages=Trueto skip them).- The 1.0-second timeout differs from the 0.5-second timeout used during normal message polling (line 25). While 1.0 seconds is reasonable for a one-time setup operation, the inconsistency and lack of justification makes it unclear.
Consider adding a brief explanatory comment:
Suggested change
```diff
 await pubsub.subscribe(channel)
+# Consume subscription confirmation message to establish a clean subscription state
 await pubsub.get_message(timeout=1.0)
```

Optionally, extract the timeout to a class constant for maintainability and to document the intentional difference from message polling operations.
backend/app/infrastructure/kafka/events/notification.py (1)
69-73: New event class looks good, consider requiring changed_fields.
The `NotificationPreferencesUpdatedEvent` structure is consistent with other notification events and matches the domain event definition. The use of `Field(default_factory=list)` is the correct Pydantic pattern for mutable defaults.
Consider whether `changed_fields` should be required rather than defaulting to an empty list. If notification preferences were updated, there should logically be at least one changed field. An empty list might hide bugs in producer code that forgets to populate this field.
💡 Optional: Make changed_fields required
If changed_fields should always be populated when preferences are updated:

```diff
-    changed_fields: list[str] = Field(default_factory=list)
+    changed_fields: list[str]
```

Alternatively, if an empty list is valid but you want to validate it's intentional, you could add a validator:

```diff
+    @field_validator('changed_fields')
+    @classmethod
+    def validate_changed_fields(cls, v: list[str]) -> list[str]:
+        if not v:
+            logger.warning("NotificationPreferencesUpdatedEvent created with no changed_fields")
+        return v
```
126-149: Consider consistent datetime usage across notification tests.
Line 134 uses a hardcoded past date `datetime(2024, 6, 15, 12, 0, 0, tzinfo=timezone.utc)`, while line 167 uses `datetime.now(timezone.utc)`. For consistency and to avoid potential brittleness, consider using `datetime.now(timezone.utc)` in both tests.
♻️ Proposed change for consistency

```diff
 notification = RedisNotificationMessage(
     notification_id="notif-123",
     severity=NotificationSeverity.HIGH,
     status=NotificationStatus.PENDING,
     tags=["urgent", "system"],
     subject="Test Alert",
     body="This is a test notification",
     action_url="https://example.com/action",
-    created_at=datetime(2024, 6, 15, 12, 0, 0, tzinfo=timezone.utc),
+    created_at=datetime.now(timezone.utc),
 )
```

Otherwise, the notification pub/sub tests correctly verify message structure, field values, and user-specific channel isolation with proper enum member usage.
Also applies to: 159-169
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- backend/app/infrastructure/kafka/events/notification.py
- backend/app/services/sse/redis_bus.py
- backend/tests/conftest.py
- backend/tests/integration/test_admin_routes.py
- backend/tests/integration/test_sse_routes.py
- backend/tests/unit/domain/events/test_event_schema_coverage.py
- backend/tests/unit/services/pod_monitor/test_event_mapper.py
💤 Files with no reviewable changes (1)
- backend/tests/integration/test_admin_routes.py
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/tests/unit/services/pod_monitor/test_event_mapper.py
- backend/tests/conftest.py
🧰 Additional context used
🧬 Code graph analysis (3)
backend/tests/integration/test_sse_routes.py (4)
backend/app/domain/enums/sse.py (2)
SSEControlEvent(4-12)SSENotificationEvent(15-21)backend/app/domain/events/typed.py (1)
PodCreatedEvent(133-137)backend/app/schemas_pydantic/sse.py (3)
RedisNotificationMessage(91-101)RedisSSEMessage(58-63)SSEHealthResponse(115-125)backend/app/services/sse/redis_bus.py (5)
get(23-35)open_subscription(73-78)publish_event(65-71)open_notification_subscription(84-89)publish_notification(80-82)
backend/app/services/sse/redis_bus.py (1)
backend/tests/integration/services/sse/test_redis_bus.py (2)
pubsub(71-72)get_message(40-45)
backend/app/infrastructure/kafka/events/notification.py (1)
backend/app/domain/events/typed.py (3)
NotificationClickedEvent(307-312)BaseEvent(29-40)NotificationPreferencesUpdatedEvent(315-318)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (10)
backend/tests/unit/domain/events/test_event_schema_coverage.py (3)
62-87: LGTM!Clear set-difference logic with actionable error messages that will help developers quickly identify which event types need implementation.
107-145: Orphan detection logic is sound but has same subclass limitation.Both orphan tests use
__subclasses__()which only finds direct subclasses. If your event hierarchy uses intermediate base classes, nested subclasses won't be checked. Consider the recursive subclass helper mentioned earlier.Otherwise, the validation logic correctly identifies classes with missing or invalid
event_typedefaults.
147-165: LGTM!This consistency check ensures domain and Kafka event classes maintain aligned naming, which is valuable for maintainability. The implementation correctly handles partial mappings.
backend/app/infrastructure/kafka/events/notification.py (2)
1-5: LGTM! Necessary imports for datetime types and Pydantic Field.The added imports support the timestamp field migrations to
datetimeand theField(default_factory=list)pattern in the new event class.
30-65: No code changes needed—datetime types and serialization are already properly configured.The notification events already use
datetimetypes for timestamp fields (sent_at,delivered_at,read_at,clicked_at). Pydantic v2.9.2 withmode="json"automatically serializes all datetime fields to ISO strings when events are sent via Kafka, andmodel_validate()deserializes them back to datetime objects. This is consistent across both infrastructure and domain event definitions, and all consumers already receive properly deserialized datetime objects.Likely an incorrect or invalid review comment.
backend/tests/integration/test_sse_routes.py (5)
27-40: LGTM! Authentication enforcement is well-tested.The test class correctly verifies that all SSE endpoints (notification stream, execution stream, and health) require authentication by asserting 401 responses for unauthenticated requests.
43-54: LGTM! Proper use of Pydantic model validation.The health check test correctly uses
SSEHealthResponse.model_validate()to parse the response, aligning with the PR's migration to Pydantic models. The assertions validate expected health states and non-negative connection counts.
61-112: LGTM! Redis pub/sub execution tests are well-structured.The tests correctly verify:
RedisSSEMessagewrapper structure with properevent_typeenum member comparison (not.value)- Execution-specific channel isolation to ensure events are delivered only to subscribed channels
- Typed message retrieval using
subscription.get(RedisSSEMessage)Note: These tests depend on the
PodCreatedEventimport compatibility flagged in an earlier comment.
187-269: LGTM! SSE stream event tests are comprehensive and well-designed.The tests effectively verify:
- Notification and execution stream control events (CONNECTED, SUBSCRIBED) with correct field validation
- Proper enum member usage (
SSENotificationEvent.CONNECTED,SSEControlEvent.SUBSCRIBED)connection_iduniqueness across concurrent streams usingasyncio.gather- Timestamp presence in all control events
The manual loop breaks (lines 198, 227) are appropriate for integration tests where control events are yielded immediately upon stream creation.
14-14: No action required. The import is correct.The test properly imports
PodCreatedEventfrom the infrastructure Kafka layer, which extends the infrastructureBaseEvent(fromapp.infrastructure.kafka.events.base). Thepublish_eventmethod inredis_bus.pyexpects this same infrastructureBaseEventtype. The test correctly instantiatesPodCreatedEventwithAvroEventMetadata, which is the metadata type required by the infrastructure layer. Type compatibility is correct and there is no mismatch.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/app/events/schema/schema_registry.py">
<violation number="1" location="backend/app/events/schema/schema_registry.py:213">
P2: Creating a new `httpx.AsyncClient()` per request defeats connection pooling. Since `set_compatibility` is called in a loop during `initialize_schemas`, consider creating the client once and reusing it across all calls, either by passing it as a parameter or storing it as an instance attribute.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```diff
 url = f"{self.url}/config/{subject}"
-response = httpx.put(url, json={"compatibility": mode})
+async with httpx.AsyncClient() as client:
```
P2: Creating a new httpx.AsyncClient() per request defeats connection pooling. Since set_compatibility is called in a loop during initialize_schemas, consider creating the client once and reusing it across all calls, either by passing it as a parameter or storing it as an instance attribute.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/app/events/schema/schema_registry.py, line 213:
<comment>Creating a new `httpx.AsyncClient()` per request defeats connection pooling. Since `set_compatibility` is called in a loop during `initialize_schemas`, consider creating the client once and reusing it across all calls, either by passing it as a parameter or storing it as an instance attribute.</comment>
<file context>
@@ -210,7 +210,8 @@ def set_compatibility(self, subject: str, mode: str) -> None:
url = f"{self.url}/config/{subject}"
- response = httpx.put(url, json={"compatibility": mode})
+ async with httpx.AsyncClient() as client:
+ response = await client.put(url, json={"compatibility": mode})
response.raise_for_status()
</file context>
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/app/events/admin_utils.py (1)
37-50: LGTM! Proper async handling of topic creation.The blocking
result()call is correctly offloaded to the executor. The dual timeout configuration is appropriate:operation_timeoutcontrols broker-side behavior whileresult(timeout)controls client-side waiting.Note: The
create_topicscall (lines 42-44) returns a futures dict immediately and is non-blocking, so wrapping it inrun_in_executoradds slight overhead but maintains consistency and doesn't harm correctness.🔧 Optional: Skip executor for non-blocking create_topics call
async def create_topic(self, topic: str, num_partitions: int = 1, replication_factor: int = 1) -> bool: """Create a single topic.""" try: new_topic = NewTopic(topic, num_partitions=num_partitions, replication_factor=replication_factor) loop = asyncio.get_running_loop() - futures = await loop.run_in_executor( - None, lambda: self._admin.create_topics([new_topic], operation_timeout=30.0) - ) + futures = self._admin.create_topics([new_topic], operation_timeout=30.0) await loop.run_in_executor(None, lambda: futures[topic].result(timeout=30.0)) self.logger.info(f"Topic {topic} created successfully") return Truebackend/app/events/schema/schema_registry.py (1)
213-214: Avoid creating a newAsyncClientper call.Creating a new
AsyncClientfor each invocation loses connection pooling benefits. Sinceinitialize_schemascalls this method once per event class, this results in N client instantiations. Consider:
- Accepting an optional client parameter
- Creating a single client in
initialize_schemasand passing it down- Adding an explicit timeout for resilience
♻️ Suggested refactor
- async def set_compatibility(self, subject: str, mode: str) -> None: + async def set_compatibility( + self, subject: str, mode: str, client: httpx.AsyncClient | None = None + ) -> None: """ Set compatibility for a subject via REST API. Valid: BACKWARD, FORWARD, FULL, NONE, BACKWARD_TRANSITIVE, FORWARD_TRANSITIVE, FULL_TRANSITIVE """ valid_modes = { "BACKWARD", "FORWARD", "FULL", "NONE", "BACKWARD_TRANSITIVE", "FORWARD_TRANSITIVE", "FULL_TRANSITIVE", } if mode not in valid_modes: raise ValueError(f"Invalid compatibility mode: {mode}") url = f"{self.url}/config/{subject}" - async with httpx.AsyncClient() as client: - response = await client.put(url, json={"compatibility": mode}) + if client is not None: + response = await client.put(url, json={"compatibility": mode}) + else: + async with httpx.AsyncClient(timeout=10.0) as c: + response = await c.put(url, json={"compatibility": mode}) response.raise_for_status() self.logger.info(f"Set {subject} compatibility to {mode}")Then update
initialize_schemas:async def initialize_schemas(self) -> None: """Initialize all event schemas in the registry (set compat + register).""" if self._initialized: return - for event_class in _get_all_event_classes(): - subject = f"{self.subject_prefix}{event_class.__name__}-value" - await self.set_compatibility(subject, "FORWARD") - self.register_schema(subject, event_class) + async with httpx.AsyncClient(timeout=10.0) as client: + for event_class in _get_all_event_classes(): + subject = f"{self.subject_prefix}{event_class.__name__}-value" + await self.set_compatibility(subject, "FORWARD", client=client) + self.register_schema(subject, event_class) self._initialized = True self.logger.info(f"Initialized {len(_get_all_event_classes())} event schemas")
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
backend/app/events/admin_utils.pybackend/app/events/schema/schema_registry.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/events/schema/schema_registry.py (2)
backend/tests/conftest.py (1)
client(102-110)backend/tests/unit/conftest.py (1)
client(57-58)
backend/app/events/admin_utils.py (1)
backend/scripts/create_topics.py (1)
create_topics(20-101)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Integration Tests
- GitHub Check: E2E Tests
🔇 Additional comments (2)
backend/app/events/admin_utils.py (1)
27-35: LGTM! Correctly offloads blocking operation to thread pool.
The `list_topics` call is blocking and properly delegated to the executor to avoid blocking the event loop. The error handling with explicit `return False` is a good improvement.
backend/app/events/schema/schema_registry.py (1)
226-227: Correct usage of await for the async method.
The await is properly applied to the now-async `set_compatibility`. Note that `register_schema` (line 227) remains synchronous as it uses the confluent-kafka `SchemaRegistryClient`, which could block the event loop during schema registration. This is pre-existing behavior but worth considering for future async consistency.
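If that ever becomes a bottleneck, one hedged option is to push the blocking registration onto the default executor as well; the method and attribute names below mirror the review text and are not verified against the repo:

```python
import asyncio


async def register_schema_async(registry, subject: str, event_class) -> None:
    # Sketch: run the synchronous confluent-kafka registration off the event loop.
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, lambda: registry.register_schema(subject, event_class))
```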
Summary by cubic
Flattened event data by removing the payload wrapper and moved the domain layer to Pydantic models with a typed event hierarchy. Standardized timestamps to datetime across APIs/SSE/services, unified enum usage, updated indexes/queries/serialization, and added a manual DLQ discard helper.
Refactors
New Features
Written for commit e2e95ef. Summary will update on new commits.
Summary by CodeRabbit
New Features
Improvements
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.