Conversation
📝 Walkthrough

This pull request implements storage of sleep stage intervals (hypnogram data) by capturing start/end timestamps for each sleep stage transition. The system now persists detailed stage sequences in JSONB format instead of only aggregated minutes per stage, requiring updates across database models, API schemas, service logic, and frontend types.
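For illustration, a minimal sketch of the kind of interval list that could land in a JSONB column. The field names ("stage", "start_time", "end_time") are assumptions for this sketch, not the PR's exact schema:

```python
import json

# Hypothetical hypnogram: one entry per stage interval, instead of
# only aggregated minutes per stage.
sleep_stages = [
    {"stage": "light", "start_time": "2026-03-06T22:10:00+00:00", "end_time": "2026-03-06T22:40:00+00:00"},
    {"stage": "deep", "start_time": "2026-03-06T22:40:00+00:00", "end_time": "2026-03-06T23:20:00+00:00"},
]

payload = json.dumps(sleep_stages)  # serialized form as stored in a JSONB column
restored = json.loads(payload)      # what a reader gets back
print(restored[1]["stage"])         # deep
```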
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client
    participant SleepService as Sleep Service
    participant Redis as Redis State
    participant Database as Database
    participant EventService as Event Record Service
    Client->>SleepService: handle_sleep_data(apple_records)
    SleepService->>Redis: load_sleep_state()
    Redis-->>SleepService: SleepState object
    loop For each stage transition
        SleepService->>SleepService: _apply_transition(stage_event)
        SleepService->>SleepService: append SleepStateStage to state.stages
    end
    SleepService->>SleepService: _calculate_final_metrics(stages)
    SleepService->>SleepService: compute aggregates & clean overlaps
    SleepService->>Database: finish_sleep(sleep_detail)
    Database->>Database: store sleep_stages JSONB
    EventService->>Database: query sleep_details.sleep_stages
    Database-->>EventService: SleepStage[] via JSON_TABLE
    EventService->>Client: return SleepSession with sleep_stage_intervals
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/integrations/celery/tasks/garmin_gc_task.py (1)
Lines 75-92: ⚠️ Potential issue | 🟡 Minor

Duplicate anchor_ts check — likely copy-paste error.

Lines 75-82 and 85-92 contain identical logic for fetching and parsing anchor_val. The second block (marked as "Fallback") performs the exact same operations as the first, resulting in redundant Redis calls and duplicated parsing.

🐛 Suggested fix — remove the duplicate block:

```diff
 anchor_val = get_redis_client().get(_key(user_id, "window", "anchor_ts"))
 if anchor_val:
     try:
         anchor_ts = datetime.fromisoformat(anchor_val)
         if most_recent is None or anchor_ts > most_recent:
             most_recent = anchor_ts
     except (ValueError, TypeError):
         pass
-# Fallback: check anchor_ts (set when backfill starts, before any type is triggered)
-anchor_val = get_redis_client().get(_key(user_id, "window", "anchor_ts"))
-if anchor_val:
-    try:
-        anchor_ts = datetime.fromisoformat(anchor_val)
-        if most_recent is None or anchor_ts > most_recent:
-            most_recent = anchor_ts
-    except (ValueError, TypeError):
-        pass
 # No timestamps at all = orphaned lock
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/integrations/celery/tasks/garmin_gc_task.py` around lines 75 - 92, The duplicate Redis fetch-and-parse block for anchor_ts is a copy-paste error: remove the second identical block (the "Fallback" block) so anchor_val is only retrieved and parsed once; keep the first occurrence that uses get_redis_client().get(_key(user_id, "window", "anchor_ts")), parse with datetime.fromisoformat into anchor_ts, update most_recent if appropriate, and preserve the try/except for ValueError/TypeError handling.
🧹 Nitpick comments (2)
backend/tests/integrations/test_apple_sdk_import.py (1)
Lines 26-38: Fixture return type should use Generator instead of Any.

Per project conventions, yield-based pytest fixtures must use Generator[YieldType, None, None] from collections.abc. The current Any return type obscures the yielded type.

♻️ Suggested fix:

```diff
+from collections.abc import Generator
+
 @pytest.fixture(autouse=True)
-def mock_sleep_redis() -> Any:
+def mock_sleep_redis() -> Generator[MagicMock, None, None]:
     """Mock Redis client in sleep_service module to prevent connection errors."""
```

Based on learnings: "In this repository, pytest fixture return type annotations must use Generator[YieldType, None, None] from collections.abc, not Iterator[YieldType]."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/integrations/test_apple_sdk_import.py` around lines 26 - 38, Change the return type annotation of the yield-based pytest fixture mock_sleep_redis to use Generator[MagicMock, None, None] imported from collections.abc (instead of Any); update the import to include Generator and ensure the fixture signature reads mock_sleep_redis() -> Generator[MagicMock, None, None], leaving the body (mock_redis, its methods, and the with patch("app.services.apple.healthkit.sleep_service.get_redis_client") as mock_get_redis yield mock_redis) unchanged so the yielded type is properly annotated.

backend/app/services/apple/healthkit/sleep_service.py (1)
Line 288: Move the local import back to module scope.

Line 288 re-imports SleepStageType inside the hot path even though the module already imports it at the top. As per coding guidelines, "All imports must be at module level and never inside functions or methods".
Verify each finding against the current code and only fix it if needed. In `@backend/app/services/apple/healthkit/sleep_service.py` at line 288, A local re-import of SleepStageType was left inside the hot path; remove the inner "from app.constants.sleep import SleepStageType" and rely on the module-level import already at the top of the file, ensuring no function or method contains that import (replace any local reference with the existing SleepStageType symbol) and run linters/tests to verify imports are only at module scope.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/integrations/celery/tasks/finalize_stale_sleep_task.py`:
- Around line 29-31: The stale-sleep checker normalizes the local variable
end_time but does not update the SleepState object, so finish_sleep() later
reads a naive state.end_time (and potentially state.start_time) and persists a
timezone-less EventRecordCreate; update the SleepState in-place before calling
finish_sleep() by setting state.end_time =
state.end_time.replace(tzinfo=timezone.utc) (and similarly for state.start_time
if tzinfo is None), or ensure load_sleep_state() already returns timezone-aware
datetimes; make the change near where end_time is currently normalized so
finish_sleep(state, ...) receives normalized datetimes.
In `@backend/app/services/apple/healthkit/sleep_service.py`:
- Around line 261-287: The code currently filters out "in_bed" and "unknown"
when building specific_raw which means cleaned_stages (later written to
sleep_stages and used by finish_sleep) omits these intervals; instead, keep all
intervals in cleaned_stages by removing the initial filter over stages and build
metrics by conditionally skipping "in_bed"/"unknown" only when accumulating
metrics (e.g. check phase_str against stage_to_metric or explicitly skip those
keys before adding to metrics), ensuring cleaned_stages retains all SleepStage
entries so SleepStageType, sleep_stages persistence, and session bounds in
finish_sleep remain correct.
- Around line 128-130: The code currently collapses SleepPhase.SLEEPING into
deep sleep by using a combined case (case SleepPhase.ASLEEP_DEEP |
SleepPhase.SLEEPING) and setting stage_label = SleepStageType.DEEP; instead,
handle the two phases separately: add distinct case arms for
SleepPhase.ASLEEP_DEEP (map to SleepStageType.DEEP and increment deep_seconds)
and SleepPhase.SLEEPING (map to a separate generic/unknown stage — e.g.,
SleepStageType.SLEEPING or SleepStageType.UNKNOWN — and increment the
appropriate counter), updating the same logic that assigns stage_label and
accumulates duration so generic “sleeping” remains its own stage throughout
mapping/persistence (look for the switch/case block in sleep_service.py that
references SleepPhase and SleepStageType and the variable stage_label).
In `@backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py`:
- Around line 23-24: The migration
backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py
includes index operations for unrelated tables (op.drop_index/op.create_index
calls affecting idx_refresh_token_developer_id, idx_refresh_token_user_id, and
indices on user_invitation_code) which likely came from autogeneration; remove
those refresh_token and user_invitation_code index changes from this
sleep_stages migration and instead create a new, separate Alembic migration
containing only the index drops/creates for idx_refresh_token_developer_id,
idx_refresh_token_user_id and the user_invitation_code indices so the
sleep_stages migration only contains sleep_stages table changes (preserve any
up/down semantics and the "please adjust!" note when splitting).
In `@backend/tests/integrations/test_suunto_import.py`:
- Around line 124-136: Move the local "import json" out of the test body to
module level: add "import json" with the other imports at the top of the
test_suunto_import.py module and remove the inline "import json" inside the test
where state_data, mock_redis, and the patch of
app.services.providers.templates.base_oauth.get_redis_client are defined so the
test uses the module-level json import.
In `@backend/tests/tasks/test_finalize_stale_sleep_task.py`:
- Around line 360-361: The test currently triggers a ValueError from
mock_load_state which no longer matches the malformed-state branch because
load_sleep_state now catches parse errors and returns None; update the test to
simulate that behavior by having mock_load_state return None (instead of
raising) so the code follows the malformed Redis payload path, and adjust any
assertions to expect the malformed-state handling in finalize_stale_sleep_task
(or the test's call site) rather than the generic-exception branch;
alternatively, if you prefer to test the generic-exception branch, rename the
test to reflect that behavior.
---
Outside diff comments:
In `@backend/app/integrations/celery/tasks/garmin_gc_task.py`:
- Around line 75-92: The duplicate Redis fetch-and-parse block for anchor_ts is
a copy-paste error: remove the second identical block (the "Fallback" block) so
anchor_val is only retrieved and parsed once; keep the first occurrence that
uses get_redis_client().get(_key(user_id, "window", "anchor_ts")), parse with
datetime.fromisoformat into anchor_ts, update most_recent if appropriate, and
preserve the try/except for ValueError/TypeError handling.
---
Nitpick comments:
In `@backend/app/services/apple/healthkit/sleep_service.py`:
- Line 288: A local re-import of SleepStageType was left inside the hot path;
remove the inner "from app.constants.sleep import SleepStageType" and rely on
the module-level import already at the top of the file, ensuring no function or
method contains that import (replace any local reference with the existing
SleepStageType symbol) and run linters/tests to verify imports are only at
module scope.
In `@backend/tests/integrations/test_apple_sdk_import.py`:
- Around line 26-38: Change the return type annotation of the yield-based pytest
fixture mock_sleep_redis to use Generator[MagicMock, None, None] imported from
collections.abc (instead of Any); update the import to include Generator and
ensure the fixture signature reads mock_sleep_redis() -> Generator[MagicMock,
None, None], leaving the body (mock_redis, its methods, and the with
patch("app.services.apple.healthkit.sleep_service.get_redis_client") as
mock_get_redis yield mock_redis) unchanged so the yielded type is properly
annotated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 83855df9-8ea2-47cd-80c8-b669b65243d6
📒 Files selected for processing (24)
backend/app/constants/series_types/apple/sleep_types.py
backend/app/constants/sleep.py
backend/app/integrations/celery/tasks/finalize_stale_sleep_task.py
backend/app/integrations/celery/tasks/garmin_backfill_task.py
backend/app/integrations/celery/tasks/garmin_gc_task.py
backend/app/mappings.py
backend/app/models/sleep_details.py
backend/app/repositories/event_record_repository.py
backend/app/schemas/apple/healthkit/sleep_state.py
backend/app/schemas/event_record_detail.py
backend/app/schemas/events.py
backend/app/schemas/sleep.py
backend/app/services/apple/healthkit/sleep_service.py
backend/app/services/event_record_service.py
backend/app/services/providers/templates/base_oauth.py
backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py
backend/tests/conftest.py
backend/tests/integrations/test_apple_sdk_import.py
backend/tests/integrations/test_suunto_import.py
backend/tests/providers/garmin/test_garmin_backfill.py
backend/tests/providers/garmin/test_garmin_oauth.py
backend/tests/tasks/test_finalize_stale_sleep_task.py
backend/tests/tasks/test_garmin_backfill_permission.py
frontend/src/lib/api/types.ts
```python
end_time = state.end_time
if end_time.tzinfo is None:
    end_time = end_time.replace(tzinfo=timezone.utc)
```
🧩 Analysis chain
🏁 Scripts executed against the repository (the-momentum/open-wearables); outputs are collapsed in this view:

```bash
#!/bin/bash
set -euo pipefail

echo "== finish_sleep / load_sleep_state references =="
while IFS= read -r f; do
  echo "--- $f ---"
  rg -n -C3 '\bdef finish_sleep\b|\bdef load_sleep_state\b|\bstate\.end_time\b|\bend_time\b' "$f"
done < <(fd 'sleep_service\.py$' backend)
echo
echo "== SleepState definition =="
rg -n -C3 '\bclass SleepState\b|^\s*end_time\s*:' backend

fd 'finalize_stale_sleep_task\.py$' backend
cat -n backend/app/integrations/celery/tasks/finalize_stale_sleep_task.py | head -50
rg -n 'def finish_sleep|class SleepState' backend --type py -A 5
rg -n 'state\.end_time|end_time' backend/app/integrations/celery/tasks/finalize_stale_sleep_task.py -C 2
cat -n backend/app/schemas/apple/healthkit/sleep_state.py
sed -n '326,370p' backend/app/services/apple/healthkit/sleep_service.py
rg -n 'def load_sleep_state' backend/app/services/apple/healthkit/sleep_service.py -A 15
rg -n 'state\.end_time' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 2
rg -n 'SleepState(' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 8 | head -40
rg -n 'save_sleep_state|state\s*=' backend/app/services/apple/healthkit/sleep_service.py | head -20
rg -n 'def _apply_transition' backend/app/services/apple/healthkit/sleep_service.py -A 25 | head -35
rg -n '_create_new_sleep_state' backend/app/services/apple/healthkit/sleep_service.py -A 20 | head -50
rg -n 'def _create_new_sleep_state' backend/app/services/apple/healthkit/sleep_service.py -A 30
rg -n 'end_time|start_time' backend/app/services/apple/healthkit/sleep_service.py | grep -E '(timezone|utc|tzinfo|replace|astimezone)' | head -20
rg -n 'sjson\.(startDate|endDate)|_normalize_sleep_record' backend/app/services/apple/healthkit/sleep_service.py -B 3 -A 3
fd 'xml_service\.py' backend
rg -n 'datetime.fromisoformat|timezone|tzinfo' backend/app/services/apple/apple_xml/xml_service.py | head -20
rg -n 'def.*sjson|class.*json|_normalize_sleep_record|startDate|endDate' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 2 | head -50
rg -n 'from.*xml_service|import.*sjson|SleepJsonRecord' backend/app/services/apple/healthkit/sleep_service.py -B 1 -A 1
sed -n '1,60p' backend/app/services/apple/apple_xml/xml_service.py
rg -n 'startDate|endDate|datetime.fromisoformat' backend/app/services/apple/apple_xml/xml_service.py -B 3 -A 3 | head -60
rg -n '_normalize_sleep_record|HKCategoryTypeIdentifierSleepAnalysis' backend/app/services/apple/apple_xml/xml_service.py -B 2 -A 10
rg -n 'datetime.fromisoformat|strptime|timezone|tzinfo' backend/app/services/apple/apple_xml/xml_service.py -B 2 -A 2
rg -n 'def.*parse.*date|def.*normalize.*date' backend/app/services/apple/apple_xml/xml_service.py -A 8
rg -n 'SLEEP|sleep.*category|HKCategoryType' backend/app/services/apple/apple_xml/xml_service.py -B 2 -A 2 | head -40
rg -n 'model_validate_json|model_dump_json' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 2
cat -n backend/tests/tasks/test_finalize_stale_sleep_task.py | head -100
rg -n 'state.end_time|end_time.*=|tzinfo|timezone' backend/tests/tasks/test_finalize_stale_sleep_task.py -B 1 -A 1
rg -n 'EventRecordCreate.*start_datetime|end_datetime' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 5
rg -n 'class EventRecordCreate' backend -A 15
rg -n 'def.*process_sleep|sjson.*=' backend/app/services/apple/healthkit/sleep_service.py | head -30
cat -n backend/app/schemas/event_record.py | sed -n '1,70p'
rg -n 'start_datetime|end_datetime' backend/app/schemas/event_record.py -B 1 -A 1
rg -n 'class EventRecordBase' backend -A 20 | head -30
rg -n 'SleepState\(' backend -A 8 | head -80
rg -n 'end_time.*=' backend/app/services/apple/healthkit/sleep_service.py | head -20
rg -n 'datetime.now\(|timezone\.|datetime\(' backend/app/services/apple/healthkit/sleep_service.py | head -15
rg -n 'sjson\.' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 2 | grep -E 'sjson\.(start|end)Date'
sed -n '160,240p' backend/app/services/apple/healthkit/sleep_service.py
rg -n 'class.*SyncRequest|sleep.*:.*list' backend/app/services/apple/healthkit/sleep_service.py -B 2 -A 8 | head -30
fd 'sync_request\.py|models.*sync' backend --type py
rg -n 'from.*request|import.*request' backend/app/services/apple/healthkit/sleep_service.py | head -10
rg -n 'class.*Request|def.*process_sleep' backend/app/services/apple/healthkit/sleep_service.py -B 1 -A 5 | head -40
rg -n 'SDKSyncRequest' backend -A 10 | head -50
fd 'sleep' backend/app/schemas --type f
rg -n 'startDate.*endDate|class.*Sleep.*JSON' backend/app/schemas/apple/healthkit/sync_request.py -A 5 | head -50
cat -n backend/app/schemas/apple/healthkit/sync_request.py | head -100

python3 << 'EOF'
from datetime import datetime, timezone
import json
from pydantic import BaseModel

class TestModel(BaseModel):
    dt: datetime

# Test with timezone-aware datetime
tz_aware = TestModel(dt=datetime.now(timezone.utc))
json_str = tz_aware.model_dump_json()
print("Serialized:", json_str)
# Deserialize
restored = TestModel.model_validate_json(json_str)
print("Restored tzinfo:", restored.dt.tzinfo)
print("Is aware:", restored.dt.tzinfo is not None)
# Test with naive datetime
naive = TestModel(dt=datetime.now())
json_str2 = naive.model_dump_json()
print("\nNaive serialized:", json_str2)
restored2 = TestModel.model_validate_json(json_str2)
print("Naive restored tzinfo:", restored2.dt.tzinfo)
print("Is aware:", restored2.dt.tzinfo is not None)
EOF

rg -n 'zoneOffset|timezone|tzinfo' backend/app/schemas/apple/healthkit/sync_request.py
rg -n '_create_new_sleep_state|load_sleep_state' backend/app/services/apple/healthkit/sleep_service.py -B 1 -A 1 | grep -E '(def|_create|load)'
```
Normalize the SleepState object, not just the local comparison variable.
Lines 29–31 normalize end_time only for the stale check, but line 34 passes the original state to finish_sleep(). When finish_sleep() has no cleaned stages, it uses state.end_time directly (line 337 in sleep_service.py) to create the persisted EventRecordCreate. If state.end_time is naive, it will be persisted with no timezone information.
Update state.end_time (and state.start_time) before calling finish_sleep(), or verify that load_sleep_state() guarantees all datetimes are timezone-aware end-to-end.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/integrations/celery/tasks/finalize_stale_sleep_task.py` around
lines 29 - 31, The stale-sleep checker normalizes the local variable end_time
but does not update the SleepState object, so finish_sleep() later reads a naive
state.end_time (and potentially state.start_time) and persists a timezone-less
EventRecordCreate; update the SleepState in-place before calling finish_sleep()
by setting state.end_time = state.end_time.replace(tzinfo=timezone.utc) (and
similarly for state.start_time if tzinfo is None), or ensure load_sleep_state()
already returns timezone-aware datetimes; make the change near where end_time is
currently normalized so finish_sleep(state, ...) receives normalized datetimes.
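A minimal Python sketch of the in-place normalization described above, using a stand-in dataclass rather than the project's real Pydantic SleepState (the field names match the review text; everything else is assumed):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SleepState:
    # Minimal stand-in for the real SleepState; only the two
    # datetime fields matter for this sketch.
    start_time: datetime
    end_time: datetime

def normalize_state_tz(state: SleepState) -> SleepState:
    # Mutate the state itself, not a local copy, so that anything
    # reading state.end_time later sees timezone-aware datetimes.
    if state.start_time.tzinfo is None:
        state.start_time = state.start_time.replace(tzinfo=timezone.utc)
    if state.end_time.tzinfo is None:
        state.end_time = state.end_time.replace(tzinfo=timezone.utc)
    return state

state = SleepState(datetime(2026, 3, 6, 22, 0), datetime(2026, 3, 7, 6, 30))
normalize_state_tz(state)
print(state.end_time.tzinfo)  # UTC
```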
```python
case SleepPhase.ASLEEP_DEEP | SleepPhase.SLEEPING:
    state.deep_seconds += duration_seconds
    stage_label = SleepStageType.DEEP
```
Preserve generic sleeping as its own stage.
SleepPhase.SLEEPING means “asleep, subtype unknown”, not deep sleep. Folding it into SleepStageType.DEEP will over-report deep sleep and persist a misleading hypnogram for Apple payloads that only expose the generic asleep phase. Please keep this distinct end-to-end instead of coercing it to deep.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/apple/healthkit/sleep_service.py` around lines 128 -
130, The code currently collapses SleepPhase.SLEEPING into deep sleep by using a
combined case (case SleepPhase.ASLEEP_DEEP | SleepPhase.SLEEPING) and setting
stage_label = SleepStageType.DEEP; instead, handle the two phases separately:
add distinct case arms for SleepPhase.ASLEEP_DEEP (map to SleepStageType.DEEP
and increment deep_seconds) and SleepPhase.SLEEPING (map to a separate
generic/unknown stage — e.g., SleepStageType.SLEEPING or SleepStageType.UNKNOWN
— and increment the appropriate counter), updating the same logic that assigns
stage_label and accumulates duration so generic “sleeping” remains its own stage
throughout mapping/persistence (look for the switch/case block in
sleep_service.py that references SleepPhase and SleepStageType and the variable
stage_label).
```python
specific_raw = [s for s in stages if s.stage != "in_bed" and s.stage != "unknown"]
sorted_specific = sorted(specific_raw, key=lambda x: x.start_time)

cleaned_stages: list[SleepStage] = []
last_end = None

for stage in sorted_specific:
    start = stage.start_time
    end = stage.end_time

    if last_end and start < last_end:
        start = last_end

    if start >= end:
        continue

    duration = (end - start).total_seconds()

    # Safe to access .stage (Pydantic model)
    phase_str = str(stage.stage)

    metric_key = stage_to_metric.get(phase_str)
    if metric_key:
        metrics[metric_key] += duration

    # Construct final SleepStage (also Pydantic)
    # Note: stage.stage is SleepStageType enum member
```
Don't drop in_bed / unknown intervals from the persisted list.
cleaned_stages is later written to sleep_stages and also drives the final session bounds in finish_sleep. With this filter, in_bed never reaches JSONB even though SleepStageType and the frontend contract model it, and leading/trailing bed intervals can shrink the saved session window to the first/last non-in_bed stage. If only aggregates should ignore these phases, filter them in the metric accumulator, not in the persisted interval list.
Also applies to: 289-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/services/apple/healthkit/sleep_service.py` around lines 261 -
287, The code currently filters out "in_bed" and "unknown" when building
specific_raw which means cleaned_stages (later written to sleep_stages and used
by finish_sleep) omits these intervals; instead, keep all intervals in
cleaned_stages by removing the initial filter over stages and build metrics by
conditionally skipping "in_bed"/"unknown" only when accumulating metrics (e.g.
check phase_str against stage_to_metric or explicitly skip those keys before
adding to metrics), ensuring cleaned_stages retains all SleepStage entries so
SleepStageType, sleep_stages persistence, and session bounds in finish_sleep
remain correct.
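A simplified sketch of the approach suggested above: keep every interval for persistence and gate only the aggregate accumulation on the metric mapping. Stand-in dicts replace the real SleepStage model, and the stage_to_metric contents are assumed:

```python
# Assumed mapping: only phases listed here contribute to aggregates.
stage_to_metric = {"deep": "deep_seconds", "rem": "rem_seconds"}

def accumulate(stages: list[dict]) -> tuple[list[dict], dict[str, float]]:
    metrics = {key: 0.0 for key in stage_to_metric.values()}
    cleaned: list[dict] = []
    for stage in sorted(stages, key=lambda s: s["start"]):
        # Persist every interval, including in_bed/unknown.
        cleaned.append(stage)
        metric_key = stage_to_metric.get(stage["stage"])
        if metric_key:  # skip in_bed/unknown only for the aggregates
            metrics[metric_key] += stage["end"] - stage["start"]
    return cleaned, metrics

cleaned, metrics = accumulate([
    {"stage": "in_bed", "start": 0, "end": 100},
    {"stage": "deep", "start": 10, "end": 40},
])
print(len(cleaned), metrics["deep_seconds"])  # 2 30.0
```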
```python
op.drop_index(op.f("idx_refresh_token_developer_id"), table_name="refresh_token")
op.drop_index(op.f("idx_refresh_token_user_id"), table_name="refresh_token")
```
🧩 Analysis chain
🏁 Scripts executed against the repository (the-momentum/open-wearables); outputs are collapsed in this view:

```bash
#!/bin/bash
# Check if these indices are still defined in models or if removal is intentional
rg -n "idx_refresh_token_developer_id|idx_refresh_token_user_id|idx_user_invitation_code_user_id" --type py

cat -n backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py

# Check for model definitions and recent changes to these tables
find backend -name "*.py" -type f | xargs grep -l "class.*RefreshToken\|class.*UserInvitationCode" | head -5

# Search for column definitions in these model classes to see if they were modified
rg -A 20 "class RefreshToken|class UserInvitationCode" backend --type py

# Check git history to see if these model files were modified recently
git log --oneline --all -- backend/app/models/refresh_token.py backend/app/models/user_invitation_code.py | head -10

# Check the status of the migration - was it auto-generated or manually created?
git log --oneline -1 backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py

# Check git status to see if there are any uncommitted changes to model files
git status backend/app/models/refresh_token.py backend/app/models/user_invitation_code.py

# Check git diff to see if there are staged changes
git diff --cached backend/app/models/refresh_token.py backend/app/models/user_invitation_code.py
```
Indices on unrelated tables are bundled with the sleep_stages migration.
This migration drops and recreates indices on refresh_token and user_invitation_code tables, which are unrelated to the sleep_stages feature. The "please adjust!" Alembic comment indicates this was auto-generated and requires review. These operations appear to reflect model drift from earlier commits rather than intentional schema changes in this migration.
Consider splitting these index operations into a separate migration for clarity and to keep this migration focused on the sleep_stages feature. This improves auditability and makes rollback operations safer if needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py`
around lines 23 - 24, The migration
backend/migrations/versions/2026_03_06_1440-07afc893962c_sleep_stages.py
includes index operations for unrelated tables (op.drop_index/op.create_index
calls affecting idx_refresh_token_developer_id, idx_refresh_token_user_id, and
indices on user_invitation_code) which likely came from autogeneration; remove
those refresh_token and user_invitation_code index changes from this
sleep_stages migration and instead create a new, separate Alembic migration
containing only the index drops/creates for idx_refresh_token_developer_id,
idx_refresh_token_user_id and the user_invitation_code indices so the
sleep_stages migration only contains sleep_stages table changes (preserve any
up/down semantics and the "please adjust!" note when splitting).
```python
# Invalid timestamp format causing load_sleep_state to fail
mock_load_state.side_effect = ValueError("Invalid timestamp format")
```
This test no longer matches the malformed-state path.
load_sleep_state() now catches parse/validation errors and returns None, so raising ValueError here exercises the task’s generic exception branch instead of malformed Redis payload handling. Either rename the test to reflect that, or drive invalid state through load_sleep_state itself.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/tasks/test_finalize_stale_sleep_task.py` around lines 360 -
361, The test currently triggers a ValueError from mock_load_state which no
longer matches the malformed-state branch because load_sleep_state now catches
parse errors and returns None; update the test to simulate that behavior by
having mock_load_state return None (instead of raising) so the code follows the
malformed Redis payload path, and adjust any assertions to expect the
malformed-state handling in finalize_stale_sleep_task (or the test's call site)
rather than the generic-exception branch; alternatively, if you prefer to test
the generic-exception branch, rename the test to reflect that behavior.
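A sketch of the suggested test change, returning None from the mock instead of raising. The task body here is a simplified stand-in (the branch names are assumptions), showing only the branching shape the review describes:

```python
from unittest.mock import MagicMock

def finalize_stale_sleep(load_state) -> str:
    # Simplified stand-in for the task's branching (assumption): a None
    # state signals a malformed/missing Redis payload and is skipped.
    state = load_state()
    if state is None:
        return "skipped_malformed_state"
    return "finalized"

# Return None instead of side_effect=ValueError, matching
# load_sleep_state's new catch-and-return-None behavior.
mock_load_state = MagicMock(return_value=None)
print(finalize_stale_sleep(mock_load_state))  # skipped_malformed_state
```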
It resolves #517 by storing start/end timestamps for each stage of sleep.
Summary by CodeRabbit
New Features
Bug Fixes