fix: GPS drill-down, snapshot performance, and UX improvements #31

akkaouim merged 5 commits into jjackson:labs-main from
Conversation
- Refactor GPS drill-down from inline fetch to useEffect([expandedGps]) pattern, fixing silent ReferenceError (appliedStart was undefined)
- Rewrite _rebuild_gps_with_visits to query ComputedVisitCache directly via SQLCacheManager instead of re-running the full pipeline (~20s vs ~8min)
- Add opportunity_id to SSE stream URL to avoid 302 redirects
- Add MBWSaveSnapshotView endpoint for manual snapshot saving
- Show "Data from" timestamp for both SSE (live/green) and snapshot (amber)
- Hide Save Snapshot and Refresh Data buttons on completed audits
- Increase DATA_UPLOAD_MAX_MEMORY_SIZE to 50MB for snapshot POST payloads
- Update DOCUMENTATION.md with OOM optimizations, sectioned SSE streaming, ComputedVisitCache pattern, GPS drill-down useEffect, and troubleshooting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adds sectioned SSE streaming with per-section freeing and RSS logging; snapshot save/load endpoints and frontend dataSource/snapshotSaving state with auto-save-on-complete; a Computed Visit Cache fast path with pipeline fallback and server-side GPS visit embedding; and a raised upload limit for snapshot POSTs.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Frontend as Frontend (UI)
    participant UseEffect as Frontend: useEffect / GPS detail
    participant Cache as Computed Visit Cache
    participant Backend as Backend/API (MBWGPSDetailView / MBWSaveSnapshotView)
    participant Pipeline as Full Pipeline
    participant DB as Database
    rect rgba(200,200,255,0.5)
        Frontend->>+UseEffect: expand GPS detail (opportunity_id, filters)
        UseEffect->>+Cache: _load_visits_from_cache(opportunity_id, username)
        alt Cache Hit
            Cache-->>-UseEffect: visits (embedded)
            UseEffect-->>Frontend: render visits (from snapshot/cache)
        else Cache Miss
            Cache-->>UseEffect: miss
            UseEffect->>+Backend: GET GPS detail (passes params)
            Backend->>+Pipeline: _load_visits_from_pipeline(...)
            Pipeline->>+DB: query visits (date/app_version filters)
            DB-->>-Pipeline: visit rows
            Pipeline-->>-Backend: processed visits
            Backend-->>-UseEffect: visits
            UseEffect-->>Frontend: render visits
        end
    end
```
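The cache-or-pipeline flow in the diagram can be sketched in Python. The function names come from the diagram; the dict-backed store is a hypothetical stand-in for ComputedVisitCache, not the real implementation:

```python
# Sketch of the cache fast path with pipeline fallback shown in the diagram.
# The in-memory dict stands in for ComputedVisitCache (an assumption for brevity).

_visit_cache = {}  # (opportunity_id, username) -> list of visit dicts

def _load_visits_from_cache(opportunity_id, username):
    # Fast path: return cached visits, or None on a miss.
    return _visit_cache.get((opportunity_id, username))

def _load_visits_from_pipeline(opportunity_id, username):
    # Placeholder for the full pipeline query (date/app_version filters, DB hit).
    return [{"username": username, "opportunity_id": opportunity_id}]

def load_visits(opportunity_id, username):
    visits = _load_visits_from_cache(opportunity_id, username)
    if visits is not None:  # cache hit: the fast (~20s) path
        return visits, "cache"
    visits = _load_visits_from_pipeline(opportunity_id, username)  # miss: full (~8min) path
    _visit_cache[(opportunity_id, username)] = visits  # warm the cache for next time
    return visits, "pipeline"
```

The first expansion pays the pipeline cost; subsequent expansions with the same key hit the cache.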
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
🧹 Nitpick comments (2)
commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md (2)
278-278: Clarify "required" vs conditional usage.

Line 278 states that `opportunity_id` is "required to avoid 302 redirects," but the code example on lines 267-269 shows it's conditionally added only when present. If it's truly required, the conditional check seems inconsistent. Consider either:
- Changing "required" to "recommended" or "should be included when available"
- Or adding documentation explaining what happens when it's omitted (e.g., "will trigger a 302 redirect if missing")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md` at line 278, Update the documentation to clarify the conditional nature of the opportunity_id parameter: either change the phrase "required to avoid 302 redirects" to "recommended" or "should be included when available", or explicitly state the behavioral consequence when omitted (e.g., "omitting opportunity_id will trigger a 302 redirect from LabsContextMiddleware"); reference the opportunity_id usage shown in the example (lines where opportunity_id is conditionally added) and mention LabsContextMiddleware so readers understand when to include the parameter versus when it's optional.
893-893: Clarify cache indexing structure.

The description states the Computed Visit Cache is "indexed by username," but the Key Pattern column shows `opportunity_id` + `config_hash`. This could be clearer. Consider:
- "Per opportunity, indexed by username" if username is a secondary key within the cache
- Or "keyed by opportunity and config; data grouped by username" to clarify the structure
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md` at line 893, Update the Computed Visit Cache description to remove ambiguity between its Key Pattern and indexing: replace "Per opportunity, indexed by username" with a clear phrase such as "Keyed by opportunity_id + config_hash; entries grouped or partitioned by username" (or alternately "Per opportunity keyed by opportunity_id + config_hash, with username used as a secondary index") so the Key Pattern (`opportunity_id` + `config_hash`) and the role of username are both explicit in the DOCUMENTATION.md entry for Computed Visit Cache.
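The structure this comment asks to document — entries keyed by `opportunity_id` + `config_hash`, with per-username grouping inside each entry — can be illustrated with a small dict layout. This is a sketch of the documented shape, not the actual ComputedVisitCache implementation:

```python
# Hypothetical layout matching "keyed by opportunity_id + config_hash;
# entries grouped by username". Keys and visit fields are illustrative.
cache = {
    (101, "cfg_abc123"): {            # primary key: opportunity + config hash
        "flw_ada":  [{"visit_id": 1}, {"visit_id": 2}],
        "flw_boba": [{"visit_id": 3}],
    },
}

def get_visits(cache, opportunity_id, config_hash, username):
    entry = cache.get((opportunity_id, config_hash), {})
    return entry.get(username, [])    # username is a secondary index, not the key
```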
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Line 1285: Update the incomplete file reference for the streaming parser:
replace the short reference backend.py:_parse_and_store_streaming() with the
full module path
labs/analysis/backends/sql/backend.py:_parse_and_store_streaming() in
DOCUMENTATION.md (around the Sectioned SSE description) so it matches the File
Reference format and points to the actual function name
_parse_and_store_streaming().
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Around line 717-719: The save-success branch currently flips the UI source to
snapshot by calling setFromSnapshot(true) when handling a successful save in the
save handler (the block that calls setSnapshotSaving(false), showToast(...) and
setSnapshotTimestamp(...)); remove the setFromSnapshot(true) call so saving only
updates the timestamp/toast and does not change the source badge, and ensure any
other identical save-success handler (the other occurrence calling
setFromSnapshot) is updated the same way; keep setSnapshotTimestamp(new
Date().toISOString()) to record the save and only switch source via the explicit
snapshot-load code path.
- Around line 768-813: The React.useEffect that loads GPS detail currently only
depends on expandedGps and therefore can use stale gpsFlws,
instance.opportunity_id, appliedAppVersionOp, and appliedAppVersionVal; update
the effect dependency array on the effect that contains
setGpsDetail/setGpsDetailLoading to include gpsFlws, instance.opportunity_id,
appliedAppVersionOp, and appliedAppVersionVal so the effect re-runs when those
values change, and ensure the cancellation/local variables (cancelled,
start/end/params) are recreated per run (they already are inside the effect) so
stale responses are ignored correctly.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 1291-1299: Validate the incoming body and types before
dereferencing: ensure body is a dict and that run_id (variable run_id) is
present and of an expected scalar type (e.g., str or int) and that snapshot_data
is a dict before calling snapshot_data.get; if any check fails return a
JsonResponse with a 400 and an explanatory error. Update the start of the view
where run_id = body.get("run_id") and snapshot_data = body.get("snapshot_data",
{}) to explicitly check isinstance(body, dict), validate run_id type/emptiness,
and validate isinstance(snapshot_data, dict) before using
snapshot_data.get("gps_data") (gps_data) so attribute errors are avoided.
- Around line 1197-1200: _rebuild_gps_with_visits currently reads opportunity_id
from request.labs_context (labs_context / opportunity_id) which can differ from
the monitoring run being saved; change it to derive the opportunity id from the
run object passed into the save flow (use the run or monitoring_run object's
opportunity/opportunity_id attribute) and fall back to request.labs_context only
if run has no opportunity; update references in _rebuild_gps_with_visits and any
callers so gps_data is rebuilt using run.opportunity (not request.labs_context)
to avoid pulling visits from the wrong cache.
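The fallback order the last prompt describes — the run's own opportunity first, then the POST body, then `request.labs_context` — is a short chain. The `run` object below is a hypothetical stand-in for the monitoring run:

```python
from types import SimpleNamespace

def resolve_opportunity_id(run, body, labs_context):
    # Prefer the run's own opportunity; fall back to the client body,
    # then to the middleware context, and finally to None.
    return (
        getattr(run, "opportunity_id", None)
        or body.get("opportunity_id")
        or labs_context.get("opportunity_id")
    )
```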
---
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Line 278: Update the documentation to clarify the conditional nature of the
opportunity_id parameter: either change the phrase "required to avoid 302
redirects" to "recommended" or "should be included when available", or
explicitly state the behavioral consequence when omitted (e.g., "omitting
opportunity_id will trigger a 302 redirect from LabsContextMiddleware");
reference the opportunity_id usage shown in the example (lines where
opportunity_id is conditionally added) and mention LabsContextMiddleware so
readers understand when to include the parameter versus when it's optional.
- Line 893: Update the Computed Visit Cache description to remove ambiguity
between its Key Pattern and indexing: replace "Per opportunity, indexed by
username" with a clear phrase such as "Keyed by opportunity_id + config_hash;
entries grouped or partitioned by username" (or alternately "Per opportunity
keyed by opportunity_id + config_hash, with username used as a secondary index")
so the Key Pattern (`opportunity_id` + `config_hash`) and the role of username
are both explicit in the DOCUMENTATION.md entry for Computed Visit Cache.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md
- commcare_connect/workflow/templates/mbw_monitoring/template.py
- commcare_connect/workflow/templates/mbw_monitoring/urls.py
- commcare_connect/workflow/templates/mbw_monitoring/views.py
- config/settings/labs.py
- Replace fromSnapshot boolean with dataSource three-state string
('live' green, 'saved' blue, 'snapshot' amber) for clearer UX
- Add gpsFlws, opportunity_id, appliedAppVersionOp/Val to GPS
useEffect dependency array to avoid stale closures
- Validate POST body types in MBWSaveSnapshotView (dict checks)
- Accept opportunity_id from POST body in _rebuild_gps_with_visits,
falling back to request.labs_context
- Send opportunity_id in snapshot save POST body from frontend
- Docs: full module path for backend.py, soften opportunity_id note,
clarify ComputedVisitCache key/index description
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
commcare_connect/workflow/templates/mbw_monitoring/views.py (1)
1295-1315: ⚠️ Potential issue | 🟠 Major

Harden snapshot request validation and bind GPS rebuild to the run's `opportunity_id`.

At Line 1295, `run_id` accepts any truthy value; at Line 1312, `opportunity_id` is taken from client/body context. This can produce invalid saves or rebuild GPS visits from the wrong cache scope. Also guard the `gps_data` type before `gps_data.get(...)`.

Suggested fix

```diff
-        run_id = body.get("run_id")
-        if not run_id:
+        run_id = _parse_int_param(body.get("run_id"))
+        if run_id is None:
             return JsonResponse({"error": "run_id is required"}, status=400)
+        monitoring_session = load_monitoring_run(request, run_id)
+        if not monitoring_session:
+            return JsonResponse({"error": "Run not found"}, status=404)
         snapshot_data = body.get("snapshot_data", {})
         if not isinstance(snapshot_data, dict):
             return JsonResponse({"error": "snapshot_data must be a JSON object"}, status=400)

         # Re-read GPS visits from pipeline cache if not already embedded
         gps_data = snapshot_data.get("gps_data")
+        if gps_data is not None and not isinstance(gps_data, dict):
+            return JsonResponse({"error": "gps_data must be a JSON object"}, status=400)
         if gps_data:
             has_visits = any(
                 s.get("visits") for s in gps_data.get("flw_summaries", [])
             )
             if not has_visits:
-                # Prefer opportunity_id from POST body, fall back to middleware context
-                labs_context = getattr(request, "labs_context", {})
-                opp_id = body.get("opportunity_id") or labs_context.get("opportunity_id")
                 snapshot_data["gps_data"] = _rebuild_gps_with_visits(
-                    request, gps_data, opportunity_id=opp_id
+                    request,
+                    gps_data,
+                    opportunity_id=monitoring_session.opportunity_id,
                 )
```
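The validation sequence this comment asks for — an integer `run_id`, dict-typed `snapshot_data`, and dict-or-absent `gps_data` — can be sketched framework-free. `_parse_int_param` is reimplemented here for illustration, and the returned error lists stand in for `JsonResponse` 400s:

```python
def _parse_int_param(value):
    # Illustrative reimplementation: accept ints or numeric strings, else None.
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

def validate_snapshot_body(body):
    """Return (errors, parsed): a framework-free sketch of the checks above."""
    if not isinstance(body, dict):
        return ["body must be a JSON object"], None
    run_id = _parse_int_param(body.get("run_id"))
    if run_id is None:
        return ["run_id is required"], None
    snapshot_data = body.get("snapshot_data", {})
    if not isinstance(snapshot_data, dict):
        return ["snapshot_data must be a JSON object"], None
    gps_data = snapshot_data.get("gps_data")
    if gps_data is not None and not isinstance(gps_data, dict):
        return ["gps_data must be a JSON object"], None
    return [], {"run_id": run_id, "snapshot_data": snapshot_data}
```

Each check fails fast with a 400-style error before any attribute of the payload is dereferenced.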
🧹 Nitpick comments (1)
commcare_connect/workflow/templates/mbw_monitoring/template.py (1)
716-719: Prefer server-provided snapshot timestamp when save succeeds.

At Line 718, using only client time can drift from persisted backend time. Using `data.snapshot_timestamp` first keeps the "Data from" indicator more accurate.

♻️ Proposed refinement

```diff
-            if (data.success) { showToast('Snapshot saved'); setDataSource('saved'); setSnapshotTimestamp(new Date().toISOString()); return true; }
+            if (data.success) { showToast('Snapshot saved'); setDataSource('saved'); setSnapshotTimestamp(data.snapshot_timestamp || new Date().toISOString()); return true; }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/template.py` around lines 716 - 719, The save-success branch should prefer the server-provided timestamp instead of always using client time: when handling the resolved promise in the then callback, use data.snapshot_timestamp (if present) to call setSnapshotTimestamp, falling back to new Date().toISOString() only when data.snapshot_timestamp is missing; update the block that calls setSnapshotSaving(false), showToast(...), setDataSource('saved') and setSnapshotTimestamp(...) so it assigns the persisted timestamp returned by the server.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Line 1053: The documentation incorrectly states the GPS drill-down effect uses
useEffect([expandedGps]); update the docs to list the full current dependency
array used in the implementation (e.g., include expandedGps plus any state/props
and callbacks referenced inside the effect such as embeddedVisits, username,
fetchGpsData, setState handlers or other variables) so the docs exactly match
the effect signature and avoid regressions; specifically mention that the effect
first checks embeddedVisits (or snapshot) then lazy-loads from
/api/gps/<username>/, and explicitly enumerate every dependency present in the
real useEffect call (e.g., useEffect([expandedGps, embeddedVisits, username,
fetchGpsData]) — replace with the actual identifiers used in the code).
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Around line 768-776: The effect uses gpsFlws.find(...) without guarding
gpsFlws, causing a runtime error when gpsFlws is undefined; update the
React.useEffect block (the one referencing expandedGps, setGpsDetail,
setGpsDetailLoading and gpsFlws) to first check gpsFlws is defined (e.g., if
(!gpsFlws) { setGpsDetail(null); setGpsDetailLoading(false); return; } or use
(gpsFlws || []).find(...)) before calling .find so the conditional handling and
early returns execute safely when gpsFlws is missing.
- Line 3208: The snapshot empty-state copy currently suggests clicking "Refresh
Data" even when the refresh button is hidden for completed audits; update the
ternary at the dataSource === 'snapshot' branch to be completion-aware by
checking the same completed-audit condition used where the "Refresh Data" button
is hidden and render three states: (1) if dataSource === 'snapshot' AND audit is
completed -> show completion-aware copy (e.g., "Drill-down data not available
for completed audits."), (2) if dataSource === 'snapshot' AND audit is not
completed -> keep the current 'Click "Refresh Data" to load details.' message,
otherwise -> keep 'No due visits found for this FLW.'; reuse the same completion
flag/condition used to hide the Refresh button so logic stays consistent.
---
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Around line 716-719: The save-success branch should prefer the server-provided
timestamp instead of always using client time: when handling the resolved
promise in the then callback, use data.snapshot_timestamp (if present) to call
setSnapshotTimestamp, falling back to new Date().toISOString() only when
data.snapshot_timestamp is missing; update the block that calls
setSnapshotSaving(false), showToast(...), setDataSource('saved') and
setSnapshotTimestamp(...) so it assigns the persisted timestamp returned by the
server.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md
- commcare_connect/workflow/templates/mbw_monitoring/template.py
- commcare_connect/workflow/templates/mbw_monitoring/views.py
…estamp

- Replace derived gpsFlws (new array each render) with stable dashData in useEffect dep array to prevent re-firing every render cycle
- Read gps flw_summaries directly from dashData inside the effect with safe chaining guard against undefined
- Show shorter snapshot message on completed audits where Refresh Data button is hidden (no misleading "Click Refresh Data")
- Return server-side UTC timestamp from save-snapshot endpoint instead of relying on client clock
- Update DOCUMENTATION.md to reflect current useEffect dependency array

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
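The server-side UTC timestamp mentioned in the commit above can be produced with the stdlib alone. The payload shape below is a hypothetical sketch, not the endpoint's actual response:

```python
from datetime import datetime, timezone

def snapshot_saved_payload():
    # Server-authoritative UTC time, so the "Data from" badge never
    # depends on the client clock (illustrative payload shape).
    return {"success": True, "timestamp": datetime.now(timezone.utc).isoformat()}
```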
Actionable comments posted: 1
🧹 Nitpick comments (1)
commcare_connect/workflow/templates/mbw_monitoring/views.py (1)
1295-1297: Consider using `_parse_int_param()` for `run_id` validation consistency.

Other views in this file (e.g., `MBWSaveFlwResultView` at line 1077, `MBWCompleteSessionView` at line 1119) use `_parse_int_param()` to validate integer parameters. Here, `run_id` is used directly from `body.get("run_id")`. While JSON parsing preserves the type, a malformed request with `{"run_id": "abc"}` would pass the truthy check but could fail downstream in `save_dashboard_snapshot()`.

♻️ Proposed fix for consistency

```diff
-        run_id = body.get("run_id")
-        if not run_id:
+        run_id = _parse_int_param(body.get("run_id"))
+        if run_id is None:
             return JsonResponse({"error": "run_id is required"}, status=400)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 1295 - 1297, The view reads run_id directly from body.get("run_id") which allows non-integer strings to slip through; update the view to validate and parse run_id with the existing helper _parse_int_param (use the same call pattern as MBWSaveFlwResultView / MBWCompleteSessionView), assign the parsed integer to run_id, and return the same JsonResponse error when _parse_int_param indicates a missing/invalid value so downstream calls like save_dashboard_snapshot() always receive an int.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Around line 702-714: The saveSnapshot function lacks an in-flight guard so
multiple calls (e.g., from the caller around lines 736-740) can trigger
duplicate large POSTs; add a boolean guard (use the existing snapshot-saving
state modeled by setSnapshotSaving/ snapshotSaving) at the start of saveSnapshot
to return early when a save is already in progress, set snapshot-saving true
before the fetch and ensure snapshot-saving is cleared in finally (or on both
success/error), and update the callers (the code that invokes saveSnapshot
around the referenced lines) to check the guard or await the returned promise
instead of unconditionally invoking saveSnapshot to prevent concurrent uploads.
---
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 1295-1297: The view reads run_id directly from body.get("run_id")
which allows non-integer strings to slip through; update the view to validate
and parse run_id with the existing helper _parse_int_param (use the same call
pattern as MBWSaveFlwResultView / MBWCompleteSessionView), assign the parsed
integer to run_id, and return the same JsonResponse error when _parse_int_param
indicates a missing/invalid value so downstream calls like
save_dashboard_snapshot() always receive an int.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md
- commcare_connect/workflow/templates/mbw_monitoring/template.py
- commcare_connect/workflow/templates/mbw_monitoring/views.py
…te run_id

- Disable Complete Audit and Refresh Data buttons while snapshot save is in flight to prevent concurrent large POSTs or mid-save data reset
- Validate run_id with _parse_int_param in MBWSaveSnapshotView for consistency with MBWSaveFlwResultView and MBWCompleteSessionView

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (3)
commcare_connect/workflow/templates/mbw_monitoring/views.py (2)
1299-1308: ⚠️ Potential issue | 🟠 Major

Validate the `gps_data` type before the `.get()` dereference.

At Line 1307, malformed `gps_data` (e.g., a list or string) will raise and return 500 instead of a 400 validation error.

Proposed fix

```diff
         snapshot_data = body.get("snapshot_data", {})
         if not isinstance(snapshot_data, dict):
             return JsonResponse({"error": "snapshot_data must be a JSON object"}, status=400)

         # Re-read GPS visits from pipeline cache if not already embedded
         gps_data = snapshot_data.get("gps_data")
+        if gps_data is not None and not isinstance(gps_data, dict):
+            return JsonResponse({"error": "gps_data must be a JSON object"}, status=400)
         if gps_data:
             has_visits = any(
                 s.get("visits") for s in gps_data.get("flw_summaries", [])
             )
```
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 1299 - 1308, The code dereferences gps_data with gps_data.get(...) which will raise if gps_data is not a mapping; update the validation after snapshot_data to explicitly check that gps_data (snapshot_data.get("gps_data")) is either None or a dict and return a 400 JsonResponse if it's present but not a mapping; then compute has_visits using gps_data.get("flw_summaries", []) only when gps_data is a dict (preserve the existing has_visits logic), referencing the variables gps_data, has_visits and the "flw_summaries" key to locate where to add the type check and error response.
1310-1315: ⚠️ Potential issue | 🟠 Major

Use the run's `opportunity_id` for the GPS rebuild, not client/body context.

Lines 1310-1315 can rebuild from the wrong cache when `run_id` and `body.opportunity_id`/`request.labs_context` differ.

Proposed fix

```diff
         run_id = _parse_int_param(body.get("run_id"))
         if not run_id:
             return JsonResponse({"error": "run_id is required"}, status=400)
+        monitoring_session = load_monitoring_run(request, run_id)
+        if not monitoring_session:
+            return JsonResponse({"error": "Run not found"}, status=404)
@@
             if not has_visits:
-                # Prefer opportunity_id from POST body, fall back to middleware context
+                # Prefer the run's opportunity_id for snapshot consistency
                 labs_context = getattr(request, "labs_context", {})
-                opp_id = body.get("opportunity_id") or labs_context.get("opportunity_id")
+                opp_id = (
+                    monitoring_session.opportunity_id
+                    or body.get("opportunity_id")
+                    or labs_context.get("opportunity_id")
+                )
                 snapshot_data["gps_data"] = _rebuild_gps_with_visits(
                     request, gps_data, opportunity_id=opp_id
                 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 1310 - 1315, The GPS rebuild currently uses body.get("opportunity_id") or request.labs_context which can pick the wrong cache when run_id differs; instead obtain the opportunity_id tied to the run (e.g., use run.opportunity_id or snapshot_data.get("run", {}).get("opportunity_id")/the run object available in scope) and pass that into _rebuild_gps_with_visits, replacing the opp_id assignment that reads from body or request.labs_context (keep a safe None fallback if the run has no opportunity_id).

commcare_connect/workflow/templates/mbw_monitoring/template.py (1)
701-727: ⚠️ Potential issue | 🟠 Major

`saveSnapshot()` still needs an in-flight request guard to prevent duplicate POSTs.

Line 702 currently allows concurrent calls before the `snapshotSaving`-driven UI disables fully settle, and `handleComplete()` can still race into another call path. This can send duplicate large uploads and flip `snapshotSaving` false while another save is still active.

💡 Proposed hard guard (promise dedupe + centralized finally)

```diff
+    var snapshotSaveInFlightRef = React.useRef(null);
+
     // Save snapshot — reused by manual button and auto-save on Complete
     var saveSnapshot = function() {
       if (!instance.id || !dashData) return Promise.resolve(false);
+      if (snapshotSaveInFlightRef.current) return snapshotSaveInFlightRef.current;
       setSnapshotSaving(true);
       var snapshotUrl = '/custom_analysis/mbw_monitoring/api/save-snapshot/';
       if (instance.opportunity_id) {
         snapshotUrl += '?opportunity_id=' + encodeURIComponent(instance.opportunity_id);
       }
-      return fetch(snapshotUrl, {
+      var req = fetch(snapshotUrl, {
         method: 'POST',
         credentials: 'same-origin',
         headers: { 'Content-Type': 'application/json', 'X-CSRFToken': getCSRF() },
         body: JSON.stringify({ run_id: instance.id, opportunity_id: instance.opportunity_id, snapshot_data: dashData })
       })
       .then(function(r) { return r.json(); })
       .then(function(data) {
-        setSnapshotSaving(false);
         if (data.success) {
           showToast('Snapshot saved');
           setDataSource('saved');
           setSnapshotTimestamp(data.timestamp || new Date().toISOString());
           return true;
         } else {
           showToast('Failed to save snapshot: ' + (data.error || 'Unknown error'));
           return false;
         }
       })
       .catch(function(err) {
-        setSnapshotSaving(false);
         console.error('Snapshot save failed:', err);
         showToast('Failed to save snapshot');
         return false;
-      });
+      })
+      .finally(function() {
+        setSnapshotSaving(false);
+        snapshotSaveInFlightRef.current = null;
+      });
+      snapshotSaveInFlightRef.current = req;
+      return req;
     };

     // Complete session — auto-saves snapshot first (best-effort)
     var handleComplete = function() {
+      if (snapshotSaving) return;
       if (!actions || !actions.completeRun) {
         showToast('Complete not available — please hard-refresh (Cmd+Shift+R)');
         return;
       }
```

Also applies to: 736-740
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/template.py` around lines 701 - 727, The saveSnapshot function needs an in-flight request guard to prevent duplicate POSTs: add a module-scoped variable (e.g., snapshotSavePromise) that is set to the fetch promise when saveSnapshot starts and returned immediately if already present, and ensure you move the shared cleanup (setSnapshotSaving(false) and clearing snapshotSavePromise) into a single finally handler so state is only cleared when the actual in-flight request completes; apply the same pattern to the other snapshot save path referenced around the 736-740 area (the other call site that triggers snapshot saving, and any handleComplete flow) so all callers dedupe via snapshotSavePromise and share the centralized finally.
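The dedupe pattern this comment proposes for the JS frontend — return the in-flight operation to concurrent callers and clear it in one centralized finally — is general. A minimal Python analogue, purely illustrative and not code from this PR:

```python
import threading
from concurrent.futures import Future

class InFlightGuard:
    """Coalesce concurrent calls onto one in-flight operation: a Python
    analogue of the snapshotSaveInFlightRef sketch above (illustrative only)."""

    def __init__(self, fn):
        self._fn = fn
        self._lock = threading.Lock()
        self._future = None  # non-None while a call is in flight

    def __call__(self):
        with self._lock:
            if self._future is not None:
                return self._future  # join the in-flight call instead of starting another
            self._future = Future()
            future = self._future
        try:
            future.set_result(self._fn())  # the single real invocation
        except Exception as exc:
            future.set_exception(exc)
        finally:
            with self._lock:
                self._future = None  # centralized cleanup, like the .finally() handler
        return future
```

Callers that arrive while a save is running get the same future back, so one upload is shared rather than duplicated.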
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Around line 701-727: The saveSnapshot function needs an in-flight request
guard to prevent duplicate POSTs: add a module-scoped variable (e.g.,
snapshotSavePromise) that is set to the fetch promise when saveSnapshot starts
and returned immediately if already present, and ensure you move the shared
cleanup (setSnapshotSaving(false) and clearing snapshotSavePromise) into a
single finally handler so state is only cleared when the actual in-flight
request completes; apply the same pattern to the other snapshot save path
referenced around the 736-740 area (the other call site that triggers snapshot
saving, and any handleComplete flow) so all callers dedupe via
snapshotSavePromise and share the centralized finally.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 1299-1308: The code dereferences gps_data with gps_data.get(...)
which will raise if gps_data is not a mapping; update the validation after
snapshot_data to explicitly check that gps_data (snapshot_data.get("gps_data"))
is either None or a dict and return a 400 JsonResponse if it's present but not a
mapping; then compute has_visits using gps_data.get("flw_summaries", []) only
when gps_data is a dict (preserve the existing has_visits logic), referencing
the variables gps_data, has_visits and the "flw_summaries" key to locate where
to add the type check and error response.
- Around line 1310-1315: The GPS rebuild currently uses
body.get("opportunity_id") or request.labs_context which can pick the wrong
cache when run_id differs; instead obtain the opportunity_id tied to the run
(e.g., use run.opportunity_id or snapshot_data.get("run",
{}).get("opportunity_id")/the run object available in scope) and pass that into
_rebuild_gps_with_visits, replacing the opp_id assignment that reads from body
or request.labs_context (keep a safe None fallback if the run has no
opportunity_id).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- commcare_connect/workflow/templates/mbw_monitoring/template.py
- commcare_connect/workflow/templates/mbw_monitoring/views.py
…n-based opp_id

- Add in-flight promise guard (snapshotSaveInFlightRef) to saveSnapshot so concurrent callers coalesce on a single request; consolidate cleanup in .finally() handler
- Validate gps_data is a dict before calling .get() on it (400 if not)
- Load run via WorkflowDataAccess.get_run() to derive opportunity_id from the actual run object instead of trusting client-supplied values; also adds 404 response for nonexistent run_id

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (2)
commcare_connect/workflow/templates/mbw_monitoring/views.py (1)
1269-1271: Include traceback in GPS rebuild fallback logging.

This path is intentionally non-fatal, but the current logging loses stack context and makes unexpected rebuild failures harder to diagnose.

Suggested fix

```diff
-    except Exception as e:
-        logger.warning("[MBW Dashboard] Failed to rebuild GPS visits for snapshot: %s", e)
+    except Exception as e:
+        logger.warning(
+            "[MBW Dashboard] Failed to rebuild GPS visits for snapshot: %s",
+            e,
+            exc_info=True,
+        )
         return gps_data
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 1269 - 1271, The except block that catches rebuild failures for "Failed to rebuild GPS visits for snapshot" currently logs only the exception message and loses stack context; update that except handler (the except Exception as e block that returns gps_data) to include the traceback by either calling logger.exception("[MBW Dashboard] Failed to rebuild GPS visits for snapshot") or passing exc_info=True to logger.warning so the full stack trace is recorded; leave the non-fatal return of gps_data intact.

commcare_connect/workflow/templates/mbw_monitoring/template.py (1)
349-349: Harden snapshot timestamp key handling across load/save paths.

The load endpoint (MBWSnapshotView, line 1165) returns `snapshot_timestamp` while the save endpoint (MBWSaveSnapshotView, line 1330) returns `timestamp`. The frontend at line 349 correctly reads `snapshot_timestamp` and line 722 reads `timestamp || fallback`, but the inconsistent naming across endpoints invites typos and makes the code fragile to future changes. Recommend defensive fallback chaining to prevent silent adoption of client time:

♻️ Proposed hardening
- setSnapshotTimestamp(data.snapshot_timestamp);
+ setSnapshotTimestamp(data.snapshot_timestamp || data.timestamp || null);

- setSnapshotTimestamp(data.timestamp || new Date().toISOString());
+ var serverTimestamp = data.snapshot_timestamp || data.timestamp;
+ setSnapshotTimestamp(serverTimestamp || new Date().toISOString());

Also applies to: 722-723
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/template.py` at line 349, The snapshot timestamp key is inconsistent between endpoints (MBWSnapshotView returns snapshot_timestamp while MBWSaveSnapshotView returns timestamp), so update the template consumers (where setSnapshotTimestamp(...) is called and the other usage around the timestamp fallback) to defensively read both keys and fall back safely: prefer data.snapshot_timestamp || data.timestamp || existingSnapshot (or undefined) instead of trusting a single key or using client time; adjust the setter invocation (setSnapshotTimestamp) and any save payloads to normalize to one canonical key when sending (e.g., include both snapshot_timestamp and timestamp or map timestamp -> snapshot_timestamp) so both load/save paths remain robust; reference MBWSnapshotView and MBWSaveSnapshotView when changing the mapping/normalization logic.
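The defensive read suggested above boils down to a small accessor that tolerates either key. The helper below is a hypothetical Python illustration of that normalization (the real fix lives in the frontend JavaScript; `read_snapshot_timestamp` is not PR code):

```python
def read_snapshot_timestamp(data, fallback=None):
    """Read a timestamp regardless of which key the endpoint used.

    Mirrors the review's point: the load endpoint returns
    'snapshot_timestamp', the save endpoint returns 'timestamp',
    so consumers should accept both before falling back.
    """
    return data.get("snapshot_timestamp") or data.get("timestamp") or fallback


print(read_snapshot_timestamp({"snapshot_timestamp": "2024-01-01T00:00:00Z"}))
# → 2024-01-01T00:00:00Z
print(read_snapshot_timestamp({"timestamp": "2024-01-02T00:00:00Z"}))
# → 2024-01-02T00:00:00Z
print(read_snapshot_timestamp({}, fallback="client-now"))
# → client-now
```

The fallback argument makes the "silent adoption of client time" explicit: it only happens when the server sent neither key.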
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 1023-1032: The code currently uses computed.get("gps_location")
and discards the cached location; update the gps_location assignment to fall
back to the cached location (e.g., gps_location = computed.get("gps_location")
or row.get("location")) before adding metadata to visits_for_analysis, and apply
the same change in the other block (lines ~1228-1237) so metadata["location"]
uses the fallback.
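A minimal sketch of the suggested fallback; the dict shapes are assumptions that mirror the field names in the comment above (`computed["gps_location"]`, `row["location"]`), not the actual cache rows:

```python
def pick_location(computed, row):
    """Prefer the freshly computed GPS fix, else fall back to the cached row's location."""
    # `or` also treats an empty string as missing, so a blank computed
    # value falls back to the cached location as well.
    return computed.get("gps_location") or row.get("location")


# Computed value wins when present...
print(pick_location({"gps_location": "31.95 35.91 0 5"}, {"location": "cached"}))
# → 31.95 35.91 0 5

# ...but a missing computed value no longer discards the cached one.
print(pick_location({}, {"location": "31.95 35.91 0 5"}))
# → 31.95 35.91 0 5
```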
---
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/template.py`:
- Line 349: The snapshot timestamp key is inconsistent between endpoints
(MBWSnapshotView returns snapshot_timestamp while MBWSaveSnapshotView returns
timestamp), so update the template consumers (where setSnapshotTimestamp(...) is
called and the other usage around the timestamp fallback) to defensively read
both keys and fall back safely: prefer data.snapshot_timestamp || data.timestamp
|| existingSnapshot (or undefined) instead of trusting a single key or using
client time; adjust the setter invocation (setSnapshotTimestamp) and any save
payloads to normalize to one canonical key when sending (e.g., include both
snapshot_timestamp and timestamp or map timestamp -> snapshot_timestamp) so both
load/save paths remain robust; reference MBWSnapshotView and MBWSaveSnapshotView
when changing the mapping/normalization logic.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 1269-1271: The except block that catches rebuild failures for
"Failed to rebuild GPS visits for snapshot" currently logs only the exception
message and loses stack context; update that except handler (the except
Exception as e block that returns gps_data) to include the traceback by either
calling logger.exception("[MBW Dashboard] Failed to rebuild GPS visits for
snapshot") or passing exc_info=True to logger.warning so the full stack trace is
recorded; leave the non-fatal return of gps_data intact.
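As a standalone illustration of the `exc_info=True` suggestion above: with it set, the handler appends the full traceback to the same log record instead of only the message line. The logger name and failing function below are invented for the sketch:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("mbw.sketch")  # illustrative logger name
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.WARNING)


def rebuild_gps_visits():
    raise ValueError("cache row missing coordinates")  # simulated failure


try:
    rebuild_gps_visits()
except Exception as e:
    # exc_info=True attaches the active exception's traceback to the
    # record, so the log shows *where* the rebuild failed.
    logger.warning(
        "[MBW Dashboard] Failed to rebuild GPS visits for snapshot: %s",
        e,
        exc_info=True,
    )

print("Traceback" in buf.getvalue())  # → True
```

`logger.exception(...)` is shorthand for `logger.error(..., exc_info=True)`; either preserves the stack while keeping the path non-fatal.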
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- commcare_connect/workflow/templates/mbw_monitoring/template.py
- commcare_connect/workflow/templates/mbw_monitoring/views.py
Summary by CodeRabbit
New Features
Performance / Reliability
Documentation