Conversation
…resh

- MBW Monitoring Dashboard: overview, GPS analysis, and follow-up rate tabs with per-FLW drill-down, mother-level metrics, and AI task integration
- Tolerance-based HQ case caching (dev: 85%/90min, prod: 98%/30min)
- Dynamic DJANGO_CACHE_TTL (5400s dev, 3600s prod)
- CommCare OAuth auto-refresh using stored refresh tokens
- Mother filter dropdown with name-based filtering
- Bust Cache button with improved visibility

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Covers architecture, data flow, three-tab dashboard details, caching strategy, OAuth integration, frontend state management, and requirements traceability. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ld update

- Plans 1-8: Follow-up rate, overview columns (meter/visit, minute/visit), quality/fraud metrics, missed row, tooltips, metadata fields, GS score wiring, debug logging
- Fix meter/visit: switch gps_location, visit_datetime, app_build_version from path+transform to extractor in pipeline_config
- Fetch GS forms from CCHQ: add list_applications(), discover_form_xmlns(), fetch_gs_forms() for cross-app xmlns discovery
- Replace pipeline-based GS extraction with CCHQ form-based extraction
- Remove dead gs_checklist_percentage and gs_flw_connect_id FieldComputations
- Fix cases_still_eligible color threshold (yellow >= 50, was >= 40)
- Audit creation wizard and monitoring session enhancements

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…spended)

- Overview tab: 3 assessment buttons + notes + filter + task (actions consolidated here)
- GPS/Follow-up tabs: only filter button (removed assessment + task buttons)
- Notes modal: 3 options replacing pass/fail
- Complete modal: assessment summary counts, no overall result required
- Suspend is now a status label only (no Connect API action)
- Backend: VALID_FLW_RESULTS constant, updated validation
- Documentation updated to reflect all changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds detailed explanation of the split between completed visits (Connect API via AnalysisPipeline) and expected visits/mother metadata (CCHQ Form API). Clarifies that CCHQ is the primary data store and Connect receives a forwarded subset. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move MBW monitoring session management from the audit module into the workflow module while keeping the MBW dashboard for visualization. This achieves zero diff on audit/ files vs labs-main.

New files:
- workflow/templates/mbw_monitoring.py: Workflow template with React JSX render_code for FLW selection, audit history, and dashboard launch
- workflow_adapter.py: WorkflowMonitoringSession adapter wrapping WorkflowRunRecord with load/save/complete helpers
- flw_api.py: OpportunityFLWListAPIView with two-source history enrichment (audit sessions + workflow runs)

Updated files:
- views.py: Use workflow_adapter instead of AuditDataAccess for session management, accept both run_id and session_id params
- urls.py: Add opportunity-flws API endpoint
- dashboard.html: Fix probation button contrast (amber-500 -> amber-600), change "Back to Sessions" to "Back to Workflows"
- workflow/templates/__init__.py: Register mbw_monitoring template
- DOCUMENTATION.md: Add Workflow Integration section with full technical specification

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add Last Audit Date and enhanced Open Tasks columns to MBW workflow FLW selection table (render_code + flw_api history enrichment)
- Fix OCS session linking: capture trigger_bot() response, add server-side polling (3x2s), increase frontend polling (8x3s), add Retry Linking button
- Update documentation with table columns and OCS fix details

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review — Powered by Claude (auto-generated review)
```python
        **current_results,
        username: {
            "result": result,
            "notes": notes,
            "assessed_by": assessed_by,
            "assessed_at": datetime.now(timezone.utc).isoformat(),
        },
    }

    # Shallow merge: pass entire flw_results dict
    updated_run = data_access.update_run_state(run_id, {
```
Bug: A race condition in save_flw_result can cause data loss when multiple supervisors save assessment results concurrently, as the read-modify-write operation is not atomic.
Severity: HIGH
Suggested Fix
Implement an atomic update mechanism. This could involve using a database transaction with locking (e.g., SELECT FOR UPDATE) if the underlying data store supports it. Alternatively, implement optimistic locking by adding a version number to the run state. The update operation would then fail if the version has changed since the read, forcing the client to retry the read-modify-write cycle with the latest data.
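For the optimistic-locking option, the retry loop can be sketched as follows. This is a minimal illustration only: `save_with_optimistic_lock`, `get_state`, `compare_and_swap`, and `apply_change` are hypothetical stand-ins for whatever read/write primitives the underlying data store actually exposes, not the real WorkflowDataAccess API.

```python
# Sketch of the optimistic-locking suggestion. All names here are
# hypothetical stand-ins for the store's real read/write primitives.

def save_with_optimistic_lock(get_state, compare_and_swap, apply_change, max_retries=3):
    """Retry a read-modify-write until the versioned write succeeds."""
    for _ in range(max_retries):
        state, version = get_state()           # read current state + version
        new_state = apply_change(dict(state))  # modify a copy in memory
        # Write succeeds only if nobody bumped the version since our read.
        if compare_and_swap(new_state, expected_version=version):
            return new_state
    raise RuntimeError("concurrent update: retries exhausted")
```

If the store is a relational table, the compare-and-swap step maps naturally onto an `UPDATE ... WHERE id = %s AND version = %s` whose affected-row count signals success or failure.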
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: commcare_connect/custom_analysis/mbw_monitoring/workflow_adapter.py#L181-L196
Potential issue: The `save_flw_result` function in `workflow_adapter.py` performs a
non-atomic read-modify-write operation when updating FLW assessment results. It fetches
the current run state, modifies the `flw_results` dictionary in memory, and then writes
the entire dictionary back. If two requests to save results for different FLWs occur
concurrently, they can fetch the same initial state. The second request to complete will
overwrite the changes made by the first, leading to the loss of an assessment result.
The frontend UI facilitates this by allowing multiple supervisors to submit assessments
simultaneously without client-side locking.
@coderabbitai review

✅ Actions performed: Review triggered.
📝 Walkthrough

This PR introduces a comprehensive MBW Monitoring Dashboard system that provides GPS analysis, follow-up metrics tracking, and FLW performance monitoring. It adds GPS distance and time computation utilities, data fetching and caching layers, an SSE streaming pipeline for real-time data, multiple API endpoints including FLW listing and session management, follow-up rate metrics aggregation, and interactive frontend templates. The system integrates pipeline data with CCHQ forms and task management for unified FLW performance visibility.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Browser Client
    participant DashboardView as DashboardView
    participant StreamView as StreamView (SSE)
    participant DataFetchers as DataFetchers
    participant CCHQ as CommCare HQ
    participant LocalAnalysis as GPS/FollowUp Analysis
    Client->>DashboardView: GET /mbw_monitoring/
    DashboardView-->>Client: Render dashboard template
    Client->>StreamView: GET /stream/ (SSE)
    StreamView->>DataFetchers: fetch_opportunity_metadata
    DataFetchers-->>StreamView: opportunity metadata
    StreamView->>DataFetchers: fetch_visit_cases_by_ids + load GPS pipeline
    DataFetchers-->>StreamView: visit cases
    StreamView->>DataFetchers: fetch_registration_forms, fetch_gs_forms
    DataFetchers->>CCHQ: GET forms with xmlns discovery
    CCHQ-->>DataFetchers: form data
    DataFetchers-->>StreamView: forms
    StreamView->>LocalAnalysis: compute_median_meters_per_visit
    LocalAnalysis-->>StreamView: GPS metrics
    StreamView->>LocalAnalysis: build_followup_from_pipeline
    LocalAnalysis-->>StreamView: follow-up metrics
    StreamView->>LocalAnalysis: aggregate_flw_followup, compute_overview_quality_metrics
    LocalAnalysis-->>StreamView: aggregated metrics
    StreamView-->>Client: SSE event with GPS data
    StreamView-->>Client: SSE event with followup data
    StreamView-->>Client: SSE event complete
    Client->>Client: Parse and render dashboard tabs
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 3 failed

❌ Failed checks (2 warnings, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
commcare_connect/tasks/views.py (1)
491-521: ⚠️ Potential issue | 🟡 Minor

Add status validation before passing to `create_task`

The `status` parameter is accepted directly from the request body without validation. Since `create_task` stores it directly in the task data without checks, invalid values could be persisted and break status-based filters or workflows. Validate against the allowed status set documented in `TaskRecord.status`.

Proposed validation
```diff
     status = body.get("status", "investigating")
+    allowed_statuses = {
+        "investigating",
+        "flw_action_in_progress",
+        "flw_action_completed",
+        "review_needed",
+        "closed",
+    }
+    if status not in allowed_statuses:
+        return JsonResponse(
+            {"success": False, "error": f"Invalid status: {status}"},
+            status=400,
+        )
     if not username:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/tasks/views.py` around lines 491 - 521, Validate the incoming status in the view before calling TaskDataAccess.create_task: check the request-derived status against the allowed statuses defined on TaskRecord.status (or a canonical list of valid statuses used by TaskRecord), and if it’s not in that set return a 400 JsonResponse with an error message; only pass the validated status to create_task (reference symbols: create_task, TaskRecord.status, TaskDataAccess.create_task, status).
🧹 Nitpick comments (16)
commcare_connect/templates/tasks/task_create_edit.html (1)
1307-1318: Add user feedback when retry finds no session or fails.
Currently, a retry with no session is silent; consider a toast to confirm the check and outcome.

Suggested minimal UX feedback
```diff
 async retrySessionLink(item) {
     try {
         const response = await fetch(`/tasks/${this.taskId}/ai/sessions/`);
         const data = await response.json();
         if (data.success && data.sessions?.length > 0) {
             // Reload page to get updated session data
             window.location.reload();
+            return;
         }
+        this.showToastMessage('Session still unavailable. Please try again later.');
     } catch (err) {
         console.error('Retry linking failed:', err);
+        this.showToastMessage('Retry failed. Please try again.');
     }
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/templates/tasks/task_create_edit.html` around lines 1307 - 1318, The retrySessionLink method currently reloads silently or logs errors; update retrySessionLink to provide user feedback: show a success toast (or notification) when a session is found (before calling window.location.reload()), show an informative toast when the fetch succeeds but data.success is false or data.sessions is empty (e.g., "No sessions found for task {this.taskId}"), and show an error toast when the catch block is triggered (include the error message). Use the existing toast/notification utility in the codebase (or a simple alert fallback) and keep the fetch and JSON parsing logic in retrySessionLink intact while adding these notifications.

commcare_connect/custom_analysis/mbw_monitoring/workflow_adapter.py (2)
137-154: Consider using a context manager or try/finally for resource cleanup.

The `data_access.close()` call could be missed if an exception occurs between initialization and the explicit close. Using a context manager pattern or try/finally would ensure cleanup.

♻️ Example with try/finally
```diff
 def load_monitoring_run(request, run_id):
     try:
         data_access = WorkflowDataAccess(request=request)
-        run = data_access.get_run(int(run_id))
-        data_access.close()
-
-        if not run:
-            return None
-
-        # Verify it's a monitoring run (has selected_flws in state)
-        state = run.data.get("state", {})
-        if "selected_flws" not in state:
-            logger.warning(f"Run {run_id} is not a monitoring session (no selected_flws)")
-            return None
-
-        return WorkflowMonitoringSession(run)
+        try:
+            run = data_access.get_run(int(run_id))
+            if not run:
+                return None
+
+            state = run.data.get("state", {})
+            if "selected_flws" not in state:
+                logger.warning(f"Run {run_id} is not a monitoring session (no selected_flws)")
+                return None
+
+            return WorkflowMonitoringSession(run)
+        finally:
+            data_access.close()
     except Exception as e:
         logger.warning(f"Failed to load monitoring run {run_id}: {e}")
         return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/workflow_adapter.py` around lines 137 - 154, The WorkflowDataAccess instance created in WorkflowMonitoringSession loading (use of WorkflowDataAccess, get_run, data_access.close) may not be closed if an exception occurs; wrap the data access usage in a try/finally (or implement/use a context manager on WorkflowDataAccess) so that data_access.close() is always called even on exceptions, keeping existing logic that checks run, state (selected_flws) and returns WorkflowMonitoringSession(run) or None; ensure the finally block calls data_access.close() only if data_access was successfully created.
157-206: Consider validating `result` against `VALID_FLW_RESULTS`.

The `result` parameter is not validated before being saved. Invalid results could be persisted, which might cause issues downstream.

♻️ Suggested validation
```diff
 def save_flw_result(request, run_id, username, result, notes, assessed_by):
     """Save a FLW assessment result to the workflow run state.
     ...
     """
+    if result is not None and result not in VALID_FLW_RESULTS:
+        logger.warning(f"Invalid FLW result '{result}' for user {username}")
+        return None
+
     try:
         data_access = WorkflowDataAccess(request=request)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/workflow_adapter.py` around lines 157 - 206, The save_flw_result function currently saves the provided result without checking it; validate the result against the VALID_FLW_RESULTS set/constant at the start of save_flw_result (before building updated_results) and handle invalid values by logging a clear error (use logger.error) and returning None (or raising a controlled exception) instead of persisting; ensure the check references VALID_FLW_RESULTS and keep the existing flow that uses WorkflowDataAccess.get_run, update_run_state, and returns a WorkflowMonitoringSession on success.

commcare_connect/labs/integrations/commcare/api_client.py (1)
257-264: Use `logging.exception` for better error diagnostics.

When logging the HTTPStatusError on line 261, using `logger.exception()` instead of `logger.error()` will include the full stack trace, making debugging easier.

♻️ Suggested fix
```diff
 except httpx.HTTPStatusError as e:
     if e.response.status_code == 404:
         logger.warning(f"Batch {batch_num}: some cases not found")
     else:
-        logger.error(f"Bulk fetch batch {batch_num} failed: {e}")
+        logger.exception(f"Bulk fetch batch {batch_num} failed: {e}")
 except httpx.TimeoutException:
     logger.warning(f"Timeout on bulk fetch batch {batch_num}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/labs/integrations/commcare/api_client.py` around lines 257 - 264, In the except block inside the bulk fetch logic (the handler catching httpx.HTTPStatusError in commcare_connect.labs.integrations.commcare.api_client.py), replace the logger.error call in the non-404 branch with logger.exception so the full stack trace is recorded; keep the 404 branch as-is (logger.warning) and ensure the exception object (e) or a descriptive message is passed to logger.exception to preserve context for debugging.

commcare_connect/workflow/templates/mbw_monitoring.py (1)
24-277: Consider the maintainability tradeoff of hardcoded URLs in embedded JSX.

The render code contains hardcoded URLs (lines 40, 83, 150, 247) such as `/custom_analysis/mbw_monitoring/api/opportunity-flws/`. While this is a common pattern for embedded JSX strings where Django's URL resolution isn't available, changes to URL routing will require updates here.

Additionally, the `useEffect` fetch on lines 40-55 doesn't include an AbortController for cleanup, which could cause React warnings if the component unmounts before the fetch completes. This is a minor concern given the usage context.
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring.py` around lines 24 - 277, The embedded JSX in WorkflowUI hardcodes multiple URLs and performs a fetch inside React.useEffect without cancellation; update the component to accept URL(s) or a base/api endpoints prop (e.g., pass an api object or baseUrl into WorkflowUI via the surrounding renderer) and replace hardcoded strings like '/custom_analysis/mbw_monitoring/api/opportunity-flws/', '/custom_analysis/mbw_monitoring/?run_id=' and '/tasks/' with values from that prop (locate usages in WorkflowUI: the useEffect fetch, the run_id redirects, and task links), and modify the useEffect fetch to create an AbortController, pass controller.signal to fetch, handle aborts in the catch, and call controller.abort() in the effect cleanup to avoid setting state after unmount.

commcare_connect/templates/custom_analysis/mbw_monitoring/gps_analysis.html (1)
463-496: Consider removing or conditionalizing debug `console.log` statements.

Multiple `console.log` statements are present for debugging (lines 463, 470, 496). For production code, consider removing these or wrapping them in a debug flag to reduce console noise.
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/templates/custom_analysis/mbw_monitoring/gps_analysis.html` around lines 463 - 496, The SSE onmessage handler contains multiple console.log calls (e.g., the ones logging '[MBW GPS] Starting SSE connection:', the parsed 'SSE event:' and the final loaded FLWs message) that should be removed or gated behind a debug flag; add a component-level boolean (e.g., this.debug or ENABLE_MBW_GPS_DEBUG) and replace each console.log in the EventSource setup and in the onmessage handler with a conditional log (if (this.debug) { console.log(...); }) or remove them entirely so production runs don't emit debug output, touching the EventSource creation and its onmessage handler where this.eventSource and related variables (this.loadingMessage, this.flwSummaries, this.summary, this.opportunityId) are referenced.

commcare_connect/custom_analysis/mbw_monitoring/views.py (4)
345-360: Redundant nested try/except structure.

The outer `try/except` (lines 345, 359-360) wraps the loop, which already has its own `try/except` per iteration. If the inner handler catches all exceptions, the outer one will never trigger. Consider removing the outer try/except or consolidating error handling.

♻️ Proposed fix
```diff
-    try:
-        for opp_id in opportunity_ids:
-            try:
-                metadata = fetch_opportunity_metadata(access_token, opp_id)
-                ...
-            except Exception as e:
-                logger.warning(f"[MBW Dashboard] Registration form fetch failed for opp {opp_id}: {e}")
-        logger.info(f"[MBW Dashboard] Fetched {len(registration_forms)} registration forms")
-    except Exception as e:
-        logger.warning(f"[MBW Dashboard] Registration form fetch failed: {e}")
+    for opp_id in opportunity_ids:
+        try:
+            metadata = fetch_opportunity_metadata(access_token, opp_id)
+            ...
+        except Exception as e:
+            logger.warning(f"[MBW Dashboard] Registration form fetch failed for opp {opp_id}: {e}")
+    logger.info(f"[MBW Dashboard] Fetched {len(registration_forms)} registration forms")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/views.py` around lines 345 - 360, The outer try/except around the opportunity_ids loop is redundant because each iteration already catches exceptions; remove the outer try/except and unindent the logger.info call so it runs after the loop completes. Specifically, edit the block that iterates over opportunity_ids (which calls fetch_opportunity_metadata and fetch_registration_forms and appends to registration_forms) to keep the per-iteration except handling and eliminate the outer exception handler, leaving logger.warning inside the inner except and logger.info after the loop.
432-437: Move import to top of file.

`Counter` from `collections` is a standard library import and should be at the top of the file with other imports, not inside the method.

♻️ Proposed fix
Add to imports at top of file:

```diff
+from collections import Counter
 from collections.abc import Generator
```

Then remove line 432:

```diff
-    from collections import Counter
```
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/views.py` around lines 432 - 437, The local import of Counter inside the view should be moved to the module imports: add "from collections import Counter" to the top import block and remove the inline "from collections import Counter" statement near where form_name_counts is computed (the block using all_pipeline_rows and form_name_counts). Keep the rest of the logic (the generator expression and logger.info) unchanged so the code now uses the top-level Counter import.
188-612: SSE streaming method is very long (~425 lines) - consider decomposition.

The `stream_data` method handles too many responsibilities: validation, pipeline loading, FLW resolution, GPS analysis, form fetching, follow-up metrics, overview building, and response assembly. This makes it hard to test and maintain.

Consider extracting logical sections into helper methods (e.g., `_load_pipeline_data`, `_resolve_flw_names`, `_compute_gps_metrics`, `_compute_followup_metrics`, `_build_overview`).
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/views.py` around lines 188 - 612, The stream_data method is too large and should be decomposed into focused helper methods; refactor stream_data by extracting the major logical blocks into private helpers (for example _load_pipeline_data(request, opportunity_ids, bust_cache) to encapsulate AnalysisPipeline/stream_pipeline and returning merged rows/metadata, _resolve_flw_names(access_token, opportunity_ids, session_flw_filter) for fetch_flw_names logic, _compute_gps_metrics(pipeline_rows, active_usernames, flw_names, start_date, end_date) wrapping analyze_gps_metrics/filter_visits_by_date/build_result_from_analyzed_visits and returning gps_result/gps_data, _fetch_forms(request, access_token, opportunity_ids, bust_cache) to return registration_forms and gs_forms, _compute_followup_metrics(all_pipeline_rows, active_usernames, registration_forms, current_date, flw_names) to call build_followup_from_pipeline, aggregate_flw_followup, aggregate_visit_status_distribution and return followup_data plus supporting maps (mother_metadata, parity_by_mother, etc.), and _build_overview(...) that consumes those results (compute_overview_quality_metrics, count_mothers_from_pipeline, meters/minutes per visit, eligible/cases logic) to produce overview_data; keep send_sse_event calls and exception handling in stream_data (or have helpers raise exceptions) and ensure signatures return the same data structures used later (e.g., gps_data, followup_data, overview_data) so external behavior and SSE event sequencing remain unchanged while improving testability and readability.
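The decomposition suggested for `stream_data` can be sketched as a thin orchestrator over small, independently testable stages. This is an illustrative skeleton only — helper names follow the review's suggestions, and the bodies are placeholders, not the real pipeline or CCHQ logic:

```python
# Skeleton of the proposed decomposition: the generator only sequences
# SSE events, while each stage is a small, testable method. Bodies are
# placeholders standing in for the real pipeline/CCHQ calls.
import json


class MonitoringStream:
    def _load_pipeline_data(self, opportunity_ids):
        # Placeholder: would run the analysis pipeline / fetch Connect data.
        return [{"username": "flw1", "meters": 120}]

    def _compute_gps_metrics(self, rows):
        # Placeholder: would aggregate per-FLW GPS stats.
        return {"flws": len({r["username"] for r in rows})}

    def _compute_followup_metrics(self, rows):
        # Placeholder: would build follow-up rate aggregates.
        return {"visits": len(rows)}

    def _sse_event(self, name, payload):
        # Format one Server-Sent Events message.
        return f"event: {name}\ndata: {json.dumps(payload)}\n\n"

    def stream_data(self, opportunity_ids):
        rows = self._load_pipeline_data(opportunity_ids)
        yield self._sse_event("gps", self._compute_gps_metrics(rows))
        yield self._sse_event("followup", self._compute_followup_metrics(rows))
        yield self._sse_event("complete", {})
```

With this shape, each `_compute_*` helper can be unit-tested against fixture rows without standing up the SSE view at all.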
511-517: Complex nested comprehension reduces readability.

This line is difficult to read and maintain. Consider extracting into a helper function for clarity.
♻️ Proposed refactor
```diff
+def _count_eligible_mothers(flw_cases: list[dict], mother_metadata: dict) -> int:
+    mother_ids = {
+        c.get("properties", {}).get("mother_case_id", "")
+        for c in flw_cases
+        if c.get("properties", {}).get("mother_case_id")
+    }
+    return sum(
+        1 for mid in mother_ids
+        if mother_metadata.get(mid, {}).get("properties", {}).get("eligible_full_intervention_bonus") == "1"
+    )
+
 eligible_mothers_by_flw = {}
 for flw_username, flw_cases in visit_cases_by_flw.items():
-    mother_ids = {c.get("properties", {}).get("mother_case_id", "") for c in flw_cases if c.get("properties", {}).get("mother_case_id")}
-    eligible_count = sum(
-        1 for mid in mother_ids
-        if mother_metadata.get(mid, {}).get("properties", {}).get("eligible_full_intervention_bonus") == "1"
-    )
-    eligible_mothers_by_flw[flw_username] = eligible_count
+    eligible_mothers_by_flw[flw_username] = _count_eligible_mothers(flw_cases, mother_metadata)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/views.py` around lines 511 - 517, The nested comprehension that computes eligible_count inside the loop over visit_cases_by_flw is hard to read; extract the logic into a small helper (e.g., count_eligible_mothers_for_flw) that takes flw_cases and mother_metadata, builds the set of mother_ids from each case's properties["mother_case_id"], then iterates those ids and returns the count where mother_metadata[id]["properties"]["eligible_full_intervention_bonus"] == "1"; replace the current inline set and eligible_count computation with a call to that helper and assign into eligible_mothers_by_flw[flw_username].commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py (1)
85-85: Use timezone-aware `datetime.now(timezone.utc)` instead of deprecated `datetime.utcnow()`.

`datetime.utcnow()` is deprecated since Python 3.12. Use `datetime.now(timezone.utc)` for clarity and forward compatibility.

♻️ Proposed fix
```diff
+from datetime import datetime, timezone
+
 # In _validate_hq_cache:
-    cached_at = datetime.fromisoformat(cached_at_str)
-    age_minutes = (datetime.utcnow() - cached_at).total_seconds() / 60
+    cached_at = datetime.fromisoformat(cached_at_str).replace(tzinfo=timezone.utc)
+    age_minutes = (datetime.now(timezone.utc) - cached_at).total_seconds() / 60
```

Also update lines 225 and 291:

```diff
-    "cached_at": datetime.utcnow().isoformat(),
+    "cached_at": datetime.now(timezone.utc).isoformat(),
```
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py` at line 85, Replace deprecated datetime.utcnow() usages with timezone-aware calls: use datetime.now(timezone.utc) wherever datetime.utcnow() appears in this module (specifically the age calculation that computes age_minutes and the other occurrences noted). Ensure timezone is imported (e.g., add "from datetime import timezone" or reference datetime.timezone) so cached_at and now are both timezone-aware to avoid naive/aware dt issues. Update all instances (including the other two occurrences referenced) so the code consistently uses datetime.now(timezone.utc).commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html (2)
257-257: Unused variable `dataApiUrl`.

The `dataApiUrl` property is defined but never used in the component. Remove it to avoid confusion.

🧹 Proposed fix
```diff
 // URLs
 streamApiUrl: '{{ stream_api_url|default:"" }}',
-dataApiUrl: '{{ data_api_url|default:"" }}',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html` at line 257, Remove the unused template property dataApiUrl from the component's config (the line setting dataApiUrl: '{{ data_api_url|default:"" }}',) since it is never referenced; locate the dataApiUrl declaration in the template and delete that property so the component has no leftover unused config.
359-364: SSE error handler doesn't attempt reconnection.

When an SSE connection fails, the handler closes the connection and shows an error. For transient network issues, consider implementing automatic reconnection with exponential backoff.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html` around lines 359 - 364, The current SSE onerror handler (this.eventSource.onerror) just closes the connection and sets loadError/loading; change it to attempt automatic reconnection with exponential backoff instead: introduce a reconnection routine (e.g., reconnectSSE or connectEventSource) and state (retryCount, maxRetries, baseDelayMs) and on error increment retryCount, compute delay = baseDelayMs * 2^retryCount (with jitter and cap), close and null the old this.eventSource, schedule reconnect via setTimeout to call connectEventSource, and only set loadError/loading to true/failure after maxRetries are exceeded; update the handler to clear retryCount on successful open (this.eventSource.onopen) so transient failures auto-recover.commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md (3)
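The backoff schedule referenced above can be sketched in isolation. Shown in Python for clarity; the browser-side handler would apply the same delays via `setTimeout`, and the parameter names and defaults here are assumptions, not values from the codebase:

```python
# Illustrative exponential-backoff schedule: delay(n) = min(base * 2^n, cap),
# optionally with full jitter. Defaults are assumed, not from the codebase.
import random


def backoff_delays(base_ms=1000, max_retries=5, cap_ms=30000, jitter=False):
    """Return the list of reconnect delays (in ms) before giving up."""
    delays = []
    for attempt in range(max_retries):
        delay = min(base_ms * (2 ** attempt), cap_ms)
        if jitter:
            delay = random.uniform(0, delay)  # full jitter spreads retries out
        delays.append(delay)
    return delays
```

For example, `backoff_delays(1000, 5)` yields delays of 1000, 2000, 4000, 8000, and 16000 ms before the handler would surface the error to the user.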
50-50: Add language identifiers to fenced code blocks.

Markdownlint flags these blocks for missing language tags, which can reduce readability and tooling support.
✏️ Example fix
````diff
-```
+```text
 User Browser
 |
 ├── GET /custom_analysis/mbw_monitoring/
````

````diff
-```
+```text
 ┌─────────────────┐      ┌─────────────────────┐      ┌──────────────────┐
 │ Dashboard View  │────>│  Stream View (SSE)   │────>│ AnalysisPipeline │
````

````diff
-```
+```text
 1. User navigates to /labs/workflow/
 2. Clicks "Create Workflow" → selects "MBW Monitoring" template
````

````diff
-```
+```text
 Pipeline Visit Forms (Connect API)
 │
 ├── username ──────────> FLW Names (Connect API)
````

Also applies to: 84-84, 269-269, 565-565
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md` at line 50, The fenced code blocks in DOCUMENTATION.md are missing language identifiers; update each triple-backtick block (the examples showing ASCII diagrams and numbered steps such as the User Browser/GET /custom_analysis/mbw_monitoring/, the ASCII flow diagrams, and the numbered workflow steps) to include a language tag like `text` (e.g., change ``` to ```text) so markdownlint stops flagging them and tooling renders them correctly; apply this consistently to the other similar blocks referenced in the comment.
443-457: Surround tables with blank lines.

Markdownlint warns that these tables should be separated by blank lines for consistent rendering.
✏️ Example fix
```diff
-**FLW Table Columns**:
-| Column | Data Source | Description |
+**FLW Table Columns**:
+
+| Column | Data Source | Description |
 |--------|-----------|-------------|
 | FLW Name | Connect API | Display name with avatar |
```

```diff
-**FLW Table Columns**:
-| Column | Description |
+**FLW Table Columns**:
+
+| Column | Description |
 |--------|-----------|
 | FLW Name | With avatar initial |
```

```diff
-**FLW Table Columns**:
-| Column | Description |
+**FLW Table Columns**:
+
+| Column | Description |
 |--------|-----------|
 | FLW Name | Color-coded avatar (green/yellow/red based on follow-up rate) |
```

Also applies to: 471-480, 502-510
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md` around lines 443 - 457, The Markdown tables in DOCUMENTATION.md (e.g., the table starting with "FLW Name" and other tables containing columns like "GS Score", "Follow-up Rate", "Revisit Dist.", etc.) need blank lines above and below them to satisfy markdownlint; edit the surrounding text in the mbw_monitoring documentation to insert a single empty line before the table start and a single empty line after the table end for the block shown and also apply the same fix to the other table blocks referenced (lines corresponding to 471-480 and 502-510).
796-799: Avoid weak intensifier ("very").

LanguageTool suggests replacing "very poor performance" with a more specific descriptor (e.g., "severely poor performance" or a concrete criterion).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md` around lines 796 - 799, Update the "Suspended" row description for the `suspended` status to avoid the weak intensifier "very"; replace "very poor performance" with a clearer, more specific phrase such as "severely poor performance" or "consistent/severe performance failures (e.g., repeated non-compliance or <specific measurable criterion>)" and ensure the sentence reads naturally with the new wording while keeping the rest of the table row intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py`:
- Around line 222-227: The cache is recording the requested ID count rather than
the number of cases actually fetched, which can mask missing records; update the
cache write in the function that builds cache_data (the block that currently
sets "cases": all_cases, "cached_count": len(unique_ids), "cached_at": ...) to
set "cached_count" to len(all_cases) instead, and apply the identical change in
the other similar function fetch_mother_cases_by_ids so both cached_count values
reflect the actual fetched case count used for cache validation.
- Around line 443-451: The fallback currently clears the entire cache via
cache.clear() and sets cleared = -1; instead, avoid clearing all cached data in
production by removing both cache.clear() calls in the fallback and exception
paths, set cleared = 0 to indicate nothing was removed, and log the failure with
context using logger.warning (include exception details in the except block) so
callers of this function (watch for the cleared variable) know no keys were
evicted rather than wiping the whole cache.
- Around line 500-506: The registration form filter currently checks
f.get("@name") but the submission object nests the form under the "form" key;
update the client-side fallback that builds forms = [...] (using
client.fetch_forms()) to access the name via f.get("form", {}).get("@name") so
it matches the Gold Standard and registration form filters and the CCHQ Form API
v1 structure.
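The nesting fix described above can be sketched as follows (the submission shape follows the CCHQ Form API v1 referenced in the comment; the helper name and sample data are illustrative, not the real symbols in data_fetchers.py):

```python
def filter_forms_by_name(forms, target_name):
    """Keep only submissions whose nested form @name matches target_name.

    CCHQ Form API v1 nests the form body under the "form" key, so the
    name lives at submission["form"]["@name"], not at the top level.
    """
    return [
        f for f in forms
        if (f.get("form") or {}).get("@name") == target_name
    ]


submissions = [
    {"form": {"@name": "Registration"}},
    {"form": {"@name": "Gold Standard Visit Checklist"}},
    {"@name": "Registration"},  # top-level @name must NOT match
]
matched = filter_forms_by_name(submissions, "Registration")
```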
In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md`:
- Around line 458-460: The wording repeats the adverb "only" (e.g., "Overview
tab only — other tabs have Filter only" and "Assessment buttons (monitoring
session only)"); tighten for readability by removing one or both "only"
instances and rephrase succinctly — for example change "Overview tab only —
other tabs have Filter only" to "Overview tab — other tabs have Filter" and/or
"Assessment buttons (monitoring session only)" to "Assessment buttons
(monitoring session)"; update the DOCUMENTATION.md text referencing "Overview
tab", "Assessment buttons", and `flw_results` accordingly.
- Around line 595-599: The doc mentions "FieldComputations" in the
MBW_GPS_PIPELINE_CONFIG section; verify whether this is the correct identifier
(matches the class/variable name in pipeline_config.py) or if it should be
written as human-readable prose ("Field computations"); update the
DOCUMENTATION.md entry so the spelling/casing matches the actual symbol in
pipeline_config.py (MBW_GPS_PIPELINE_CONFIG) or use the natural-language form
consistently across the docs.
In `@commcare_connect/custom_analysis/mbw_monitoring/flw_api.py`:
- Line 162: The task status check in flw_api.py uses a hardcoded comparison
(task.status != "closed") which misses other non-open statuses; update the
condition in the relevant function (where task.status is inspected) to use a set
of closed statuses (e.g., closed_statuses = {"closed", "resolved"} or reuse the
existing closed_statuses from views.py) and change the conditional to test
membership (task.status not in closed_statuses) so it consistently treats
"resolved" (and any other closed values) as closed.
In `@commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py`:
- Line 371: The variable grace_cutoff defined in aggregate_mother_metrics
(grace_cutoff = current_date - timedelta(days=GRACE_PERIOD_DAYS)) is unused;
either remove that line or apply it to the intended filtering logic. If the
intent was to exclude records older than the grace period, use grace_cutoff in
the relevant filter/comparison inside aggregate_mother_metrics (e.g., compare
record dates against grace_cutoff) otherwise delete the grace_cutoff declaration
to avoid dead code.
- Around line 445-453: The VISIT_CREATE_FLAGS constant and its preceding comment
are incorrectly indented and currently reside at module level while they were
intended to be outside (or inside) the _build_visit_details scope; fix by moving
or re-indenting so that the comment and the VISIT_CREATE_FLAGS dictionary are
declared at the correct scope (either fully inside _build_visit_details if they
are local to that function, or entirely at module level with no stray comment
tied to the function) and ensure the function _build_visit_details ends with its
return details line followed by proper dedentation before the VISIT_CREATE_FLAGS
definition.
- Around line 866-873: The function compute_overview_quality_metrics has an
unused parameter current_date; remove current_date from the function signature
and its type annotations in compute_overview_quality_metrics, and update every
caller (e.g., the view that calls compute_overview_quality_metrics around the
referenced call site) to stop passing that argument; ensure any tests or imports
that referenced the old signature are updated accordingly so the function is
called with visit_cases_by_flw, mother_metadata, parity_by_mother,
anc_date_by_mother and pnc_date_by_mother only.
- Line 121: Remove the unused local variable assignment visit_type =
props.get("visit_type", "") in followup_analysis.py; locate the assignment (the
visit_type variable and its props.get call) and delete that line so the function
no longer creates an unused variable.
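If the unused `grace_cutoff` flagged above was in fact meant to bound the follow-up window, applying it might look like this (the GRACE_PERIOD_DAYS value and record shape are illustrative stand-ins, not the real symbols in followup_analysis.py):

```python
from datetime import date, timedelta

GRACE_PERIOD_DAYS = 7  # illustrative; the real constant lives in followup_analysis.py


def filter_within_grace(records, current_date):
    """Keep only records dated on or after current_date minus the grace period."""
    grace_cutoff = current_date - timedelta(days=GRACE_PERIOD_DAYS)
    return [r for r in records if r["date"] >= grace_cutoff]


records = [
    {"date": date(2024, 1, 1)},  # older than the grace window, dropped
    {"date": date(2024, 1, 9)},  # inside the window, kept
]
kept = filter_within_grace(records, current_date=date(2024, 1, 10))
```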
In
`@commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html`:
- Around line 270-276: The computeOverallRate function currently uses totalDue
as the denominator, which is incorrect; update computeOverallRate (involving
flwSummaries and fields completed_total, due_total, missed_total) to compute the
overall completion percentage using the correct denominator—either sum of total
visits (totalCompleted + totalDue + totalMissed) or by calculating a weighted
average using each entry's completion_rate weighted by its total_visits—so the
returned value is Math.round((totalCompleted / (totalCompleted + totalDue +
totalMissed)) * 100) or the equivalent weighted-average approach using
completion_rate and total_visits.
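The corrected denominator logic can be sketched in Python for clarity (the template function itself is JavaScript; the field names mirror the `flwSummaries` entries named above):

```python
def compute_overall_rate(flw_summaries):
    """Overall completion % over all visits, not just those still due."""
    total_completed = sum(s["completed_total"] for s in flw_summaries)
    total_due = sum(s["due_total"] for s in flw_summaries)
    total_missed = sum(s["missed_total"] for s in flw_summaries)
    denominator = total_completed + total_due + total_missed
    if denominator == 0:
        return 0  # guard against an empty summary list
    return round(total_completed / denominator * 100)


rate = compute_overall_rate([
    {"completed_total": 8, "due_total": 1, "missed_total": 1},
    {"completed_total": 2, "due_total": 5, "missed_total": 3},
])
```

With 10 completed visits out of 20 total, the overall rate is 50% rather than the inflated figure the due-only denominator would give.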
In `@commcare_connect/templates/custom_analysis/mbw_monitoring/gps_analysis.html`:
- Around line 179-182: The percentage expression can divide by zero when
flw.total_visits is 0; update the x-text for the percentage to guard against
that by checking flw.total_visits before computing the ratio (e.g., use a
ternary that returns "0%" or an empty string when flw.total_visits is falsy),
referencing the same flw.visits_with_gps and flw.total_visits properties used in
the current <span x-text="..."> so the displayed percentage never becomes NaN.
In `@config/urls.py`:
- Around line 47-50: The URL change removed /custom_analysis/mbw/ and broke
external links; add a compatibility redirect and update hardcoded references:
add a new path("custom_analysis/mbw/",
RedirectView.as_view(pattern_name="mbw:index", permanent=True)) or equivalent
redirect in the same URLconf where path(... "custom_analysis/mbw_monitoring/"
...) is declared to forward requests to the new namespace; update the template
reference in commcare_connect/custom_analysis/mbw/views.py (the template name
string currently "custom_analysis/mbw/gps_analysis.html") to the new path
"custom_analysis/mbw_monitoring/gps_analysis.html" (or the correct new template
location), and update all occurrences in
commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md that reference
custom_analysis/mbw/* (e.g., gps_analysis.py, gps_utils.py) to the new
custom_analysis/mbw_monitoring/* paths so links and imports match the new route.
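The compatibility redirect might be wired up like this (a sketch; the `mbw:index` pattern name is an assumption — use whatever name/namespace the new mbw_monitoring URLconf actually declares):

```python
# config/urls.py (sketch)
from django.urls import path
from django.views.generic.base import RedirectView

urlpatterns = [
    # ... existing patterns, including custom_analysis/mbw_monitoring/ ...
    # Permanent redirect so old /custom_analysis/mbw/ links keep working.
    path(
        "custom_analysis/mbw/",
        RedirectView.as_view(pattern_name="mbw:index", permanent=True),
    ),
]
```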
---
Outside diff comments:
In `@commcare_connect/tasks/views.py`:
- Around line 491-521: Validate the incoming status in the view before calling
TaskDataAccess.create_task: check the request-derived status against the allowed
statuses defined on TaskRecord.status (or a canonical list of valid statuses
used by TaskRecord), and if it’s not in that set return a 400 JsonResponse with
an error message; only pass the validated status to create_task (reference
symbols: create_task, TaskRecord.status, TaskDataAccess.create_task, status).
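A sketch of the validation step (the status values and helper are illustrative; the real allowed set should be derived from TaskRecord.status choices rather than hardcoded):

```python
# Hypothetical set; derive from TaskRecord.status choices in practice.
VALID_TASK_STATUSES = {"open", "in_progress", "closed", "resolved"}


def validate_status(status):
    """Return (ok, error) so the view can emit a 400 JsonResponse on bad input."""
    if status not in VALID_TASK_STATUSES:
        return False, f"Invalid status: {status!r}"
    return True, None


ok, err = validate_status("in_progress")
bad_ok, bad_err = validate_status("banana")
```

Only a validated status would then be passed to `TaskDataAccess.create_task`.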
---
Nitpick comments:
In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py`:
- Line 85: Replace deprecated datetime.utcnow() usages with timezone-aware
calls: use datetime.now(timezone.utc) wherever datetime.utcnow() appears in this
module (specifically the age calculation that computes age_minutes and the other
occurrences noted). Ensure timezone is imported (e.g., add "from datetime import
timezone" or reference datetime.timezone) so cached_at and now are both
timezone-aware to avoid naive/aware dt issues. Update all instances (including
the other two occurrences referenced) so the code consistently uses
datetime.now(timezone.utc).
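The timezone-aware replacement looks like this:

```python
from datetime import datetime, timezone

# datetime.utcnow() returns a naive value and is deprecated in Python 3.12+;
# datetime.now(timezone.utc) is aware, so cached_at/now comparisons are safe.
cached_at = datetime.now(timezone.utc)
now = datetime.now(timezone.utc)
age_minutes = (now - cached_at).total_seconds() / 60
```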
In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md`:
- Line 50: The fenced code blocks in DOCUMENTATION.md are missing language
identifiers; update each triple-backtick block (the examples showing ASCII
diagrams and numbered steps such as the User Browser/GET
/custom_analysis/mbw_monitoring/, the ASCII flow diagrams, and the numbered
workflow steps) to include a language tag like `text` (e.g., change ``` to
```text) so markdownlint stops flagging them and tooling renders them correctly;
apply this consistently to the other similar blocks referenced in the comment.
- Around line 443-457: The Markdown tables in DOCUMENTATION.md (e.g., the table
starting with "FLW Name" and other tables containing columns like "GS Score",
"Follow-up Rate", "Revisit Dist.", etc.) need blank lines above and below them
to satisfy markdownlint; edit the surrounding text in the mbw_monitoring
documentation to insert a single empty line before the table start and a single
empty line after the table end for the block shown and also apply the same fix
to the other table blocks referenced (lines corresponding to 471-480 and
502-510).
- Around line 796-799: Update the "Suspended" row description for the
`suspended` status to avoid the weak intensifier "very"; replace "very poor
performance" with a clearer, more specific phrase such as "severely poor
performance" or "consistent/severe performance failures (e.g., repeated
non-compliance or <specific measurable criterion>)" and ensure the sentence
reads naturally with the new wording while keeping the rest of the table row
intact.
In `@commcare_connect/custom_analysis/mbw_monitoring/views.py`:
- Around line 345-360: The outer try/except around the opportunity_ids loop is
redundant because each iteration already catches exceptions; remove the outer
try/except and unindent the logger.info call so it runs after the loop
completes. Specifically, edit the block that iterates over opportunity_ids
(which calls fetch_opportunity_metadata and fetch_registration_forms and appends
to registration_forms) to keep the per-iteration except handling and eliminate
the outer exception handler, leaving logger.warning inside the inner except and
logger.info after the loop.
- Around line 432-437: The local import of Counter inside the view should be
moved to the module imports: add "from collections import Counter" to the top
import block and remove the inline "from collections import Counter" statement
near where form_name_counts is computed (the block using all_pipeline_rows and
form_name_counts). Keep the rest of the logic (the generator expression and
logger.info) unchanged so the code now uses the top-level Counter import.
- Around line 188-612: The stream_data method is too large and should be
decomposed into focused helper methods; refactor stream_data by extracting the
major logical blocks into private helpers (for example
_load_pipeline_data(request, opportunity_ids, bust_cache) to encapsulate
AnalysisPipeline/stream_pipeline and returning merged rows/metadata,
_resolve_flw_names(access_token, opportunity_ids, session_flw_filter) for
fetch_flw_names logic, _compute_gps_metrics(pipeline_rows, active_usernames,
flw_names, start_date, end_date) wrapping
analyze_gps_metrics/filter_visits_by_date/build_result_from_analyzed_visits and
returning gps_result/gps_data, _fetch_forms(request, access_token,
opportunity_ids, bust_cache) to return registration_forms and gs_forms,
_compute_followup_metrics(all_pipeline_rows, active_usernames,
registration_forms, current_date, flw_names) to call
build_followup_from_pipeline, aggregate_flw_followup,
aggregate_visit_status_distribution and return followup_data plus supporting
maps (mother_metadata, parity_by_mother, etc.), and _build_overview(...) that
consumes those results (compute_overview_quality_metrics,
count_mothers_from_pipeline, meters/minutes per visit, eligible/cases logic) to
produce overview_data; keep send_sse_event calls and exception handling in
stream_data (or have helpers raise exceptions) and ensure signatures return the
same data structures used later (e.g., gps_data, followup_data, overview_data)
so external behavior and SSE event sequencing remain unchanged while improving
testability and readability.
- Around line 511-517: The nested comprehension that computes eligible_count
inside the loop over visit_cases_by_flw is hard to read; extract the logic into
a small helper (e.g., count_eligible_mothers_for_flw) that takes flw_cases and
mother_metadata, builds the set of mother_ids from each case's
properties["mother_case_id"], then iterates those ids and returns the count
where mother_metadata[id]["properties"]["eligible_full_intervention_bonus"] ==
"1"; replace the current inline set and eligible_count computation with a call
to that helper and assign into eligible_mothers_by_flw[flw_username].
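The extracted helper might look like this (a sketch using the property names quoted above; the sample data is illustrative):

```python
def count_eligible_mothers_for_flw(flw_cases, mother_metadata):
    """Count distinct mothers linked to this FLW's cases whose metadata
    marks them eligible for the full-intervention bonus."""
    mother_ids = {
        case["properties"].get("mother_case_id")
        for case in flw_cases
        if case["properties"].get("mother_case_id")
    }
    return sum(
        1
        for mid in mother_ids
        if mother_metadata.get(mid, {})
        .get("properties", {})
        .get("eligible_full_intervention_bonus") == "1"
    )


cases = [
    {"properties": {"mother_case_id": "m1"}},
    {"properties": {"mother_case_id": "m1"}},  # same mother, counted once
    {"properties": {"mother_case_id": "m2"}},
]
metadata = {
    "m1": {"properties": {"eligible_full_intervention_bonus": "1"}},
    "m2": {"properties": {"eligible_full_intervention_bonus": "0"}},
}
eligible = count_eligible_mothers_for_flw(cases, metadata)
```

The view would then do `eligible_mothers_by_flw[flw_username] = count_eligible_mothers_for_flw(flw_cases, mother_metadata)`.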
In `@commcare_connect/custom_analysis/mbw_monitoring/workflow_adapter.py`:
- Around line 137-154: The WorkflowDataAccess instance created in
WorkflowMonitoringSession loading (use of WorkflowDataAccess, get_run,
data_access.close) may not be closed if an exception occurs; wrap the data
access usage in a try/finally (or implement/use a context manager on
WorkflowDataAccess) so that data_access.close() is always called even on
exceptions, keeping existing logic that checks run, state (selected_flws) and
returns WorkflowMonitoringSession(run) or None; ensure the finally block calls
data_access.close() only if data_access was successfully created.
- Around line 157-206: The save_flw_result function currently saves the provided
result without checking it; validate the result against the VALID_FLW_RESULTS
set/constant at the start of save_flw_result (before building updated_results)
and handle invalid values by logging a clear error (use logger.error) and
returning None (or raising a controlled exception) instead of persisting; ensure
the check references VALID_FLW_RESULTS and keep the existing flow that uses
WorkflowDataAccess.get_run, update_run_state, and returns a
WorkflowMonitoringSession on success.
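A sketch of the guard (the result values in VALID_FLW_RESULTS and the in-memory store are illustrative stand-ins for the real constant and the WorkflowDataAccess persistence path):

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative values; mirror the real VALID_FLW_RESULTS constant.
VALID_FLW_RESULTS = {"pass", "fail", "suspended"}


def save_flw_result(flw_id, result, results_store):
    """Reject unknown results before persisting anything."""
    if result not in VALID_FLW_RESULTS:
        logger.error("Invalid FLW result %r for %s; not saving", result, flw_id)
        return None
    results_store[flw_id] = result
    return results_store


store = {}
saved = save_flw_result("flw-1", "pass", store)
rejected = save_flw_result("flw-2", "maybe", store)
```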
In `@commcare_connect/labs/integrations/commcare/api_client.py`:
- Around line 257-264: In the except block inside the bulk fetch logic (the
handler catching httpx.HTTPStatusError in
commcare_connect.labs.integrations.commcare.api_client.py), replace the
logger.error call in the non-404 branch with logger.exception so the full stack
trace is recorded; keep the 404 branch as-is (logger.warning) and ensure the
exception object (e) or a descriptive message is passed to logger.exception to
preserve context for debugging.
In
`@commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html`:
- Line 257: Remove the unused template property dataApiUrl from the component's
config (the line setting dataApiUrl: '{{ data_api_url|default:"" }}',) since it
is never referenced; locate the dataApiUrl declaration in the template and
delete that property so the component has no leftover unused config.
- Around line 359-364: The current SSE onerror handler
(this.eventSource.onerror) just closes the connection and sets
loadError/loading; change it to attempt automatic reconnection with exponential
backoff instead: introduce a reconnection routine (e.g., reconnectSSE or
connectEventSource) and state (retryCount, maxRetries, baseDelayMs) and on error
increment retryCount, compute delay = baseDelayMs * 2^retryCount (with jitter
and cap), close and null the old this.eventSource, schedule reconnect via
setTimeout to call connectEventSource, and only set loadError/loading to
true/failure after maxRetries are exceeded; update the handler to clear
retryCount on successful open (this.eventSource.onopen) so transient failures
auto-recover.
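The SSE handler itself is JavaScript, but the backoff schedule it needs can be sketched language-neutrally in Python (parameter values are illustrative):

```python
import random


def backoff_delay_ms(retry_count, base_delay_ms=1000, cap_ms=30000, jitter=0.0):
    """Exponential backoff: base * 2**retry, capped, with optional jitter."""
    delay = min(base_delay_ms * (2 ** retry_count), cap_ms)
    # jitter is a fraction of the delay added uniformly at random
    return delay + random.uniform(0, jitter * delay)


delays = [backoff_delay_ms(n) for n in range(6)]
```

On each `onerror` the component would increment `retryCount`, schedule a reconnect after `backoff_delay_ms(retryCount)`, and reset the counter in `onopen`.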
In `@commcare_connect/templates/custom_analysis/mbw_monitoring/gps_analysis.html`:
- Around line 463-496: The SSE onmessage handler contains multiple console.log
calls (e.g., the ones logging '[MBW GPS] Starting SSE connection:', the parsed
'SSE event:' and the final loaded FLWs message) that should be removed or gated
behind a debug flag; add a component-level boolean (e.g., this.debug or
ENABLE_MBW_GPS_DEBUG) and replace each console.log in the EventSource setup and
in the onmessage handler with a conditional log (if (this.debug) {
console.log(...); }) or remove them entirely so production runs don't emit debug
output, touching the EventSource creation and its onmessage handler where
this.eventSource and related variables (this.loadingMessage, this.flwSummaries,
this.summary, this.opportunityId) are referenced.
In `@commcare_connect/templates/tasks/task_create_edit.html`:
- Around line 1307-1318: The retrySessionLink method currently reloads silently
or logs errors; update retrySessionLink to provide user feedback: show a success
toast (or notification) when a session is found (before calling
window.location.reload()), show an informative toast when the fetch succeeds but
data.success is false or data.sessions is empty (e.g., "No sessions found for
task {this.taskId}"), and show an error toast when the catch block is triggered
(include the error message). Use the existing toast/notification utility in the
codebase (or a simple alert fallback) and keep the fetch and JSON parsing logic
in retrySessionLink intact while adding these notifications.
In `@commcare_connect/workflow/templates/mbw_monitoring.py`:
- Around line 24-277: The embedded JSX in WorkflowUI hardcodes multiple URLs and
performs a fetch inside React.useEffect without cancellation; update the
component to accept URL(s) or a base/api endpoints prop (e.g., pass an api
object or baseUrl into WorkflowUI via the surrounding renderer) and replace
hardcoded strings like '/custom_analysis/mbw_monitoring/api/opportunity-flws/',
'/custom_analysis/mbw_monitoring/?run_id=' and '/tasks/' with values from that
prop (locate usages in WorkflowUI: the useEffect fetch, the run_id redirects,
and task links), and modify the useEffect fetch to create an AbortController,
pass controller.signal to fetch, handle aborts in the catch, and call
controller.abort() in the effect cleanup to avoid setting state after unmount.
```python
cache_data = {
    "cases": all_cases,
    "cached_count": len(unique_ids),
    "cached_at": datetime.utcnow().isoformat(),
}
cache.set(cache_key, cache_data, config["cases_ttl"])
```
Potential mismatch: cached_count stores requested IDs but cache returns fetched cases.
cached_count is set to len(unique_ids) (requested), but all_cases (fetched) may differ if some IDs don't exist. This could cause cache validation to incorrectly accept stale data when fewer cases are returned than requested.
Consider storing len(all_cases) instead for accurate cache validation.
🛠️ Proposed fix
```diff
 cache_data = {
     "cases": all_cases,
-    "cached_count": len(unique_ids),
+    "cached_count": len(all_cases),
     "cached_at": datetime.utcnow().isoformat(),
 }
```

Apply the same fix at line 290 for `fetch_mother_cases_by_ids`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py` around
lines 222 - 227, The cache is recording the requested ID count rather than the
number of cases actually fetched, which can mask missing records; update the
cache write in the function that builds cache_data (the block that currently
sets "cases": all_cases, "cached_count": len(unique_ids), "cached_at": ...) to
set "cached_count" to len(all_cases) instead, and apply the identical change in
the other similar function fetch_mother_cases_by_ids so both cached_count values
reflect the actual fetched case count used for cache validation.
```python
else:
    # Fallback: clear the entire cache (safe for dev)
    cache.clear()
    cleared = -1  # Indicate full clear
    logger.info("[HQ Cache] Cleared entire Django cache (no pattern support)")
except Exception as e:
    logger.warning(f"[HQ Cache] Cache bust failed: {e}")
    cache.clear()
    cleared = -1
```
Fallback to cache.clear() is risky in production.
Clearing the entire cache as a fallback (lines 445, 450) could impact unrelated cached data and cause performance degradation across the application. Consider logging the failure and returning 0 instead of clearing everything.
🛠️ Proposed fix
```diff
 try:
     if hasattr(cache, "delete_pattern"):
         cleared += cache.delete_pattern("mbw_visit_cases:*")
         cleared += cache.delete_pattern("mbw_mother_cases:*")
         cleared += cache.delete_pattern("mbw_opp_metadata:*")
         logger.info(f"[HQ Cache] Busted {cleared} cache keys via pattern")
     else:
-        # Fallback: clear the entire cache (safe for dev)
-        cache.clear()
-        cleared = -1  # Indicate full clear
-        logger.info("[HQ Cache] Cleared entire Django cache (no pattern support)")
+        # Pattern deletion not supported - skip full cache clear in production
+        logger.warning("[HQ Cache] Pattern deletion not supported, skipping cache bust")
+        cleared = 0
 except Exception as e:
-    logger.warning(f"[HQ Cache] Cache bust failed: {e}")
-    cache.clear()
-    cleared = -1
+    logger.warning(f"[HQ Cache] Cache bust failed: {e}", exc_info=True)
+    cleared = 0
 return cleared
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    if hasattr(cache, "delete_pattern"):
        cleared += cache.delete_pattern("mbw_visit_cases:*")
        cleared += cache.delete_pattern("mbw_mother_cases:*")
        cleared += cache.delete_pattern("mbw_opp_metadata:*")
        logger.info(f"[HQ Cache] Busted {cleared} cache keys via pattern")
    else:
        # Pattern deletion not supported - skip full cache clear in production
        logger.warning("[HQ Cache] Pattern deletion not supported, skipping cache bust")
        cleared = 0
except Exception as e:
    logger.warning(f"[HQ Cache] Cache bust failed: {e}", exc_info=True)
    cleared = 0
return cleared
```
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 448-448: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py` around
lines 443 - 451, The fallback currently clears the entire cache via
cache.clear() and sets cleared = -1; instead, avoid clearing all cached data in
production by removing both cache.clear() calls in the fallback and exception
paths, set cleared = 0 to indicate nothing was removed, and log the failure with
context using logger.warning (include exception details in the except block) so
callers of this function (watch for the cleared variable) know no keys were
evicted rather than wiping the whole cache.
```python
# Last resort: fetch all forms, filter client-side
logger.warning("[MBW Dashboard] Could not discover GS xmlns, falling back to client-side filtering")
forms = client.fetch_forms()
forms = [
    f for f in forms
    if (f.get("form", {}).get("@name") or "") == "Gold Standard Visit Checklist"
]
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, check the file structure and read the relevant sections
wc -l commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 137
🏁 Script executed:
#!/bin/bash
# Read the section around line 500-506
sed -n '500,510p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 560
🏁 Script executed:
#!/bin/bash
# Read the section around line 570
sed -n '560,580p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 822
🏁 Script executed:
#!/bin/bash
# Search for all `@name` patterns to understand the API structure
rg -n '@name' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 216
🏁 Script executed:
#!/bin/bash
# Search for form name access patterns in the entire codebase
rg -n 'f\.get\("form"' commcare_connect/ -A 2Repository: jjackson/commcare-connect
Length of output: 375
🏁 Script executed:
#!/bin/bash
# Look for imports and client usage to understand CCHQ API
head -50 commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 1559
🏁 Script executed:
#!/bin/bash
# Search for any tests or examples of form structure
find commcare_connect -type f -name "*.py" | xargs rg -l "fetch_forms" | head -5Repository: jjackson/commcare-connect
Length of output: 191
🏁 Script executed:
#!/bin/bash
# Search for form structure usage patterns in tests
rg -n '@name' commcare_connect --type=py -B 3 -A 3 | head -60Repository: jjackson/commcare-connect
Length of output: 5395
🏁 Script executed:
#!/bin/bash
# Look at the client.fetch_forms calls and how results are used elsewhere
rg -n 'fetch_forms' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py -B 3 -A 10Repository: jjackson/commcare-connect
Length of output: 1493
🏁 Script executed:
#!/bin/bash
# Check the CommCareDataAccess client to understand fetch_forms return structure
head -100 commcare_connect/labs/integrations/commcare/api_client.pyRepository: jjackson/commcare-connect
Length of output: 3386
🏁 Script executed:
#!/bin/bash
# Search for fetch_forms definition in the API client
rg -n 'def fetch_forms' commcare_connect/labs/integrations/commcare/api_client.py -A 20Repository: jjackson/commcare-connect
Length of output: 895
🏁 Script executed:
#!/bin/bash
# Look for any tests that show form structure
find commcare_connect -type f -name "*test*.py" | xargs rg -l "fetch_forms" 2>/dev/nullRepository: jjackson/commcare-connect
Length of output: 53
🏁 Script executed:
#!/bin/bash
# Check if there are any comments or documentation about form structure
rg -n 'form_json\|form dict\|form structure' commcare_connect --type=py -B 2 -A 2 | head -40Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Get the full fetch_forms method implementation
sed -n '268,330p' commcare_connect/labs/integrations/commcare/api_client.pyRepository: jjackson/commcare-connect
Length of output: 2357
🏁 Script executed:
#!/bin/bash
# Check if there are any examples or usage in documentation
find commcare_connect -type f \( -name "*.md" -o -name "*.txt" -o -name "*.rst" \) | xargs grep -l "fetch_forms\|@name" 2>/dev/null | head -5Repository: jjackson/commcare-connect
Length of output: 133
🏁 Script executed:
#!/bin/bash
# Search for how forms are used after being fetched in both functions
sed -n '495,520p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py
sed -n '560,580p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 1754
🏁 Script executed:
#!/bin/bash
# Check the DOCUMENTATION.md to understand the form structure better
cat commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md | head -100Repository: jjackson/commcare-connect
Length of output: 5493
🏁 Script executed:
#!/bin/bash
# Search for any references to the actual API response structure or examples
rg -n 'objects.*form\|form.*@name' commcare_connect --type=py -B 2 -A 2 | head -50Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Look for other uses of form objects from fetch_forms to understand structure
rg -n 'for.*in.*fetch_forms\|for f in forms' commcare_connect/custom_analysis/ --type=py -A 3 | head -60Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for CommCare API documentation references or comments about form structure
rg -n 'Form API\|form.*structure\|form.*response' commcare_connect --type=py -B 2 -A 2Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for form access patterns more broadly
rg -n 'f\[.*form.*\]\|form\[.*@name' commcare_connect --type=py | head -20Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Look at the whole context of how forms are structured - check comments around fetch_forms calls
sed -n '480,520p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 1586
🏁 Script executed:
#!/bin/bash
# Check the entire registration forms function to see full context
sed -n '545,580p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.pyRepository: jjackson/commcare-connect
Length of output: 1466
🏁 Script executed:
#!/bin/bash
# Search for any API response examples or sample data
find commcare_connect -type f -name "*.json" -o -name "*fixture*" -o -name "*mock*" | head -10Repository: jjackson/commcare-connect
Length of output: 361
🏁 Script executed:
#!/bin/bash
# Search for any CCHQ API documentation links or references
rg -n 'commcarehq\|CCHQ.*API\|Form API' commcare_connect --type=py | grep -i 'doc\|url\|http' | head -20Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Look at the get_form_xmlns method to understand form structure better
rg -n 'def get_form_xmlns\|def discover_form_xmlns' commcare_connect/labs/integrations/commcare/api_client.py -A 30Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Check if there are any test files that mock or show form API responses
find commcare_connect -type f -name "*test*.py" -exec rg -l "form.*@name\|objects.*form" {} \;Repository: jjackson/commcare-connect
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for ".form." notation usage which might indicate nested structure
rg -n '\.form\.' commcare_connect --type=py | head -20Repository: jjackson/commcare-connect
Length of output: 1874
🏁 Script executed:
#!/bin/bash
# Check the pipeline_config reference to line 55 for context on form_json structure
sed -n '50,70p' commcare_connect/custom_analysis/mbw/pipeline_config.py
Length of output: 818
🏁 Script executed:
#!/bin/bash
# Search for form_json usage to understand the structure
rg -n 'form_json' commcare_connect/custom_analysis/ --type=py -B 2 -A 2 | head -50
Length of output: 4129
🏁 Script executed:
#!/bin/bash
# Look for any comments about CCHQ Form API v1 structure in the code
rg -n 'Form API v1\|form submission\|objects.*list' commcare_connect --type=py -B 1 -A 1
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Check what the actual fetch_forms returns by looking at how it's used elsewhere
rg -n 'fetch_forms.*\|all_forms.extend' commcare_connect/labs/integrations/commcare/api_client.py -B 2 -A 2
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for how forms from fetch_forms are actually used in other places
rg -n 'forms\s*=' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py | head -20
Length of output: 370
🏁 Script executed:
#!/bin/bash
# Check if there's any CommCare API documentation reference or comments about form structure
rg -n 'CCHQ\|CommCare.*Form\|Form API' commcare_connect/labs/integrations/commcare/ --type=py -B 1 -A 1
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Look at the entire fetch_gs_forms function to see how forms are returned and used
sed -n '475,515p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py
Length of output: 1550
🏁 Script executed:
#!/bin/bash
# Compare with the registration function - check what's done with the forms after filtering
sed -n '575,590p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py
Length of output: 246
🏁 Script executed:
#!/bin/bash
# Let me check if there are other places in the codebase where forms from fetch_forms are processed
rg -n 'fetch_forms\(\)' commcare_connect --type=py -A 10 | head -60
Length of output: 1169
🏁 Script executed:
#!/bin/bash
# Check how forms are used after being returned - do they access `@name` or form.@name?
sed -n '475,520p' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py
Length of output: 1730
🏁 Script executed:
#!/bin/bash
# Look at a broader context to see if forms are processed differently anywhere else
rg -n '\[.*in.*forms\]' commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py -B 1 -A 1
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for CCHQ Form API documentation or any notes about response structure
rg -n 'objects\|Form API v1' commcare_connect/labs/integrations/commcare/api_client.py -B 3 -A 3
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Check if there's any usage or test that shows the actual structure of form objects
find . -type f -name "*.py" -exec grep -l "Gold Standard\|Register Mother" {} \; 2>/dev/null
Length of output: 266
🏁 Script executed:
#!/bin/bash
# Let's verify by checking if the registration filter would actually match anything
# by looking for any tests or examples
find commcare_connect -type f -name "*test*.py" -o -name "*fixture*.py" | xargs grep -l "Register Mother\|@name" 2>/dev/null
Length of output: 53
🏁 Script executed:
#!/bin/bash
# Check the entire api_client to see the response structure being returned
sed -n '287,330p' commcare_connect/labs/integrations/commcare/api_client.py
Length of output: 1715
🏁 Script executed:
#!/bin/bash
# Check how forms are used in followup_analysis.py
rg -n 'fetch_gs_forms\|fetch_registration_forms' commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py -A 5 -B 2
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Search for how the returned forms are actually processed/accessed
rg -n 'forms\[.*\]\|for.*in forms\|forms\.' commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py | head -20
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Look at the actual use case of these forms in followup_analysis
sed -n '1,100p' commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py | grep -A 10 -B 10 "fetch_gs_forms\|fetch_registration_forms"
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Check CommCare HQ API documentation reference
web_search CommCare HQ Form API v1 response structure form object
Length of output: 119
🌐 Web query:
CommCare HQ Form API v1 response structure form object "@name"
💡 Result:
In CommCare HQ’s Form API v1, the returned form object is a JSON-ified version of the submitted XForm, where keys prefixed with @ represent XML attributes on the XForm’s root node.
- `form["@name"]` = the human-readable form name, taken from the XForm root element's `name="..."` attribute (e.g., `<data name="Enregistrer un cas COVID-19" ...>`). [1][2]
- (Related) `form["@xmlns"]` = the XForm root element's `xmlns="..."` value. [2]

Example (XForm → API form object mapping):
- XForm: `<data name="My Form" xmlns="http://example.org/myform"> ...` [1]
- API JSON: `form: { "@name": "My Form", "@xmlns": "http://example.org/myform", ... }` (attributes become `@...`). [1][2]
Sources:
[1] CommCare HQ docs – Form Submission API (shows the root <data name="...">)
[2] Dimagi CommCare Help – Metadata Glossary (defines @name and @xmlns)
🏁 Script executed:
#!/bin/bash
# Let me check if there are any actual usages of these forms returned
# Check followup_analysis for how gs_forms and registration_forms are used
rg -n 'gs_forms\|registration_forms' commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py -B 2 -A 5
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Check the entire followup_analysis.py file to see imports and usage
head -150 commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py
Length of output: 4699
🏁 Script executed:
#!/bin/bash
# Now verify: the fetch_forms method returns data.get("objects", [])
# Let's check if there are any examples or hints about the actual form object structure
rg -n '"objects"' commcare_connect/labs/integrations/commcare/api_client.py -B 5 -A 5
Length of output: 939
🏁 Script executed:
#!/bin/bash
# Check if there's any comment in the codebase about the form structure from CCHQ API
rg -n 'form\s*object\|form.*structure\|@name.*form\|form.*@name' commcare_connect --type=py -B 2 -A 2 | head -50
Length of output: 51
🏁 Script executed:
#!/bin/bash
# Verify the structure by checking if there are any actual form object examples or tests
find commcare_connect -name "*.py" -type f | xargs grep -l "test.*fetch_forms\|mock.*fetch_forms\|fixture.*form" 2>/dev/null
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Double-check by looking for any other places where `@name` is accessed to see the pattern
rg -n 'get\("form"' commcare_connect --type=py -A 1 -B 1
Length of output: 4669
Fix registration form filter to access form name correctly.
Line 570 is missing the "form" key wrapper. It should be `f.get("form", {}).get("@name")` to match the GS form filter at line 505 and align with the CCHQ Form API v1 response structure, where the form submission object contains a nested "form" property. Without this fix, the filter will not match any registration forms, since "@name" does not exist at the top level of the submission object.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/data_fetchers.py` around
lines 500 - 506, The registration form filter currently checks f.get("@name")
but the submission object nests the form under the "form" key; update the
client-side fallback that builds forms = [...] (using client.fetch_forms()) to
access the name via f.get("form", {}).get("@name") so it matches the Gold
Standard and registration form filters and the CCHQ Form API v1 structure.
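The nesting described above can be illustrated with toy submission objects (the sample data below is invented; only the access pattern matters):

```python
# Each CCHQ Form API v1 submission nests the XForm payload under "form";
# XML attributes on the XForm root node appear as "@"-prefixed keys.
submissions = [
    {"form": {"@name": "Register Mother", "@xmlns": "http://example.org/reg"}},
    {"form": {"@name": "Gold Standard", "@xmlns": "http://example.org/gs"}},
]

# Broken filter: "@name" does not exist at the top level of the submission.
broken = [f for f in submissions if f.get("@name") == "Register Mother"]

# Fixed filter: read the name through the nested "form" key.
fixed = [f for f in submissions if f.get("form", {}).get("@name") == "Register Mother"]

print(len(broken), len(fixed))  # 0 1
```

The empty-dict default in `f.get("form", {})` keeps the filter safe for submissions that somehow lack the "form" key.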
**Actions per FLW** (Overview tab only — other tabs have Filter only):
- **Assessment buttons** (monitoring session only): Eligible for Renewal (green), Probation (yellow), Suspended (red) — toggle on click, stored in `flw_results`
- **Notes button**: Opens modal with assessment + notes for the FLW
Tighten wording to avoid repeated adverb.
LanguageTool flags the repeated adverb around this sentence. Consider removing one instance for readability.
🧰 Tools
🪛 LanguageTool
[style] ~458-~458: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...rview tab only — other tabs have Filter only): - Assessment buttons (monitoring ...
(ADVERB_REPETITION_PREMIUM)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md` around
lines 458 - 460, The wording repeats the adverb "only" (e.g., "Overview tab only
— other tabs have Filter only" and "Assessment buttons (monitoring session
only)"); tighten for readability by removing one or both "only" instances and
rephrase succinctly — for example change "Overview tab only — other tabs have
Filter only" to "Overview tab — other tabs have Filter" and/or "Assessment
buttons (monitoring session only)" to "Assessment buttons (monitoring session)";
update the DOCUMENTATION.md text referencing "Overview tab", "Assessment
buttons", and `flw_results` accordingly.
The `MBW_GPS_PIPELINE_CONFIG` in `pipeline_config.py` defines 12 FieldComputations for visit-level data extraction:

| Name | Type | Path / Extractor | Notes |
|------|------|------------------|-------|
| `gps_location` | extractor | `extract_gps_location(visit_data)` | Reads `form_json.form.meta.location` |
Check spelling in the pipeline configuration section.
LanguageTool flagged a possible spelling issue around “FieldComputations”. Please verify it’s the intended term and spelling.
🧰 Tools
🪛 LanguageTool
[grammar] ~595-~595: Ensure spelling is correct
Context: ...FIGinpipeline_config.py` defines 12 FieldComputations for visit-level data extraction: | Nam...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md` around
lines 595 - 599, The doc mentions "FieldComputations" in the
MBW_GPS_PIPELINE_CONFIG section; verify whether this is the correct identifier
(matches the class/variable name in pipeline_config.py) or if it should be
written as human-readable prose ("Field computations"); update the
DOCUMENTATION.md entry so the spelling/casing matches the actual symbol in
pipeline_config.py (MBW_GPS_PIPELINE_CONFIG) or use the natural-language form
consistently across the docs.
# Visit type → form-level create flag name (used in Register Mother forms)
VISIT_CREATE_FLAGS = {
    "ANC Visit": "create_antenatal_visit",
    "Postnatal Delivery Visit": "create_postnatal_visit",
    "1 Week Visit": "create_one_two_visit",
    "1 Month Visit": "create_one_month_visit",
    "3 Month Visit": "create_three_month_visit",
    "6 Month Visit": "create_six_month_visit",
}
Indentation error: comment outside function scope.
Lines 445-446 contain a comment that belongs at module level, but the indentation carried over from line 442's `return details` leaves it reading as if it were inside the function.
🐛 Proposed fix
# Sort by scheduled date, then visit type
details.sort(key=lambda d: (d.get("visit_date_scheduled") or "", d.get("visit_type", "")))
return details
- # Visit type → form-level create flag name (used in Register Mother forms)
+# Visit type → form-level create flag name (used in Register Mother forms)
 VISIT_CREATE_FLAGS = {

The comment and VISIT_CREATE_FLAGS constant are at incorrect indentation - they appear to be inside _build_visit_details but are module-level constants.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Visit type → form-level create flag name (used in Register Mother forms)
VISIT_CREATE_FLAGS = {
    "ANC Visit": "create_antenatal_visit",
    "Postnatal Delivery Visit": "create_postnatal_visit",
    "1 Week Visit": "create_one_two_visit",
    "1 Month Visit": "create_one_month_visit",
    "3 Month Visit": "create_three_month_visit",
    "6 Month Visit": "create_six_month_visit",
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py` around
lines 445 - 453, The VISIT_CREATE_FLAGS constant and its preceding comment are
incorrectly indented and currently reside at module level while they were
intended to be outside (or inside) the _build_visit_details scope; fix by moving
or re-indenting so that the comment and the VISIT_CREATE_FLAGS dictionary are
declared at the correct scope (either fully inside _build_visit_details if they
are local to that function, or entirely at module level with no stray comment
tied to the function) and ensure the function _build_visit_details ends with its
return details line followed by proper dedentation before the VISIT_CREATE_FLAGS
definition.
def compute_overview_quality_metrics(
    visit_cases_by_flw: dict[str, list[dict]],
    mother_metadata: dict[str, dict],
    parity_by_mother: dict[str, str],
    current_date: date,
    anc_date_by_mother: dict[str, str] | None = None,
    pnc_date_by_mother: dict[str, str] | None = None,
) -> dict[str, dict]:
Unused parameter current_date in function signature.
The current_date parameter is never used in compute_overview_quality_metrics. Either remove it or use it if it was intended for some calculation.
🧹 Proposed fix
def compute_overview_quality_metrics(
visit_cases_by_flw: dict[str, list[dict]],
mother_metadata: dict[str, dict],
parity_by_mother: dict[str, str],
- current_date: date,
anc_date_by_mother: dict[str, str] | None = None,
pnc_date_by_mother: dict[str, str] | None = None,
) -> dict[str, dict]:

Note: update all callers (e.g., views.py lines 479-483) to remove this argument.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def compute_overview_quality_metrics(
    visit_cases_by_flw: dict[str, list[dict]],
    mother_metadata: dict[str, dict],
    parity_by_mother: dict[str, str],
    anc_date_by_mother: dict[str, str] | None = None,
    pnc_date_by_mother: dict[str, str] | None = None,
) -> dict[str, dict]:
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 870-870: Unused function argument: current_date
(ARG001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/custom_analysis/mbw_monitoring/followup_analysis.py` around
lines 866 - 873, The function compute_overview_quality_metrics has an unused
parameter current_date; remove current_date from the function signature and its
type annotations in compute_overview_quality_metrics, and update every caller
(e.g., the view that calls compute_overview_quality_metrics around the
referenced call site) to stop passing that argument; ensure any tests or imports
that referenced the old signature are updated accordingly so the function is
called with visit_cases_by_flw, mother_metadata, parity_by_mother,
anc_date_by_mother and pnc_date_by_mother only.
computeOverallRate() {
    if (this.flwSummaries.length === 0) return 0;
    const totalCompleted = this.flwSummaries.reduce((sum, flw) => sum + flw.completed_total, 0);
    const totalDue = this.flwSummaries.reduce((sum, flw) => sum + flw.due_total, 0);
    if (totalDue === 0) return 0;
    return Math.round((totalCompleted / totalDue) * 100);
},
Incorrect completion rate calculation logic.
The computeOverallRate function divides totalCompleted by totalDue, but totalDue represents only visits that are still due (not completed). The correct denominator should be totalCompleted + totalDue + totalMissed or use the completion_rate values from the server.
🐛 Proposed fix
computeOverallRate() {
if (this.flwSummaries.length === 0) return 0;
- const totalCompleted = this.flwSummaries.reduce((sum, flw) => sum + flw.completed_total, 0);
- const totalDue = this.flwSummaries.reduce((sum, flw) => sum + flw.due_total, 0);
- if (totalDue === 0) return 0;
- return Math.round((totalCompleted / totalDue) * 100);
+ // Average the per-FLW completion rates
+ const totalRate = this.flwSummaries.reduce((sum, flw) => sum + flw.completion_rate, 0);
+ return Math.round(totalRate / this.flwSummaries.length);
 },

Alternatively, compute a weighted average using total visits as weights.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@commcare_connect/templates/custom_analysis/mbw_monitoring/followup_analysis.html`
around lines 270 - 276, The computeOverallRate function currently uses totalDue
as the denominator, which is incorrect; update computeOverallRate (involving
flwSummaries and fields completed_total, due_total, missed_total) to compute the
overall completion percentage using the correct denominator—either sum of total
visits (totalCompleted + totalDue + totalMissed) or by calculating a weighted
average using each entry's completion_rate weighted by its total_visits—so the
returned value is Math.round((totalCompleted / (totalCompleted + totalDue +
totalMissed)) * 100) or the equivalent weighted-average approach using
completion_rate and total_visits.
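The two remedies offered (pooled totals versus averaging per-FLW rates) can disagree when workers have different visit volumes; a small pure-Python sketch with invented numbers makes the difference concrete:

```python
# Hypothetical per-FLW summaries: completed, still-due, and missed visit counts.
summaries = [
    {"completed_total": 9, "due_total": 1, "missed_total": 0},  # 10 visits, 90% done
    {"completed_total": 1, "due_total": 1, "missed_total": 0},  # 2 visits, 50% done
]

def completion_rate(s):
    total = s["completed_total"] + s["due_total"] + s["missed_total"]
    return s["completed_total"] / total if total else 0.0

# Unweighted mean of per-FLW rates treats every worker equally.
mean_of_rates = round(100 * sum(completion_rate(s) for s in summaries) / len(summaries))

# Pooled rate weights each worker by visit volume.
completed = sum(s["completed_total"] for s in summaries)
total = sum(s["completed_total"] + s["due_total"] + s["missed_total"] for s in summaries)
pooled = round(100 * completed / total)

print(mean_of_rates, pooled)  # 70 83
```

Either denominator choice fixes the reported bug; which of the two aggregates is wanted is a product decision, not a correctness one.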
<td class="px-4 py-3 whitespace-nowrap text-sm text-gray-900">
    <span x-text="flw.visits_with_gps"></span>
    <span class="text-gray-500" x-text="'(' + Math.round(flw.visits_with_gps / flw.total_visits * 100) + '%)'"></span>
</td>
Potential division by zero when displaying GPS percentage.
If flw.total_visits is 0, this expression will produce NaN.
🛡️ Suggested fix
<td class="px-4 py-3 whitespace-nowrap text-sm text-gray-900">
<span x-text="flw.visits_with_gps"></span>
- <span class="text-gray-500" x-text="'(' + Math.round(flw.visits_with_gps / flw.total_visits * 100) + '%)'"></span>
+ <span class="text-gray-500" x-text="'(' + (flw.total_visits > 0 ? Math.round(flw.visits_with_gps / flw.total_visits * 100) : 0) + '%)'"></span>
 </td>

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
<td class="px-4 py-3 whitespace-nowrap text-sm text-gray-900">
    <span x-text="flw.visits_with_gps"></span>
    <span class="text-gray-500" x-text="'(' + (flw.total_visits > 0 ? Math.round(flw.visits_with_gps / flw.total_visits * 100) : 0) + '%)'"></span>
</td>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/templates/custom_analysis/mbw_monitoring/gps_analysis.html`
around lines 179 - 182, The percentage expression can divide by zero when
flw.total_visits is 0; update the x-text for the percentage to guard against
that by checking flw.total_visits before computing the ratio (e.g., use a
ternary that returns "0%" or an empty string when flw.total_visits is falsy),
referencing the same flw.visits_with_gps and flw.total_visits properties used in
the current <span x-text="..."> so the displayed percentage never becomes NaN.
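The same guard can be exercised outside the template; this pure-Python analogue of the Alpine.js expression (the helper name `gps_pct` is illustrative, not from the codebase) shows the zero-denominator case handled:

```python
def gps_pct(visits_with_gps: int, total_visits: int) -> str:
    # Guard the denominator: without it, 0 total visits yields NaN in the
    # JavaScript original (or ZeroDivisionError here) instead of "0%".
    pct = round(visits_with_gps / total_visits * 100) if total_visits > 0 else 0
    return f"({pct}%)"

print(gps_pct(7, 10), gps_pct(0, 0))  # (70%) (0%)
```

In JavaScript the unguarded `0 / 0` silently renders "(NaN%)", which is why the bug only surfaces visually.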
 path(
-    "custom_analysis/mbw/",
-    include("commcare_connect.custom_analysis.mbw.urls", namespace="mbw"),
+    "custom_analysis/mbw_monitoring/",
+    include("commcare_connect.custom_analysis.mbw_monitoring.urls", namespace="mbw"),
 ),
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for hardcoded references to the old URL path in various file types
echo "=== Searching for 'custom_analysis/mbw/' references ==="
rg -n "custom_analysis/mbw/" --type=py
echo ""
echo "=== Searching in HTML files ==="
rg -n "custom_analysis/mbw/" --type=html
echo ""
echo "=== Searching in JavaScript files ==="
rg -n "custom_analysis/mbw/" --type=js
echo ""
echo "=== Searching in Markdown files ==="
rg -n "custom_analysis/mbw/" --glob "*.md"
echo ""
echo "=== Checking for redirect patterns in urls.py ==="
cat -n config/urls.py | head -100
Length of output: 6251
Breaking change: URL path changed from /custom_analysis/mbw/ to /custom_analysis/mbw_monitoring/ without backward compatibility redirect.
Found hardcoded references to old path that need updating:
- commcare_connect/custom_analysis/mbw/views.py:139: template name references "custom_analysis/mbw/gps_analysis.html" (old path)
- commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md: multiple lines (933-936) contain old path references like custom_analysis/mbw/gps_analysis.py, custom_analysis/mbw/gps_utils.py, etc.
Add a redirect from /custom_analysis/mbw/ to /custom_analysis/mbw_monitoring/ to maintain backward compatibility for external links and bookmarks, and update the above references to use the new path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/urls.py` around lines 47 - 50, The URL change removed
/custom_analysis/mbw/ and broke external links; add a compatibility redirect and
update hardcoded references: add a new path("custom_analysis/mbw/",
RedirectView.as_view(pattern_name="mbw:index", permanent=True)) or equivalent
redirect in the same URLconf where path(... "custom_analysis/mbw_monitoring/"
...) is declared to forward requests to the new namespace; update the template
reference in commcare_connect/custom_analysis/mbw/views.py (the template name
string currently "custom_analysis/mbw/gps_analysis.html") to the new path
"custom_analysis/mbw_monitoring/gps_analysis.html" (or the correct new template
location), and update all occurrences in
commcare_connect/custom_analysis/mbw_monitoring/DOCUMENTATION.md that reference
custom_analysis/mbw/* (e.g., gps_analysis.py, gps_utils.py) to the new
custom_analysis/mbw_monitoring/* paths so links and imports match the new route.
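The suggested compatibility shim can be sketched with Django's stock `RedirectView`. This is an assumed URLconf fragment: the pattern name `mbw:index` is illustrative and must match whatever the new app's index route is actually named:

```python
from django.urls import include, path
from django.views.generic.base import RedirectView

urlpatterns = [
    path(
        "custom_analysis/mbw_monitoring/",
        include("commcare_connect.custom_analysis.mbw_monitoring.urls", namespace="mbw"),
    ),
    # Permanent redirect so old bookmarks to /custom_analysis/mbw/ keep working.
    path(
        "custom_analysis/mbw/",
        RedirectView.as_view(pattern_name="mbw:index", permanent=True),
    ),
]
```

`permanent=True` emits HTTP 301, which lets browsers and crawlers update stored links; use `permanent=False` (302) while the new URL is still subject to change.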
- Replace deprecated datetime.utcnow() with datetime.now(timezone.utc)
- Move Counter import to module top in views.py
- Remove redundant outer try/except in views.py
- Add try/finally for data_access.close() in session_adapter.py
- Fix weak intensifier wording in DOCUMENTATION.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
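The first bullet follows the CPython guidance since Python 3.12, where `datetime.utcnow()` is deprecated because it returns a naive datetime; a minimal before/after:

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12: returns a naive datetime with no tzinfo.
# now_naive = datetime.utcnow()

# Replacement: an aware datetime carrying an explicit UTC offset.
now_aware = datetime.now(timezone.utc)

print(now_aware.tzinfo)  # prints "UTC"
```

Aware datetimes compare and subtract safely across zones, which is the practical reason for the swap.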
I'm addressing all issues reported by the AI review in the refactor.
Summary
This branch adds the MBW Monitoring Dashboard — a real-time performance monitoring tool for frontline health workers (FLWs) — integrated with the Workflow module for session management. It also fixes a bug in the Task module where OCS AI session linking was failing.
What's included:
- trigger_bot() response is now captured, server-side polling added, frontend polling extended, retry button added

Architecture
┌────────────────┐                     ┌──────────────────┐
│   Create Run   │                     │   Overview Tab   │
│ FLW Selection  │ ───redirect──────►  │     GPS Tab      │
│ (render_code)  │                     │  Follow-Up Tab   │
│  State CRUD    │ ◄──save result───   │  FLW Assessment  │
└────────────────┘                     └──────────────────┘
- Workflow module: WorkflowRunRecord, update_run_state()
- Dashboard module: custom_analysis/mbw_monitoring/
- workflow_adapter.py wraps WorkflowRunRecord as WorkflowMonitoringSession

Files Changed (21 files, +7,434 / -23)
New MBW Monitoring Module (custom_analysis/mbw_monitoring/)
- views.py
- data_fetchers.py
- followup_analysis.py
- workflow_adapter.py
- flw_api.py
- urls.py
- DOCUMENTATION.md
- dashboard.html

Workflow Template
- workflow/templates/mbw_monitoring.py
- workflow/templates/__init__.py

OCS Session Linking Fix (Task Module)
- tasks/views.py: capture trigger_bot() response, add server-side polling (3×2s)
- templates/tasks/task_create_edit.html

Other
- labs/integrations/commcare/api_client.py: discover_form_xmlns(), list_applications() for CCHQ form fetching
- config/urls.py

Commit History (read in order)
1. 62ab1ebc — Initial MBW dashboard with cache tolerance and OAuth
2. 169a0bbf — Comprehensive documentation
3. 5f4efdab — Extractor fix, GS form CCHQ fetch, threshold update
4. 26735b98 — Documentation update
5. ef9652d2 — Replace pass/fail with 3-option FLW assessment
6. e6c7972f — Two-Source Visit Architecture documentation
7. 02462c51 — Refactor from audit module to workflow module
8. 80210f42 — FLW selection table enhancements + OCS session linking fix

How to Test
- /labs/workflow/ → Create Workflow → select "MBW Monitoring" → select FLWs → Launch Dashboard
- OCS trigger_bot response → verify session links automatically or via "Retry Linking" button

Documentation
See custom_analysis/mbw_monitoring/DOCUMENTATION.md for full technical documentation covering architecture, data flow, workflow integration, shallow merge caveats, and field paths.

Summary by CodeRabbit
New Features
Improvements