Conversation
Move MBW monitoring from custom_analysis to workflow/templates module, add template sync, worker result APIs, dashboard snapshotting, run timestamps, template type tracking, and documentation updates. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add Alpine.js-based sort (name, date, latest run) and template filter to the workflow list. Runs now show Created date and FLW count columns, sorted latest-first. Uses CSS order property for client-side reordering. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace static "Showing X of Y" footer with Previous/Next pagination buttons using Django's page_obj. Preserves filter query parameters across pages. Falls back to simple text when all tasks fit on one page. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add on_bad_lines="warn" to pandas read_csv so rows with unterminated strings (e.g. unescaped quotes in form_json) are skipped instead of crashing the entire pipeline with ParserError. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
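The commit above relies on pandas' `on_bad_lines` option; a minimal illustration of its documented behavior on synthetic data (not the project's pipeline). Note a quote left open all the way to end-of-file can still raise; this shows the malformed-row case the flag does handle:

```python
import io
import warnings

import pandas as pd

# Row "3,4,5" has an extra field relative to the two-column header, making it
# a "bad line"; on_bad_lines="warn" skips it with a ParserWarning instead of
# raising ParserError and killing the whole read.
csv_text = "a,b\n1,2\n3,4,5\n6,7\n"

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    df = pd.read_csv(io.StringIO(csv_text), on_bad_lines="warn")

print(df.to_dict("list"))  # {'a': [1, 6], 'b': [2, 7]}
```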
- Replace deprecated datetime.utcnow() with datetime.now(timezone.utc)
- Move Counter import to module top in views.py
- Remove redundant outer try/except in views.py
- Add try/finally for data_access.close() in session_adapter.py
- Fix weak intensifier wording in DOCUMENTATION.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Sortable columns (name, Connect ID, past audits, last audit date, last result, open tasks) with ascending/descending toggle indicators. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bring back the original GPS analysis views, URLs, and template from labs-main that were inadvertently removed during the MBW monitoring refactor. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review · Powered by Claude — auto-generated review
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Adds a new MBW Monitoring workflow (views, SSE stream, data fetchers, analytics, serializers, APIs, URLs, and docs), frontend action handlers and types for saving/completing runs, server APIs for worker results and run completion, CommCare/Connect client enhancements, timezone/snapshot support, UI/template tweaks, and CSV parsing warnings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Browser as Client (Browser)
    participant Dashboard as MBW Dashboard UI
    participant Server as Django Server
    participant Cache as Cache (Redis)
    participant CC as CommCare / Connect APIs
    participant Labs as Labs API / DB
    Browser->>Dashboard: Request dashboard page
    Dashboard->>Server: GET page + SSE URL
    Server-->>Dashboard: HTML with SSE endpoint
    Dashboard->>Server: Open SSE stream (/stream)
    Server->>Cache: Check cached payload
    alt cache hit
        Cache-->>Server: Cached payload
    else cache miss
        Server->>CC: Fetch opportunities, forms, cases
        CC-->>Server: Data responses
        Server->>Cache: Store payload
    end
    Server->>Server: Run GPS & follow-up analysis
    Server->>Labs: Optionally save snapshot
    Server-->>Dashboard: Push SSE event (GPS + followup + overview)
    Dashboard->>Browser: Render charts and lists
```

```mermaid
sequenceDiagram
    participant UI as Client UI
    participant Actions as Frontend ActionHandlers
    participant API as Django API (/api/run/{runId}/worker-result/)
    participant DataAccess as WorkflowDataAccess
    participant Labs as Labs API / DB
    UI->>Actions: saveWorkerResult(runId, params)
    Actions->>API: POST JSON + CSRF
    API->>DataAccess: Load run record
    DataAccess->>Labs: GET run record
    Labs-->>DataAccess: run data
    API->>DataAccess: Merge flw_results, update state/snapshot
    DataAccess->>Labs: Update run record
    Labs-->>DataAccess: success
    API-->>Actions: JSON { success, progress, worker_results }
    Actions-->>UI: Resolve promise (update UI)
```
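The SSE push in the first diagram boils down to yielding `data:` frames from a generator. A minimal framework-free sketch: the `send_sse_event` name appears in the review comments below, but this body is an assumption, not the project's code:

```python
import json

def send_sse_event(status: str, **payload) -> str:
    # One Server-Sent Events frame: a "data:" line followed by a blank line.
    return f"data: {json.dumps({'status': status, **payload})}\n\n"

def event_stream():
    # A Django StreamingHttpResponse would iterate a generator like this one.
    yield send_sse_event("Loading", step="fetch_opportunities")
    yield send_sse_event("Done", gps_rows=128, followup_rows=54)

for frame in event_stream():
    print(frame, end="")
```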
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 8
🧹 Nitpick comments (7)
commcare_connect/templates/tasks/tasks_list.html (1)
271-274: Consider URL-encoding query parameter values.

The query parameter values are output directly without encoding. If a value contains special characters like `&` or `=`, the URL could break.

🛡️ Proposed fix to add URL encoding

```diff
- <a href="?page={{ page_obj.previous_page_number }}{% for key, value in request.GET.items %}{% if key != 'page' %}&{{ key }}={{ value }}{% endif %}{% endfor %}"
+ <a href="?page={{ page_obj.previous_page_number }}{% for key, value in request.GET.items %}{% if key != 'page' %}&{{ key }}={{ value|urlencode }}{% endif %}{% endfor %}"
```

Apply the same change to the "Next" link at line 277.
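To see concretely what breaks without encoding, here is a quick check with Python's `urllib.parse`, the stdlib analogue of the `urlencode` template filter:

```python
from urllib.parse import parse_qs, quote

value = "a&b=c"  # a filter value containing reserved characters

# Unencoded, the value is torn into bogus parameters:
print(parse_qs("page=2&q=" + value))         # {'page': ['2'], 'q': ['a'], 'b': ['c']}

# Percent-encoded (what |urlencode emits), it survives the round trip:
print(parse_qs("page=2&q=" + quote(value)))  # {'page': ['2'], 'q': ['a&b=c']}
```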
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/templates/tasks/tasks_list.html` around lines 271 - 274, The Previous and Next pagination links in tasks_list.html embed request.GET values without encoding, which can break URLs if values contain & or =; update the href-building loop inside the anchor tags (the {% for key, value in request.GET.items %} ... {% endfor %} blocks used in the "Previous" and corresponding "Next" <a> elements) to apply Django's urlencode filter to each value (i.e., use {{ value|urlencode }}), preserving the existing key != 'page' check so all parameters are correctly URL-encoded.

commcare_connect/workflow/views.py (2)
971-975: Consider using `logger.exception` for better stack trace logging.

Per static analysis hint: when logging in an exception handler, `logger.exception` automatically includes the traceback without needing `exc_info=True`. This is a minor improvement consistent with Python logging best practices.

♻️ Proposed fix

```diff
  except Exception as e:
-     logger.error(f"Failed to sync template render code for definition {definition_id}: {e}")
+     logger.exception(f"Failed to sync template render code for definition {definition_id}: {e}")
      return JsonResponse({"error": str(e)}, status=500)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/views.py` around lines 971 - 975, Replace the generic logger.error call inside the except Exception as e handler that logs "Failed to sync template render code for definition {definition_id}: {e}" with logger.exception so the full traceback is included; locate the except Exception as e block (the handler that currently calls logger.error and returns JsonResponse({"error": str(e)}, status=500)) and change the logging call to logger.exception("Failed to sync template render code for definition %s", definition_id) while keeping the same JsonResponse return behavior.
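The difference is easy to demonstrate in isolation: `logger.exception` logs at ERROR level and appends the active traceback automatically:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))

try:
    1 / 0
except Exception:
    # Equivalent to logger.error(..., exc_info=True): message plus traceback.
    logger.exception("Failed to sync template render code for definition %s", 42)

print("ZeroDivisionError" in buf.getvalue())  # True
```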
953-953: Redundant import of `get_template`.

`TEMPLATES` is already imported at module level (line 19), and `get_template` simply does `TEMPLATES.get(template_key)`. You can use `TEMPLATES.get(template_key)` directly or remove this local import since you're already using `TEMPLATES` on line 943.

♻️ Proposed simplification

```diff
- from commcare_connect.workflow.templates import get_template
-
- template = get_template(template_key)
+ template = TEMPLATES.get(template_key)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/views.py` at line 953, Remove the redundant local import of get_template and use the module-level TEMPLATES mapping directly; find the import statement "from commcare_connect.workflow.templates import get_template" and delete it, then replace any usages of get_template(template_key) (e.g., in the view function where TEMPLATES is already referenced) with TEMPLATES.get(template_key) or simply call TEMPLATES.get(...) so the code relies on the existing module-level TEMPLATES import.

commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md (1)
381-381: Minor wording improvement suggestion.

The phrase "Overview tab only - other tabs have Filter only" repeats "only". Consider rephrasing for clarity.

📝 Suggested wording

```diff
-**Actions per FLW** (Overview tab only - other tabs have Filter only):
+**Actions per FLW** (Overview tab — other tabs have Filter button only):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md` at line 381, Update the heading text "**Actions per FLW** (Overview tab only - other tabs have Filter only):" to remove the duplicated "only" and improve clarity; for example change it to "**Actions per FLW** (Overview tab only; other tabs have Filter)" or "**Actions per FLW** (Overview tab only — other tabs provide Filter)". Make the replacement in the same template line so the meaning remains identical but reads more cleanly.

commcare_connect/workflow/templates/mbw_monitoring/session_adapter.py (1)
16-16: Consider using `VALID_FLW_RESULTS` for validation in `save_flw_result()`.

The constant `VALID_FLW_RESULTS` is defined here but not used for validation. The validation currently happens in `save_worker_result_api` in `views.py`, but `save_flw_result()` in this module could be called directly and would accept any result value.

🛡️ Proposed validation in save_flw_result

```diff
  def save_flw_result(request, run_id, username, result, notes, assessed_by):
      ...
      data_access = None
      try:
          data_access = WorkflowDataAccess(request=request)
          run = data_access.get_run(int(run_id))
          if not run:
              return None
+
+         # Validate result value
+         if result and result not in VALID_FLW_RESULTS:
+             logger.warning(f"Invalid FLW result value: {result}")
+             return None
          current_state = run.data.get("state", {})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/session_adapter.py` at line 16, The save_flw_result function currently accepts any result value though VALID_FLW_RESULTS is defined; update save_flw_result to validate its result argument against VALID_FLW_RESULTS (e.g., if result not in VALID_FLW_RESULTS raise a ValueError or appropriate exception) before proceeding so callers that invoke save_flw_result directly cannot persist invalid states; reference the save_flw_result function and the VALID_FLW_RESULTS constant when making this change.commcare_connect/workflow/templates/mbw_monitoring/flw_api.py (1)
98-122: Resource cleanup should use try/finally pattern.

The `data_access.close()` call could be skipped if an exception occurs between `get_audit_sessions()` and `close()`. Consider using try/finally for consistent resource cleanup, similar to the pattern in `session_adapter.py`.

♻️ Proposed fix for audit sessions section

```diff
  # 1. Read traditional audit sessions
+ data_access = None
  try:
      from commcare_connect.audit.data_access import AuditDataAccess
      data_access = AuditDataAccess(request=request)
      all_sessions = data_access.get_audit_sessions()
-     data_access.close()
      for session in all_sessions:
          # ... processing logic ...
  except Exception as e:
      logger.warning(f"Failed to fetch audit history: {e}")
+ finally:
+     if data_access:
+         data_access.close()
```

The same pattern should be applied to the workflow runs (lines 125-148) and tasks (lines 151-173) sections.
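A toy version of the suggested pattern, with a stand-in data-access class (the names here are illustrative, not the project's classes), showing that `close()` runs even when the fetch raises:

```python
class FakeDataAccess:
    instances = []

    def __init__(self):
        self.closed = False
        FakeDataAccess.instances.append(self)

    def get_audit_sessions(self):
        raise RuntimeError("backend unavailable")

    def close(self):
        self.closed = True

def fetch_history():
    data_access = None
    try:
        data_access = FakeDataAccess()
        return data_access.get_audit_sessions()
    except Exception:
        return []  # log-and-continue, as the surrounding code does
    finally:
        # Runs whether the try body returned, raised, or was caught above.
        if data_access is not None:
            data_access.close()

print(fetch_history(), FakeDataAccess.instances[-1].closed)  # [] True
```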
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/flw_api.py` around lines 98 - 122, Wrap each data-access usage in a try/finally to guarantee close() is always called: for the audit sessions block, move creation of AuditDataAccess(request=request) into a try and call data_access.close() in the finally so get_audit_sessions() exceptions don't skip cleanup; do the same pattern for the workflow runs and tasks sections (the blocks that call their respective data access classes and methods and then .close()), ensuring you still handle exceptions (log or rethrow) but always close the data access in finally.

commcare_connect/workflow/templates/mbw_monitoring/views.py (1)
645-648: LGTM - Top-level exception handling for SSE stream.

Capturing the exception to Sentry and yielding an error event is the correct pattern for SSE streaming boundaries. The error message could be slightly improved by avoiding `str(e)` in the f-string (RUF010), but this is minor.

Optional: Use explicit conversion flag

```diff
- yield send_sse_event("Error", error=f"Failed to load dashboard data: {str(e)}")
+ yield send_sse_event("Error", error=f"Failed to load dashboard data: {e!s}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 645 - 648, The SSE stream except block currently constructs the error message using str(e) inside an f-string; change it to use an explicit conversion flag or standard interpolation to avoid RUF010. Update the logger.error call in the except block that references logger.error(...) and the subsequent yield send_sse_event(...) so the error message uses f"...{e!s}" (or equivalent explicit conversion) instead of f"...{str(e)}", leaving sentry_sdk.capture_exception(e) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/templates/workflow/list.html`:
- Around line 784-788: Replace the incorrect truncatechars filter on
run.created_at with Django's date filter: locate the template fragment that
renders {{ run.created_at }} and swap the truncatechars:10 usage for a date
filter such as date:"Y-m-d" (or date:"M d, Y" if that matches surrounding
templates) so the timestamp is formatted as an actual date rather than a
truncated ISO string.
In `@commcare_connect/workflow/data_access.py`:
- Line 572: The created_at field is using a naive datetime via datetime.now();
update the code that sets "created_at" (in commcare_connect.workflow.data_access
where the dict is built) to use a timezone-aware UTC timestamp, e.g.
datetime.now(timezone.utc) (or datetime.utcnow().replace(tzinfo=timezone.utc))
and ensure timezone is imported from datetime so the stored value is
timezone-aware and consistent across the codebase.
In `@commcare_connect/workflow/templates/mbw_monitoring/data_fetchers.py`:
- Around line 429-452: The bust_mbw_hq_cache function currently falls back to
cache.clear() when cache.delete_pattern is unavailable which will wipe unrelated
production caches; change the fallback so it does NOT call cache.clear(): when
hasattr(cache, "delete_pattern") is False, log a warning and return 0 (or -1
only if you explicitly want to signal full-clear), and in the except block
remove the cache.clear() call and instead log the exception and return 0;
reference bust_mbw_hq_cache, cache.delete_pattern and cache.clear when making
the changes, or alternatively implement a targeted iteration over known MBW key
prefixes (mbw_visit_cases:, mbw_mother_cases:, mbw_opp_metadata:) if you can
enumerate keys without clearing the whole cache.
In `@commcare_connect/workflow/templates/mbw_monitoring/followup_analysis.py`:
- Line 371: The line computing grace_cutoff in aggregate_mother_metrics creates
an unused variable (grace_cutoff = current_date -
timedelta(days=GRACE_PERIOD_DAYS)); remove that computation or use it where
intended. Locate aggregate_mother_metrics and either delete the grace_cutoff
assignment and any related imports/refs, or replace its current unused
calculation by applying grace_cutoff where logic requires (e.g., filtering by
cutoff date) so the variable is actually used; reference GRACE_PERIOD_DAYS and
current_date to find the exact place to change.
- Around line 120-121: In calculate_visit_status, remove the unused local
variable visit_type (currently set via visit_case.get("properties", {}) →
props.get("visit_type", "")) since it's never read; keep props and any other
property access intact and ensure no logic depends on visit_type before deleting
the assignment to avoid unused-variable warnings.
- Around line 445-453: The inline comment currently placed after the end of
_build_visit_details is orphaned; remove it from after the function and relocate
it to directly above the VISIT_CREATE_FLAGS constant (or convert it to a
module-level docstring) so the comment properly documents the VISIT_CREATE_FLAGS
mapping; ensure the comment text references VISIT_CREATE_FLAGS and is not
indented as if inside _build_visit_details.
- Around line 897-904: compute_overview_quality_metrics declares a current_date
parameter that is never used; remove the unused parameter (current_date) from
the function signature and all call sites that pass it (update callers that call
compute_overview_quality_metrics to stop passing current_date), and run tests to
ensure no remaining references; alternatively, if current_date is intended for
future logic, add a brief comment explaining it's reserved and reference it in
the docstring, but prefer removing the parameter to avoid dead API surface.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 570-582: The current block creates TaskDataAccess and calls
get_tasks(), but if get_tasks() raises an exception TaskDataAccess.close() is
never called; wrap the use of TaskDataAccess in a try/finally (or use a context
manager if TaskDataAccess supports __enter__/__exit__) so that
data_access.close() is always invoked: instantiate
TaskDataAccess(user=request.user, request=request), then in a try block call
data_access.get_tasks() and any processing, and in the finally block call
data_access.close(); keep the existing exception logging (logger.warning) in the
except block or re-raise as appropriate.
---
Nitpick comments:
In `@commcare_connect/templates/tasks/tasks_list.html`:
- Around line 271-274: The Previous and Next pagination links in tasks_list.html
embed request.GET values without encoding, which can break URLs if values
contain & or =; update the href-building loop inside the anchor tags (the {% for
key, value in request.GET.items %} ... {% endfor %} blocks used in the
"Previous" and corresponding "Next" <a> elements) to apply Django's urlencode
filter to each value (i.e., use {{ value|urlencode }}), preserving the existing
key != 'page' check so all parameters are correctly URL-encoded.
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Line 381: Update the heading text "**Actions per FLW** (Overview tab only -
other tabs have Filter only):" to remove the duplicated "only" and improve
clarity; for example change it to "**Actions per FLW** (Overview tab only; other
tabs have Filter)" or "**Actions per FLW** (Overview tab only — other tabs
provide Filter)". Make the replacement in the same template line so the meaning
remains identical but reads more cleanly.
In `@commcare_connect/workflow/templates/mbw_monitoring/flw_api.py`:
- Around line 98-122: Wrap each data-access usage in a try/finally to guarantee
close() is always called: for the audit sessions block, move creation of
AuditDataAccess(request=request) into a try and call data_access.close() in the
finally so get_audit_sessions() exceptions don't skip cleanup; do the same
pattern for the workflow runs and tasks sections (the blocks that call their
respective data access classes and methods and then .close()), ensuring you
still handle exceptions (log or rethrow) but always close the data access in
finally.
In `@commcare_connect/workflow/templates/mbw_monitoring/session_adapter.py`:
- Line 16: The save_flw_result function currently accepts any result value
though VALID_FLW_RESULTS is defined; update save_flw_result to validate its
result argument against VALID_FLW_RESULTS (e.g., if result not in
VALID_FLW_RESULTS raise a ValueError or appropriate exception) before proceeding
so callers that invoke save_flw_result directly cannot persist invalid states;
reference the save_flw_result function and the VALID_FLW_RESULTS constant when
making this change.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 645-648: The SSE stream except block currently constructs the
error message using str(e) inside an f-string; change it to use an explicit
conversion flag or standard interpolation to avoid RUF010. Update the
logger.error call in the except block that references logger.error(...) and the
subsequent yield send_sse_event(...) so the error message uses f"...{e!s}" (or
equivalent explicit conversion) instead of f"...{str(e)}", leaving
sentry_sdk.capture_exception(e) unchanged.
In `@commcare_connect/workflow/views.py`:
- Around line 971-975: Replace the generic logger.error call inside the except
Exception as e handler that logs "Failed to sync template render code for
definition {definition_id}: {e}" with logger.exception so the full traceback is
included; locate the except Exception as e block (the handler that currently
calls logger.error and returns JsonResponse({"error": str(e)}, status=500)) and
change the logging call to logger.exception("Failed to sync template render code
for definition %s", definition_id) while keeping the same JsonResponse return
behavior.
- Line 953: Remove the redundant local import of get_template and use the
module-level TEMPLATES mapping directly; find the import statement "from
commcare_connect.workflow.templates import get_template" and delete it, then
replace any usages of get_template(template_key) (e.g., in the view function
where TEMPLATES is already referenced) with TEMPLATES.get(template_key) or
simply call TEMPLATES.get(...) so the code relies on the existing module-level
TEMPLATES import.
- Restore 7 missing CommCareDataAccess methods (fetch_forms, fetch_cases_by_ids, get_form_xmlns, list_applications, discover_form_xmlns, _refresh_token, check_token_valid with auto-refresh) that were stripped during the refactor from custom_analysis/ to workflow/templates/mbw_monitoring/
- Restore 3 GPS metric functions in gps_analysis.py (compute_median_meters_per_visit, compute_median_minutes_per_visit, _prepare_daily_visit_pairs)
- Fix "Refresh Data" button to pass bust_cache=1 on SSE stream re-fetch
- Fix N+1 API calls on workflow listing page (fetch all runs once)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
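The N+1 fix in the last bullet follows the usual shape: one bulk fetch, then in-memory grouping. A generic sketch, with the record shape assumed for illustration:

```python
from collections import defaultdict

def group_runs_by_definition(all_runs: list[dict]) -> dict[int, list[dict]]:
    # One pass over a single bulk fetch replaces a per-definition API call.
    grouped: dict[int, list[dict]] = defaultdict(list)
    for run in all_runs:
        grouped[run["definition_id"]].append(run)
    return grouped

runs = [
    {"id": 1, "definition_id": 10},
    {"id": 2, "definition_id": 11},
    {"id": 3, "definition_id": 10},
]
print({k: len(v) for k, v in group_runs_by_definition(runs).items()})  # {10: 2, 11: 1}
```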
🤖 AI Code Review · Powered by Claude — auto-generated review
- Fix visit status on-time window to vary by visit type (PNC = 4 days, others = 7 days) per MBW schedule spec via VISIT_ON_TIME_DAYS mapping
- Fix naive/aware datetime comparison crash in gps_analysis.py sort keys (datetime.min → datetime.min with UTC timezone)
- Fix data_access.py datetime.now() → datetime.now(timezone.utc) for consistent timezone-aware timestamps across all stored records
- Fix list.html created_at rendering (truncatechars:10 → slice:":10")
- Remove cache.clear() from bust_mbw_hq_cache fallback to prevent wiping unrelated production caches
- Wrap TaskDataAccess in try/finally for proper resource cleanup
- Remove unused grace_cutoff variable in aggregate_mother_metrics
- Remove unused current_date param from compute_overview_quality_metrics
- Fix orphaned comment above VISIT_CREATE_FLAGS (unreachable dead code)
- Update DOCUMENTATION.md with per-visit-type on-time window table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
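The naive/aware fixes in the list above come down to one rule: never mix the two kinds of datetime. Mixing them in a comparison, such as a sort key, raises:

```python
from datetime import datetime, timezone

naive = datetime.now()              # tzinfo is None
aware = datetime.now(timezone.utc)  # timezone-aware replacement

try:
    naive < aware
except TypeError as e:
    # This is the sort-key crash: naive and aware values don't compare.
    print(type(e).__name__)  # TypeError

# Likewise, a datetime.min sort-key default must be made aware first:
sentinel = datetime.min.replace(tzinfo=timezone.utc)
print(sentinel < aware)  # True
```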
🤖 AI Code Review · Powered by Claude — auto-generated review
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/data_access.py`:
- Around line 116-120: The selected_count property currently uses "or" which
falls back to selected_flws when selected_workers exists but is an empty list;
change the logic to prefer key-presence: in selected_count, check if
"selected_workers" is a key in state and use state.get("selected_workers", [])
in that case, otherwise use state.get("selected_flws", []); then return
len(selected) if it's a list else 0 (refer to the selected_count property and
the state keys selected_workers and selected_flws).
In `@commcare_connect/workflow/templates/mbw_monitoring/data_fetchers.py`:
- Around line 221-226: The cache metadata currently sets cached_count to the
requested size (unique_ids) which can misreport when fewer cases are actually
returned; change cached_count to reflect the actual returned items by using
len(all_cases) (and similarly update the visit and mother cache instances at the
other occurrence) so the cache's cached_count matches the length of the stored
list rather than the requested size.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 796-802: The code currently casts run_id to int when calling
load_monitoring_run (run_id = request.GET.get("run_id") ...; monitoring_session
= load_monitoring_run(request, int(run_id))), which can raise ValueError for
non-numeric input; update the view to validate/parse run_id safely (e.g., check
numeric with run_id.isdigit() or wrap int(run_id) in try/except ValueError) and
return a JsonResponse with a 400 error when parsing fails before calling
load_monitoring_run so only valid integers reach load_monitoring_run.
- Around line 217-227: session_id is cast to int without validation in the block
that calls load_monitoring_run, which can raise ValueError for non-numeric
input; update the logic that reads request.GET.get("run_id") or
request.GET.get("session_id") to validate that session_id is numeric (or safely
parse it) before calling load_monitoring_run(request, int(session_id)), and if
parsing fails skip calling load_monitoring_run and leave
monitoring_session/session_flw_filter as None (or log a warning). Refer to the
symbols session_id, request.GET.get, load_monitoring_run, monitoring_session,
and session_flw_filter when making the change.
- Around line 100-107: The code currently calls int(run_id) without validating
the query value which will raise ValueError for non-numeric run_id/session_id;
update the block that retrieves run_id (self.request.GET.get("run_id") or
self.request.GET.get("session_id")) to validate/parse safely before calling
load_monitoring_run: either check the string is numeric (e.g., isdigit or a
regex) or wrap int(run_id) in a try/except ValueError and skip calling
load_monitoring_run (or handle the error) when parsing fails; keep the rest of
the logic that assigns monitoring_session and derives
session_opp_id/opportunity_id unchanged.
In `@commcare_connect/workflow/views.py`:
- Around line 217-239: This GET handler currently mutates state when
self.request.GET.get("sync") == "true" by calling data_access.save_render_code
with TEMPLATES[matched_template]["render_code"] (using matched_template and
definition_id); remove that write-from-GET logic and instead only compute
matched_template for read/preview purposes, and call or redirect to the POST
sync endpoint you added for performing the actual save_render_code operation.
Locate the block that checks self.request.GET.get("sync"), delete or disable the
data_access.save_render_code call and the logger.info write, and ensure any
template-matching (TEMPLATES, matched_template, name_lower, definition.name)
remains read-only or is moved into the POST handler where save_render_code is
invoked.
- Around line 604-683: After retrieving the run in save_worker_result_api, add
an explicit ownership check: ensure request.labs_context exists and that the
retrieved run's opportunity_id (or run.opportunity_id /
run.data.get("opportunity_id")) matches request.labs_context.opportunity_id; if
not, return a 403 JsonResponse. Do the same explicit check in the other two
endpoints named in the comment—complete_run_api and
sync_template_render_code_api—immediately after their record retrieval and
before performing any mutations, returning 403 on mismatch or missing
labs_context.
- Restore 5 lost FieldComputations (parity, anc_completion_date, pnc_completion_date, baby_dob, app_build_version) and extract_app_build_version extractor from old branch
- Fix gps_location/visit_datetime to use extractor= instead of path=/transform= for correct backend compatibility
- Add bf_status pipeline field (5 postnatal form paths) and % EBF column to Overview tab with color coding (green 50-85%, yellow 31-49%/86-95%, red 0-30%/96-100%), sorting, toggle, and OCS red flag

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix selected_count to check key presence instead of falsy `or` fallback
- Fix cached_count to reflect actual returned count, not requested count
- Add _parse_int_param helper to safely handle non-numeric query params

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review · Powered by Claude — auto-generated review
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
commcare_connect/workflow/data_access.py (1)
634-657: ⚠️ Potential issue | 🟡 Minor

Ensure runs created via `get_or_create_run` also set `created_at`.

`WorkflowRunRecord.created_at` now exists and `create_run()` sets it, but `get_or_create_run()` creates runs without it. That can leave `created_at` empty and break "latest run" sorting or display.

🐛 Proposed fix

```diff
  data = {
      "definition_id": definition_id,
      "period_start": week_start.isoformat(),
      "period_end": week_end.isoformat(),
      "status": "in_progress",
      "state": {},
+     "created_at": datetime.now(timezone.utc).isoformat(),
  }
```

```shell
#!/bin/bash
# Verify call sites and usage of created_at to assess impact.
rg -n --type=py '\bget_or_create_run\b'
rg -n --type=py '\bcreated_at\b' commcare_connect/workflow
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/data_access.py` around lines 634 - 657, get_or_create_run currently creates a workflow_run record via labs_api.create_record without populating WorkflowRunRecord.created_at, causing empty timestamps; update get_or_create_run to set created_at (e.g., to datetime.now(timezone.utc).isoformat() or by using the same creation-path as create_run) on the data passed to labs_api.create_record so new records include created_at, ensuring consistent behavior with create_run and correct "latest run" sorting/display.

commcare_connect/workflow/templates/mbw_monitoring/pipeline_config.py (1)
10-36: ⚠️ Potential issue | 🟡 Minor

Include a `metadata.location` fallback in `extract_gps_location`.

The docstring and code comment promise a top-level `metadata.location` check, but the implementation skips it entirely. Since visit data is constructed with `metadata.location` at the top level, this currently silences GPS data from that location.

🐛 Proposed fix

```diff
  def extract_gps_location(visit_data: dict) -> str | None:
      """
      Extract GPS location string from visit data.
@@
      Returns:
          GPS string "lat lon altitude accuracy" or None
      """
      # First try top-level metadata.location (already extracted by pipeline)
+     metadata = visit_data.get("metadata", {})
+     if isinstance(metadata, dict):
+         top_level_loc = metadata.get("location")
+         if isinstance(top_level_loc, str):
+             return top_level_loc
+
      form_json = visit_data.get("form_json", {})
      # Try form.meta.location.#text path
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/pipeline_config.py` around lines 10 - 36, The function extract_gps_location currently only checks form_json.form.meta.location but omits the promised top-level metadata.location fallback; update extract_gps_location to first check visit_data.get("metadata", {}).get("location") (or visit_data.get("metadata") and its "location" key) and return that value if present and truthy, before falling back to the existing form_json -> meta -> location logic (preserve handling for dict vs str and the return of None if nothing found).
🧹 Nitpick comments (1)
commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md (1)
**382-388:** Tighten wording by removing the duplicated "only."

Small readability polish for the bullet about the Overview tab actions.

✏️ Suggested tweak

```diff
-- **Actions per FLW** (Overview tab only - other tabs have Filter only):
+- **Actions per FLW** (Overview tab only - other tabs have Filter):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md` around lines 382 - 388, The heading phrase duplicates "only" — update the "**Actions per FLW**" header to remove the repeated word and clarify scope; for example change "(Overview tab only - other tabs have Filter only):" to a clearer form such as "(Overview tab — other tabs: Filter only):" or "(Overview tab; other tabs: Filter only):" so the meaning is unambiguous; edit the DOCUMENTATION.md text around the "**Actions per FLW**" header to use the chosen phrasing and keep the rest of the bullets (Assessment buttons, Notes button, Filter button, Task creation button) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Around line 56-94: The fenced ASCII diagram blocks in DOCUMENTATION.md are
missing language identifiers (triggering markdownlint MD040); update each
triple-backtick fence that wraps the ASCII diagrams (e.g., the block starting
with "User Browser" and other similar blocks describing endpoints like
"MBWMonitoringStreamView", "WorkflowRunView", "MBWSnapshotView") to use a
language tag such as text (replace ``` with ```text), and apply the same change
to the other flagged blocks mentioned in the comment so all code fences include
an appropriate language identifier.
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 439-456: The ebf_counts_by_flw aggregation uses raw row.username
which doesn't match the previously lowercased active_usernames/flw_names and
causes missing %EBF metrics; normalize the username the same way as earlier
(e.g., username_normalized = (row.username or "").strip().lower()), use that
normalized key throughout the ebf_counts_by_flw accumulation and when producing
ebf_pct_by_flw, and skip empty usernames so the Overview table and red-flag
logic see the same user keys as active_usernames/flw_names.
---
Outside diff comments:
In `@commcare_connect/workflow/data_access.py`:
- Around line 634-657: get_or_create_run currently creates a workflow_run record
via labs_api.create_record without populating WorkflowRunRecord.created_at,
causing empty timestamps; update get_or_create_run to set created_at (e.g., to
datetime.now(timezone.utc).isoformat() or by using the same creation-path as
create_run) on the data passed to labs_api.create_record so new records include
created_at, ensuring consistent behavior with create_run and correct "latest
run" sorting/display.
In `@commcare_connect/workflow/templates/mbw_monitoring/pipeline_config.py`:
- Around line 10-36: The function extract_gps_location currently only checks
form_json.form.meta.location but omits the promised top-level metadata.location
fallback; update extract_gps_location to first check visit_data.get("metadata",
{}).get("location") (or visit_data.get("metadata") and its "location" key) and
return that value if present and truthy, before falling back to the existing
form_json -> meta -> location logic (preserve handling for dict vs str and the
return of None if nothing found).
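The timestamp format suggested for `created_at` can be checked on its own; `datetime.now(timezone.utc)` yields an aware datetime whose `isoformat()` carries an explicit offset:

```python
from datetime import datetime, timezone

# Timezone-aware replacement for the deprecated datetime.utcnow()
created_at = datetime.now(timezone.utc).isoformat()

# Aware ISO strings end with an explicit UTC offset and round-trip cleanly
print(created_at.endswith("+00:00"))  # True
print(datetime.fromisoformat(created_at).tzinfo is not None)  # True
```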
---
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Around line 382-388: The heading phrase duplicates "only" — update the
"**Actions per FLW**" header to remove the repeated word and clarify scope; for
example change "(Overview tab only - other tabs have Filter only):" to a clearer
form such as "(Overview tab — other tabs: Filter only):" or "(Overview tab;
other tabs: Filter only):" so the meaning is unambiguous; edit the
DOCUMENTATION.md text around the "**Actions per FLW**" header to use the chosen
phrasing and keep the rest of the bullets (Assessment buttons, Notes button,
Filter button, Task creation button) unchanged.
…nt API calls

- Remove redundant onUpdateState call after saveWorkerResult (was duplicating the save via a second POST to /state/)
- Add optimistic UI: button state updates instantly, reverts on error
- Make assessment buttons toggleable (click active status to clear)
- Pass pre-fetched run to update_run_state to skip redundant GET
- Add current_record param to api_client.update_record to skip re-fetch
- Result: 1 GET + 1 POST per click (down from 5 GETs + 2 POSTs)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review — Powered by Claude (auto-generated review)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
commcare_connect/workflow/data_access.py (1)
**636-651:** ⚠️ Potential issue | 🟡 Minor

Add created_at when get_or_create_run creates a new run.

get_or_create_run doesn't set created_at while create_run does (line 575), creating an inconsistency that could affect downstream code relying on this field.

🛠️ Proposed fix

```diff
 data = {
     "definition_id": definition_id,
     "period_start": week_start.isoformat(),
     "period_end": week_end.isoformat(),
     "status": "in_progress",
     "state": {},
+    "created_at": datetime.now(timezone.utc).isoformat(),
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/data_access.py` around lines 636 - 651, get_or_create_run doesn't set the created_at field when it constructs a new run dict, causing inconsistency with create_run which does set created_at; update get_or_create_run (the function that builds the data dict with keys "definition_id","period_start","period_end","status","state") to include a created_at value (e.g., datetime.now(timezone.utc).isoformat() or matching create_run's timestamp format) before calling create_run/listing so the new run records have the same created_at semantics as create_run.
🧹 Nitpick comments (2)
commcare_connect/workflow/views.py (2)
**722-727:** Direct access to `labs_api` breaks encapsulation.

Calling `data_access.labs_api.update_record()` directly bypasses the abstraction layer. Consider adding a `complete_run()` method to `WorkflowDataAccess` that handles status changes, similar to how `update_run_state()` works.

Suggested approach

Add to `WorkflowDataAccess`:

```python
def complete_run(self, run_id: int, overall_result: str = "completed", notes: str = "") -> WorkflowRunRecord | None:
    """Mark a workflow run as completed."""
    run = self.get_run(run_id)
    if not run:
        return None
    current_state = run.data.get("state", {})
    updated_data = {
        **run.data,
        "status": "completed",
        "state": {
            **current_state,
            "overall_result": overall_result,
            "notes": notes,
        },
    }
    result = self.labs_api.update_record(
        record_id=run_id,
        experiment=self.EXPERIMENT,
        type="workflow_run",
        data=updated_data,
    )
    return WorkflowRunRecord(result.__dict__) if result else None
```

Then in the view:

```diff
- result = data_access.labs_api.update_record(
-     record_id=run_id,
-     experiment=data_access.EXPERIMENT,
-     type="workflow_run",
-     data=updated_data,
- )
+ result = data_access.complete_run(run_id, overall_result, notes)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/views.py` around lines 722 - 727, The view is directly calling data_access.labs_api.update_record which breaks encapsulation; add a complete_run method on WorkflowDataAccess (mirroring update_run_state) that accepts run_id (and optional overall_result: str = "completed", notes: str = ""), uses get_run(run_id) to fetch the run, builds updated_data by setting "status": "completed" and merging current run.data["state"] with {"overall_result": overall_result, "notes": notes}, calls self.labs_api.update_record(record_id=run_id, experiment=self.EXPERIMENT, type="workflow_run", data=updated_data) and returns a WorkflowRunRecord instance or None, then replace the direct labs_api.update_record call in the view with data_access.complete_run(run_id, overall_result="completed", notes="").
**980-982:** Add `exc_info=True` for consistent exception logging.

Other handlers in this file (lines 681, 741) include full stack traces via `exc_info=True`. This handler should do the same for debugging parity.

Suggested fix

```diff
 except Exception as e:
-    logger.error(f"Failed to sync template render code for definition {definition_id}: {e}")
+    logger.error(f"Failed to sync template render code for definition {definition_id}: {e}", exc_info=True)
     return JsonResponse({"error": str(e)}, status=500)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/views.py` around lines 980 - 982, The exception handler that logs "Failed to sync template render code for definition {definition_id}" uses logger.error without a stack trace; update the except block in the views.py handler (the one that references logger, definition_id and returns JsonResponse) to pass exc_info=True to logger.error so the full traceback is logged consistently with the other handlers (lines that currently use exc_info=True).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commcare_connect/labs/integrations/connect/api_client.py`:
- Around line 281-283: In update_record, validate that a provided current_record
actually matches the record_id, experiment, and type before using it: check
current_record.id (or appropriate identifier), current_record.experiment and
current_record.type against the record_id, experiment, and type parameters; if
they don't match, either ignore current_record and call
self.get_record_by_id(record_id, experiment=experiment, type=type) or raise a
ValueError/TypeError with a clear message. Update the logic around the existing
current = current_record or self.get_record_by_id(...) so it first performs this
match-check and only uses current_record when all fields align to avoid mixing
unrelated identifiers/metadata.
In `@commcare_connect/workflow/templates/mbw_monitoring/session_adapter.py`:
- Around line 71-77: Update session_adapter to be backward-compatible with
legacy keys: modify the selected_flw_usernames property to return
self._state.get("selected_workers", self._state.get("selected_flws", [])) and
modify the flw_results property to return self._state.get("worker_results",
self._state.get("flw_results", {})); update the validation check (the method
that currently validates presence/format of selected_flws/flw_results) to accept
either variant (selected_workers or selected_flws and worker_results or
flw_results) and treat them as equivalent; and in the code paths that update
results (the functions that currently write flw_results and selected_flws) read
from worker_results if present and write both keys when saving so both
worker_results and flw_results (and selected_workers and selected_flws) are kept
in sync for backward compatibility.
---
Outside diff comments:
In `@commcare_connect/workflow/data_access.py`:
- Around line 636-651: get_or_create_run doesn't set the created_at field when
it constructs a new run dict, causing inconsistency with create_run which does
set created_at; update get_or_create_run (the function that builds the data dict
with keys "definition_id","period_start","period_end","status","state") to
include a created_at value (e.g., datetime.now(timezone.utc).isoformat() or
matching create_run's timestamp format) before calling create_run/listing so the
new run records have the same created_at semantics as create_run.
---
Duplicate comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/DOCUMENTATION.md`:
- Around line 56-94: The fenced code blocks in DOCUMENTATION.md lack language
identifiers causing markdownlint MD040 errors; update each triple-backtick fence
that contains ASCII diagrams or plain text (e.g., the "User Browser" block that
starts with "User Browser" and the other similar blocks referenced in the
comment) to include a language tag such as text (change ``` to ```text), and
apply the same change to the other flagged blocks (the ranges noted in the
review: the blocks around lines 98-126, 130-145, 472-496, and 870-876) so all
code fences have an explicit language identifier for lint compliance and
readability.
In `@commcare_connect/workflow/views.py`:
- Around line 685-743: complete_run_api currently updates a run without
verifying the caller owns it; before mutating the run (after run =
data_access.get_run(run_id)) validate ownership the same way
save_worker_result_api does by comparing run.opportunity_id (or other ownership
field used in WorkflowDataAccess/user context) to the current user's
context/permissions from request (or data_access) and return a 403 JsonResponse
if it does not match; ensure the check occurs immediately after retrieving run
and before constructing updated_data or calling
data_access.labs_api.update_record.
- Around line 929-983: The handler sync_template_render_code_api currently
mutates a workflow definition without verifying the requesting user has
ownership/rights; after retrieving definition via
data_access.get_definition(definition_id) add an ownership/access check (e.g.,
compare definition.opportunity_id or definition.owner_id against the current
user's opportunity/context on request.user or call a helper on
WorkflowDataAccess like user_has_access_to_definition(definition_id,
request.user) or get_opportunity_for_user and compare IDs) and return
JsonResponse({"error":"Forbidden"}, status=403) when the user does not own or
belong to the opportunity; place this check before any mutation (before
save_render_code) so only authorized users can sync render code.
- Around line 217-239: The branch in the view currently performs a write during
a GET (it calls data_access.save_render_code when self.request.GET.get("sync")
== "true"), which must be removed; instead, detect the requested template (use
the existing logic that computes explicit_template, matched_template and looks
up TEMPLATES) but do not call data_access.save_render_code here—instead redirect
or forward the request to the POST sync endpoint (sync_template_render_code_api)
or instruct the client to call that POST, passing definition_id and
matched_template; ensure no state changes occur in this GET handler and that any
logging (logger.info) only records intent, not that the write occurred.
- Around line 604-683: The save_worker_result_api handler currently uses
data_access.get_run(run_id) but does not verify the run belongs to the caller's
opportunity/labs context; add an ownership check after retrieving run (in
save_worker_result_api) that compares the run's opportunity/opp identifier (e.g.
run.opportunity_id or run.data['opportunity_id'] / run.data['opportunity'])
against the request's labs_context/opportunity (e.g.
request.labs_context.opportunity or similar) and return a 403 JsonResponse if
they don’t match or if labs_context is missing; ensure this check runs before
any state modification or calling data_access.update_run_state to prevent
cross-opportunity mutations.
- Line 637: The new endpoints save_worker_result_api, complete_run_api, and
sync_template_render_code_api create WorkflowDataAccess(request=request) but
never call its close(), causing a resource leak; wrap the creation and use of
the WorkflowDataAccess instance in a try/finally (or context manager if
available) and ensure you call data_access.close() in the finally block after
all operations; locate the WorkflowDataAccess instantiations in those three
functions and add the try/finally around code that uses data_access so close()
is always executed even on exceptions.
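The try/finally shape the last item asks for can be sketched with a stub in place of `WorkflowDataAccess` (the stub class is illustrative only, not the real data-access layer):

```python
class DataAccessStub:
    """Illustrative stand-in for WorkflowDataAccess; records whether close() ran."""

    def __init__(self):
        self.closed = False

    def get_run(self, run_id: int):
        if run_id < 0:
            raise ValueError("run not found")
        return {"id": run_id}

    def close(self):
        self.closed = True


def save_worker_result(run_id: int, data_access: DataAccessStub) -> dict:
    try:
        return data_access.get_run(run_id)
    finally:
        # Runs on both the success and the exception path
        data_access.close()


ok = DataAccessStub()
save_worker_result(7, ok)
print(ok.closed)  # True

bad = DataAccessStub()
try:
    save_worker_result(-1, bad)
except ValueError:
    pass
print(bad.closed)  # True: close() still ran after the exception
```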
---
Nitpick comments:
In `@commcare_connect/workflow/views.py`:
- Around line 722-727: The view is directly calling
data_access.labs_api.update_record which breaks encapsulation; add a
complete_run method on WorkflowDataAccess (mirroring update_run_state) that
accepts run_id (and optional overall_result: str = "completed", notes: str =
""), uses get_run(run_id) to fetch the run, builds updated_data by setting
"status": "completed" and merging current run.data["state"] with
{"overall_result": overall_result, "notes": notes}, calls
self.labs_api.update_record(record_id=run_id, experiment=self.EXPERIMENT,
type="workflow_run", data=updated_data) and returns a WorkflowRunRecord instance
or None, then replace the direct labs_api.update_record call in the view with
data_access.complete_run(run_id, overall_result="completed", notes="").
- Around line 980-982: The exception handler that logs "Failed to sync template
render code for definition {definition_id}" uses logger.error without a stack
trace; update the except block in the views.py handler (the one that references
logger, definition_id and returns JsonResponse) to pass exc_info=True to
logger.error so the full traceback is logged consistently with the other
handlers (lines that currently use exc_info=True).
```python
# Use provided record or fetch current to read metadata
current = current_record or self.get_record_by_id(record_id, experiment=experiment, type=type)
if not current:
```
🧩 Analysis chain
🏁 Script executed:
# First, examine the update_record implementation
cat -n commcare_connect/labs/integrations/connect/api_client.py | sed -n '260,290p'Repository: jjackson/commcare-connect
Length of output: 1514
🏁 Script executed:
# Find all call sites of update_record
rg -n 'update_record\s*\(' commcare_connect/ -A2 -B2Repository: jjackson/commcare-connect
Length of output: 15216
🏁 Script executed:
# Check how current_record is passed at call sites
rg -n 'current_record\s*=' commcare_connect/ -B2 -A2Repository: jjackson/commcare-connect
Length of output: 1289
🏁 Script executed:
# Look for the LabsAPIError class to understand error handling
rg -n 'class LabsAPIError' commcare_connect/ -A5Repository: jjackson/commcare-connect
Length of output: 519
🏁 Script executed:
# Check the context of the workflow call site (around line 690-696)
sed -n '680,705p' commcare_connect/workflow/data_access.pyRepository: jjackson/commcare-connect
Length of output: 900
🏁 Script executed:
# Check the audit call site (around line 1250-1275)
sed -n '1240,1280p' commcare_connect/audit/data_access.pyRepository: jjackson/commcare-connect
Length of output: 1176
🏁 Script executed:
# Check if there's a LocalLabsRecord class definition to understand the structure
rg -n 'class LocalLabsRecord' commcare_connect/ -A10Repository: jjackson/commcare-connect
Length of output: 850
🏁 Script executed:
# Verify the complete signature of update_record to understand all parameters
sed -n '250,295p' commcare_connect/labs/integrations/connect/api_client.pyRepository: jjackson/commcare-connect
Length of output: 1695
🏁 Script executed:
# Check if audit call site actually passes current_record or relies on fetch
sed -n '1268,1285p' commcare_connect/audit/data_access.pyRepository: jjackson/commcare-connect
Length of output: 521
Add validation to guard against a mismatched `current_record` in `update_record`.

While all current call sites pass matching records, the API method accepts `current_record` without verifying it aligns with the `record_id`, `experiment`, and `type` parameters. A future caller could inadvertently pass a record from a different context, causing the payload to mix unrelated identifiers and metadata. Add a check before using it.
🔧 Suggested fix

```diff
-# Use provided record or fetch current to read metadata
-current = current_record or self.get_record_by_id(record_id, experiment=experiment, type=type)
+# Use provided record or fetch current to read metadata
+if current_record is not None:
+    if current_record.id != record_id:
+        raise LabsAPIError(
+            f"current_record.id ({current_record.id}) does not match record_id ({record_id})"
+        )
+    if current_record.experiment != experiment or current_record.type != type:
+        raise LabsAPIError(
+            "current_record does not match requested experiment/type"
+        )
+current = current_record or self.get_record_by_id(record_id, experiment=experiment, type=type)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
# Use provided record or fetch current to read metadata
if current_record is not None:
    if current_record.id != record_id:
        raise LabsAPIError(
            f"current_record.id ({current_record.id}) does not match record_id ({record_id})"
        )
    if current_record.experiment != experiment or current_record.type != type:
        raise LabsAPIError(
            "current_record does not match requested experiment/type"
        )
current = current_record or self.get_record_by_id(record_id, experiment=experiment, type=type)
if not current:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@commcare_connect/labs/integrations/connect/api_client.py` around lines 281 -
283, In update_record, validate that a provided current_record actually matches
the record_id, experiment, and type before using it: check current_record.id (or
appropriate identifier), current_record.experiment and current_record.type
against the record_id, experiment, and type parameters; if they don't match,
either ignore current_record and call self.get_record_by_id(record_id,
experiment=experiment, type=type) or raise a ValueError/TypeError with a clear
message. Update the logic around the existing current = current_record or
self.get_record_by_id(...) so it first performs this match-check and only uses
current_record when all fields align to avoid mixing unrelated
identifiers/metadata.
- Normalize EBF username to lowercase in ebf_counts_by_flw dict (fixes silent None when CCHQ/Connect casing differs)
- Add created_at timestamp to get_or_create_run for consistency with create_run
- Add language identifiers to bare markdown fences (MD040)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review — Powered by Claude (auto-generated review)
🧹 Nitpick comments (1)
commcare_connect/workflow/templates/mbw_monitoring/views.py (1)
**680-683:** Consider using the f-string conversion flag for cleaner string conversion.

The static analysis suggests using the `!s` conversion flag instead of an explicit `str()` call.

♻️ Optional cleanup

```diff
- yield send_sse_event("Error", error=f"Failed to load dashboard data: {str(e)}")
+ yield send_sse_event("Error", error=f"Failed to load dashboard data: {e!s}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@commcare_connect/workflow/templates/mbw_monitoring/views.py` around lines 680 - 683, In the except block handling stream failures, replace the explicit str(e) call in the yield send_sse_event(...) f-string with the f-string conversion flag {e!s} for cleaner string conversion; locate the try/except around the MBW dashboard streaming code in views.py (the except Exception as e handler) and update the argument error=f"Failed to load dashboard data: {e!s}" so the exception is converted via the f-string conversion flag.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@commcare_connect/workflow/templates/mbw_monitoring/views.py`:
- Around line 680-683: In the except block handling stream failures, replace the
explicit str(e) call in the yield send_sse_event(...) f-string with the f-string
conversion flag {e!s} for cleaner string conversion; locate the try/except
around the MBW dashboard streaming code in views.py (the except Exception as e
handler) and update the argument error=f"Failed to load dashboard data: {e!s}"
so the exception is converted via the f-string conversion flag.
…tion

- Validate current_record.id matches record_id in update_record to prevent silent metadata corruption (api_client.py)
- Session adapter reads worker_results with fallback to flw_results for backward compatibility with generic API writes (session_adapter.py)
- Add data_access.close() to save_worker_result_api, complete_run_api, sync_template_render_code_api — matches pattern in 30 other functions
- Add complete_run() method to WorkflowDataAccess, replacing direct labs_api.update_record() call in complete_run_api (encapsulation)
- Add exc_info=True to sync handler error log (consistency)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 AI Code Review — Powered by Claude (auto-generated review)

@jjackson I believe that this PR is stable and largely verified (and tested).
Summary
Refactors MBW monitoring from the `custom_analysis`/audit module into a self-contained workflow template under `workflow/templates/mbw_monitoring/`, adds workflow listing and task tracker enhancements, and addresses PR review feedback.

Key changes

- Move `custom_analysis/mbw_monitoring/` to `workflow/templates/mbw_monitoring/` as a pluggable workflow template with its own views, URLs, data fetchers, session adapter, and `RENDER_CODE`
- Add `save_worker_result_api` and `complete_run_api` endpoints in `workflow/views.py` for FLW assessment persistence
- `trigger_bot()` response captured, server-side polling (3x2s) added, frontend polling extended (8x3s), retry button added
- Add `on_bad_lines="warn"` to prevent pipeline crashes on malformed API responses
- `datetime.utcnow()` → `datetime.now(timezone.utc)`, `Counter` import cleanup, redundant try/except removal, try/finally resource safety, documentation wording
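The `on_bad_lines="warn"` behavior can be demonstrated with a toy CSV; the data below is made up for illustration, while the real pipeline reads Connect API exports:

```python
import io

import pandas as pd

# A row with an extra, unquoted field would raise ParserError under the
# default on_bad_lines="error"; "warn" skips it and logs a warning instead.
csv_text = "id,name\n1,alice\n2,bob,extra_field\n3,carol\n"

df = pd.read_csv(io.StringIO(csv_text), on_bad_lines="warn")
print(len(df))  # 2: the malformed row was dropped, the rest survived
print(list(df["name"]))  # ['alice', 'carol']
```

The trade-off is silent data loss for the skipped rows, which is why the warning (rather than `"skip"`) variant was chosen here.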
Files changed (28 files, +6,900 / -1,010)
| Area | Files |
| -- | -- |
| New module | `workflow/templates/mbw_monitoring/` (12 files) |
| Workflow engine | `workflow/views.py`, `workflow/urls.py`, `workflow/data_access.py` |
| Frontend | `workflow-runner.tsx`, `workflow/types.ts` |
| Templates | `workflow/run.html`, `workflow/list.html`, `tasks/tasks_list.html` |
| Infrastructure | `config/urls.py`, `csv_parsing.py` |
| Removed | `custom_analysis/mbw/` (3 files), `gps_analysis.html` |
- `24843c38` — MBW monitoring refactor + workflow enhancements
- `53e0b72a` — Add sorting and filtering controls to workflow listing page
- `c51171ce` — Add pagination controls to tasks list page
- `40c7a6d8` — Fix CSV parsing crash on malformed rows from Connect API
- `c9dd0216` — Address PR Labs mbw workflow #21 review comments: cleanup and resource safety
- `8d46d725` — Add FLW selection table sorting and column enhancements
Summary by CodeRabbit
New Features
Improvements
UI/UX
Documentation