Memory optimization strategy:
- Process locations one at a time using `lapply()` instead of all at once
- Filter to the desired intervals and select only the necessary columns immediately
- Allow garbage collection between location processing
- Combine results using `bind_rows()` at the end

This significantly reduces peak memory usage by:
1. Not loading all location data into memory simultaneously
2. Only keeping essential columns (location, nowcast_date, model_id, clade, interval_range, interval_coverage, target_date)
3. Processing smaller chunks that fit in memory

Changes:
- Modified `R/compute_coverage.R` to use chunked processing by location
- Added `bind_rows` import to NAMESPACE
- Added `pivot_wider` import to NAMESPACE (for the plotting function)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
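The chunked-by-location pattern described above can be sketched as follows. This is a minimal illustration, not the actual `R/compute_coverage.R` implementation: the function name `compute_coverage_by_location`, the `scores` input, and the default `intervals` values are assumptions for the example; only the column names come from the description above.

```r
library(dplyr)

# Sketch: process one location at a time so only a single location's
# rows are resident in memory during the expensive filtering step.
compute_coverage_by_location <- function(scores, intervals = c(50, 90)) {
  chunks <- lapply(unique(scores$location), function(loc) {
    scores |>
      # Filter to this location and the desired interval ranges immediately
      filter(location == loc, interval_range %in% intervals) |>
      # Keep only the essential columns so each chunk stays small
      select(location, nowcast_date, model_id, clade,
             interval_range, interval_coverage, target_date)
    # The intermediate per-location objects go out of scope when this
    # function returns, allowing garbage collection between locations
  })
  # Combine the small per-location results at the end
  bind_rows(chunks)
}
```

The key point is that peak memory is bounded by the largest single location's data plus the accumulated (already column-pruned) results, rather than by the full dataset.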
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##            main     #56      +/-  ##
========================================
- Coverage   0.16%   0.13%    -0.04%
========================================
  Files         13      16        +3
  Lines       1856    2285      +429
========================================
  Hits           3       3
- Misses      1853    2282      +429
```

☔ View full report in Codecov by Sentry.