
Add post-analysis cleanup hook to free per-scan memory#297

Merged
skbarber merged 6 commits into master from feat/post-analysis-cleanup
Mar 26, 2026

Conversation

Collaborator

@skbarber skbarber commented Mar 26, 2026

Human description

During LiveTaskRunner, persistent ScanAnalyzer objects iteratively run analysis as new data comes in. However, nothing purges an analyzer's internal state until the next analysis call, so memory is effectively never released. Now, after each analysis (successful or failed), the internal state is emptied, freeing the memory.

AI description

Introduces a cleanup() lifecycle method to the analyzer stack to prevent unbounded memory growth when the task runner processes many scans in a single session.

Changes

  • ScanAnalyzer.cleanup() (base) — raises NotImplementedError intentionally; any subclass that omits an implementation will halt the task runner loudly rather than silently leak memory.
  • BaseRenderer.cleanup() — resets display_contents = []; subclasses may override to release additional cached state.
  • SingleDeviceScanAnalyzer.cleanup() — clears raw_data, results, _data_file_map, stateful_results, and _pending_aux_updates, then calls self.renderer.cleanup() directly (no hasattr guard needed since BaseRenderer always provides the method).
  • ScatterPlotterAnalysis.cleanup() — implemented with pass; no large data is held between scans.
  • task_queue.run_worklist() — calls analyzer.cleanup() in the finally block so it runs unconditionally after every analysis (success or failure). NotImplementedError propagates unguarded to halt the runner.
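The analyzer-side changes above can be sketched roughly as follows. Class internals, attribute types, and constructor signatures are assumptions for illustration, not the project's actual code:

```python
import logging

logger = logging.getLogger(__name__)


class ScanAnalyzer:
    """Base class: cleanup() must be implemented by every subclass."""

    def cleanup(self):
        # Loud failure by design: a subclass that forgets to implement
        # cleanup() halts the task runner instead of silently leaking memory.
        raise NotImplementedError(
            f"{type(self).__name__} must implement cleanup()"
        )


class BaseRenderer:
    def __init__(self):
        self.display_contents = []

    def cleanup(self):
        # Drop paths from previous scans; subclasses may override to
        # release additional cached state (e.g. large arrays).
        self.display_contents = []
        logger.debug("%s cleanup complete", type(self).__name__)


class SingleDeviceScanAnalyzer(ScanAnalyzer):
    def __init__(self, renderer):
        self.renderer = renderer
        self.raw_data = {}
        self.results = {}
        self._data_file_map = {}
        self.stateful_results = {}
        self._pending_aux_updates = []

    def cleanup(self):
        # Clear the large per-scan attributes so numpy arrays and result
        # objects can be garbage-collected between analyses.
        self.raw_data = {}
        self.results = {}
        self._data_file_map = {}
        self.stateful_results = {}
        self._pending_aux_updates = []
        # No hasattr guard: BaseRenderer always provides cleanup().
        self.renderer.cleanup()


class ScatterPlotterAnalysis(ScanAnalyzer):
    def cleanup(self):
        pass  # no large data is held between scans
```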

Design decisions

  • Failure is loud by design: a missing cleanup() implementation raises NotImplementedError and stops the runner rather than silently accumulating memory.
  • Placed in finally so cleanup always runs, even after a failed analysis where partial data may still be in memory.
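A minimal sketch of the `finally` placement described above. The real `run_worklist()` signature, its error handling, and the heartbeat-thread logic are assumptions here; only the cleanup-in-`finally` pattern is taken from the description:

```python
def run_worklist(worklist):
    """Run each (analyzer, scan) task; cleanup always runs afterwards."""
    for analyzer, scan in worklist:
        try:
            analyzer.analyze(scan)
        except Exception:
            # Analysis failures are handled here; partial data may still
            # be held by the analyzer at this point.
            pass
        finally:
            # Runs unconditionally after success or failure. A missing
            # cleanup() implementation raises NotImplementedError here,
            # which propagates unguarded and halts the runner.
            analyzer.cleanup()
```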

🤖 Generated with Claude Code

skbarber and others added 5 commits March 25, 2026 14:47
Introduce cleanup() to SingleDeviceScanAnalyzer to free memory after analysis by clearing large per-scan attributes (raw_data, results, _data_file_map, stateful_results, _pending_aux_updates). Delegates to renderer.cleanup() when available and emits a debug log on completion. This helps release numpy arrays and result objects held between analyses.
Import logging and create a module logger, and add a cleanup() method to BaseRenderer. The cleanup method resets self.display_contents to an empty list to avoid returning paths from previous scans, and logs completion at debug level. Subclasses can override cleanup to release additional cached state (e.g. large arrays).
Introduce a required cleanup() method on ScanAnalyzer (ScanAnalysis/scan_analysis/base.py) to force subclasses to release per-scan resources and avoid unbounded memory growth. The runner (ScanAnalysis/scan_analysis/task_queue.py) now calls analyzer.cleanup() in the finally block after the heartbeat thread is joined, ensuring cleanup always runs after analysis. NotImplementedError is raised by default to make missing implementations explicit.
- ScatterPlotterAnalysis.cleanup() added with pass (no heavy state)
- Remove hasattr guard in SingleDeviceScanAnalyzer.cleanup() — BaseRenderer
  now always has cleanup(), so the guard was misleading

Co-Authored-By: Claude Sonnet <noreply@anthropic.com>
@skbarber skbarber requested a review from ag6520 March 26, 2026 04:17
@skbarber skbarber merged commit b6e9e1c into master Mar 26, 2026
2 checks passed