fix: configure GitHub Pages to use Actions workflow source#112
eurunuela wants to merge 9 commits into ME-ICA:master
Conversation
- Fix: Changed decompressHeader to use fire-and-forget write so reading starts immediately, preventing deadlock when large NIfTI files (100+ MB) buffer in DecompressionStream
- Enhancement: Added "registry.json" to RICA_FILE_PATTERNS so rica_server discovers RepetitionTime for frequency axis calculations
- Progress: Updated session log documenting the backpressure deadlock bug and fix

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…derived files

Previously the filter matched any *_components.nii.gz containing "ica", which caught echo-specific files (echo-1..5) and derived maps (ICAAccepted, ICAAveragingWeights). These were all fetched unnecessarily — wasting bandwidth and causing the last one to overwrite the correct niftiBuffer.

New pattern: must contain _desc-ICA_components.nii.gz and must not contain _echo-, matching only the main ICA component map. Same logic applied to both the React file filter/processing blocks and the rica_server.py discovery filter.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
For very large NIfTI files that exceed browser memory limits, the full arrayBuffer() call may fail, leaving repetitionTime null and causing the power spectrum to display cycles/TR instead of Hz. Fix by reading just the first 4096 bytes of the file/response to extract the TR from the header, independently of loading the full file:

- File upload: use file.slice(0, 4096).arrayBuffer() before readFileAsArrayBuffer
- Server load: try a Range: bytes=0-4095 request before fetching the full file

4KB is sufficient to decompress the gzip stream enough to reach the NIfTI-1 (348 bytes) or NIfTI-2 (540 bytes) header containing pixdim[4] (TR).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
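The header-slice idea can be sketched as below. `extractTR` is a hypothetical helper operating on an already-decompressed header slice; the offsets follow the NIfTI-1 specification (sizeof_hdr is an int32 at byte 0 and must equal 348; pixdim is float32[8] starting at byte 76, which puts pixdim[4], the TR, at byte 92):

```javascript
// Hypothetical sketch: read the TR (pixdim[4]) from a decompressed NIfTI-1
// header slice without loading the full file.
function extractTR(headerBuffer) {
  const view = new DataView(headerBuffer);
  // Detect byte order from sizeof_hdr: 348 when read with the right endianness.
  const little = view.getInt32(0, true) === 348;
  if (view.getInt32(0, little) !== 348) return null; // not NIfTI-1 (NIfTI-2 uses 540)
  const tr = view.getFloat32(92, little); // pixdim[4] at byte 76 + 4*4
  return tr > 0 ? tr : null;
}
```

A NIfTI-2 variant would check for sizeof_hdr 540 and read a float64 instead, since NIfTI-2 stores pixdim as doubles.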
Previously, Rica loaded the entire ICA components NIfTI into a JavaScript ArrayBuffer before passing it to Niivue. For very large 4D NIfTI files (hundreds of MB compressed, gigabytes uncompressed), this caused the browser to run out of heap memory even when the local machine has plenty of RAM. Fix by passing a URL directly to Niivue instead of an ArrayBuffer:

- Server mode: store '/' + filepath; Niivue fetches from localhost directly
- File upload mode: use URL.createObjectURL(file) to create a blob URL that Niivue can load without a full ArrayBuffer copy in the JS heap

BrainViewer now accepts a niftiUrl prop (alongside the existing niftiBuffer for backwards compatibility) and uses it directly in loadVolumes when present. The same change is threaded through Plots, DecisionTree, and DecisionTreeTab.

This allows the local server to load maps for files that previously caused OOM errors in the browser, since Niivue's internal (WebAssembly) decompression is significantly more memory-efficient than the JS heap allocation path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Time series and FFT spectrum only require the mixing matrix TSV — they don't need the NIfTI file at all. Previously, hasInteractiveViews gated all three components on NIfTI availability, so any failure in niftiUrl/niftiBuffer propagation would hide time series and FFT too. Fix by separating the two checks:

- hasInteractiveViews: !!mixingMatrix?.data?.length (mixing matrix only)
- hasBrainViewer: !!(niftiBuffer || niftiUrl) (NIfTI required)

Time series + FFT always show when mixing matrix is loaded. BrainViewer only renders when the NIfTI is also available. This also improves UX for datasets where only TSV files are uploaded (no NIfTI), which previously showed the static PNG fallback even though the interactive time series and FFT could work perfectly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
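The decoupled checks amount to two small predicates. A minimal sketch, using the names from the commit (the shape of mixingMatrix is assumed from the conditions quoted above):

```javascript
// Sketch of the separated gating logic: time series + FFT need only the
// mixing matrix; the brain viewer additionally needs NIfTI data or a URL.
const hasInteractiveViews = (mixingMatrix) => !!mixingMatrix?.data?.length;
const hasBrainViewer = (niftiBuffer, niftiUrl) => !!(niftiBuffer || niftiUrl);
```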
The URL-based NIfTI loading optimization (niftiUrl) caused the brain viewer to disappear because niftiUrl wasn't propagating correctly through the component tree. Revert to the reliable niftiBuffer (ArrayBuffer) approach which was working before.

The TR extraction from the 4KB header slice is kept (previous commit) so the power spectrum frequency axis still shows Hz correctly regardless of whether the full NIfTI loads successfully. URL-based loading for large files can be revisited separately once the propagation issue is diagnosed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
For very large NIfTI files, response.arrayBuffer() / readFileAsArrayBuffer() can fail with OOM. Previously this left niftiBuffer null and caused the brain viewer to not render.

Fix by always setting niftiUrl first (zero memory cost), then attempting to load niftiBuffer. If the buffer load fails, niftiUrl is still set so hasBrainViewer is true and BrainViewer can load directly from the URL (server HTTP path or blob: URL from File object). BrainViewer already prefers niftiUrl when present (previous commit).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
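The "URL first, buffer best-effort" ordering can be sketched as follows. `createUrl` and `loadBuffer` are hypothetical stand-ins for URL.createObjectURL and readFileAsArrayBuffer / response.arrayBuffer(), injected so the control flow is visible outside a browser:

```javascript
// Sketch: set the URL first (zero memory cost), then attempt the buffer load;
// a buffer failure leaves the URL intact so the viewer can still render.
async function loadNifti(file, createUrl, loadBuffer) {
  const niftiUrl = createUrl(file);
  let niftiBuffer = null;
  try {
    niftiBuffer = await loadBuffer(file);
  } catch (err) {
    // e.g. OOM on very large files: keep niftiUrl so hasBrainViewer stays true
  }
  return { niftiUrl, niftiBuffer };
}
```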
…s change)

The filter was changed to _desc-ICA_components.nii.gz which is too strict and misses files that don't follow exact BIDS naming (e.g. ICA_components.nii.gz). Restore the original condition: _components.nii.gz + ica in name + no stat-z + no echo-

This is identical to upstream/master except we added !echo- to keep the echo-specific file exclusion that was added to rica_server.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
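The restored condition boils down to a single predicate. A sketch with a hypothetical helper name, mirroring the four clauses listed above:

```javascript
// Sketch of the restored filter: *_components.nii.gz whose name mentions
// "ica", excluding stat-z maps and echo-specific files.
function isComponentsFile(name) {
  return name.includes("_components.nii.gz")
    && name.toLowerCase().includes("ica")
    && !name.includes("stat-z")
    && !name.includes("echo-");
}
```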
Add `actions/configure-pages@v4` step to the build job so the workflow self-configures the Pages source on every non-PR run. This prevents silent regression if someone switches Pages settings back to legacy mode in the UI.

The immediate fix (switching build_type from legacy to workflow via the GitHub API) has already been applied to the repo settings.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
✅ Deploy Preview for rica-fmri ready!
Closing — branch contained unrelated commits. Will reopen from a clean base.
Pull request overview
This PR updates the GitHub Pages documentation workflow to use the Actions workflow source, and also includes a set of runtime changes related to loading/visualizing large NIfTI files (introducing niftiUrl as a URL-based alternative to in-memory buffers).
Changes:
- GitHub Pages: adds `actions/configure-pages@v4` to ensure Pages stays configured for workflow-based deployments.
- App: plumbs a new `niftiUrl` prop/state through the UI so Niivue can load NIfTI data via URL (including blob URLs) when ArrayBuffer loading is impractical.
- Local server & loader: expands file discovery (adds `registry.json`) and tightens `_components.nii.gz` filtering; adds header/TR extraction attempts using partial reads.
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| `.github/workflows/docs.yml` | Adds a configure-pages step to self-configure Pages for workflow deployments. |
| `src/utils/niftiUtils.js` | Changes gzip header decompression write behavior to avoid deadlock with large inputs. |
| `src/index.js` | Adds niftiUrl state and passes it into dependent components. |
| `src/PopUps/IntroPopUp.js` | Sets niftiUrl (HTTP or blob) and adjusts NIfTI loading/TR extraction behavior. |
| `src/Plots/BrainViewer.js` | Allows Niivue to load the stat map from niftiUrl when present; updates cleanup/deps. |
| `src/Plots/Plots.js` | Passes niftiUrl to BrainViewer and gates brain viewer rendering on NIfTI availability. |
| `src/Tree/DecisionTreeTab.js` | Wires niftiUrl through to the DecisionTree. |
| `src/Tree/DecisionTree.js` | Adds niftiUrl support and updates brain viewer visibility logic. |
| `scripts/rica_server.py` | Adds registry.json to discoverable patterns and adds additional _components.nii.gz exclusions. |
| `claude-progress.txt` | Appends a session log describing the changes. |
```js
// Always set URL so BrainViewer can load even if buffer fails
niftiUrl = URL.createObjectURL(file);
// Also try loading the full buffer (fails gracefully for very large files)
try {
  niftiBuffer = await readFileAsArrayBuffer(file);
```
niftiUrl = URL.createObjectURL(file) creates a blob URL that is never revoked. If a user loads multiple datasets (or the popup is reopened), this can leak memory in the browser. Please add cleanup to revoke the previous blob URL when it’s no longer needed (e.g., when loading a new file / on unmount), or centralize the revocation in the App state when niftiUrl changes.
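One way to centralize the revocation the comment asks for is an ownership tracker. A sketch with the revoke function injected so it is testable outside a browser (in the app it would be URL.revokeObjectURL):

```javascript
// Sketch: track the current niftiUrl and revoke the previous blob: URL when
// it is replaced or cleared, so repeated dataset loads don't leak object URLs.
function makeNiftiUrlTracker(revoke) {
  let current = null;
  return function setNiftiUrl(next) {
    // Only blob: URLs are owned by us; server HTTP paths are never revoked.
    if (current && current.startsWith("blob:")) revoke(current);
    current = next;
    return current;
  };
}
```

In React this logic would typically live in a useEffect cleanup keyed on niftiUrl, which achieves the same ownership rule.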
```diff
  setMixingMatrix(data.mixingMatrix);
- setNiftiBuffer(data.niftiBuffer);
+ setNiftiBuffer(data.niftiBuffer || null);
+ setNiftiUrl(data.niftiUrl || null);
  setMaskBuffer(data.maskBuffer);
```
setNiftiUrl(data.niftiUrl || null) can overwrite an existing blob: URL created via URL.createObjectURL(...) without revoking it, which will leak memory across dataset loads. Consider tracking the previous niftiUrl and calling URL.revokeObjectURL when it starts with blob: and is being replaced/cleared (e.g., in a useEffect cleanup tied to niftiUrl).
```js
// Cleanup Blob URLs — only revoke URLs we created (not external niftiUrl)
if (statBlobUrl && !niftiUrl) {
  URL.revokeObjectURL(statBlobUrl);
}
```
The cleanup intentionally avoids revoking niftiUrl, but niftiUrl can be a blob: URL (created via URL.createObjectURL in IntroPopup). In that case, not revoking here (or elsewhere) will leak. Either ensure the owner (e.g., App/IntroPopup) always revokes blob URLs, or pass an explicit flag/ownership indicator so BrainViewer can safely revoke when appropriate (e.g., only when niftiUrl starts with blob: and was created by this session).
```python
and not (
    "_components.nii.gz" in f.name
    and "_desc-ICA_" not in f.name
    and "betas_OC" not in f.name
)
```
The new filter excludes any *_components.nii.gz that doesn’t contain _desc-ICA_ (and isn’t betas_OC, which won’t match this condition anyway). This makes server-based loading stricter than the client-side file filter (which only checks that the name contains “ica”), so some datasets may work via manual upload but fail via rica_server.py. Consider aligning the server filter with the UI’s matching logic (or documenting/enforcing the _desc-ICA_ requirement consistently).
```yaml
- name: Configure GitHub Pages
  if: github.event_name != 'pull_request'
  uses: actions/configure-pages@v4
```
The PR title/description is about switching GitHub Pages to a workflow-based source, but this PR also includes substantial unrelated application changes (NIfTI loading/backpressure fix, server-side file filtering changes, UI prop plumbing, and updates to claude-progress.txt). Please split these into separate PRs or update the PR description to accurately reflect the full scope so reviewers can assess risk/intent appropriately.
```js
// Write without awaiting to prevent backpressure deadlock with large files.
// Reading starts immediately below, consuming data as it's decompressed.
writer.write(new Uint8Array(gzBuffer)).then(() => writer.close()).catch(() => {});
```
The fire-and-forget writer.write(...).then(() => writer.close()).catch(() => {}) swallows write/close errors and also bypasses the surrounding try/catch (since the promise chain isn’t awaited). This can lead to silent header extraction failures or leaked stream resources. Consider running write+close in a separate async task (so reads can start immediately) but keeping a handle to the promise and awaiting it (or at least surfacing/logging failures) after the read loop completes, and ensure the writer is closed/aborted on error/cancel.
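The suggested shape can be sketched as below. This is a sketch of the reviewer's proposal, not the PR's code; the identity TransformStream used in the test stands in for a real DecompressionStream, and `pumpThrough` is a hypothetical name:

```javascript
// Sketch: start write+close without blocking the read loop, but keep the
// promise so write/close errors surface after reading completes instead of
// being swallowed by a fire-and-forget .catch(() => {}).
async function pumpThrough(stream, bytes) {
  const writer = stream.writable.getWriter();
  // Not awaited yet: the read loop below starts consuming immediately,
  // relieving backpressure so the write can complete.
  const writeDone = writer.write(bytes).then(() => writer.close());
  const reader = stream.readable.getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  await writeDone; // rethrows any write/close failure
  return chunks;
}
```

If the read loop breaks early (e.g. once enough header bytes have arrived), the writer should be aborted rather than awaited, so the stream's resources are released.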
Summary
- Adds `actions/configure-pages@v4` step to the `build` job so the workflow self-configures on every non-PR run, preventing silent regression if the UI setting is changed back

Root cause

GitHub Pages was configured in legacy mode (`build_type: legacy`), serving raw Markdown via Jekyll from `docs/`. The `actions/deploy-pages@v4` step in the workflow was deploying the MkDocs-built site, but the legacy branch-based build took precedence — so the Material theme site was never actually served, causing 404s on sub-pages.

Changes

- `.github/workflows/docs.yml`: adds `Configure GitHub Pages` step before `Set up Python`
- Repo settings: switched to `build_type: workflow` via API

Test plan

- `configure-pages` step

🤖 Generated with Claude Code