
fix: configure GitHub Pages to use Actions workflow source #112

Closed
eurunuela wants to merge 9 commits into ME-ICA:master from eurunuela:fix/github-pages-workflow-source

Conversation

@eurunuela
Collaborator

Summary

  • Switches GitHub Pages build source from legacy (branch-based Jekyll) to GitHub Actions workflow
  • Adds actions/configure-pages@v4 step to the build job so the workflow self-configures on every non-PR run, preventing silent regression if the UI setting is changed back

Root cause

GitHub Pages was configured in legacy mode (build_type: legacy), serving raw Markdown via Jekyll from docs/. The actions/deploy-pages@v4 step in the workflow was deploying the MkDocs-built site, but the legacy branch-based build took precedence — so the Material theme site was never actually served, causing 404s on sub-pages.

Changes

  • .github/workflows/docs.yml: adds Configure GitHub Pages step before Set up Python
  • Repo Pages setting already switched to build_type: workflow via API

Test plan

🤖 Generated with Claude Code

eurunuela and others added 9 commits February 26, 2026 11:31
- Fix: Changed decompressHeader to use fire-and-forget write so reading starts immediately, preventing deadlock when large NIfTI files (100+ MB) buffer in DecompressionStream
- Enhancement: Added "registry.json" to RICA_FILE_PATTERNS so rica_server discovers RepetitionTime for frequency axis calculations
- Progress: Updated session log documenting the backpressure deadlock bug and fix
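The backpressure deadlock in the first bullet can be sketched with a toy bounded queue. This is a minimal illustration, not the Rica code: all names here (makeBoundedQueue, etc.) are invented, and the real fix applies the same fire-and-forget idea to a DecompressionStream writer.

```javascript
// Toy bounded queue: write() returns a promise that only resolves once a
// reader drains the queue below its limit. Awaiting write() before reading
// therefore deadlocks; firing the write without awaiting does not.
function makeBoundedQueue(limit) {
  const chunks = [];
  let pendingWrite = null; // resolver for a write blocked on backpressure
  return {
    write(chunk) {
      chunks.push(chunk);
      if (chunks.length <= limit) return Promise.resolve();
      // Over capacity: this promise resolves only after a read() drains us.
      return new Promise((resolve) => { pendingWrite = resolve; });
    },
    read() {
      const chunk = chunks.shift();
      if (pendingWrite && chunks.length <= limit) {
        pendingWrite();
        pendingWrite = null;
      }
      return chunk;
    },
  };
}

// Fire-and-forget: start the writes without awaiting, then read immediately,
// mirroring the decompressHeader change described above.
const q = makeBoundedQueue(1);
q.write("aaa").catch(() => {});
q.write("bbb").catch(() => {});
const drained = [q.read(), q.read()].join("");
```

Had the second `write` been awaited before any `read`, the script would hang: the promise only resolves from inside `read`, which never runs.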

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…derived files

Previously the filter matched any *_components.nii.gz containing "ica", which
caught echo-specific files (echo-1..5) and derived maps (ICAAccepted,
ICAAveragingWeights). These were all fetched unnecessarily — wasting bandwidth
and causing the last one to overwrite the correct niftiBuffer.

New pattern: must contain _desc-ICA_components.nii.gz and must not contain _echo-,
matching only the main ICA component map. Same logic applied to both the React
file filter/processing blocks and the rica_server.py discovery filter.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
For very large NIfTI files that exceed browser memory limits, the full
arrayBuffer() call may fail, leaving repetitionTime null and causing the
power spectrum to display cycles/TR instead of Hz.

Fix by reading just the first 4096 bytes of the file/response to extract
the TR from the header, independently of loading the full file:
- File upload: use file.slice(0, 4096).arrayBuffer() before readFileAsArrayBuffer
- Server load: try a Range: bytes=0-4095 request before fetching the full file

4KB is sufficient to decompress the gzip stream enough to reach the NIfTI-1
(348 bytes) or NIfTI-2 (540 bytes) header containing pixdim[4] (TR).
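Once the 4 KB slice is decompressed, extracting the TR is a fixed-offset read. A minimal sketch for the NIfTI-1 case, using offsets from the NIfTI-1 specification (sizeof_hdr as int32 at byte 0, pixdim as float32[8] at byte 76, so pixdim[4] sits at byte 92); the function name is illustrative, and the real code also handles the 540-byte NIfTI-2 layout, whose fields are 64-bit and sit at different offsets:

```javascript
// Extract TR (pixdim[4]) from an already-decompressed NIfTI-1 header slice.
// `bytes` is a Uint8Array of at least the 348-byte header.
function parseTrFromNifti1Header(bytes) {
  if (bytes.byteLength < 348) return null; // NIfTI-1 header is 348 bytes
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  // sizeof_hdr at offset 0 must read 348; use it to detect byte order.
  let littleEndian = true;
  if (view.getInt32(0, true) !== 348) {
    if (view.getInt32(0, false) !== 348) return null; // not a NIfTI-1 header
    littleEndian = false;
  }
  // pixdim is float32[8] at offset 76; pixdim[4] (TR) is at 76 + 4*4 = 92.
  const tr = view.getFloat32(92, littleEndian);
  return tr > 0 ? tr : null;
}

// Exercise the parser with a synthetic little-endian header.
const header = new Uint8Array(348);
const hv = new DataView(header.buffer);
hv.setInt32(0, 348, true);     // sizeof_hdr
hv.setFloat32(92, 2.0, true);  // pixdim[4]: TR of 2.0 s
const tr = parseTrFromNifti1Header(header);
```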

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously, Rica loaded the entire ICA components NIfTI into a JavaScript
ArrayBuffer before passing it to Niivue. For very large 4D NIfTI files
(hundreds of MB compressed, gigabytes uncompressed), this caused the browser
to run out of heap memory even when the local machine has plenty of RAM.

Fix by passing a URL directly to Niivue instead of an ArrayBuffer:
- Server mode: store '/' + filepath; Niivue fetches from localhost directly
- File upload mode: use URL.createObjectURL(file) to create a blob URL
  that Niivue can load without a full ArrayBuffer copy in the JS heap

BrainViewer now accepts a niftiUrl prop (alongside the existing niftiBuffer
for backwards compatibility) and uses it directly in loadVolumes when present.
The same change is threaded through Plots, DecisionTree, and DecisionTreeTab.

This allows the local server to load maps for files that previously caused
OOM errors in the browser, since Niivue's internal (WebAssembly) decompression
is significantly more memory-efficient than the JS heap allocation path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Time series and FFT spectrum only require the mixing matrix TSV —
they don't need the NIfTI file at all. Previously, hasInteractiveViews
gated all three components on NIfTI availability, so any failure in
niftiUrl/niftiBuffer propagation would hide time series and FFT too.

Fix by separating the two checks:
- hasInteractiveViews: !!mixingMatrix?.data?.length (mixing matrix only)
- hasBrainViewer: !!(niftiBuffer || niftiUrl) (NIfTI required)

Time series + FFT always show when mixing matrix is loaded.
BrainViewer only renders when the NIfTI is also available.

This also improves UX for datasets where only TSV files are uploaded
(no NIfTI), which previously showed the static PNG fallback even though
the interactive time series and FFT could work perfectly.
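The split described above amounts to two independent predicates. A sketch, with the state shapes simplified to plain objects (the real values live in React state):

```javascript
// Decoupled visibility checks: time series/FFT need only the mixing matrix,
// while the brain viewer additionally needs the NIfTI (buffer or URL).
function getViewGates({ mixingMatrix, niftiBuffer, niftiUrl }) {
  return {
    hasInteractiveViews: !!(mixingMatrix && mixingMatrix.data && mixingMatrix.data.length),
    hasBrainViewer: !!(niftiBuffer || niftiUrl),
  };
}
```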

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The URL-based NIfTI loading optimization (niftiUrl) caused the brain
viewer to disappear because niftiUrl wasn't propagating correctly through
the component tree. Revert to the reliable niftiBuffer (ArrayBuffer) approach
which was working before.

The TR extraction from 4KB header slice is kept (previous commit) so the
power spectrum frequency axis still shows Hz correctly regardless of whether
the full NIfTI loads successfully.

URL-based loading for large files can be revisited separately once the
propagation issue is diagnosed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
For very large NIfTI files, response.arrayBuffer() / readFileAsArrayBuffer()
can fail with OOM. Previously this left niftiBuffer null and caused the
brain viewer to not render.

Fix by always setting niftiUrl first (zero memory cost), then attempting
to load niftiBuffer. If the buffer load fails, niftiUrl is still set so
hasBrainViewer is true and BrainViewer can load directly from the URL
(server HTTP path or blob: URL from File object).

BrainViewer already prefers niftiUrl when present (previous commit).
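The "URL first, buffer best-effort" ordering can be sketched as follows. Dependencies are injected here (makeUrl, tryReadBuffer are illustrative names) so the sketch stays self-contained; the real code uses URL.createObjectURL / readFileAsArrayBuffer and is asynchronous:

```javascript
// Set the URL first (zero memory cost), then attempt the full buffer read.
// If the read throws (e.g. OOM on very large files), the URL alone keeps
// hasBrainViewer true so the viewer can stream the file itself.
function resolveNiftiSources(file, makeUrl, tryReadBuffer) {
  const niftiUrl = makeUrl(file);
  let niftiBuffer = null;
  try {
    niftiBuffer = tryReadBuffer(file); // may fail for very large files
  } catch (err) {
    // Swallow: BrainViewer falls back to loading from niftiUrl.
  }
  return { niftiUrl, niftiBuffer };
}

const ok = resolveNiftiSources("f", () => "blob:ok", () => "BUF");
const oom = resolveNiftiSources("f", () => "blob:oom", () => { throw new Error("OOM"); });
```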

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…s change)

The filter was changed to _desc-ICA_components.nii.gz which is too strict
and misses files that don't follow exact BIDS naming (e.g. ICA_components.nii.gz).

Restore the original condition:
  _components.nii.gz + ica in name + no stat-z + no echo-

This is identical to upstream/master except we added !echo- to keep the
echo-specific file exclusion that was added to rica_server.py.
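The restored condition can be written as a single predicate. A sketch: the function name is invented, and checking "ica" case-insensitively is an assumption here; the exact casing rule lives in the React filter and rica_server.py.

```javascript
// Restored filter: _components.nii.gz + "ica" in name + no stat-z + no _echo-.
function isMainIcaComponentsFile(name) {
  return name.includes("_components.nii.gz")
    && name.toLowerCase().includes("ica")
    && !name.includes("stat-z")
    && !name.includes("_echo-");
}
```

Note that a non-BIDS name like `ICA_components.nii.gz` passes, which is exactly what the stricter `_desc-ICA_` pattern broke.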

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add `actions/configure-pages@v4` step to the build job so the workflow
self-configures the Pages source on every non-PR run. This prevents
silent regression if someone switches Pages settings back to legacy mode
in the UI.

The immediate fix (switching build_type from legacy to workflow via the
GitHub API) has already been applied to the repo settings.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 3, 2026 16:59
@netlify

netlify bot commented Mar 3, 2026

Deploy Preview for rica-fmri ready!

Name Link
🔨 Latest commit de22e7f
🔍 Latest deploy log https://app.netlify.com/projects/rica-fmri/deploys/69a7139236ccb30008dda0f8
😎 Deploy Preview https://deploy-preview-112--rica-fmri.netlify.app

@eurunuela
Collaborator Author

Closing — branch contained unrelated commits. Will reopen from a clean base.

@eurunuela eurunuela closed this Mar 3, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR updates the GitHub Pages documentation workflow to use the Actions workflow source, and also includes a set of runtime changes related to loading/visualizing large NIfTI files (introducing niftiUrl as a URL-based alternative to in-memory buffers).

Changes:

  • GitHub Pages: adds actions/configure-pages@v4 to ensure Pages stays configured for workflow-based deployments.
  • App: plumbs a new niftiUrl prop/state through the UI so Niivue can load NIfTI data via URL (including blob URLs) when ArrayBuffer loading is impractical.
  • Local server & loader: expands file discovery (adds registry.json) and tightens _components.nii.gz filtering; adds header/TR extraction attempts using partial reads.

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 6 comments.

Show a summary per file
File Description
.github/workflows/docs.yml Adds a configure-pages step to self-configure Pages for workflow deployments.
src/utils/niftiUtils.js Changes gzip header decompression write behavior to avoid deadlock with large inputs.
src/index.js Adds niftiUrl state and passes it into dependent components.
src/PopUps/IntroPopUp.js Sets niftiUrl (HTTP or blob) and adjusts NIfTI loading/TR extraction behavior.
src/Plots/BrainViewer.js Allows Niivue to load the stat map from niftiUrl when present; updates cleanup/deps.
src/Plots/Plots.js Passes niftiUrl to BrainViewer and gates brain viewer rendering on NIfTI availability.
src/Tree/DecisionTreeTab.js Wires niftiUrl through to the DecisionTree.
src/Tree/DecisionTree.js Adds niftiUrl support and updates brain viewer visibility logic.
scripts/rica_server.py Adds registry.json to discoverable patterns and adds additional _components.nii.gz exclusions.
claude-progress.txt Appends a session log describing the changes.


Comment on lines +592 to +596
// Always set URL so BrainViewer can load even if buffer fails
niftiUrl = URL.createObjectURL(file);
// Also try loading the full buffer (fails gracefully for very large files)
try {
niftiBuffer = await readFileAsArrayBuffer(file);

Copilot AI Mar 3, 2026


niftiUrl = URL.createObjectURL(file) creates a blob URL that is never revoked. If a user loads multiple datasets (or the popup is reopened), this can leak memory in the browser. Please add cleanup to revoke the previous blob URL when it’s no longer needed (e.g., when loading a new file / on unmount), or centralize the revocation in the App state when niftiUrl changes.

Comment on lines 182 to 185
  setMixingMatrix(data.mixingMatrix);
- setNiftiBuffer(data.niftiBuffer);
+ setNiftiBuffer(data.niftiBuffer || null);
+ setNiftiUrl(data.niftiUrl || null);
  setMaskBuffer(data.maskBuffer);

Copilot AI Mar 3, 2026


setNiftiUrl(data.niftiUrl || null) can overwrite an existing blob: URL created via URL.createObjectURL(...) without revoking it, which will leak memory across dataset loads. Consider tracking the previous niftiUrl and calling URL.revokeObjectURL when it starts with blob: and is being replaced/cleared (e.g., in a useEffect cleanup tied to niftiUrl).
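The reviewer's suggestion can be sketched as a small ownership tracker. The helper name is invented and the revoke function is injected so the sketch is self-contained; in the browser it would be URL.revokeObjectURL:

```javascript
// Track the current niftiUrl and revoke the previous one when a blob: URL
// is replaced or cleared. Only blob: URLs are owned by us; http(s) URLs
// served by rica_server must never be revoked.
function makeBlobUrlTracker(revoke) {
  let current = null;
  return {
    set(nextUrl) {
      if (current && current.startsWith("blob:") && current !== nextUrl) {
        revoke(current);
      }
      current = nextUrl;
    },
    clear() {
      this.set(null);
    },
  };
}

const revoked = [];
const tracker = makeBlobUrlTracker((url) => revoked.push(url));
tracker.set("blob:one");       // nothing to revoke yet
tracker.set("/data/x.nii.gz"); // replacing a blob: URL revokes it
tracker.set("blob:two");       // replacing an http path revokes nothing
tracker.clear();               // clearing also revokes the owned blob: URL
```

In the app this would be tied to a useEffect cleanup on niftiUrl changes, as the review comments suggest.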

Comment on lines +215 to 218
// Cleanup Blob URLs — only revoke URLs we created (not external niftiUrl)
if (statBlobUrl && !niftiUrl) {
URL.revokeObjectURL(statBlobUrl);
}

Copilot AI Mar 3, 2026


The cleanup intentionally avoids revoking niftiUrl, but niftiUrl can be a blob: URL (created via URL.createObjectURL in IntroPopup). In that case, not revoking here (or elsewhere) will leak. Either ensure the owner (e.g., App/IntroPopup) always revokes blob URLs, or pass an explicit flag/ownership indicator so BrainViewer can safely revoke when appropriate (e.g., only when niftiUrl starts with blob: and was created by this session).

Comment on lines +95 to +99
and not (
"_components.nii.gz" in f.name
and "_desc-ICA_" not in f.name
and "betas_OC" not in f.name
)

Copilot AI Mar 3, 2026


The new filter excludes any *_components.nii.gz that doesn’t contain _desc-ICA_ (and isn’t betas_OC, which won’t match this condition anyway). This makes server-based loading stricter than the client-side file filter (which only checks that the name contains “ica”), so some datasets may work via manual upload but fail via rica_server.py. Consider aligning the server filter with the UI’s matching logic (or documenting/enforcing the _desc-ICA_ requirement consistently).

Suggested change
and not (
"_components.nii.gz" in f.name
and "_desc-ICA_" not in f.name
and "betas_OC" not in f.name
)

Comment on lines +29 to +32
- name: Configure GitHub Pages
if: github.event_name != 'pull_request'
uses: actions/configure-pages@v4


Copilot AI Mar 3, 2026


The PR title/description is about switching GitHub Pages to a workflow-based source, but this PR also includes substantial unrelated application changes (NIfTI loading/backpressure fix, server-side file filtering changes, UI prop plumbing, and updates to claude-progress.txt). Please split these into separate PRs or update the PR description to accurately reflect the full scope so reviewers can assess risk/intent appropriately.

Comment on lines +54 to +56
// Write without awaiting to prevent backpressure deadlock with large files.
// Reading starts immediately below, consuming data as it's decompressed.
writer.write(new Uint8Array(gzBuffer)).then(() => writer.close()).catch(() => {});

Copilot AI Mar 3, 2026


The fire-and-forget writer.write(...).then(() => writer.close()).catch(() => {}) swallows write/close errors and also bypasses the surrounding try/catch (since the promise chain isn’t awaited). This can lead to silent header extraction failures or leaked stream resources. Consider running write+close in a separate async task (so reads can start immediately) but keeping a handle to the promise and awaiting it (or at least surfacing/logging failures) after the read loop completes, and ensure the writer is closed/aborted on error/cancel.


2 participants