Commit 967ccd8

Make push-pages
1 parent 5e7eddd commit 967ccd8

17 files changed: +58 −57 lines changed

_sources/notebooks/shortclips/02_download_shortclips.ipynb

Lines changed: 2 additions & 3 deletions
@@ -51,9 +51,8 @@
     "download more subjects.\n",
     "\n",
     "We also skip the stimuli files, since the dataset provides two preprocessed\n",
-    "feature spaces to perform voxelwise modeling without requiring the original\n",
-    "stimuli.\n",
-    "\n"
+    "feature spaces to fit voxelwise encoding models without requiring the original\n",
+    "stimuli."
    ]
   },
   {

_sources/notebooks/shortclips/03_compute_explainable_variance.ipynb

Lines changed: 4 additions & 3 deletions
@@ -15,9 +15,10 @@
     "stimulus-dependent signal and noise. If we present the same stimulus multiple\n",
     "times and we record brain activity for each repetition, the stimulus-dependent\n",
     "signal will be the same across repetitions while the noise will vary across\n",
-    "repetitions. In voxelwise modeling, the features used to model brain activity\n",
-    "are the same for each repetition of the stimulus. Thus, encoding models will\n",
-    "predict only the repeatable stimulus-dependent signal.\n",
+    "repetitions. In the Voxelwise Encoding Model framework, \n",
+    "the features used to model brain activity are the same for each repetition of the \n",
+    "stimulus. Thus, encoding models will predict only the repeatable stimulus-dependent \n",
+    "signal.\n",
     "\n",
     "The stimulus-dependent signal can be estimated by taking the mean of brain\n",
     "responses over repeats of the same stimulus or experiment. The variance of the\n",

_sources/notebooks/shortclips/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Shortclips tutorial
 
-This tutorial describes how to use the Voxelwise Encoding Model framework on a visual
+This tutorial describes how to use the Voxelwise Encoding Model framework in a visual
 imaging experiment.
 
 ## Dataset

_sources/notebooks/vim2/README.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ lobe, and with no mappers to plot the data on flatmaps.
 Using the "Shortclips tutorial" with full brain responses is recommended.
 :::
 
-This tutorial describes how to perform voxelwise modeling on a visual
+This tutorial describes how to use the Voxelwise Encoding Model framework in a visual
 imaging experiment.
 
 ## Data set

_sources/pages/index.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Welcome to the tutorials on the Voxelwise Encoding Model framework from the
 
 If you use these tutorials for your work, consider citing the corresponding paper:
 
-> T. Dupré La Tour, M. Visconti di Oleggio Castello, and J. L. Gallant. The voxelwise modeling framework: a tutorial introduction to fitting encoding models to fMRI data. PsyArXiv, 2024. [doi:10.31234/osf.io/t975e.](https://doi.org/10.31234/osf.io/t975e)
+> T. Dupré La Tour, M. Visconti di Oleggio Castello, and J. L. Gallant. The Voxelwise Encoding Model framework: a tutorial introduction to fitting encoding models to fMRI data. PsyArXiv, 2024. [doi:10.31234/osf.io/t975e.](https://doi.org/10.31234/osf.io/t975e)
 
 You can find a copy of the paper [here](https://github.com/gallantlab/voxelwise_tutorials/blob/main/paper/voxelwise_tutorials_paper.pdf).
 

notebooks/shortclips/02_download_shortclips.html

Lines changed: 1 addition & 1 deletion
@@ -442,7 +442,7 @@ <h2>Download<a class="headerlink" href="#download" title="Link to this heading">
 analysis on the four other subjects. Uncomment the lines in <code class="docutils literal notranslate"><span class="pre">DATAFILES</span></code> to
 download more subjects.</p>
 <p>We also skip the stimuli files, since the dataset provides two preprocessed
-feature spaces to perform voxelwise modeling without requiring the original
+feature spaces to fit voxelwise encoding models without requiring the original
 stimuli.</p>
 <div class="cell docutils container">
 <div class="cell_input docutils container">

notebooks/shortclips/03_compute_explainable_variance.html

Lines changed: 6 additions & 5 deletions
@@ -422,9 +422,10 @@ <h1>Compute the explainable variance<a class="headerlink" href="#compute-the-exp
 stimulus-dependent signal and noise. If we present the same stimulus multiple
 times and we record brain activity for each repetition, the stimulus-dependent
 signal will be the same across repetitions while the noise will vary across
-repetitions. In voxelwise modeling, the features used to model brain activity
-are the same for each repetition of the stimulus. Thus, encoding models will
-predict only the repeatable stimulus-dependent signal.</p>
+repetitions. In the Voxelwise Encoding Model framework,
+the features used to model brain activity are the same for each repetition of the
+stimulus. Thus, encoding models will predict only the repeatable stimulus-dependent
+signal.</p>
 <p>The stimulus-dependent signal can be estimated by taking the mean of brain
 responses over repeats of the same stimulus or experiment. The variance of the
 estimated stimulus-dependent signal, which we call the explainable variance, is
@@ -436,8 +437,8 @@ <h1>Compute the explainable variance<a class="headerlink" href="#compute-the-exp
 across repetitions. For each repeat, we define the residual timeseries between
 brain response and average brain response as <span class="math notranslate nohighlight">\(r_i = y_i - \bar{y}\)</span>. The
 explainable variance (EV) is estimated as</p>
-<div class="amsmath math notranslate nohighlight" id="equation-8571b8dd-458d-4804-9a62-21ea954a14fe">
-<span class="eqno">(1)<a class="headerlink" href="#equation-8571b8dd-458d-4804-9a62-21ea954a14fe" title="Permalink to this equation">#</a></span>\[\begin{align}\text{EV} = \frac{1}{N}\sum_{i=1}^N\text{Var}(y_i) - \frac{N}{N-1}\sum_{i=1}^N\text{Var}(r_i)\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-636d7741-bd36-4057-88bd-5e66477663ba">
+<span class="eqno">(1)<a class="headerlink" href="#equation-636d7741-bd36-4057-88bd-5e66477663ba" title="Permalink to this equation">#</a></span>\[\begin{align}\text{EV} = \frac{1}{N}\sum_{i=1}^N\text{Var}(y_i) - \frac{N}{N-1}\sum_{i=1}^N\text{Var}(r_i)\end{align}\]</div>
 <p>In the literature, the explainable variance is also known as the <em>signal
 power</em>.</p>
 <p>For more information, see <span id="id1">Sahani and Linden [<a class="reference internal" href="merged_for_colab_model_fitting.html#id130" title="M. Sahani and J. Linden. How linear are auditory cortical responses? Adv. Neural Inf. Process. Syst., 2002.">2002</a>]</span>, <span id="id2">Hsu <em>et al.</em> [<a class="reference internal" href="merged_for_colab_model_fitting.html#id131" title="A. Hsu, A. Borst, and F. E. Theunissen. Quantifying variability in neural responses and its application for the validation of model predictions. Network, 2004.">2004</a>]</span>, and <span id="id3">Schoppe <em>et al.</em> [<a class="reference internal" href="merged_for_colab_model_fitting.html#id132" title="O. Schoppe, N. S. Harper, B. Willmore, A. King, and J. Schnupp. Measuring the performance of neural models. Front. Comput. Neurosci., 2016.">2016</a>]</span>.</p>

notebooks/shortclips/04_understand_ridge_regression.html

Lines changed: 10 additions & 10 deletions
@@ -422,13 +422,13 @@ <h1>Understand ridge regression and cross-validation<a class="headerlink" href="
 variable <span class="math notranslate nohighlight">\(y \in \mathbb{R}^{n}\)</span> (the target). Specifically, linear
 regression uses a vector of coefficient <span class="math notranslate nohighlight">\(w \in \mathbb{R}^{p}\)</span> to
 predict the output</p>
-<div class="amsmath math notranslate nohighlight" id="equation-010741a2-76d5-4c9c-9869-185c0d3cb2c4">
-<span class="eqno">(2)<a class="headerlink" href="#equation-010741a2-76d5-4c9c-9869-185c0d3cb2c4" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = Xw\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-da76a1fe-87f6-46bd-9e9c-e09383d7cce8">
+<span class="eqno">(2)<a class="headerlink" href="#equation-da76a1fe-87f6-46bd-9e9c-e09383d7cce8" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = Xw\end{align}\]</div>
 <p>The model is considered accurate if the predictions <span class="math notranslate nohighlight">\(\hat{y}\)</span> are close
 to the true output values <span class="math notranslate nohighlight">\(y\)</span>. Therefore, a good linear regression model
 is given by the vector <span class="math notranslate nohighlight">\(w\)</span> that minimizes the sum of squared errors:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-04e7999a-297e-4b03-8416-a103d7ff49bf">
-<span class="eqno">(3)<a class="headerlink" href="#equation-04e7999a-297e-4b03-8416-a103d7ff49bf" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-6da7fe0e-54ba-48f4-9aa1-d9bf075b11ab">
+<span class="eqno">(3)<a class="headerlink" href="#equation-6da7fe0e-54ba-48f4-9aa1-d9bf075b11ab" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2\end{align}\]</div>
 <p>This is the simplest model for linear regression, and it is known as <em>ordinary
 least squares</em> (OLS).</p>
 <section id="ordinary-least-squares-ols">
@@ -480,8 +480,8 @@ <h2>Ordinary least squares (OLS)<a class="headerlink" href="#ordinary-least-squa
 </div>
 <p>The linear coefficient leading to the minimum squared loss can be found
 analytically with the formula:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-08541323-7db5-4c17-bea4-760241648a56">
-<span class="eqno">(4)<a class="headerlink" href="#equation-08541323-7db5-4c17-bea4-760241648a56" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X)^{-1} X^\top y\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-f012e063-7522-4b05-8e49-064f95b614c7">
+<span class="eqno">(4)<a class="headerlink" href="#equation-f012e063-7522-4b05-8e49-064f95b614c7" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X)^{-1} X^\top y\end{align}\]</div>
 <p>This is the OLS solution.</p>
 <div class="cell docutils container">
 <div class="cell_input docutils container">
@@ -621,8 +621,8 @@ <h2>Ridge regression<a class="headerlink" href="#ridge-regression" title="Link t
 <p>To solve the instability and under-determinacy issues of OLS, OLS can be
 extended to <em>ridge regression</em>. Ridge regression considers a different
 optimization problem:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-152e4fe6-61c3-4b05-b6f4-2b165911ccca">
-<span class="eqno">(5)<a class="headerlink" href="#equation-152e4fe6-61c3-4b05-b6f4-2b165911ccca" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2 + \alpha ||w||^2\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-3b3d4e3f-b071-4c3f-a29b-8960c06b9e4d">
+<span class="eqno">(5)<a class="headerlink" href="#equation-3b3d4e3f-b071-4c3f-a29b-8960c06b9e4d" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2 + \alpha ||w||^2\end{align}\]</div>
 <p>This optimization problem contains two terms: (i) a <em>data-fitting term</em>
 <span class="math notranslate nohighlight">\(||Xw - y||^2\)</span>, which ensures the regression correctly fits the
 training data; and (ii) a regularization term <span class="math notranslate nohighlight">\(\alpha||w||^2\)</span>, which
@@ -654,8 +654,8 @@ <h2>Ridge regression<a class="headerlink" href="#ridge-regression" title="Link t
 <p>To understand why the regularization term makes the solution more robust to
 noise, let’s consider the ridge solution. The ridge solution can be found
 analytically with the formula:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-492d00e7-ad9c-4c41-8a51-e718f0553dd4">
-<span class="eqno">(6)<a class="headerlink" href="#equation-492d00e7-ad9c-4c41-8a51-e718f0553dd4" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X + \alpha I)^{-1} X^\top y\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-a0efc9cb-9e9a-4048-9b95-83cf0418e6d0">
+<span class="eqno">(6)<a class="headerlink" href="#equation-a0efc9cb-9e9a-4048-9b95-83cf0418e6d0" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X + \alpha I)^{-1} X^\top y\end{align}\]</div>
 <p>where <code class="docutils literal notranslate"><span class="pre">I</span></code> is the identity matrix. In this formula, we can see that the
 inverted matrix is now <span class="math notranslate nohighlight">\((X^\top X + \alpha I)\)</span>. Compared to OLS, the
 additional term <span class="math notranslate nohighlight">\(\alpha I\)</span> adds a positive value <code class="docutils literal notranslate"><span class="pre">alpha</span></code> to all

notebooks/shortclips/06_visualize_hemodynamic_response.html

Lines changed: 2 additions & 2 deletions
@@ -1660,8 +1660,8 @@ <h2>Visualize the HRF<a class="headerlink" href="#visualize-the-hrf" title="Link
 coefficients <span class="math notranslate nohighlight">\(\beta\)</span> obtained with a ridge regression, but the primal
 coefficients can be computed from the dual coefficients using the training
 features <span class="math notranslate nohighlight">\(X\)</span>:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-679d04fd-27f8-45fb-895d-39c2b830da09">
-<span class="eqno">(7)<a class="headerlink" href="#equation-679d04fd-27f8-45fb-895d-39c2b830da09" title="Permalink to this equation">#</a></span>\[\begin{align}\beta = X^\top w\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-7cf49575-3efa-49ea-87a4-318dc69a1259">
+<span class="eqno">(7)<a class="headerlink" href="#equation-7cf49575-3efa-49ea-87a4-318dc69a1259" title="Permalink to this equation">#</a></span>\[\begin{align}\beta = X^\top w\end{align}\]</div>
 <p>To better visualize the HRF, we will refit a model with more delays, but only
 on a selection of voxels to speed up the computations.</p>
 <div class="cell docutils container">

notebooks/shortclips/09_fit_banded_ridge_model.html

Lines changed: 2 additions & 2 deletions
@@ -2827,8 +2827,8 @@ <h2>Plot the banded ridge split<a class="headerlink" href="#plot-the-banded-ridg
 take the kernel weights and the ridge (dual) weights corresponding to each
 feature space, and use them to compute the prediction from each feature space
 separately.</p>
-<div class="amsmath math notranslate nohighlight" id="equation-7b30bc1c-345f-4f87-b7b1-398ba820b074">
-<span class="eqno">(8)<a class="headerlink" href="#equation-7b30bc1c-345f-4f87-b7b1-398ba820b074" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = \sum_i^m \hat{y}_i = \sum_i^m \gamma_i K_i \hat{w}\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-bc87ddee-f7f6-4477-8700-93c7c9d2f516">
+<span class="eqno">(8)<a class="headerlink" href="#equation-bc87ddee-f7f6-4477-8700-93c7c9d2f516" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = \sum_i^m \hat{y}_i = \sum_i^m \gamma_i K_i \hat{w}\end{align}\]</div>
 <p>Then, we use these split predictions to compute split <span class="math notranslate nohighlight">\(\tilde{R}^2_i\)</span>
 scores. These scores are corrected so that their sum is equal to the
 <span class="math notranslate nohighlight">\(R^2\)</span> score of the full prediction <span class="math notranslate nohighlight">\(\hat{y}\)</span>.</p>
