Commit 229c3cb: Make push-pages
1 parent 967ccd8 commit 229c3cb

11 files changed: +55 -51 lines changed

_sources/notebooks/shortclips/merged_for_colab.ipynb

Lines changed: 4 additions & 3 deletions

@@ -209,9 +209,10 @@
 "stimulus-dependent signal and noise. If we present the same stimulus multiple\n",
 "times and we record brain activity for each repetition, the stimulus-dependent\n",
 "signal will be the same across repetitions while the noise will vary across\n",
-"repetitions. In voxelwise modeling, the features used to model brain activity\n",
-"are the same for each repetition of the stimulus. Thus, encoding models will\n",
-"predict only the repeatable stimulus-dependent signal.\n",
+"repetitions. In the Voxelwise Encoding Model framework, \n",
+"the features used to model brain activity are the same for each repetition of the \n",
+"stimulus. Thus, encoding models will predict only the repeatable stimulus-dependent \n",
+"signal.\n",
 "\n",
 "The stimulus-dependent signal can be estimated by taking the mean of brain\n",
 "responses over repeats of the same stimulus or experiment. The variance of the\n",

_sources/notebooks/shortclips/merged_for_colab_model_fitting.ipynb

Lines changed: 4 additions & 3 deletions

@@ -209,9 +209,10 @@
 "stimulus-dependent signal and noise. If we present the same stimulus multiple\n",
 "times and we record brain activity for each repetition, the stimulus-dependent\n",
 "signal will be the same across repetitions while the noise will vary across\n",
-"repetitions. In voxelwise modeling, the features used to model brain activity\n",
-"are the same for each repetition of the stimulus. Thus, encoding models will\n",
-"predict only the repeatable stimulus-dependent signal.\n",
+"repetitions. In the Voxelwise Encoding Model framework, \n",
+"the features used to model brain activity are the same for each repetition of the \n",
+"stimulus. Thus, encoding models will predict only the repeatable stimulus-dependent \n",
+"signal.\n",
 "\n",
 "The stimulus-dependent signal can be estimated by taking the mean of brain\n",
 "responses over repeats of the same stimulus or experiment. The variance of the\n",

_sources/pages/index.md

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 # Voxelwise Encoding Model (VEM) tutorials
 
 Welcome to the tutorials on the Voxelwise Encoding Model framework from the
-[GallantLab](https://gallantlab.org).
+[Gallant Lab](https://gallantlab.org).
 
 If you use these tutorials for your work, consider citing the corresponding paper:
 

notebooks/shortclips/03_compute_explainable_variance.html

Lines changed: 2 additions & 2 deletions

@@ -437,8 +437,8 @@ <h1>Compute the explainable variance<a class="headerlink" href="#compute-the-exp
 across repetitions. For each repeat, we define the residual timeseries between
 brain response and average brain response as <span class="math notranslate nohighlight">\(r_i = y_i - \bar{y}\)</span>. The
 explainable variance (EV) is estimated as</p>
-<div class="amsmath math notranslate nohighlight" id="equation-636d7741-bd36-4057-88bd-5e66477663ba">
-<span class="eqno">(1)<a class="headerlink" href="#equation-636d7741-bd36-4057-88bd-5e66477663ba" title="Permalink to this equation">#</a></span>\[\begin{align}\text{EV} = \frac{1}{N}\sum_{i=1}^N\text{Var}(y_i) - \frac{N}{N-1}\sum_{i=1}^N\text{Var}(r_i)\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-ebe8439d-b26d-442b-940e-5fe197dcdc70">
+<span class="eqno">(1)<a class="headerlink" href="#equation-ebe8439d-b26d-442b-940e-5fe197dcdc70" title="Permalink to this equation">#</a></span>\[\begin{align}\text{EV} = \frac{1}{N}\sum_{i=1}^N\text{Var}(y_i) - \frac{N}{N-1}\sum_{i=1}^N\text{Var}(r_i)\end{align}\]</div>
 <p>In the literature, the explainable variance is also known as the <em>signal
 power</em>.</p>
 <p>For more information, see <span id="id1">Sahani and Linden [<a class="reference internal" href="merged_for_colab_model_fitting.html#id130" title="M. Sahani and J. Linden. How linear are auditory cortical responses? Adv. Neural Inf. Process. Syst., 2002.">2002</a>]</span>, <span id="id2">Hsu <em>et al.</em> [<a class="reference internal" href="merged_for_colab_model_fitting.html#id131" title="A. Hsu, A. Borst, and F. E. Theunissen. Quantifying variability in neural responses and its application for the validation of model predictions. Network, 2004.">2004</a>]</span>, and <span id="id3">Schoppe <em>et al.</em> [<a class="reference internal" href="merged_for_colab_model_fitting.html#id132" title="O. Schoppe, N. S. Harper, B. Willmore, A. King, and J. Schnupp. Measuring the performance of neural models. Front. Comput. Neurosci., 2016.">2016</a>]</span>.</p>
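Equation (1) in the page diffed above can be transcribed directly into NumPy. The sketch below is a literal transcription of the displayed formula for a single voxel (the tutorials' own helper function may use a different normalization or z-scoring; the shapes are assumptions):

```python
import numpy as np

def explainable_variance(responses):
    """Literal transcription of equation (1) for one voxel.

    responses: array of shape (n_repeats, n_timepoints), one row per
    repeat y_i of the same stimulus.
    """
    n = responses.shape[0]
    mean_response = responses.mean(axis=0)    # \bar{y}
    residuals = responses - mean_response     # r_i = y_i - \bar{y}
    # EV = (1/N) sum_i Var(y_i) - N/(N-1) sum_i Var(r_i)
    return (responses.var(axis=1).mean()
            - n / (n - 1) * residuals.var(axis=1).sum())

# Sanity check: with noiseless repeats the residuals vanish,
# so EV reduces to the variance of the signal itself.
signal = np.sin(np.linspace(0, 4 * np.pi, 100))
repeats = np.tile(signal, (5, 1))
ev = explainable_variance(repeats)
```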

notebooks/shortclips/04_understand_ridge_regression.html

Lines changed: 10 additions & 10 deletions

@@ -422,13 +422,13 @@ <h1>Understand ridge regression and cross-validation<a class="headerlink" href="
 variable <span class="math notranslate nohighlight">\(y \in \mathbb{R}^{n}\)</span> (the target). Specifically, linear
 regression uses a vector of coefficient <span class="math notranslate nohighlight">\(w \in \mathbb{R}^{p}\)</span> to
 predict the output</p>
-<div class="amsmath math notranslate nohighlight" id="equation-da76a1fe-87f6-46bd-9e9c-e09383d7cce8">
-<span class="eqno">(2)<a class="headerlink" href="#equation-da76a1fe-87f6-46bd-9e9c-e09383d7cce8" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = Xw\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-1d4420e8-4c09-4f86-90fe-3e617a7bf61d">
+<span class="eqno">(2)<a class="headerlink" href="#equation-1d4420e8-4c09-4f86-90fe-3e617a7bf61d" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = Xw\end{align}\]</div>
 <p>The model is considered accurate if the predictions <span class="math notranslate nohighlight">\(\hat{y}\)</span> are close
 to the true output values <span class="math notranslate nohighlight">\(y\)</span>. Therefore, a good linear regression model
 is given by the vector <span class="math notranslate nohighlight">\(w\)</span> that minimizes the sum of squared errors:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-6da7fe0e-54ba-48f4-9aa1-d9bf075b11ab">
-<span class="eqno">(3)<a class="headerlink" href="#equation-6da7fe0e-54ba-48f4-9aa1-d9bf075b11ab" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-7bfcc882-1d5a-4ee0-86c3-0511cfcc090e">
+<span class="eqno">(3)<a class="headerlink" href="#equation-7bfcc882-1d5a-4ee0-86c3-0511cfcc090e" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2\end{align}\]</div>
 <p>This is the simplest model for linear regression, and it is known as <em>ordinary
 least squares</em> (OLS).</p>
 <section id="ordinary-least-squares-ols">
@@ -480,8 +480,8 @@ <h2>Ordinary least squares (OLS)<a class="headerlink" href="#ordinary-least-squa
 </div>
 <p>The linear coefficient leading to the minimum squared loss can be found
 analytically with the formula:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-f012e063-7522-4b05-8e49-064f95b614c7">
-<span class="eqno">(4)<a class="headerlink" href="#equation-f012e063-7522-4b05-8e49-064f95b614c7" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X)^{-1} X^\top y\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-d772aa1c-919c-4bf2-8f9e-93e1faf5834e">
+<span class="eqno">(4)<a class="headerlink" href="#equation-d772aa1c-919c-4bf2-8f9e-93e1faf5834e" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X)^{-1} X^\top y\end{align}\]</div>
 <p>This is the OLS solution.</p>
 <div class="cell docutils container">
 <div class="cell_input docutils container">
@@ -621,8 +621,8 @@ <h2>Ridge regression<a class="headerlink" href="#ridge-regression" title="Link t
 <p>To solve the instability and under-determinacy issues of OLS, OLS can be
 extended to <em>ridge regression</em>. Ridge regression considers a different
 optimization problem:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-3b3d4e3f-b071-4c3f-a29b-8960c06b9e4d">
-<span class="eqno">(5)<a class="headerlink" href="#equation-3b3d4e3f-b071-4c3f-a29b-8960c06b9e4d" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2 + \alpha ||w||^2\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-883890df-0016-435a-827a-6cf2be497518">
+<span class="eqno">(5)<a class="headerlink" href="#equation-883890df-0016-435a-827a-6cf2be497518" title="Permalink to this equation">#</a></span>\[\begin{align}w = \arg\min_w ||Xw - y||^2 + \alpha ||w||^2\end{align}\]</div>
 <p>This optimization problem contains two terms: (i) a <em>data-fitting term</em>
 <span class="math notranslate nohighlight">\(||Xw - y||^2\)</span>, which ensures the regression correctly fits the
 training data; and (ii) a regularization term <span class="math notranslate nohighlight">\(\alpha||w||^2\)</span>, which
@@ -654,8 +654,8 @@ <h2>Ridge regression<a class="headerlink" href="#ridge-regression" title="Link t
 <p>To understand why the regularization term makes the solution more robust to
 noise, let’s consider the ridge solution. The ridge solution can be found
 analytically with the formula:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-a0efc9cb-9e9a-4048-9b95-83cf0418e6d0">
-<span class="eqno">(6)<a class="headerlink" href="#equation-a0efc9cb-9e9a-4048-9b95-83cf0418e6d0" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X + \alpha I)^{-1} X^\top y\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-4fba5c9d-c0e5-4850-ab14-99c4293d01f3">
+<span class="eqno">(6)<a class="headerlink" href="#equation-4fba5c9d-c0e5-4850-ab14-99c4293d01f3" title="Permalink to this equation">#</a></span>\[\begin{align}w = (X^\top X + \alpha I)^{-1} X^\top y\end{align}\]</div>
 <p>where <code class="docutils literal notranslate"><span class="pre">I</span></code> is the identity matrix. In this formula, we can see that the
 inverted matrix is now <span class="math notranslate nohighlight">\((X^\top X + \alpha I)\)</span>. Compared to OLS, the
 additional term <span class="math notranslate nohighlight">\(\alpha I\)</span> adds a positive value <code class="docutils literal notranslate"><span class="pre">alpha</span></code> to all
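The closed forms (4) and (6) from the page diffed above can be checked numerically. A minimal NumPy sketch (illustrative; the tutorials themselves fit these models with himalaya rather than these raw formulas, and the data here is random):

```python
import numpy as np

def ols_solution(X, y):
    # Equation (4): w = (X^T X)^{-1} X^T y
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge_solution(X, y, alpha):
    # Equation (6): w = (X^T X + alpha I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)

w_ols = ols_solution(X, y)
w_ridge = ridge_solution(X, y, alpha=10.0)
```

With alpha = 0 the ridge solution reduces to OLS, and increasing alpha shrinks the norm of the coefficients, which is the regularization effect the text describes.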

notebooks/shortclips/06_visualize_hemodynamic_response.html

Lines changed: 2 additions & 2 deletions

@@ -1660,8 +1660,8 @@ <h2>Visualize the HRF<a class="headerlink" href="#visualize-the-hrf" title="Link
 coefficients <span class="math notranslate nohighlight">\(\beta\)</span> obtained with a ridge regression, but the primal
 coefficients can be computed from the dual coefficients using the training
 features <span class="math notranslate nohighlight">\(X\)</span>:</p>
-<div class="amsmath math notranslate nohighlight" id="equation-7cf49575-3efa-49ea-87a4-318dc69a1259">
-<span class="eqno">(7)<a class="headerlink" href="#equation-7cf49575-3efa-49ea-87a4-318dc69a1259" title="Permalink to this equation">#</a></span>\[\begin{align}\beta = X^\top w\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-0a408be5-b219-4ee7-9cc0-f0ee4349ddbb">
+<span class="eqno">(7)<a class="headerlink" href="#equation-0a408be5-b219-4ee7-9cc0-f0ee4349ddbb" title="Permalink to this equation">#</a></span>\[\begin{align}\beta = X^\top w\end{align}\]</div>
 <p>To better visualize the HRF, we will refit a model with more delays, but only
 on a selection of voxels to speed up the computations.</p>
 <div class="cell docutils container">

notebooks/shortclips/09_fit_banded_ridge_model.html

Lines changed: 2 additions & 2 deletions

@@ -2827,8 +2827,8 @@ <h2>Plot the banded ridge split<a class="headerlink" href="#plot-the-banded-ridg
 take the kernel weights and the ridge (dual) weights corresponding to each
 feature space, and use them to compute the prediction from each feature space
 separately.</p>
-<div class="amsmath math notranslate nohighlight" id="equation-bc87ddee-f7f6-4477-8700-93c7c9d2f516">
-<span class="eqno">(8)<a class="headerlink" href="#equation-bc87ddee-f7f6-4477-8700-93c7c9d2f516" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = \sum_i^m \hat{y}_i = \sum_i^m \gamma_i K_i \hat{w}\end{align}\]</div>
+<div class="amsmath math notranslate nohighlight" id="equation-51461ecd-7d8c-4580-8f01-2be33f0cd837">
+<span class="eqno">(8)<a class="headerlink" href="#equation-51461ecd-7d8c-4580-8f01-2be33f0cd837" title="Permalink to this equation">#</a></span>\[\begin{align}\hat{y} = \sum_i^m \hat{y}_i = \sum_i^m \gamma_i K_i \hat{w}\end{align}\]</div>
 <p>Then, we use these split predictions to compute split <span class="math notranslate nohighlight">\(\tilde{R}^2_i\)</span>
 scores. These scores are corrected so that their sum is equal to the
 <span class="math notranslate nohighlight">\(R^2\)</span> score of the full prediction <span class="math notranslate nohighlight">\(\hat{y}\)</span>.</p>
