Commit fae9cd0

Make push-pages

1 parent 3daa1c6 commit fae9cd0

35 files changed: +15472 -148 lines

README.html

7 additions & 7 deletions
@@ -182,14 +182,14 @@
 <ul class="nav bd-sidenav">
 <li class="toctree-l1"><a class="reference internal" href="pages/voxelwise_modeling.html">Overview of the VEM framework</a></li>
 <li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/shortclips/README.html">Shortclips tutorial</a><details><summary><span class="toctree-toggle" role="presentation"><i class="fa-solid fa-chevron-down"></i></span></summary><ul>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/00_download_shortclips.html">Download the data set</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/01_plot_explainable_variance.html">Compute the explainable variance</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/02_plot_ridge_regression.html">Understand ridge regression and cross-validation</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/03_plot_wordnet_model.html">Fit a ridge model with wordnet features</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/04_plot_hemodynamic_response.html">Visualize the hemodynamic response</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/05_plot_motion_energy_model.html">Fit a ridge model with motion-energy features</a></li>
-<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/06_plot_banded_ridge_model.html">Fit a banded ridge model with both wordnet and motion-energy features</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/02_download_shortclips.html">Download the data set</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/03_compute_explainable_variance.html">Compute the explainable variance</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/04_understand_ridge_regression.html">Understand ridge regression and cross-validation</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/05_fit_wordnet_model.html">Fit a ridge model with wordnet features</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/06_visualize_hemodynamic_response.html">Visualize the hemodynamic response</a></li>
 <li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/07_extract_motion_energy.html">Extract motion-energy features from the stimuli</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/08_fit_motion_energy_model.html">Fit a ridge model with motion-energy features</a></li>
+<li class="toctree-l2"><a class="reference internal" href="notebooks/shortclips/09_fit_banded_ridge_model.html">Fit a banded ridge model with both wordnet and motion-energy features</a></li>
 </ul>
 </details></li>
 <li class="toctree-l1 has-children"><a class="reference internal" href="notebooks/vim2/README.html">Vim-2 tutorial (optional)</a><details><summary><span class="toctree-toggle" role="presentation"><i class="fa-solid fa-chevron-down"></i></span></summary><ul>
New file: 192 additions & 0 deletions
@@ -0,0 +1,192 @@
+{
+"cells": [
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"\n",
+"# Setup Google Colab\n",
+"\n",
+"In this script, we set up a Google Colab environment. This script will only work\n",
+"when run from [Google Colab](https://colab.research.google.com/). You can\n",
+"skip it if you run the tutorials on your machine.\n",
+"\n",
+"> **Note:** This script will install all the required dependencies and download the data.\n",
+"> It will take around 10 minutes to run, but you need to run it only once in your Colab session.\n",
+"> If your Colab session is disconnected, you will need to run this script again."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Change runtime to use a GPU\n",
+"\n",
+"This tutorial is much faster when a GPU is available to run the computations.\n",
+"In Google Colab you can request access to a GPU by changing the runtime type.\n",
+"To do so, click the following menu options in Google Colab:\n",
+"\n",
+"> (Menu) \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\n",
+"\n"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Install all required dependencies and download the data\n",
+"\n",
+"Uncomment and run the following cell to download the required packages.\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"#!git config --global user.email \"you@example.com\" && git config --global user.name \"Your Name\"\n",
+"#!wget -O- http://neuro.debian.net/lists/jammy.us-ca.libre | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list\n",
+"#!apt-key adv --recv-keys --keyserver hkps://keyserver.ubuntu.com 0xA5D32F012649A5A9 > /dev/null\n",
+"#!apt-get -qq update > /dev/null\n",
+"#!apt-get install -qq inkscape git-annex-standalone > /dev/null\n",
+"#!pip install -q voxelwise_tutorials"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"For the record, here is what each command does:\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# - Set up an email and username to use git, git-annex, and datalad (required to download the data)\n",
+"# - Add NeuroDebian to the package sources\n",
+"# - Update the gpg keys to use NeuroDebian\n",
+"# - Update the list of available packages\n",
+"# - Install Inkscape to use more features from Pycortex, and install git-annex to download the data\n",
+"# - Install the tutorial helper package, and all the required dependencies"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"try:\n",
+" import google.colab # noqa\n",
+" in_colab = True\n",
+"except ImportError:\n",
+" in_colab = False\n",
+"if not in_colab:\n",
+" raise RuntimeError(\"This script is only meant to be run from Google \"\n",
+" \"Colab. You can skip it if you run the tutorials \"\n",
+" \"on your machine.\")"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now run the following cell to download the data for the tutorials.\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"from voxelwise_tutorials.io import download_datalad\n",
+"\n",
+"DATAFILES = [\n",
+" \"features/motion_energy.hdf\",\n",
+" \"features/wordnet.hdf\",\n",
+" \"mappers/S01_mappers.hdf\",\n",
+" \"responses/S01_responses.hdf\",\n",
+"]\n",
+"\n",
+"source = \"https://gin.g-node.org/gallantlab/shortclips\"\n",
+"destination = \"/content/shortclips\"\n",
+"\n",
+"for datafile in DATAFILES:\n",
+" local_filename = download_datalad(\n",
+" datafile,\n",
+" destination=destination,\n",
+" source=source\n",
+" )"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now run the following cell to set up the environment variables for the\n",
+"tutorials and pycortex.\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"import os\n",
+"os.environ['VOXELWISE_TUTORIALS_DATA'] = \"/content\"\n",
+"\n",
+"import sklearn\n",
+"sklearn.set_config(assume_finite=True)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Your Google Colab environment is now set up for the voxelwise tutorials.\n",
+"\n"
+]
+}
+],
+"metadata": {
+"kernelspec": {
+"display_name": "Python 3",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.7.12"
+}
+},
+"nbformat": 4,
+"nbformat_minor": 0
+}
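The Colab-detection cell in the notebook above relies on the fact that the `google.colab` module only exists inside a Colab runtime. That pattern can be wrapped in a reusable helper; this is a sketch of the same idea, and the function name `running_in_colab` is my own, not part of `voxelwise_tutorials`.

```python
def running_in_colab() -> bool:
    """Return True when executing inside Google Colab.

    The google.colab module is only importable in a Colab runtime,
    so a failed import is a reliable negative signal.
    """
    try:
        import google.colab  # noqa: F401
        return True
    except ImportError:
        return False


# On a local machine this prints False; inside Colab it prints True.
print(running_in_colab())
```

Wrapping the check in a function lets other cells branch on the environment (for example, to pick a data directory) instead of raising immediately.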
New file: 137 additions & 0 deletions
@@ -0,0 +1,137 @@
+{
+"cells": [
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"\n",
+"# Download the data set\n",
+"\n",
+"In this script, we download the data set from Wasabi or GIN. No account is required.\n",
+"\n",
+":::{note}\n",
+"This script will download approximately 2GB of data.\n",
+":::\n",
+"\n",
+"\n",
+"## Cite this data set\n",
+"\n",
+"This tutorial is based on publicly available data [published on GIN](https://gin.g-node.org/gallantlab/shortclips). If you publish any work using\n",
+"this data set, please cite the original publication {cite}`huth2012`, and the data set {cite}`huth2022data`."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Download\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# path of the data directory\n",
+"from voxelwise_tutorials.io import get_data_home\n",
+"directory = get_data_home(dataset=\"shortclips\")\n",
+"print(directory)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We will only use the first subject in this tutorial, but you can run the same\n",
+"analysis on the four other subjects. Uncomment the lines in ``DATAFILES`` to\n",
+"download more subjects.\n",
+"\n",
+"We also skip the stimuli files, since the data set provides two preprocessed\n",
+"feature spaces to perform voxelwise modeling without requiring the original\n",
+"stimuli.\n",
+"\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"from voxelwise_tutorials.io import download_datalad\n",
+"\n",
+"DATAFILES = [\n",
+" \"features/motion_energy.hdf\",\n",
+" \"features/wordnet.hdf\",\n",
+" \"mappers/S01_mappers.hdf\",\n",
+" # \"mappers/S02_mappers.hdf\",\n",
+" # \"mappers/S03_mappers.hdf\",\n",
+" # \"mappers/S04_mappers.hdf\",\n",
+" # \"mappers/S05_mappers.hdf\",\n",
+" \"responses/S01_responses.hdf\",\n",
+" # \"responses/S02_responses.hdf\",\n",
+" # \"responses/S03_responses.hdf\",\n",
+" # \"responses/S04_responses.hdf\",\n",
+" # \"responses/S05_responses.hdf\",\n",
+" # \"stimuli/test.hdf\",\n",
+" # \"stimuli/train_00.hdf\",\n",
+" # \"stimuli/train_01.hdf\",\n",
+" # \"stimuli/train_02.hdf\",\n",
+" # \"stimuli/train_03.hdf\",\n",
+" # \"stimuli/train_04.hdf\",\n",
+" # \"stimuli/train_05.hdf\",\n",
+" # \"stimuli/train_06.hdf\",\n",
+" # \"stimuli/train_07.hdf\",\n",
+" # \"stimuli/train_08.hdf\",\n",
+" # \"stimuli/train_09.hdf\",\n",
+" # \"stimuli/train_10.hdf\",\n",
+" # \"stimuli/train_11.hdf\",\n",
+"]\n",
+"\n",
+"source = \"https://gin.g-node.org/gallantlab/shortclips\"\n",
+"\n",
+"for datafile in DATAFILES:\n",
+" local_filename = download_datalad(datafile, destination=directory,\n",
+" source=source)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## References\n",
+"\n",
+"```{bibliography}\n",
+":filter: docname in docnames\n",
+"```"
+]
+}
+],
+"metadata": {
+"kernelspec": {
+"display_name": "Python 3",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.7.12"
+}
+},
+"nbformat": 4,
+"nbformat_minor": 0
+}
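The commented-out entries in ``DATAFILES`` above follow a regular naming scheme (`S01` through `S05`, `train_00` through `train_11`), so the list for any subset of subjects can be generated instead of edited by hand. A minimal sketch of that idea; the helper name `build_datafiles` is hypothetical and not part of `voxelwise_tutorials`:

```python
def build_datafiles(subjects, include_stimuli=False):
    """Build the list of shortclips files to download.

    subjects: iterable of 1-based subject numbers (1 to 5).
    include_stimuli: also list the raw stimuli files, which the
    tutorial skips because preprocessed features are provided.
    """
    files = ["features/motion_energy.hdf", "features/wordnet.hdf"]
    for s in subjects:
        files.append(f"mappers/S{s:02d}_mappers.hdf")
        files.append(f"responses/S{s:02d}_responses.hdf")
    if include_stimuli:
        files.append("stimuli/test.hdf")
        files.extend(f"stimuli/train_{i:02d}.hdf" for i in range(12))
    return files


# Reproduces the uncommented entries in the notebook above:
print(build_datafiles([1]))
```

The generated list can then be passed to the same `download_datalad` loop shown in the notebook.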
