
Commit 9dc5ff3

Merge pull request #952 from int-brain-lab/docs

Docs

2 parents: 1e74ae1 + d283f5b

4 files changed: +34 additions, -22 deletions


examples/exploring_data/data_download.ipynb

Lines changed: 9 additions & 9 deletions
@@ -142,16 +142,18 @@
 ]
 },
 {
-"metadata": {},
 "cell_type": "markdown",
+"metadata": {},
 "source": [
 "### Find recordings of a specific brain region\n",
 "If we are interested in a given brain region, we can use the `search_insertions` method to find all recordings associated with that region. For example, to find all recordings associated with the **Rhomboid Nucleus (RH)** region of the thalamus."
 ]
 },
 {
-"metadata": {},
 "cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
 "source": [
 "# this is the query that yields the few recordings for the Rhomboid Nucleus (RH) region\n",
 "insertions_rh = one.search_insertions(atlas_acronym='RH', datasets='spikes.times.npy', project='brainwide')\n",
@@ -161,9 +163,7 @@
 "\n",
 "# the Allen brain regions parcellation is hierarchical, and searching for Thalamus will return all child Rhomboid Nucleus (RH) regions\n",
 "assert set(insertions_rh).issubset(set(insertions_th))\n"
-],
-"outputs": [],
-"execution_count": null
+]
 },
 {
 "cell_type": "markdown",
@@ -183,7 +183,7 @@
 "outputs": [],
 "source": [
 "# Find sessions that have spikes.times datasets\n",
-"sessions_with_spikes = one.search(project='brainwide', dataset='spikes.times')"
+"sessions_with_spikes = one.search(project='brainwide', datasets='spikes.times.npy')"
 ]
 },
 {
@@ -253,7 +253,7 @@
 "outputs": [],
 "source": [
 "# Find an example session with trials data\n",
-"eid, *_ = one.search(project='brainwide', dataset='_ibl_trials.table.pqt')\n",
+"eid, *_ = one.search(project='brainwide', datasets='_ibl_trials.table.pqt')\n",
 "# List datasets associated with a session, in the alf collection\n",
 "datasets = one.list_datasets(eid, collection='alf*')\n",
 "\n",
@@ -279,7 +279,7 @@
 "source": [
 "# Find an example session with spike data\n",
 "# Note: Restricting by task and project makes searching for data much quicker\n",
-"eid, *_ = one.search(project='brainwide', dataset='spikes', task='ephys')\n",
+"eid, *_ = one.search(project='brainwide', datasets='spikes.times.npy', task='ephys')\n",
 "\n",
 "# Data for each probe insertion are stored in the alf/probeXX folder.\n",
 "datasets = one.list_datasets(eid, collection='alf/probe*')\n",
@@ -375,7 +375,7 @@
 "lab_name = list(labs)[0]\n",
 "\n",
 "# Searching for RS sessions with specific lab name\n",
-"sessions_lab = one.search(dataset='spikes', lab=lab_name)"
+"sessions_lab = one.search(datasets='spikes', lab=lab_name)"
 ]
 },
 {
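
The net effect of this file's hunks is to switch every query from the deprecated `dataset=` argument to the plural `datasets=` keyword and to pass full dataset filenames (e.g. `spikes.times.npy`). A minimal sketch of the updated query pattern, assuming a default ONE/Alyx connection and the Allen acronym 'TH' for the parent Thalamus region (the Thalamus query is implied by the notebook but not shown in this diff):

from one.api import ONE

one = ONE()

# Insertions recorded in the Rhomboid Nucleus (RH); note the plural `datasets` keyword
insertions_rh = one.search_insertions(atlas_acronym='RH', datasets='spikes.times.npy', project='brainwide')

# The Allen parcellation is hierarchical: querying the parent Thalamus ('TH' is an
# assumed acronym here) returns a superset that contains every RH insertion
insertions_th = one.search_insertions(atlas_acronym='TH', datasets='spikes.times.npy', project='brainwide')
assert set(insertions_rh).issubset(set(insertions_th))

# Session-level searches use the same keyword with a full dataset filename
sessions_with_spikes = one.search(project='brainwide', datasets='spikes.times.npy')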

examples/loading_data/loading_photometry_data.ipynb

Lines changed: 2 additions & 2 deletions
@@ -61,7 +61,7 @@
 "source": [
 "from one.api import ONE\n",
 "one = ONE()\n",
-"sessions = one.search(dataset='photometry.signal.pqt')\n",
+"sessions = one.search(datasets='photometry.signal.pqt')\n",
 "print(f'{len(sessions)} sessions with photometry data found')"
 ]
 },
@@ -271,7 +271,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.16"
+"version": "3.11.9"
 }
 },
 "nbformat": 4,

examples/loading_data/loading_trials_data.ipynb

Lines changed: 21 additions & 9 deletions
@@ -50,7 +50,10 @@
 "cell_type": "markdown",
 "id": "a5d358e035a91310",
 "metadata": {
-"collapsed": false
+"collapsed": false,
+"jupyter": {
+"outputs_hidden": false
+}
 },
 "source": [
 "## Loading a single session's trials\n"
@@ -77,7 +80,10 @@
 "cell_type": "markdown",
 "id": "d6c98a81f5426445",
 "metadata": {
-"collapsed": false
+"collapsed": false,
+"jupyter": {
+"outputs_hidden": false
+}
 },
 "source": [
 "For combining trials data with various recording modalities for a given session, the `SessionLoader` class is more convenient:"
@@ -130,8 +136,12 @@
 "from one.api import ONE\n",
 "one = ONE()\n",
 "subject = 'SWC_043'\n",
+"# Load in subject trials table\n",
 "trials = one.load_aggregate('subjects', subject, '_ibl_subjectTrials.table')\n",
 "\n",
+"# Load in subject sessions table\n",
+"sessions = one.load_aggregate('subjects', subject, '_ibl_subjectSessions.table')\n",
+"\n",
 "# Load training status and join to trials table\n",
 "training = one.load_aggregate('subjects', subject, '_ibl_subjectTraining.table')\n",
 "trials = (trials\n",
@@ -141,10 +151,9 @@
 "trials['training_status'] = trials.training_status.fillna(method='ffill')\n",
 "\n",
 "# Join sessions table for number, task_protocol, etc.\n",
-"trials = one.load_aggregate('subjects', subject, '_ibl_subjectTrials.table')\n",
 "if 'task_protocol' in trials:\n",
-"    trials.drop('task_protocol', axis=1)\n",
-"trials = trials.set_index('session').join(one._cache.sessions.drop('date', axis=1))"
+"    trials = trials.drop('task_protocol', axis=1)\n",
+"trials = trials.join(sessions.drop('date', axis=1))"
 ]
 },
 {
@@ -302,7 +311,10 @@
 "cell_type": "markdown",
 "id": "55ad2e5d71ac301",
 "metadata": {
-"collapsed": false
+"collapsed": false,
+"jupyter": {
+"outputs_hidden": false
+}
 },
 "source": [
 "### Example 5: Computing the inter-trial interval (ITI)\n",
@@ -345,9 +357,9 @@
 "metadata": {
 "celltoolbar": "Edit Metadata",
 "kernelspec": {
-"display_name": "Python [conda env:iblenv] *",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "conda-env-iblenv-py"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -359,7 +371,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.6"
+"version": "3.11.9"
 }
 },
 "nbformat": 4,

examples/loading_data/loading_widefield_data.ipynb

Lines changed: 2 additions & 2 deletions
@@ -86,7 +86,7 @@
 "source": [
 "from one.api import ONE\n",
 "one = ONE()\n",
-"sessions = one.search(dataset='widefieldU.images.npy')\n",
+"sessions = one.search(datasets='widefieldU.images.npy')\n",
 "print(f'{len(sessions)} sessions with widefield data found')"
 ]
 },
@@ -224,7 +224,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.16"
+"version": "3.11.9"
 }
 },
 "nbformat": 4,
