Commit fa94b1a

Merge branch 'release/2.23.1'
2 parents e71deaa + 790053c

File tree: 8 files changed (+183, -37 lines)

8 files changed

+183
-37
lines changed

examples/atlas/atlas_swanson_flatmap.ipynb

Lines changed: 106 additions & 3 deletions

@@ -30,13 +30,116 @@
     "outputs": [],
     "source": [
      "import numpy as np\n",
-     "from ibllib.atlas.flatmaps import plot_swanson\n",
+     "from ibllib.atlas.flatmaps import swanson, plot_swanson\n",
      "from ibllib.atlas import BrainRegions\n",
+     "\n",
      "br = BrainRegions()\n",
      "\n",
-     "plot_swanson(br=br, annotate=True)\n"
+     "# Plot the Swanson map with default colors and acronyms\n",
+     "plot_swanson(br=br, annotate=True)"
     ]
    },
+   {
+    "cell_type": "markdown",
+    "source": [
+     "### What regions are represented in the Swanson flatmap"
+    ],
+    "metadata": {
+     "collapsed": false
+    }
+   },
+   {
+    "cell_type": "markdown",
+    "source": [
+     "The Swanson map holds 318 brain region acronyms, some of which are an aggregate of distinct brain regions in the Allen or Beryl parcellation.\n",
+     "To find the acronyms of the regions represented in Swanson, use:"
+    ],
+    "metadata": {
+     "collapsed": false
+    }
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "outputs": [],
+    "source": [
+     "swanson_ac = np.sort(br.acronym[np.unique(swanson())])"
+    ],
+    "metadata": {
+     "collapsed": false,
+     "pycharm": {
+      "name": "#%%\n"
+     }
+    }
+   },
+   {
+    "cell_type": "markdown",
+    "source": [
+     "Regions which are \"children\" of a Swanson region are not included in the acronyms. For example `PTLp` is in Swanson, but its children `VISa` and `VISrl` (i.e. a finer parcellation of `PTLp`) are not:"
+    ],
+    "metadata": {
+     "collapsed": false
+    }
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "outputs": [],
+    "source": [
+     "# Example: check if PTLp is in Swanson\n",
+     "np.isin(['PTLp'], swanson_ac)\n",
+     "# Example: check if VISa and VISrl are in Swanson\n",
+     "np.isin(['VISa', 'VISrl'], swanson_ac)"
+    ],
+    "metadata": {
+     "collapsed": false,
+     "pycharm": {
+      "name": "#%%\n"
+     }
+    }
+   },
+   {
+    "cell_type": "markdown",
+    "source": [
+     "As such, you can only plot values for regions that are in Swanson. This was done to ensure there is no confusion about how data is aggregated and represented per region (for example, if you were to input values for both `VISa` and `VISrl`, it would be unclear whether the mean, the median or another statistic should be plotted onto the `PTLp` area; instead, we ask you to do the aggregation yourself and pass the result into the plotting function).\n",
+     "\n",
+     "For example,"
+    ],
+    "metadata": {
+     "collapsed": false
+    }
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "outputs": [],
+    "source": [
+     "from ibllib.atlas.flatmaps import plot_swanson_vector\n",
+     "\n",
+     "# 'PTLp', 'CA1' and 'VPM' are all in Swanson, so all 3 are plotted\n",
+     "acronyms = ['PTLp', 'CA1', 'VPM']\n",
+     "values = np.array([1.5, 3, 4])\n",
+     "plot_swanson_vector(acronyms, values, annotate=True, annotate_list=['PTLp', 'CA1', 'VPM'], empty_color='silver')\n",
+     "\n",
+     "# 'VISa' and 'VISrl' are not in Swanson, so only 'CA1' and 'VPM' are plotted\n",
+     "acronyms = ['VISa', 'VISrl', 'CA1', 'VPM']\n",
+     "values = np.array([1, 2, 3, 4])\n",
+     "plot_swanson_vector(acronyms, values, annotate=True, annotate_list=['VISa', 'VISrl', 'CA1', 'VPM'], empty_color='silver')\n"
+    ],
+    "metadata": {
+     "collapsed": false,
+     "pycharm": {
+      "name": "#%%\n"
+     }
+    }
+   },
+   {
+    "cell_type": "markdown",
+    "source": [],
+    "metadata": {
+     "collapsed": false
+    }
+   },
    {
     "cell_type": "markdown",
     "metadata": {},
@@ -180,4 +283,4 @@
    },
    "nbformat": 4,
    "nbformat_minor": 1
-}
+}

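The last markdown cell above asks users to aggregate child-region values themselves before calling `plot_swanson_vector`. A minimal sketch of that aggregation, using hypothetical values and an assumed parent/child mapping (plain numpy, no ibllib required):

```python
import numpy as np

# Hypothetical per-region values at the finer parcellation; 'VISa' and
# 'VISrl' are children of the Swanson region 'PTLp', so the user must
# aggregate them before plotting.
values = {'VISa': 1.0, 'VISrl': 2.0, 'CA1': 3.0, 'VPM': 4.0}
children_of = {'PTLp': ['VISa', 'VISrl']}  # assumed mapping, for illustration

# Keep regions that are already Swanson-level ...
aggregated = {k: v for k, v in values.items()
              if not any(k in ch for ch in children_of.values())}
# ... and fold each set of children into its parent (here with a mean).
for parent, children in children_of.items():
    aggregated[parent] = float(np.mean([values[c] for c in children]))

acronyms = np.array(sorted(aggregated))
vals = np.array([aggregated[a] for a in acronyms])
```

The resulting `acronyms`/`vals` pair is what one would then pass to `plot_swanson_vector`.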
examples/loading_data/loading_trials_data.ipynb

Lines changed: 46 additions & 4 deletions

@@ -45,7 +45,45 @@
    {
     "cell_type": "markdown",
     "source": [
-     "## Loading a session's trials"
+     "## Loading a single session's trials\n",
+     "\n",
+     "If you want to load the trials data for a single session, we recommend you use the `SessionLoader`:"
+    ],
+    "metadata": {
+     "collapsed": false
+    }
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "outputs": [],
+    "source": [
+     "'''\n",
+     "RECOMMENDED\n",
+     "'''\n",
+     "from brainbox.io.one import SessionLoader\n",
+     "from one.api import ONE\n",
+     "one = ONE()\n",
+     "eid = '4ecb5d24-f5cc-402c-be28-9d0f7cb14b3a'\n",
+     "sl = SessionLoader(eid=eid, one=one)\n",
+     "sl.load_trials()\n",
+     "\n",
+     "# The datasets are attributes of sl.trials, for example probabilityLeft:\n",
+     "probabilityLeft = sl.trials['probabilityLeft']\n",
+     "# Find all of them using:\n",
+     "sl.trials.keys()"
+    ],
+    "metadata": {
+     "collapsed": false,
+     "pycharm": {
+      "name": "#%%\n"
+     }
+    }
+   },
+   {
+    "cell_type": "markdown",
+    "source": [
+     "For completeness, we show below how to load the trials object with the `one.load_object` method; however, we recommend you use the `SessionLoader` code above instead."
     ],
     "metadata": {
      "collapsed": false
@@ -56,6 +94,9 @@
     "execution_count": null,
     "outputs": [],
     "source": [
+     "'''\n",
+     "ALTERNATIVE - NOT RECOMMENDED\n",
+     "'''\n",
      "from one.api import ONE\n",
      "one = ONE()\n",
      "eid = '4ecb5d24-f5cc-402c-be28-9d0f7cb14b3a'\n",
@@ -73,9 +114,10 @@
    "id": "0514237a",
    "metadata": {},
    "source": [
-    "## Loading a subject's trials\n",
-    "This loads all trials data for a given subject (all session trials concatenated) into a DataFrame.\n",
-    "The subjectTraining table contains the training statuses."
+    "## Loading all the sessions' trials for a single subject at once\n",
+    "If you want to study several sessions for a single subject, we recommend you use the `one.load_aggregate` method rather than downloading the trials data individually per session.\n",
+    "This method loads all the `subjectTrials` data for a given subject into a single DataFrame (i.e. all session trials are concatenated).\n",
+    "You can use the same method to load the `subjectTraining` table, which contains the training statuses."
   ]
  },
  {

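The new markdown cell describes `one.load_aggregate` as returning all of a subject's session trials concatenated into a single DataFrame. A toy sketch of that shape, using hypothetical data and plain pandas rather than a live ONE connection:

```python
import pandas as pd

# Toy stand-ins for two sessions' trials tables (hypothetical data).
sess_a = pd.DataFrame({'probabilityLeft': [0.5, 0.2], 'eid': 'session_a'})
sess_b = pd.DataFrame({'probabilityLeft': [0.8, 0.5, 0.5], 'eid': 'session_b'})

# The subject-level aggregate is, conceptually, every session's trials
# concatenated into one DataFrame, delivered in a single call instead of
# one download per session.
subject_trials = pd.concat([sess_a, sess_b], ignore_index=True)
```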
ibllib/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 import logging
 import warnings

-__version__ = '2.23.0'
+__version__ = '2.23.1'
 warnings.filterwarnings('always', category=DeprecationWarning, module='ibllib')

 # if this becomes a full-blown library we should let the logging configuration to the discretion of the dev

ibllib/atlas/flatmaps.py

Lines changed: 14 additions & 10 deletions

@@ -137,13 +137,16 @@ def swanson(filename="swanson2allen.npz"):

 def swanson_json(filename="swansonpaths.json"):

-    OLD_MD5 = ['f848783954883c606ca390ceda9e37d2']
+    OLD_MD5 = ['97ccca2b675b28ba9b15ca8af5ba4111',  # errored map with FOTU and CUL4, 5 mixed up
+               '56daa7022b5e03080d8623814cda6f38',  # old md5 of swanson json without CENT and PTLp
+                                                    # and CUL4 split (on s3 called swansonpaths_56daa.json)
+               'f848783954883c606ca390ceda9e37d2']

     json_file = AllenAtlas._get_cache_dir().joinpath(filename)
     if not json_file.exists() or md5(json_file) in OLD_MD5:
         json_file.parent.mkdir(exist_ok=True, parents=True)
         _logger.info(f'downloading swanson paths from {aws.S3_BUCKET_IBL} s3 bucket...')
-        aws.s3_download_file(f'atlas/{json_file.name}', json_file)
+        aws.s3_download_file(f'atlas/{json_file.name}', json_file, overwrite=True)

     with open(json_file) as f:
         sw_json = json.load(f)

@@ -198,44 +201,45 @@ def plot_swanson_vector(acronyms=None, values=None, ax=None, hemisphere=None, br
             color = empty_color

         coords = reg['coordsReg']
+        reg_id = reg['thisID']

         if reg['hole']:
             vertices, codes = coords_for_poly_hole(coords)
             if orientation == 'portrait':
                 vertices[:, [0, 1]] = vertices[:, [1, 0]]
-                plot_polygon_with_hole(ax, vertices, codes, color, **kwargs)
+                plot_polygon_with_hole(ax, vertices, codes, color, reg_id, **kwargs)
                 if hemisphere is not None:
                     color_inv = color if hemisphere == 'mirror' else empty_color
                     vertices_inv = np.copy(vertices)
                     vertices_inv[:, 0] = -1 * vertices_inv[:, 0] + (sw.shape[0] * 2)
-                    plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, **kwargs)
+                    plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, reg_id, **kwargs)
             else:
-                plot_polygon_with_hole(ax, vertices, codes, color, **kwargs)
+                plot_polygon_with_hole(ax, vertices, codes, color, reg_id, **kwargs)
                 if hemisphere is not None:
                     color_inv = color if hemisphere == 'mirror' else empty_color
                     vertices_inv = np.copy(vertices)
                     vertices_inv[:, 1] = -1 * vertices_inv[:, 1] + (sw.shape[0] * 2)
-                    plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, **kwargs)
+                    plot_polygon_with_hole(ax, vertices_inv, codes, color_inv, reg_id, **kwargs)
         else:
             coords = [coords] if type(coords) == dict else coords
             for c in coords:

                 if orientation == 'portrait':
                     xy = np.c_[c['y'], c['x']]
-                    plot_polygon(ax, xy, color, **kwargs)
+                    plot_polygon(ax, xy, color, reg_id, **kwargs)
                     if hemisphere is not None:
                         color_inv = color if hemisphere == 'mirror' else empty_color
                         xy_inv = np.copy(xy)
                         xy_inv[:, 0] = -1 * xy_inv[:, 0] + (sw.shape[0] * 2)
-                        plot_polygon(ax, xy_inv, color_inv, **kwargs)
+                        plot_polygon(ax, xy_inv, color_inv, reg_id, **kwargs)
                 else:
                     xy = np.c_[c['x'], c['y']]
-                    plot_polygon(ax, xy, color, **kwargs)
+                    plot_polygon(ax, xy, color, reg_id, **kwargs)
                     if hemisphere is not None:
                         color_inv = color if hemisphere == 'mirror' else empty_color
                         xy_inv = np.copy(xy)
                         xy_inv[:, 1] = -1 * xy_inv[:, 1] + (sw.shape[0] * 2)
-                        plot_polygon(ax, xy_inv, color_inv, **kwargs)
+                        plot_polygon(ax, xy_inv, color_inv, reg_id, **kwargs)

         if orientation == 'portrait':
             ax.set_ylim(0, sw.shape[1])

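The `swanson_json` change invalidates cached files by MD5: if the local JSON matches a known-bad hash it is re-downloaded, and `overwrite=True` ensures the stale copy is actually replaced. A self-contained sketch of that cache check, with a stand-in `md5` helper (the real one lives in ibllib):

```python
import hashlib
from pathlib import Path

# Known-bad hashes, as in the updated OLD_MD5 list (first entry shown).
OLD_MD5 = ['97ccca2b675b28ba9b15ca8af5ba4111']

def md5(path):
    """Hash a file's contents; stand-in for ibllib's own md5 helper."""
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def needs_download(json_file, old_md5=OLD_MD5):
    """Re-download when the cache is missing or matches a known-bad hash."""
    json_file = Path(json_file)
    return not json_file.exists() or md5(json_file) in old_md5
```

With this predicate, an errored map already on disk triggers a fresh download the next time `swanson_json` is called.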
ibllib/atlas/plots.py

Lines changed: 4 additions & 4 deletions

@@ -39,14 +39,14 @@ def get_bc_10():
     return bc


-def plot_polygon(ax, xy, color, edgecolor='k', linewidth=0.3, alpha=1):
-    p = Polygon(xy, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha)
+def plot_polygon(ax, xy, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
+    p = Polygon(xy, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
     ax.add_patch(p)


-def plot_polygon_with_hole(ax, vertices, codes, color, edgecolor='k', linewidth=0.3, alpha=1):
+def plot_polygon_with_hole(ax, vertices, codes, color, reg_id, edgecolor='k', linewidth=0.3, alpha=1):
     path = mpath.Path(vertices, codes)
-    patch = PathPatch(path, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha)
+    patch = PathPatch(path, facecolor=color, edgecolor=edgecolor, linewidth=linewidth, alpha=alpha, gid=f'region_{reg_id}')
     ax.add_patch(patch)


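Why thread `reg_id` through to a `gid`: matplotlib's SVG backend writes an artist's `gid` as the `id` attribute of the corresponding SVG element, so each plotted region becomes individually addressable (for example for interactive highlighting) in a saved flatmap. A minimal sketch with a hypothetical region id:

```python
import io
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

fig, ax = plt.subplots()
xy = [(0, 0), (1, 0), (1, 1), (0, 1)]
# gid is carried through to the SVG output as an element id
p = Polygon(xy, facecolor='silver', edgecolor='k', linewidth=0.3,
            gid='region_PTLp')
ax.add_patch(p)

buf = io.BytesIO()
fig.savefig(buf, format='svg')
svg = buf.getvalue().decode()
# 'region_PTLp' now appears as an id in the SVG markup
```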
ibllib/oneibl/data_handlers.py

Lines changed: 2 additions & 14 deletions

@@ -1,6 +1,5 @@
 import logging
 import pandas as pd
-import numpy as np
 from pathlib import Path
 import shutil
 import os
@@ -11,7 +10,6 @@
 from one.webclient import AlyxClient
 from one.util import filter_datasets
 from one.alf.files import add_uuid_string, session_path_parts
-from iblutil.io.parquet import np2str
 from ibllib.oneibl.registration import register_dataset, get_lab, get_local_data_repository
 from ibllib.oneibl.patcher import FTPPatcher, SDSCPatcher, SDSC_ROOT_PATH, SDSC_PATCH_PATH

@@ -168,12 +166,7 @@ def setUp(self):
             sess_path = Path(rel_sess_path).joinpath(d['rel_path'])
             full_local_path = Path(self.globus.endpoints['local']['root_path']).joinpath(sess_path)
             if not full_local_path.exists():
-
-                if one._index_type() is int:
-                    uuid = np2str(np.r_[i[0], i[1]])
-                elif one._index_type() is str:
-                    uuid = i
-
+                uuid = i
                 self.local_paths.append(full_local_path)
                 target_paths.append(sess_path)
                 source_paths.append(add_uuid_string(sess_path, uuid))
@@ -382,12 +375,7 @@ def setUp(self):
         SDSC_TMP = Path(SDSC_PATCH_PATH.joinpath(self.task.__class__.__name__))
         for i, d in df.iterrows():
             file_path = Path(d['session_path']).joinpath(d['rel_path'])
-
-            if self.one._index_type() is int:
-                uuid = np2str(np.r_[i[0], i[1]])
-            elif self.one._index_type() is str:
-                uuid = i
-
+            uuid = i
             file_uuid = add_uuid_string(file_path, uuid)
             file_link = SDSC_TMP.joinpath(file_path)
             file_link.parent.mkdir(exist_ok=True, parents=True)

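The handler simplification assumes ONEv2, where the DataFrame index `i` is already the dataset UUID string, so it can be passed straight to `add_uuid_string` without the old integer-index conversion via `np2str`. That helper (from `one.alf.files`) inserts the UUID before the file extension; a local re-implementation sketch of the naming convention, not the library's own code:

```python
from pathlib import PurePosixPath

def add_uuid_string(file_path, uuid):
    """Sketch of the ALF convention: insert the dataset uuid before the suffix."""
    p = PurePosixPath(file_path)
    return p.parent / f'{p.stem}.{uuid}{p.suffix}'
```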
ibllib/pipes/training_status.py

Lines changed: 3 additions & 1 deletion

@@ -19,7 +19,9 @@
 import seaborn as sns


-TRAINING_STATUS = {'not_computed': (-2, (0, 0, 0, 0)),
+TRAINING_STATUS = {'untrainable': (-4, (0, 0, 0, 0)),
+                   'unbiasable': (-3, (0, 0, 0, 0)),
+                   'not_computed': (-2, (0, 0, 0, 0)),
                    'habituation': (-1, (0, 0, 0, 0)),
                    'in training': (0, (0, 0, 0, 0)),
                    'trained 1a': (1, (195, 90, 80, 255)),

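The two new entries slot into the ordered status table: each status maps to a (rank, RGBA) tuple, and a fully transparent color means the status is not drawn. Because the ranks give a total order, the most advanced status a subject has reached can be picked with `max()`. A sketch over the subset of the table shown in the diff:

```python
# Subset of the TRAINING_STATUS table from the diff: status -> (rank, RGBA).
TRAINING_STATUS = {
    'untrainable': (-4, (0, 0, 0, 0)),
    'unbiasable': (-3, (0, 0, 0, 0)),
    'not_computed': (-2, (0, 0, 0, 0)),
    'habituation': (-1, (0, 0, 0, 0)),
    'in training': (0, (0, 0, 0, 0)),
    'trained 1a': (1, (195, 90, 80, 255)),
}

def most_advanced(statuses):
    """Return the status with the highest rank among those observed.

    Hypothetical helper for illustration; not part of training_status.py.
    """
    return max(statuses, key=lambda s: TRAINING_STATUS[s][0])
```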
release_notes.md

Lines changed: 7 additions & 0 deletions

@@ -1,4 +1,11 @@
 ## Release Notes 2.23
+### Release Notes 2.23.1 2023-06-15
+### features
+- split swanson areas
+### bugfixes
+- training plots
+- fix datahandler on SDSC for ONEv2
+
 ### Release Notes 2.23.0 2023-05-19
 - quiescence period extraction
 - ONEv2 requirement
