
Commit d02d8a5

Add demo notebooks

1 parent 7bf9f9f

File tree

2 files changed: 361 additions, 0 deletions

notebooks/demo_prepare.ipynb

Lines changed: 225 additions & 0 deletions
@@ -0,0 +1,225 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Demo Preparation Notebook\n",
    "\n",
    "**Please Note**: This notebook and demo are NOT intended to be used as learning materials. To gain\n",
    "a thorough understanding of the DataJoint workflow for extracellular electrophysiology, please\n",
    "see the [`tutorial`](./tutorial.ipynb) notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Runs in about 45s\n",
    "import datajoint as dj\n",
    "import datetime\n",
    "from tutorial.pipeline import subject, session, probe, ephys\n",
    "from element_array_ephys import ephys_report"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "subject.Subject.insert1(\n",
    "    dict(subject=\"subject5\", subject_birth_date=\"2023-01-01\", sex=\"U\")\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "session_key = dict(subject=\"subject5\", session_datetime=\"2023-01-01 00:00:00\")\n",
    "\n",
    "session.Session.insert1(session_key)\n",
    "\n",
    "session.SessionDirectory.insert1(dict(session_key, session_dir=\"raw/subject5/session1\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "probe.Probe.insert1(dict(probe=\"714000838\", probe_type=\"neuropixels 1.0 - 3B\"))\n",
    "\n",
    "ephys.ProbeInsertion.insert1(\n",
    "    dict(\n",
    "        session_key,\n",
    "        insertion_number=1,\n",
    "        probe=\"714000838\",\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "populate_settings = {\"display_progress\": True}\n",
    "\n",
    "ephys.EphysRecording.populate(**populate_settings)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "kilosort_params = {\n",
    "    \"fs\": 30000,\n",
    "    \"fshigh\": 150,\n",
    "    \"minfr_goodchannels\": 0.1,\n",
    "    \"Th\": [10, 4],\n",
    "    \"lam\": 10,\n",
    "    \"AUCsplit\": 0.9,\n",
    "    \"minFR\": 0.02,\n",
    "    \"momentum\": [20, 400],\n",
    "    \"sigmaMask\": 30,\n",
    "    \"ThPr\": 8,\n",
    "    \"spkTh\": -6,\n",
    "    \"reorder\": 1,\n",
    "    \"nskip\": 25,\n",
    "    \"GPU\": 1,\n",
    "    \"Nfilt\": 1024,\n",
    "    \"nfilt_factor\": 4,\n",
    "    \"ntbuff\": 64,\n",
    "    \"whiteningRange\": 32,\n",
    "    \"nSkipCov\": 25,\n",
    "    \"scaleproc\": 200,\n",
    "    \"nPCs\": 3,\n",
    "    \"useRAM\": 0,\n",
    "}\n",
    "\n",
    "ephys.ClusteringParamSet.insert_new_params(\n",
    "    clustering_method=\"kilosort2\",\n",
    "    paramset_idx=1,\n",
    "    params=kilosort_params,\n",
    "    paramset_desc=\"Spike sorting using Kilosort2\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ephys.ClusteringTask.insert1(\n",
    "    dict(\n",
    "        session_key,\n",
    "        insertion_number=1,\n",
    "        paramset_idx=1,\n",
    "        task_mode=\"load\",  # load or trigger\n",
    "        clustering_output_dir=\"processed/subject5/session1/probe_1/kilosort2-5_1\",\n",
    "    )\n",
    ")\n",
    "\n",
    "ephys.Clustering.populate(**populate_settings)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "clustering_key = (ephys.ClusteringTask & session_key).fetch1(\"KEY\")\n",
    "ephys.Curation().create1_from_clustering_task(clustering_key)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Runs in about 12m\n",
    "ephys.CuratedClustering.populate(**populate_settings)\n",
    "ephys.WaveformSet.populate(**populate_settings)\n",
    "ephys_report.ProbeLevelReport.populate(**populate_settings)\n",
    "ephys_report.UnitLevelReport.populate(**populate_settings)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Drop schemas\n",
    "- Schemas are typically not dropped from a production workflow containing real data.\n",
    "- During development, dropping schemas may be required to redesign tables.\n",
    "- When all schemas must be dropped, use the dependency order shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def drop_databases(databases):\n",
    "    import pymysql.err\n",
    "\n",
    "    conn = dj.conn()\n",
    "\n",
    "    with dj.config(safemode=False):\n",
    "        for database in databases:\n",
    "            schema = dj.Schema(f'{dj.config[\"custom\"][\"database.prefix\"]}{database}')\n",
    "            while schema.list_tables():\n",
    "                for table in schema.list_tables():\n",
    "                    try:\n",
    "                        conn.query(f\"DROP TABLE `{schema.database}`.`{table}`\")\n",
    "                    except pymysql.err.OperationalError:\n",
    "                        print(f\"Can't drop `{schema.database}`.`{table}`. Retrying...\")\n",
    "            schema.drop()\n",
    "\n",
    "\n",
    "# drop_databases(databases=['analysis', 'trial', 'event', 'ephys_report', 'ephys', 'probe', 'session', 'subject', 'project', 'lab'])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
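The `drop_databases` helper above repeatedly attempts `DROP TABLE` and retries any table that fails with an `OperationalError`, which in practice means the loop keeps going until foreign-key dependents are gone. The same retry pattern can be sketched without a database connection, using an in-memory dependency map. This is an illustrative sketch only: `drop_all` and `dependents` are made-up names, not DataJoint or pymysql APIs.

```python
# Sketch of the retry-until-empty drop loop from demo_prepare.ipynb,
# using an in-memory table graph instead of a MySQL connection.
# Assumes an acyclic dependency graph, as in the notebook's schema list.

def drop_all(tables, dependents):
    """Drop every table, deferring any table that still has dependents.

    tables:     iterable of table names
    dependents: dict mapping a table to the set of tables that reference it
    Returns the order in which tables were dropped.
    """
    remaining = set(tables)
    dropped = []
    while remaining:  # mirrors `while schema.list_tables():`
        for table in sorted(remaining):
            if dependents.get(table, set()) & remaining:
                # Same effect as catching OperationalError and retrying:
                # skip this table for now and come back on a later pass.
                continue
            remaining.discard(table)
            dropped.append(table)
    return dropped

# 'session' references 'subject'; 'ephys' references 'session',
# so drops must run from the most-downstream table upward.
order = drop_all(
    {"subject", "session", "ephys"},
    {"subject": {"session"}, "session": {"ephys"}},
)
print(order)  # → ['ephys', 'session', 'subject']
```

This is why the notebook's commented-out call lists `ephys_report` and `ephys` before `probe`, `session`, and `subject`: downstream schemas must be dropped before the schemas they depend on.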

notebooks/demo_run.ipynb

Lines changed: 136 additions & 0 deletions
@@ -0,0 +1,136 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DataJoint Workflow for Neuropixels Analysis\n",
    "\n",
    "+ This notebook demonstrates using the open-source DataJoint Element to build a workflow for extracellular electrophysiology.\n",
    "+ For a detailed tutorial, please see the [tutorial notebook](./tutorial.ipynb)."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import datajoint as dj\n",
    "from tutorial.pipeline import subject, session, probe, ephys\n",
    "from element_array_ephys.plotting.widget import main"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### View workflow"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + dj.Diagram(probe) + dj.Diagram(ephys)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Insert an entry into a manual table by calling the `insert1()` method\n",
    "\n",
    "```python\n",
    "subject.Subject.insert1(\n",
    "    dict(subject='subject1',\n",
    "         subject_birth_date='2023-01-01',\n",
    "         sex='U',\n",
    "    )\n",
    ")\n",
    "```"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Automatically process data with the `populate()` method\n",
    "\n",
    "+ Once data is inserted into manual tables, the `populate()` method automatically runs the ingestion and processing routines.\n",
    "\n",
    "+ For example, to run Kilosort processing in the `Clustering` table:\n",
    "\n",
    "  ```python\n",
    "  ephys.Clustering.populate()\n",
    "  ```"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Visualize processed data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "main(ephys)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For an in-depth tutorial please see the [tutorial notebook](./tutorial.ipynb)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3p10",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "ff52d424e56dd643d8b2ec122f40a2e279e94970100b4e6430cb9025a65ba4cf"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
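The `populate()` calls in both notebooks follow one contract: find the keys that exist upstream but have no entry in the computed table yet, and run the table's processing routine once per missing key. A minimal conceptual sketch of that contract follows; `SimpleComputed` is a made-up mock, not DataJoint's actual implementation.

```python
# Conceptual sketch of the populate() contract used throughout the demo:
# compute missing keys, call make() once per key, and stay idempotent.
# This mock is illustrative only; it is not the datajoint library.

class SimpleComputed:
    def __init__(self, key_source):
        self.key_source = list(key_source)  # upstream keys to process
        self.rows = {}                      # key -> computed result

    def make(self, key):
        # A real table runs its ingestion/processing here
        # (e.g. loading Kilosort output, extracting waveforms).
        self.rows[key] = f"processed({key})"

    def populate(self, display_progress=False):
        missing = [k for k in self.key_source if k not in self.rows]
        for key in missing:
            if display_progress:
                print(f"populating {key}")
            self.make(key)
        return len(missing)  # how many keys were processed this call

table = SimpleComputed(key_source=["subject5-session1", "subject5-session2"])
assert table.populate() == 2  # both keys were missing, so make() ran twice
assert table.populate() == 0  # idempotent: nothing left to populate
```

This idempotence is why `demo_prepare.ipynb` can chain `EphysRecording`, `Clustering`, `CuratedClustering`, and the report tables: re-running a populate call skips work that is already done.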
