The Utah Organoids DataJoint pipelines facilitate cerebral organoid characterization and electrophysiology (ephys) data analysis.
- Organoid Generation Pipeline: Manages metadata for organoid generation protocols, tracking the process from induced pluripotent stem cells (iPSCs) to single neural rosettes (SNRs) to mature organoids.
- Array Ephys Pipeline: Manages and analyzes ephys recordings, including spike sorting and quality metrics.
- Patch-Clamp Ephys Pipeline: Processes whole-cell patch-clamp recordings (ABF files) through feature extraction (AP threshold, input resistance, firing rate), plot generation, and dashboard visualization.
- Request Access: Contact the DataJoint support team for an account.
- Log in: Use your DataJoint credentials to access:
- works.datajoint.com (run notebooks & manage computations)
- Organoids SciViz (enter experimental metadata)
- Database connections (access data through the pipeline)
- Log into works.datajoint.com and navigate to the `Notebook` tab.
- Run EXPLORE_pipeline_architecture.ipynb to visualize the data pipeline structure, including key schemas, tables, and their relationships.
- Log into Organoids SciViz with your DataJoint credentials (username and password).
- Enter data in the corresponding sections:
  - User page → if you are a new experimenter, register a new experimenter.
  - Lineage page → create a new "Lineage" and "Sequence" and submit.
  - Stem Cell page → register new "Stem Cell" data.
  - Induction page → add a new "Induction Culture" and "Induction Culture Condition".
  - Post Induction page → add a new "Post Induction Culture" and "Post Induction Culture Condition".
  - Isolated Rosette page → add a new "Isolated Rosette Culture" and "Isolated Rosette Culture Condition".
  - Organoid page → add a new "Organoid Culture" and "Organoid Culture Condition".
  - Experiment page → log new experiments performed on a particular organoid. Include metadata: organoid ID, datetime, experimenter, condition, etc.
- Provide the experiment data directory — the relative path to where the acquired data is stored.
- Ensure data follows the file structure guidelines.
- Request Axon credentials from the DataJoint support team.
- Set up your local machine (if you haven't already):
- Install the pipeline code.
- Configure Axon settings (cloud upload configuration).
- Upload data via the cloud upload notebook using either:
- Jupyter Notebook Server:
  - Open a terminal or command prompt.
  - Activate the `utah_organoids` environment with `conda activate utah_organoids`.
  - Ensure Jupyter is installed in the `utah_organoids` environment. If not, install it by running `conda install jupyter`.
  - Navigate to the `utah_organoids/notebooks` directory in the terminal.
  - Run `jupyter notebook` in the terminal, which will open the Jupyter notebook web interface.
  - Click on the notebook there (`UPLOAD_session_data_to_cloud.ipynb`) and follow the instructions to upload your data to the cloud.
  - Note: to execute each code cell sequentially, press `Shift + Enter` on your keyboard or click "Run".
  - Close the browser tab and stop Jupyter with `Ctrl + C` in the terminal when you are done with the upload and notebook.
- Visual Studio Code (VS Code):
  - Install VS Code and the Python extension.
  - Open the `CREATE_new_session_with_cloud_upload.ipynb` notebook in VS Code.
  - Select the kernel for the notebook by clicking on the kernel name `utah_organoids` in the top right corner of the notebook.
  - Click on the "Run Cell" button of each code cell to execute the code.
  - Follow the instructions in the notebook to upload your data to the cloud.
- Navigate to works.datajoint.com and open the `Dashboard` tab.
- Click on `Plots` > `MUA Trace Plots`, then select a data entry to explore the MUA results. The interactive plot allows you to zoom in and out of the raw traces and examine detected peaks.
- (Optional) For a more detailed analysis, go to the `Notebook` tab on works.datajoint.com and run the EXPLORE_MUA_analysis.ipynb notebook to inspect the `MUA` schema in depth.
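The peaks shown in the MUA trace plots are computed by the pipeline itself; as a toy illustration of what threshold-based peak detection on a voltage trace looks like (the function name, threshold rule, and data here are illustrative, not the pipeline's actual detector):

```python
def detect_peaks(trace, threshold):
    """Return indices of local maxima in `trace` that exceed `threshold`.

    Simplified stand-in for peak detection: a sample counts as a peak if it
    is above threshold and greater than both of its neighbors.
    """
    peaks = []
    for i in range(1, len(trace) - 1):
        if trace[i] > threshold and trace[i - 1] < trace[i] >= trace[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic trace: baseline noise with two clear deflections
trace = [0.0, 0.1, 0.0, 2.5, 0.2, 0.1, 3.1, 0.0, 0.1]
print(detect_peaks(trace, threshold=1.0))  # [3, 6]
```

Zooming into the dashboard's interactive plot lets you verify visually that detected peaks sit on true deflections rather than baseline noise.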
- Log into works.datajoint.com and navigate to the `Notebook` tab.
- Open and execute CREATE_new_session.ipynb.
- Define a time window for analysis:
  - For Spike Sorting Analysis: Set `session_type` to `spike_sorting`, and create an `EphysSessionProbe` to store probe information, including the channel mapping. This triggers probe insertion detection automatically. For spike sorting, you will need to manually select the spike sorting algorithm and parameter set to run in the next step.
  - For LFP Analysis: Set `session_type` to `lfp`, or `both` (spike sorting and LFP analyses for the selected session). This automatically runs the LFP analysis pipeline.
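The `session_type` value determines which downstream analyses are scheduled. A minimal sketch of that branching (the function and list values are our illustration; the real pipeline encodes this in its DataJoint tables):

```python
def analyses_for(session_type):
    """Map a session_type value to the analyses the pipeline will run."""
    mapping = {
        "spike_sorting": ["spike_sorting"],  # requires manual algorithm/paramset choice
        "lfp": ["lfp"],                      # LFP analysis runs automatically
        "both": ["spike_sorting", "lfp"],    # both analyses for the selected session
    }
    if session_type not in mapping:
        raise ValueError(f"unknown session_type: {session_type!r}")
    return mapping[session_type]

print(analyses_for("both"))  # ['spike_sorting', 'lfp']
```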
- Create a `ClusteringTask` by selecting a spike-sorting algorithm and parameter set:
  - Go to works.datajoint.com → `Notebook` tab.
  - Run CREATE_new_clustering_paramset.ipynb to configure a new parameter set.
  - Assign parameters to an `EphysSession` using CREATE_new_clustering_task.ipynb.
- The pipeline will automatically run the spike sorting process.
- Follow the download spike sorting results instructions below to retrieve results.
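Conceptually, a clustering parameter set pairs an algorithm name with a dictionary of its parameters, plus an index and description so it can be reused across sessions. A hedged sketch of assembling such a record (field names follow DataJoint Elements array-ephys conventions and the parameter keys are hypothetical Kilosort-style values; this pipeline's exact table definition may differ):

```python
def make_paramset(clustering_method, paramset_idx, description, params):
    """Assemble a clustering parameter-set record (illustrative structure)."""
    if not clustering_method:
        raise ValueError("clustering_method is required")
    return {
        "clustering_method": clustering_method,  # e.g. the spike-sorting algorithm
        "paramset_idx": paramset_idx,            # unique index for this paramset
        "paramset_desc": description,            # human-readable description
        "params": params,                        # algorithm-specific parameters
    }

# Hypothetical parameter values, for illustration only
ps = make_paramset("kilosort2.5", 0, "default thresholds",
                   {"detect_threshold": 6, "freq_min": 300})
print(ps["clustering_method"])  # kilosort2.5
```

In the actual workflow, CREATE_new_clustering_paramset.ipynb handles inserting a record like this into the database for you.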
- Go to works.datajoint.com → `Notebook` tab.
- Open EXPLORE_spike_sorting.ipynb to inspect processed ephys data.
- Go to works.datajoint.com → `Notebook` tab.
- Open EXPLORE_LFP_analysis.ipynb to inspect processed LFP data.
- Request Axon credentials from the DataJoint support team.
- Set up your local machine (if you haven't already):
- Install the pipeline code.
- Configure Axon settings (cloud upload configuration).
- Download spike sorting results via the DOWNLOAD_spike_sorted_data.ipynb notebook using either:
- Jupyter Notebook Server:
  - Open a terminal or command prompt.
  - Activate the `utah_organoids` environment with `conda activate utah_organoids`.
  - Ensure Jupyter is installed in the `utah_organoids` environment. If not, install it by running `conda install jupyter`.
  - Navigate to the `utah_organoids/notebooks` directory in the terminal.
  - Run `jupyter notebook` in the terminal, which will open the Jupyter notebook web interface.
  - Click on the notebook there (`DOWNLOAD_spike_sorted_data.ipynb`) and follow the instructions to download results.
  - Note: to execute each code cell sequentially, press `Shift + Enter` on your keyboard or click "Run".
  - Close the browser tab and stop Jupyter with `Ctrl + C` in the terminal when you are done with the download and notebook.
- Visual Studio Code (VS Code):
  - Install VS Code and the Python extension.
  - Open the `DOWNLOAD_spike_sorted_data.ipynb` notebook in VS Code.
  - Select the kernel for the notebook by clicking on the kernel name `utah_organoids` in the top right corner of the notebook.
  - Click on the "Run Cell" button of each code cell to execute the code.
  - Follow the instructions in the notebook to download spike sorting results.
Patch-clamp data must follow this folder structure:

```
patch_clamp/
    <experiment>/                # e.g. 2020-08-28
        <experiment>/            # double-nested (same name)
            <experiment>.xlsx    # Excel metadata file
            *.abf                # ABF recording files
```
Each experiment folder contains an Excel file with animal metadata (strain, age, solutions) and cell/recording information, plus ABF files from the patch-clamp rig.
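The layout above can be checked programmatically before upload. A small standard-library sketch (the function name and returned messages are ours; the validation rules simply mirror the structure described above):

```python
from pathlib import Path

def validate_experiment_dir(root, experiment):
    """Check one experiment against the patch_clamp layout; return problems found."""
    root = Path(root)
    inner = root / "patch_clamp" / experiment / experiment  # double-nested folder
    problems = []
    if not inner.is_dir():
        problems.append(f"missing double-nested folder: {inner}")
        return problems
    if not (inner / f"{experiment}.xlsx").is_file():
        problems.append("missing Excel metadata file")
    if not list(inner.glob("*.abf")):
        problems.append("no ABF recording files found")
    return problems  # empty list means the layout is valid
```

Running it over each experiment folder before the Axon upload catches misplaced Excel files or ABF recordings early.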
- Upload ABF files and Excel metadata to S3 via Axon.
- Register the experiment using CREATE_patch_clamp_experiment.ipynb.
  - This inserts entries in `EphysExperimentsForAnalysis` and `CurrentStepTimeParams`.
- Open RUN_patch_clamp.ipynb.
- Execute all cells to run the full pipeline:
- Metadata extraction: Parses Excel files to register animals, cells, and recordings.
- Feature extraction: Reads ABF files and computes AP threshold, input resistance, max firing rate, F-I curve slope, and other intrinsic properties.
- Plot generation: Creates current step traces, F-I curves, V-I curves, spike waveforms, phase planes, derivative plots, combined summary plots, and animated GIFs.
- Report population: Converts plot files to dashboard-compatible attachments.
- Prerequisites: `ffmpeg` must be installed for animated GIF/MP4 generation (`brew install ffmpeg` on macOS).
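Two of the extracted features reduce to straight-line fits: input resistance is the slope of the steady-state voltage deflection against injected current (Ohm's law, V = IR), and the F-I slope is the gain of firing rate against current. A self-contained sketch with a plain least-squares slope (illustrative only, with made-up data; not the pipeline's extraction code):

```python
def slope(x, y):
    """Least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Input resistance from subthreshold steps: I in nA, steady-state deltaV in mV
currents_nA = [-0.10, -0.05, 0.00, 0.05]
delta_v_mV = [-15.0, -7.5, 0.0, 7.5]
r_input_MOhm = slope(currents_nA, delta_v_mV)  # mV/nA is numerically MOhm
print(round(r_input_MOhm, 3))

# F-I slope (gain): firing rate in Hz vs. injected current in nA
i_steps_nA = [0.1, 0.2, 0.3, 0.4]
rates_Hz = [5.0, 15.0, 25.0, 35.0]
print(round(slope(i_steps_nA, rates_Hz), 3))
```

Features like AP threshold come from the spike waveform itself (the pipeline's derivative plots show dV/dt, where threshold is typically defined as the voltage at which dV/dt first exceeds a criterion).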
- Open EXPLORE_patch_clamp.ipynb to inspect:
- Pipeline coverage (entries per table)
- Recording metadata and AP status
- Feature statistics and strain comparisons
- Example plot visualization
- View results in the Dashboard under Images & Plots:
- Patch-Clamp Feature Plots — F-I curves, V-I curves, spike waveforms, phase planes
- Patch-Clamp Traces — Current step traces
- Patch-Clamp Spike Analysis — dV/dt and d²V/dt² derivative plots
- Patch-Clamp Combined Plots — Multi-panel summary plots
- Patch-Clamp Animated Traces — Animated GIF recordings
For help, refer to the Documentation, Troubleshooting Guide, or contact the DataJoint support team.
If your work uses DataJoint Python, DataJoint Elements, or any integrated tools within the pipeline, please cite the respective manuscripts and Research Resource Identifiers (RRIDs).
Yatsenko D, Reimer J, Ecker AS, Walker EY, Sinz F, Berens P, Hoenselaar A, Cotton RJ, Siapas AS, Tolias AS.
DataJoint: managing big scientific data using MATLAB or Python. bioRxiv. 2015 Jan 1:031658.
DOI: 10.1101/031658
Resource Identification (RRID): SCR_014543
Yatsenko D, Walker EY, Tolias AS.
DataJoint: a simpler relational data model. arXiv:1807.11104. 2018 Jul 29.
DOI: 10.48550/arXiv.1807.11104
Resource Identification (RRID): SCR_014543
Yatsenko D, Nguyen T, Shen S, Gunalan K, Turner CA, Guzman R, Sasaki M, Sitonic D, Reimer J, Walker EY, Tolias AS.
DataJoint Elements: Data Workflows for Neurophysiology. bioRxiv. 2021 Jan 1.
DOI: 10.1101/2021.03.30.437358
Resource Identification (RRID): SCR_021894
- If your work uses SpikeInterface, please cite the respective manuscript.
- For other integrated tools within the pipeline, cite their respective manuscripts and RRIDs.