aind-behavior-camstim-pipeline

The aind-behavior-camstim-pipeline processes camstim data from an input ephys or ophys asset. Data from the h5, sync, and pkl files are added to the NWB file as stimulus tables, running speed tables, and lick/behavior tables.

The pipeline runs on Nextflow and contains the following capsules:

  • aind-stimulus-packaging-nwb-capsule: Each row of the input CSV stimulus table, which records the start and stop time of every stimulus presentation, is added to a stimulus table within the NWB file (see the sketch after this list).
  • aind-running-speed-nwb-capsule: Data from the sync file are used to create raw and processed running speed tables, which are appended to the NWB file.
  • aind-licks-rewards-capsule: Data from the pkl and sync files are used to create a licks/rewards table in the NWB file.
  • aind-ophys-camstim-behavior-qc: Raw data from the sync and pkl files are used to create various plots for quality control purposes. These are combined into metrics/evaluations for display on the QC portal.
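As a rough illustration of the first capsule's role, the sketch below uses pynwb to build a stimulus presentation table from a CSV of start/stop times and write it into an NWB file. This is not the capsule's actual code; the file name and column names (start_time, stop_time, stimulus_name) are assumptions.

# Illustrative sketch only, not the capsule's implementation.
import csv
from datetime import datetime, timezone

from pynwb import NWBFile, NWBHDF5IO
from pynwb.epoch import TimeIntervals

nwbfile = NWBFile(
    session_description="camstim behavior sketch",
    identifier="example-session",
    session_start_time=datetime.now(timezone.utc),
)

# One row per stimulus presentation, with start/stop times from the CSV.
stim_table = TimeIntervals(
    name="stimulus_presentations",
    description="stimulus presentations from the input CSV",
)
stim_table.add_column(name="stimulus_name", description="name of the presented stimulus")

with open("stim_table.csv") as f:  # assumed CSV layout
    for row in csv.DictReader(f):
        stim_table.add_interval(
            start_time=float(row["start_time"]),
            stop_time=float(row["stop_time"]),
            stimulus_name=row["stimulus_name"],
        )

nwbfile.add_time_intervals(stim_table)

with NWBHDF5IO("example.nwb", "w") as io:
    io.write(nwbfile)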

Input

Currently, the pipeline supports the following input data types:

  • aind: the data ingestion format used at AIND. The input folder must contain a subdirectory called behavior (for planar-ophys) which contains a CSV file with the stimulus table. The root directory must contain JSON files following aind-data-schema. A sketch of locating these inputs follows the directory tree below.
📦data
 ┣ 📂single-plane-ophys_MouseID_YYYY-MM-DD_HH-M-S
 ┃ ┣ 📂pophys
 ┣ 📜data_description.json
 ┣ 📜session.json
 ┗ 📜processing.json
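
The sketch below shows one way to locate these inputs, assuming the layout above: a session folder with a behavior subdirectory holding the stimulus CSV, and aind-data-schema JSON files at the data root. All paths and glob patterns are illustrative assumptions, not a fixed contract.

# Minimal sketch of finding the expected inputs under /data.
import json
from pathlib import Path

data_dir = Path("/data")

# Root-level metadata files following aind-data-schema.
with open(data_dir / "session.json") as f:
    session_meta = json.load(f)
with open(data_dir / "data_description.json") as f:
    data_description = json.load(f)

# Session folder, e.g. single-plane-ophys_MouseID_YYYY-MM-DD_HH-M-S.
session_dir = next(data_dir.glob("single-plane-ophys_*"))

# Stimulus table CSV expected under the behavior subdirectory (planar-ophys).
stim_csv = next((session_dir / "behavior").glob("*.csv"))
print(f"stimulus table: {stim_csv}")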

The pipeline populates the NWB file with structured behavior data derived from 2-photon photostimulation or electrophysiological experiments.

Output

The Python libraries used to read the files are h5py, json, and csv, as in the sketch below.
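For illustration, here is a brief sketch of reading each file type with those libraries. The file names are placeholders, not guaranteed paths.

import csv
import json

import h5py

# Sync data stored in HDF5.
with h5py.File("sync.h5", "r") as sync:
    print(list(sync.keys()))  # top-level groups/datasets

# aind-data-schema metadata.
with open("session.json") as f:
    session_meta = json.load(f)

# Stimulus table rows.
with open("stim_table.csv") as f:
    rows = list(csv.DictReader(f))
print(f"{len(rows)} stimulus presentations")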

  • aind: The pipeline outputs are saved under the results top-level folder, along with JSON files following aind-data-schema. Each field of view (plane) runs as a parallel process from motion correction to event detection. The first subdirectory under results is named according to the Allen Institute for Neural Dynamics standard for derived assets. Below that folder, the output NWB file is stored with the format {Modality}_MouseID_YYYY-MM-DD_HH-M-S-processed_YYYY-MM-DD_HH-M-S.nwb. Additionally, plots are generated by the aind-ophys-camstim-behavior-qc capsule; the plot files are listed in the results tree below. The quality_control.json file contains Evaluations for each of these plots in a view that can be used for QC purposes. A sketch of reading these outputs follows the tree.
📦results
 ┣ 📂single-plane-ophys_MouseID_YYYY-MM-DD_HH-M-S
 ┃ ┣ 📂single-plane-ophys_MouseID_YYYY-MM-DD_HH-M-S-processed_YYYY-MM-DD_HH-M-S.nwb
 ┃ ┣ 📜behavior_monitoring_timing_plot.png
 ┃ ┣ 📜eye_tracking_timing_plot.png
 ┃ ┣ 📜face_tracking_timing_plot.png
 ┃ ┣ 📜metrics.json
 ┃ ┣ 📂output
 ┃ ┣ 📜photodiode_combined_plot.png
 ┃ ┣ 📜photodiode_end_time.png
 ┃ ┣ 📜photodiode_start_time.png
 ┃ ┣ 📜photodiode_timing_plot.png
 ┃ ┣ 📜physio_timing_plot.png
 ┃ ┣ 📜procedures.json
 ┃ ┣ 📜processing.json
 ┃ ┣ 📜quality_control.json
 ┃ ┣ 📜rig.json
 ┃ ┣ 📜session.json
 ┃ ┣ 📜subject.json
 ┃ ┣ 📜visual_stim_timing_plot.png
 ┃ ┗ 📜wheel_combined_plot.png
 ┗ 📜processing.json
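
Once the pipeline has run, the outputs can be inspected as sketched below: open the processed NWB file with pynwb (assuming it is HDF5-backed) and read the QC evaluations from quality_control.json. The exact table and field names are assumptions.

# Illustrative sketch of inspecting the pipeline outputs under /results.
import json
from pathlib import Path

from pynwb import NWBHDF5IO

results = Path("/results")
nwb_path = next(results.rglob("*-processed_*.nwb"))

with NWBHDF5IO(str(nwb_path), "r") as io:
    nwbfile = io.read()
    print("interval tables:", list(nwbfile.intervals))      # e.g. stimulus presentations
    print("processing modules:", list(nwbfile.processing))  # e.g. running speed, licks/rewards

qc_path = next(results.rglob("quality_control.json"))
with open(qc_path) as f:
    qc = json.load(f)
print("QC evaluations:", [e.get("name") for e in qc.get("evaluations", [])])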
 

Run

aind: the pipeline runs as a Code Ocean pipeline. If a user has credentials for the AIND Code Ocean deployment, the pipeline can be run using the Code Ocean API.

The example below is derived from the example on the Code Ocean API GitHub repository.

import os

from codeocean import CodeOcean
from codeocean.computation import RunParams
from codeocean.data_asset import (
    DataAssetParams,
    DataAssetsRunParam,
    PipelineProcessParams,
    Source,
    ComputationSource,
    Target,
    AWSS3Target,
)

# Create the client using your domain and API token.

client = CodeOcean(domain=os.environ["CODEOCEAN_URL"], token=os.environ["API_TOKEN"])

# Run a pipeline with ordered parameters.

run_params = RunParams(
    pipeline_id=os.environ["PIPELINE_ID"],
    data_assets=[
        DataAssetsRunParam(
            id="6a980dad-f508-4d81-b879-6c7bb94935a9",
            mount="Reference",
        ),
    ],
)

computation = client.computations.run_capsule(run_params)

# Wait for pipeline to finish.

computation = client.computations.wait_until_completed(computation)

# Create an external (S3) data asset from computation results.

data_asset_params = DataAssetParams(
    name="My External Result",
    description="Computation result",
    mount="my-result",
    tags=["my", "external", "result"],
    source=Source(
        computation=ComputationSource(
            id=computation.id,
        ),
    ),
    target=Target(
        aws=AWSS3Target(
            bucket=os.environ["EXTERNAL_S3_BUCKET"],
            prefix=os.environ.get("EXTERNAL_S3_BUCKET_PREFIX"),
        ),
    ),
)

data_asset = client.data_assets.create_data_asset(data_asset_params)

data_asset = client.data_assets.wait_until_ready(data_asset)
