# Replay Spatio-Temporal Dynamics Analysis

This repo contains the code for Krause and Drugowitsch (2022). "A large majority of
awake hippocampal sharp-wave ripples feature spatial trajectories with momentum". Neuron.

This repo uses [Poetry](https://python-poetry.org/docs/) to manage Python dependencies.

## Code file structure

`replay_structure` - Python modules containing the analysis code

`scripts` - Python CLI scripts for running the analyses

`notebooks` - Jupyter notebooks for producing the figures in the paper

## High-level overview of analysis pipeline

This code was written for analyzing the dataset from Pfeiffer and Foster (2013, 2015).
Please contact Pfeiffer and Foster for the dataset.

The code is split into modules that house the analysis code (stored in
`replay_structure/`) and command line interfaces for running the analyses (stored in
`scripts/`). The analysis pipeline can roughly be broken down into three parts:
1. Data preprocessing
2. Dynamics model analysis
3. Behavioral analysis

Below, I briefly describe how the files are interrelated, to help interpret the
structure of the code in this repo. I describe each step in terms of the CLI script
used to run it, referencing the modules implicated in that step.

*Note:* The "dynamics" models described in Krause and Drugowitsch (2022) are referred to
in the code as "structure" models. Additionally, the "Gaussian" model is referred to as
the "Stationary Gaussian" model in the code.

### Data preprocessing

The Pfeiffer and Foster (2013, 2015) dataset is initially loaded and reformatted by
running the CLI `preproccess_ratday_data.py` (imports the module `ratday.py`).
From this initial preprocessing stage, neural data is extracted from SWRs (sharp-wave
ripples), HSEs (high-synchrony events), or run snippets by running
`preprocess_spikemat_data.py` (which imports the modules `ripple_preprocessing.py`,
`highsynchronyevents.py`, or `run_snippet_preprocessing.py`, respectively, for each
type of data preprocessing).
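
As a rough illustration of this kind of event preprocessing (a sketch only, not the
repo's actual code; the function name, arguments, and 15 ms bin size are assumptions),
binning each cell's spike times within an event window yields a spike-count matrix:

```python
import numpy as np

def bin_event_spikes(spike_times_per_cell, event_start, event_stop, bin_size_s=0.015):
    """Bin each cell's spike times within [event_start, event_stop) into counts.

    Returns an array of shape (n_cells, n_time_bins). Hypothetical helper for
    illustration, not the repo's preprocessing code.
    """
    edges = np.arange(event_start, event_stop + bin_size_s, bin_size_s)
    return np.stack(
        [np.histogram(spikes, bins=edges)[0] for spikes in spike_times_per_cell]
    )

# Example: two cells, one 100 ms event binned at 15 ms -> shape (2, 7)
spikemat = bin_event_spikes(
    [np.array([0.01, 0.02, 0.08]), np.array([0.05])], event_start=0.0, event_stop=0.1
)
```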

Simulated neural data is generated using `generate_model_recovery_data.py` (imports
`simulated_trajectories.py`, `simulated_neural_data.py`, and `model_recovery.py`).
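
The general recipe for model-recovery data is to simulate a trajectory under a known
dynamics model and then draw spikes from place-field firing rates. The sketch below
shows that recipe in its simplest form; the track size, tuning width, peak rate, and
bin sizes are illustrative assumptions, not the values used in the repo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings: 50 position bins, 30 cells, 20 time bins of 15 ms
n_pos, n_cells, n_bins, dt = 50, 30, 20, 0.015

# Diffusion-like trajectory over position bins (random-walk steps, clipped to the track)
steps = rng.normal(0, 2, size=n_bins)
traj = np.clip(np.round(np.cumsum(steps)).astype(int) + n_pos // 2, 0, n_pos - 1)

# Gaussian place fields: 20 Hz peak rate, tuning width of 3 position bins
centers = np.linspace(0, n_pos - 1, n_cells)
positions = np.arange(n_pos)
rates = 20.0 * np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / 3.0) ** 2)

# Poisson spike counts for each cell in each time bin, shape (n_cells, n_bins)
spikemat = rng.poisson(rates[:, traj] * dt)
```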

Running `reformat_data_for_structure_analysis.py` (imports `structure_analysis_input.py`)
then puts the preprocessed data (SWRs, HSEs, run snippets, and simulated SWRs) into a
consistent format for feeding into the dynamics models.

### Dynamics model analysis

#### Dynamics models

The dynamics models are implemented in the `structure_models.py` module. The
Diffusion and Momentum models both utilize the forward-backward algorithm (Bishop, 2006),
which is implemented in `forward_backward.py`.
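
For orientation, a generic scaled forward-backward pass over a discrete latent state
(e.g. a position bin) looks like the sketch below. This is the textbook algorithm, not
a copy of `forward_backward.py`; the interface (emission log-likelihoods, transition
matrix, initial distribution) is an assumption for illustration.

```python
import numpy as np

def forward_backward(log_lik, trans, init):
    """Scaled forward-backward pass for a discrete-state model.

    log_lik: (T, K) log emission likelihoods, trans: (K, K) transition matrix with
    trans[i, j] = p(z_t = j | z_{t-1} = i), init: (K,) initial state distribution.
    Returns the posterior marginals (T, K) and the log marginal likelihood.
    """
    T, K = log_lik.shape
    lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))  # rescaled likelihoods
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    scale = np.zeros(T)

    # Forward pass with per-step normalization to avoid underflow
    alpha[0] = init * lik[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * lik[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass using the same scaling factors
    for t in range(T - 2, -1, -1):
        beta[t] = (trans @ (lik[t + 1] * beta[t + 1])) / scale[t + 1]

    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    log_evidence = np.log(scale).sum() + log_lik.max(axis=1).sum()
    return gamma, log_evidence
```

In models of this form, the same pass yields both the per-time-bin posterior marginals
over the latent state and the log marginal likelihood (model evidence) used later for
model comparison.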

The dynamics models without free parameters, and hence without the need for a parameter
gridsearch (the Stationary and Random models), are run using the CLI `run_model.py`.
For the dynamics models with parameters (the Diffusion, Momentum, and Gaussian
models), we used the Harvard Medical School computing cluster (called o2) to parallelize
running the same model many times across a parameter gridsearch (code is stored in
`scripts/o2/`). The module `structure_models_gridsearch.py` houses the code for
running the dynamics models across a parameter gridsearch.
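
Schematically, a gridsearch of this kind evaluates one model at each grid point and
combines the results per event. The helper below is a hypothetical sketch (the function
names and the choice to keep the best grid point are assumptions, not the repo's
procedure):

```python
import numpy as np

def gridsearch_log_evidence(event_data, model_log_evidence, param_grid):
    """Evaluate a one-parameter model over a grid of parameter values for one event.

    `model_log_evidence(event_data, param)` is assumed to return the model's log
    evidence at that parameter value. Returns the best parameter, its log evidence,
    and the full grid of log evidences.
    """
    log_evidences = np.array([model_log_evidence(event_data, p) for p in param_grid])
    best = int(np.argmax(log_evidences))
    return param_grid[best], log_evidences[best], log_evidences
```

On a cluster, each (event, parameter) pair can be submitted as an independent job and
the resulting log evidences collected afterwards; alternatively, the grid of log
evidences can be approximately marginalized (e.g. via logsumexp with a prior over the
grid) rather than maximized.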

The CLI `run_deviance_explained.py` (imports `deviance_models.py`) calculates the
deviance explained of each model for each SWR.
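
As background, deviance explained is commonly defined relative to a null model and a
saturated model; a generic version of that calculation (not necessarily the exact
convention in `deviance_models.py`) is:

```python
def deviance_explained(ll_model, ll_null, ll_saturated):
    """Fraction of the null model's deviance accounted for by the model.

    The deviance of a model is 2 * (ll_saturated - ll_model), so this returns 1 for
    a saturated fit and 0 for a fit no better than the null model.
    """
    deviance_model = 2.0 * (ll_saturated - ll_model)
    deviance_null = 2.0 * (ll_saturated - ll_null)
    return 1.0 - deviance_model / deviance_null
```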

For visualization purposes, the CLI `get_marginals.py` (imports `marginals.py`)
calculates the position marginals for each dynamics model.

#### Model comparison

Running the dynamics models calculates the model evidence of each model for each SWR.
Model comparison across the dynamics models is run using the CLI `run_model_comparion.py`
(imports `model_comparison.py`).
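
As generic background (not the specific procedure in `model_comparison.py`), per-event
log evidences can be turned into posterior model probabilities with a softmax, assuming
an equal prior probability for each model:

```python
import numpy as np

def model_posterior(log_evidences):
    """Posterior probability of each model from its log evidence, uniform model prior."""
    log_evidences = np.asarray(log_evidences, dtype=float)
    shifted = log_evidences - log_evidences.max()  # subtract max for numerical stability
    weights = np.exp(shifted)
    return weights / weights.sum()

# Example: three models whose log evidences differ by a few nats
print(model_posterior([-1520.3, -1512.7, -1518.9]))
```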

#### Trajectory decoding

In addition to identifying the spatio-temporal dynamics of SWRs, we also decoded the
most likely trajectory within each SWR. Trajectories are extracted by running
`get_trajectories.py` (imports `structure_trajectories.py`). We use the Viterbi
algorithm (Bishop, 2006) for decoding, which is implemented in `viterbi_algorithm.py`.
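
For reference, a generic log-space Viterbi decoder over discrete latent states (e.g.
position bins) is sketched below; this is the textbook algorithm, not a copy of
`viterbi_algorithm.py`, and its interface is an assumption for illustration.

```python
import numpy as np

def viterbi(log_lik, log_trans, log_init):
    """Most likely latent state sequence for a discrete-state model.

    log_lik: (T, K) log emission likelihoods, log_trans: (K, K) log transition
    probabilities, log_init: (K,) log initial distribution. Returns T state indices,
    e.g. the position bins along the decoded trajectory.
    """
    T, K = log_lik.shape
    delta = np.zeros((T, K))
    backpointer = np.zeros((T, K), dtype=int)

    delta[0] = log_init + log_lik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # scores[i, j]: prev state i -> j
        backpointer[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_lik[t]

    # Backtrack from the best final state
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backpointer[t + 1, path[t + 1]]
    return path
```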

### Behavioral analysis

The behavioral analyses described in Figures 6 and 7 are run using
`get_descriptive_stats.py` (imports `descriptive_stats.py`) and
`run_predictive_analysis.py` (imports `predictive_analysis.py`), respectively.

### Other

#### Configuration files

The parameters used for running the data preprocessing and dynamics models are
stored in `config.py`.
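
Purely as an illustration of the kind of parameters such a file centralizes (every name
and value below is invented, not taken from `config.py`):

```python
# Hypothetical example of centralized analysis parameters; the actual names and
# values live in config.py and will differ.
PREPROCESSING_PARAMS = {
    "time_bin_ms": 15,            # temporal resolution for binning spikes within events
    "spatial_bin_cm": 5,          # spatial resolution of the position grid
    "min_event_duration_ms": 50,  # minimum event length to include in the analysis
}
```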

The custom Python types for the data type (e.g. ripples, run_snippets),
session type (e.g. real recording sessions, simulated sessions),
dynamics model type (e.g. diffusion, stationary, etc.), and
likelihood type (e.g. poisson, neg_binomial) are defined in `metadata.py`.
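
A minimal sketch of how such types can be expressed as enums (the class and member
names below are illustrative, not necessarily those defined in `metadata.py`):

```python
from enum import Enum

class DataType(Enum):
    RIPPLES = "ripples"
    RUN_SNIPPETS = "run_snippets"

class Likelihood(Enum):
    POISSON = "poisson"
    NEG_BINOMIAL = "neg_binomial"

# Enums catch typos that raw strings would silently accept:
assert DataType("ripples") is DataType.RIPPLES
```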

#### Implementations of previous methods

The comparison of our dynamics models to the method of Stella et al. (2019), shown in
Figure 5, is run using `run_diffusion_constant.py` (imports `diffusion_constant.py`).
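
As generic background on this style of analysis (not necessarily the procedure of
Stella et al. (2019) or of `diffusion_constant.py`), a diffusion constant can be
estimated from a decoded 2D trajectory by how its mean squared displacement grows with
time lag:

```python
import numpy as np

def estimate_diffusion_constant(positions, dt, max_lag=5):
    """Estimate D from a decoded 2D trajectory via MSD(lag) ~ 4 * D * lag * dt.

    positions: (T, 2) decoded positions, dt: time-bin size in seconds.
    Generic estimator for illustration only.
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.array([
        np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=1))
        for lag in lags
    ])
    taus = lags * dt
    slope = (taus @ msd) / (taus @ taus)  # least-squares fit through the origin
    return slope / 4.0                    # MSD slope = 2 * dimensionality * D
```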

We also implemented the decoding method from Pfeiffer and Foster (2013) for
visualization purposes, which is run using `run_pf_analysis.py`
(imports `pf_analysis.py`).
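
For orientation, per-time-bin Bayesian decoding with independent Poisson place-field
likelihoods and a flat spatial prior (the standard approach this family of methods
builds on; not a copy of `pf_analysis.py`) looks roughly like:

```python
import numpy as np

def decode_posteriors(spikemat, place_fields, dt):
    """Per-time-bin posterior over position bins under independent Poisson spiking.

    spikemat: (n_cells, T) spike counts, place_fields: (n_cells, n_pos) firing rates
    in Hz, dt: time-bin size in seconds. Assumes a flat prior over position.
    """
    expected = place_fields * dt                              # expected counts per bin
    log_post = spikemat.T @ np.log(expected + 1e-12) - expected.sum(axis=0)
    log_post -= log_post.max(axis=1, keepdims=True)           # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)             # shape (T, n_pos)
```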