
Commit 4ff3d8d

Initial commit
1 parent 151f8a8 commit 4ff3d8d

10 files changed: +4665 -1 lines changed

LICENSE

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 BSD 3-Clause License

-Copyright (c) 2024, Jan Drugowitsch
+Copyright (c) 2022-2024, Albert Chen, Anna Kutschireiter and Jan Drugowitsch

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:

README.md

Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
# Code for Multimodal Head Direction Cue Integration

This repository contains the code for network simulations and figure generation in

Melanie A. Basnak, Anna Kutschireiter, Tatsuo S. Okubo, Albert Chen, Pavel Gorelik, Jan Drugowitsch, and Rachel I. Wilson (2024). Multimodal cue integration and learning in a neural representation of head direction. *Nature Neuroscience*.

Link to simulated data files on Figshare: https://doi.org/10.6084/m9.figshare.26510239
## Installation: conda environment set-up instructions

This code uses Python 3.9.16. A list of dependencies can be found in `environment.yml`. To recreate the conda environment, run

    conda env create -f environment.yml

To activate this conda environment, run

    conda activate venv-basnak-etal-2022
## Overview of repository

The repository contains the Python script `ra_network.py` and the following six Jupyter notebooks:

* `fig1_ER_to_EPG_weights.ipynb` generates Figure 1c
* `fig2_changing_cue_intensity.ipynb` generates Figure 2h,i
* `fig3_individual_variation_in_weights.ipynb` generates Figure 3d,f
* `fig4_cue_shift.ipynb` generates Figure 4f,g
* `fig5_cue_combination.ipynb` generates Figure 5f,g
* `fig7_inverted_gain_learning.ipynb` generates Figure 7g,h

The file `ra_network.py` contains `class RingAttractorNetwork` for simulating the ring attractor network, including class methods and functions for analyzing its properties and dynamics, as well as helper functions for generating ground truth angular velocity and head direction trajectories and inputs to the ring attractor network. Additional functions specific to the simulations and analyses for individual figures can be found in the corresponding notebooks. With the exception of Figure 3d,f, `ra_network.py` is used in all of the notebooks.
At the top of each notebook (in the second cell), specify whether you would like to save your simulated data and/or figure(s) locally, the local directory to which they should be saved, and your file names.
## Load data on file vs. simulate new data

For Figures 1c, 2h, 4f,g, 5f, and 7g, you have the option to either load existing data or simulate new data on which further analyses will be performed. Choose your option by setting `have_data_file` at the top of the notebook (in the second cell) to either `True` (load existing data) or `False` (simulate new data). By default, new data will be simulated from scratch, i.e. `have_data_file = False`.

If you plan to use existing data rather than simulate data from scratch, be sure to download the appropriate data file(s) from [Figshare](https://doi.org/10.6084/m9.figshare.26510239 "Basnak et al., network simulation data"):

* `data-ER-EPG-weights_gamma-decay=17_2022-211-30_02.npz` contains data for Figure 1c
* `data-varying-cue-intensity_2022-11-30.npz` contains data for Figures 2h and 3d,f
* `data-cue-shift_2022-12-07.npz` contains data for Figure 4f,g
* `data-cue-combination_2022-11-30.npz` contains data for Figure 5f
* `data-inverted-gain_2022-11-30.npz` contains data for Figure 7g

Save the data file(s) to your local directory. In each notebook where you want to use existing data, set `have_data_file = True` and set `data_path` to the local directory where you saved the data file.

Note that Figure 3d,f resamples the simulated data for Figure 2h. Therefore, set `data_path` in `fig3_individual_variation_in_weights.ipynb` to the local directory where the simulated data for Figure 2h (whether new data or `data-varying-cue-intensity_2022-11-30.npz`) is saved.
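The toggle follows roughly the pattern sketched below (a minimal sketch, not the notebooks' exact code; the chosen file name is just one of the files listed above, and the placeholder array key is hypothetical):

```python
# Minimal sketch of the load-vs-simulate toggle (not the notebooks' exact code)
import os
import numpy as np

have_data_file = False   # True: load existing data; False: simulate new data from scratch
data_path = "./data"     # local directory where the downloaded .npz file(s) are saved
data_file = "data-cue-combination_2022-11-30.npz"

if have_data_file:
    data = np.load(os.path.join(data_path, data_file))
    print("arrays stored in the file:", data.files)
else:
    # Placeholder for the notebook's simulation cells; the key name is hypothetical
    data = {"example_array": np.zeros((16, 16))}
```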
## Details for individual code files

For each code file below, we refer you to the relevant section in the paper and provide any additional usage information and details about numerical implementation. Note that all lengths of time mentioned below are in simulation time rather than experiment time (24s of experiment time corresponds to 1s of simulation time).
### `ra_network.py`

Relevant section in the paper: Methods - Network model

Note that we numerically implemented the learning rule described in the paper using a different parametrization. Recall that the learning rule is described as follows in the paper:
$$\frac{dW_{k,nm}}{dt} = \eta|v(t)|f_{n}\Bigl(w_\textrm{max} - W_{k,nm}(t)\Bigr) - \eta|v(t)|f_{n}w_\textrm{max}\frac{g_{k,m}(t)}{g_0}$$
where the first term represents non-associative LTP, and the second term represents associative LTD. We re-expressed this learning rule as
$$\frac{dW_{k,nm}}{dt} = \eta|v(t)|\Bigl(-\gamma_\textrm{Hebb}f_ng_{k,m}(t) + \gamma_\textrm{postboost}f_n - \gamma_\textrm{decay}W_{k,nm}(t)f_n\Bigr)$$
where the first term represents inhibitory Hebbian plasticity, the second term represents a post-synaptically gated activity boost, and the third term represents post-synaptically gated weight decay. The re-parametrization is given by $w_\textrm{max} = \frac{\gamma_\textrm{postboost}}{\gamma_\textrm{decay}}, g_0 = \frac{\gamma_\textrm{decay}}{\gamma_\textrm{Hebb}}$ and absorbing $\frac{1}{\gamma_\textrm{decay}}$ into the learning rate $\eta$. The learning rate in the re-expressed learning rule is $\eta = 0.02$; in the simulations, however, we set `eta = 5e-5` because `eta` implicitly includes the step size `dt = 0.0025`, i.e. $\eta$ $\times$ `dt` $=$ `eta`. The learning rule parameters $\gamma_\textrm{Hebb}, \gamma_\textrm{postboost}, \gamma_\textrm{decay}$ correspond to the variables `gamma_Hebb`, `gamma_postboost`, `gamma_decay`, respectively, in the code files. We set `gamma_Hebb = 1`, `gamma_postboost = 1`, `gamma_decay = 17`.
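For intuition, here is a minimal sketch of a single Euler step of the re-expressed learning rule for one cue modality (not the implementation in `ra_network.py`; the array sizes and activity values are hypothetical, while `eta`, `gamma_Hebb`, `gamma_postboost`, and `gamma_decay` take the values given above):

```python
import numpy as np

# Hypothetical sizes and activities for one cue modality
n_epg, n_er = 16, 16
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 0.05, size=(n_epg, n_er))  # ER -> EPG weights W[n, m]
f = rng.uniform(0.0, 1.0, size=n_epg)           # postsynaptic EPG activities f_n
g = rng.uniform(0.0, 1.0, size=n_er)            # presynaptic ER activities g_m
v = 0.5                                         # ground truth angular velocity at this step

# Parameters as given above; eta already includes the step size dt = 0.0025
eta = 5e-5
gamma_Hebb, gamma_postboost, gamma_decay = 1.0, 1.0, 17.0

# One Euler step of
# dW_nm/dt = eta * |v| * (-gamma_Hebb * f_n * g_m + gamma_postboost * f_n - gamma_decay * W_nm * f_n)
dW = eta * abs(v) * (
    -gamma_Hebb * np.outer(f, g)
    + gamma_postboost * f[:, None]
    - gamma_decay * W * f[:, None]
)
W = W + dW
```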
### `fig1_ER_to_EPG_weights.ipynb`

This notebook simulates a ring attractor network with two external inputs (visual + wind) that are aligned and equal in strength for 240s, and visualizes the final visual ER $\rightarrow$ EPG weights and wind ER $\rightarrow$ EPG weights as heat maps.
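The heat-map visualization itself can be reproduced along these lines (a minimal sketch using random placeholder weight matrices rather than the simulated network's final weights):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder weight matrices; the notebook uses the simulated network's final weights
rng = np.random.default_rng(0)
W_visual = rng.uniform(0.0, 0.06, size=(16, 16))  # visual ER -> EPG weights
W_wind = rng.uniform(0.0, 0.06, size=(16, 16))    # wind ER -> EPG weights

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, W, title in zip(axes, (W_visual, W_wind), ("visual ER to EPG", "wind ER to EPG")):
    im = ax.imshow(W, aspect="auto")
    ax.set_title(title)
    ax.set_xlabel("ER neuron")
    ax.set_ylabel("EPG neuron")
    fig.colorbar(im, ax=ax)
plt.show()
```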
### `fig2_changing_cue_intensity.ipynb`

Relevant section in the paper: "Impact of cue intensity on bump parameters, and across-individual variability (Figs. 2h,i & 3d,f)" under Methods - Network model

For Figure 2i, we simulated a ring attractor network with a single external input with zero, low, or high cue intensity for 150s each. The network was set to the same initial state for each cue intensity and the external inputs were identical (i.e. identical ground truth angular velocity and head direction trajectories) except for the cue intensity. We visualized the final ER activity, EPG activity, and ER $\rightarrow$ EPG weights.
### `fig3_individual_variation_in_weights.ipynb`

Relevant section in the paper: "Impact of cue intensity on bump parameters, and across-individual variability (Figs. 2h,i & 3d,f)" under Methods - Network model

For Figure 3d,f, there is the option to run a subsampling analysis, in which we subsampled the data underlying Figure 3d,f such that the size of each subsample matches the number of flies in the experiment (`n_flies = 15`). We created `n_subsamples = 1e8` subsamples and, for each subsample, we regressed bump width or amplitude onto HD encoding accuracy. We then calculated the average p-value across all subsamples and the fraction of p-values that are significant (with the alpha level set at 0.05). To rerun this subsampling analysis, set `run_subsampling_analysis = True` in the notebook. By default, this subsampling analysis will not be run, i.e. `run_subsampling_analysis = False`.
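The core of that analysis follows this pattern (a minimal sketch on synthetic stand-in data, with far fewer subsamples than the `1e8` used in the notebook; apart from `n_flies` and the alpha level, all names and values here are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-simulation summary statistics
# (the notebook uses the simulated data from Figure 2h)
n_sims = 500
hd_accuracy = rng.uniform(0.0, 1.0, size=n_sims)
bump_width = 2.0 - hd_accuracy + rng.normal(0.0, 0.3, size=n_sims)

n_flies = 15            # subsample size matches the number of flies in the experiment
n_subsamples = 1000     # the notebook uses 1e8; reduced here for illustration
alpha = 0.05

p_values = np.empty(n_subsamples)
for i in range(n_subsamples):
    idx = rng.choice(n_sims, size=n_flies, replace=False)
    # Regress bump width onto HD encoding accuracy within the subsample
    result = stats.linregress(hd_accuracy[idx], bump_width[idx])
    p_values[i] = result.pvalue

print("mean p-value:", p_values.mean())
print("fraction significant:", np.mean(p_values < alpha))
```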
### `fig4_cue_shift.ipynb`

Relevant section in the paper: "Impact of individual variations in experienced cue intensity on cue weighting (Fig. 4f,g)" under Methods - Network model

In each simulation in this notebook, the cue of one modality is presented before the cue of the other modality; the ER amplitude for one modality is fixed while the ER amplitude for the other modality is varied. This creates four conditions:

1. Visual cue is presented first; visual ER amplitude is fixed
2. Visual cue is presented first; wind ER amplitude is fixed
3. Wind cue is presented first; visual ER amplitude is fixed
4. Wind cue is presented first; wind ER amplitude is fixed

Therefore, the total number of simulations `n_sims` has to be a multiple of four for the conditions to be balanced across simulations. This is achieved by presenting the cue with the fixed ER amplitude first in the first half of all simulations *and* assigning the varied ER amplitude to the wind cue in the simulations with even indices, as sketched below.
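A minimal sketch of how that assignment rule yields a balanced design (a hypothetical helper, not the notebook's actual code):

```python
from collections import Counter

def assign_conditions(n_sims):
    """Hypothetical helper: assign first-presented and fixed-amplitude cues per simulation."""
    assert n_sims % 4 == 0, "n_sims must be a multiple of four to balance the conditions"
    conditions = []
    for i in range(n_sims):
        varied_cue = "wind" if i % 2 == 0 else "visual"     # even indices: wind amplitude varied
        fixed_cue = "visual" if varied_cue == "wind" else "wind"
        # First half of the simulations: the fixed-amplitude cue is presented first
        first_cue = fixed_cue if i < n_sims // 2 else varied_cue
        conditions.append((first_cue, fixed_cue))
    return conditions

# Each (first cue, fixed cue) combination occurs n_sims / 4 times
print(Counter(assign_conditions(8)))
```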
### `fig5_cue_combination.ipynb`

Relevant section in the paper: "Temporal evolution of bump parameters during cue combination (Fig. 5f,g)" under Methods - Network model
### `fig7_inverted_gain_learning.ipynb`

Relevant section in the paper: "Impact of variation in ER amplitude on remapping during inverted gain (Fig. 7g,h)" under Methods - Network model

For Figure 7g, either the normalized or the unnormalized HD encoding accuracy across time is visualized in the top panel. The normalized HD encoding accuracy is obtained by dividing the unnormalized HD encoding accuracy by the steady-state HD encoding accuracy. The steady-state HD encoding accuracy is the mean HD encoding accuracy during normal gain. By default, the normalized HD encoding accuracy is plotted, i.e. `plot_normalized_HD_encoding_accuracy = True`. To plot the unnormalized HD encoding accuracy, set `plot_normalized_HD_encoding_accuracy = False`.
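The normalization step amounts to the following (a minimal sketch on a synthetic accuracy trace; the time axis, gain schedule, and accuracy values are hypothetical, and only `plot_normalized_HD_encoding_accuracy` is taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HD encoding accuracy trace: normal gain first, then inverted gain
t = np.arange(0.0, 100.0, 0.1)                  # simulation time (s)
normal_gain = t < 40.0                          # hypothetical normal-gain period
hd_encoding_accuracy = np.where(normal_gain, 0.8, 0.3) + rng.normal(0.0, 0.02, t.size)

plot_normalized_HD_encoding_accuracy = True     # notebook default

# Steady-state accuracy = mean accuracy during normal gain
steady_state_accuracy = hd_encoding_accuracy[normal_gain].mean()
if plot_normalized_HD_encoding_accuracy:
    trace_to_plot = hd_encoding_accuracy / steady_state_accuracy
else:
    trace_to_plot = hd_encoding_accuracy
```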

environment.yml

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
name: venv-basnak-etal-2022
channels:
  - conda-forge
  - defaults
dependencies:
  - appdirs=1.4.4=pyhd3eb1b0_0
  - appnope=0.1.3=pyhd8ed1ab_0
  - asttokens=2.2.1=pyhd8ed1ab_0
  - backcall=0.2.0=pyh9f0ad1d_0
  - backports=1.0=pyhd8ed1ab_3
  - backports.functools_lru_cache=1.6.4=pyhd8ed1ab_0
  - blas=1.0=openblas
  - brotli=1.0.9=h1a28f6b_7
  - brotli-bin=1.0.9=h1a28f6b_7
  - brotlipy=0.7.0=py39h1a28f6b_1002
  - ca-certificates=2023.01.10=hca03da5_0
  - certifi=2023.5.7=py39hca03da5_0
  - cffi=1.15.1=py39h80987f9_3
  - charset-normalizer=2.0.4=pyhd3eb1b0_0
  - contourpy=1.0.5=py39h525c30c_0
  - cryptography=39.0.1=py39h834c97f_0
  - cycler=0.11.0=pyhd3eb1b0_0
  - debugpy=1.5.1=py39hc377ac9_0
  - decorator=5.1.1=pyhd8ed1ab_0
  - executing=1.2.0=pyhd8ed1ab_0
  - fonttools=4.25.0=pyhd3eb1b0_0
  - freetype=2.12.1=h1192e45_0
  - giflib=5.2.1=h80987f9_3
  - idna=3.4=py39hca03da5_0
  - importlib-metadata=6.6.0=pyha770c72_0
  - importlib_metadata=6.6.0=hd8ed1ab_0
  - importlib_resources=5.2.0=pyhd3eb1b0_1
  - ipykernel=6.15.0=pyh736e0ef_0
  - ipython=8.14.0=pyhd1c38e8_0
  - jedi=0.18.2=pyhd8ed1ab_0
  - jpeg=9e=h80987f9_1
  - jupyter_client=8.2.0=pyhd8ed1ab_0
  - jupyter_core=5.3.0=py39hca03da5_0
  - kiwisolver=1.4.4=py39h313beb8_0
  - lcms2=2.12=hba8e193_0
  - lerc=3.0=hc377ac9_0
  - libbrotlicommon=1.0.9=h1a28f6b_7
  - libbrotlidec=1.0.9=h1a28f6b_7
  - libbrotlienc=1.0.9=h1a28f6b_7
  - libcxx=14.0.6=h848a8c0_0
  - libdeflate=1.17=h80987f9_0
  - libffi=3.4.4=hca03da5_0
  - libgfortran=5.0.0=11_3_0_hca03da5_28
  - libgfortran5=11.3.0=h009349e_28
  - libopenblas=0.3.21=h269037a_0
  - libpng=1.6.39=h80987f9_0
  - libsodium=1.0.18=h27ca646_1
  - libtiff=4.5.0=h313beb8_2
  - libwebp=1.2.4=ha3663a8_1
  - libwebp-base=1.2.4=h80987f9_1
  - llvm-openmp=14.0.6=hc6e5704_0
  - lz4-c=1.9.4=h313beb8_0
  - matplotlib=3.7.1=py39hca03da5_1
  - matplotlib-base=3.7.1=py39h78102c4_1
  - matplotlib-inline=0.1.6=pyhd8ed1ab_0
  - munkres=1.1.4=py_0
  - ncurses=6.4=h313beb8_0
  - nest-asyncio=1.5.6=pyhd8ed1ab_0
  - numpy=1.24.3=py39h1398885_0
  - numpy-base=1.24.3=py39h90707a3_0
  - openssl=1.1.1t=h1a28f6b_0
  - packaging=23.0=py39hca03da5_0
  - parso=0.8.3=pyhd8ed1ab_0
  - pexpect=4.8.0=pyh1a96a4e_2
  - pickleshare=0.7.5=py_1003
  - pillow=9.4.0=py39h313beb8_0
  - pip=23.0.1=py39hca03da5_0
  - platformdirs=3.5.1=pyhd8ed1ab_0
  - pooch=1.4.0=pyhd3eb1b0_0
  - prompt-toolkit=3.0.38=pyha770c72_0
  - prompt_toolkit=3.0.38=hd8ed1ab_0
  - psutil=5.9.0=py39h1a28f6b_0
  - ptyprocess=0.7.0=pyhd3deb0d_0
  - pure_eval=0.2.2=pyhd8ed1ab_0
  - pycparser=2.21=pyhd3eb1b0_0
  - pygments=2.15.1=pyhd8ed1ab_0
  - pyopenssl=23.0.0=py39hca03da5_0
  - pyparsing=3.0.9=py39hca03da5_0
  - pysocks=1.7.1=py39hca03da5_0
  - python=3.9.16=hc0d8a6c_2
  - python-dateutil=2.8.2=pyhd3eb1b0_0
  - pyzmq=25.0.2=py39h313beb8_0
  - readline=8.2=h1a28f6b_0
  - requests=2.29.0=py39hca03da5_0
  - scipy=1.10.1=py39h9d039d2_1
  - setuptools=67.8.0=py39hca03da5_0
  - six=1.16.0=pyhd3eb1b0_1
  - sqlite=3.41.2=h80987f9_0
  - stack_data=0.6.2=pyhd8ed1ab_0
  - tk=8.6.12=hb8d0fd4_0
  - tornado=6.2=py39h1a28f6b_0
  - tqdm=4.65.0=py39h86d0a89_0
  - traitlets=5.9.0=pyhd8ed1ab_0
  - typing-extensions=4.6.3=hd8ed1ab_0
  - typing_extensions=4.6.3=pyha770c72_0
  - tzdata=2023c=h04d1e81_0
  - urllib3=1.26.15=py39hca03da5_0
  - wcwidth=0.2.6=pyhd8ed1ab_0
  - wheel=0.38.4=py39hca03da5_0
  - xz=5.4.2=h80987f9_0
  - zeromq=4.3.4=hbdafb3b_1
  - zipp=3.11.0=py39hca03da5_0
  - zlib=1.2.13=h5a0b063_0
  - zstd=1.5.5=hd90d995_0
prefix: /opt/homebrew/anaconda3/envs/venv-basnak-etal-2022
