Numerical simulations of olfactory habituation models for the manuscript
François X. P. Bourassa, Paul François, Gautam Reddy, Massimo Vergassola. "Manifold Learning for Olfactory Habituation to Strongly Fluctuating Backgrounds", PRX Life, 4, 013008, January 16, 2026. https://doi.org/10.1103/q62z-7j1d
bioRxiv preprint: https://doi.org/10.1101/2025.05.26.656161
Final plots are produced by the code in `final_plotting/`, using the results generated by the scripts below.
- `manifold_learning_filtering_tradeoffs.py` plots the theoretical calculation results in a simple case.
- `run_performance_recognition.py` and `analyze_comparison_results.py` launch and analyze, respectively, the numerical experiments of habituation and new odor recognition.
- `non-gaussian_habituation_recognition.ipynb` generates and analyzes a simulation example of habituation and new odor recognition in a background with weakly non-Gaussian statistics.
- `turbulent_habituation_recognition.ipynb` generates and analyzes a simulation example of habituation and new odor recognition in a background with turbulent statistics.
- `run_performance_dimensionality.py` and `analyze_dimensionality_results.py` launch and analyze simulations of habituation and new odor recognition for different odor space dimensions and new odor concentrations.
- `nonlinear_osn_turbulent_illustrations.ipynb` produces example simulations of habituation to nonlinear olfactory backgrounds.
- `run_performance_nl_osn.py` and `analyze_nl_osn_results.py` run and analyze multiple habituation and new odor recognition tests in the presence of nonlinear OSN responses.
- `supplementary_scripts/si2019_hill_tanh_distribution_fits.ipynb` fits the empirical OR-odor affinity distribution from Si et al., 2019 used to generate odors for the nonlinear OSN model.
- `manifold_learning_filtering_tradeoffs.py` produces the detailed plots of the loss vs. variance and dimensionality (Figs. S1A, S1B).
- `supplementary_scripts/autocorrelation_turbulent.py` computes the autocorrelation function of the turbulent odor concentration process (Fig. S1C).
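As a standalone illustration of the autocorrelation computation (independent of the repository code; the function name and the AR(1) test process below are choices made for this sketch, not the paper's turbulent model):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation of a 1D series, normalized so C(0) = 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# Sanity check on an AR(1) process, whose true autocorrelation is a**lag.
rng = np.random.default_rng(0)
a, n = 0.9, 50_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()
corr = autocorrelation(x, 5)  # corr[1] should be close to 0.9
```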
`run_performance_recognition.py` and `run_performance_dimensionality.py`, used for Figure 2, also generate the supplementary results in Figure S2.
- `non-gaussian_habituation_recognition.ipynb` computes IBCM eigenvalues at the fixed point in a weakly non-Gaussian background, for Fig. S3A.
- `supplementary_scripts/lognormal_habituation_recognition.ipynb` computes eigenvalues in a log-normal background, for Fig. S3B.
- `turbulent_habituation_recognition.ipynb` computes eigenvalues in a turbulent background, for Fig. S3C.
- `supplementary_scripts/toymodel_habituation_recognition.ipynb` provides a detailed analysis of habituation on a toy two-odor background model, described in appendix section 6, and analyzes IBCM convergence time in particular.
- `supplementary_scripts/importance_thirdmoment_ibcm.ipynb` generates example simulations of the IBCM model on Gaussian versus weakly non-Gaussian backgrounds.
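The point probed by that notebook, that the IBCM rule relies on a nonzero third moment which centered Gaussian inputs lack, can be checked with plain NumPy (a generic sketch, not the repository's simulation code; the log-normal parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Centered Gaussian input: the third moment vanishes, so an IBCM-like
# rule driven by third-order statistics has nothing to select.
g = rng.normal(0.0, 1.0, n)
m3_gauss = np.mean((g - g.mean()) ** 3)

# Log-normal input (arbitrary parameters): the centered third moment is
# strictly positive, giving the rule a signal to latch onto.
ln = rng.lognormal(0.0, 0.5, n)
m3_lognorm = np.mean((ln - ln.mean()) ** 3)
```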
- `supplementary_scripts/lognormal_habituation_recognition.ipynb` runs example habituation and recognition simulations with log-normal background statistics.
- `supplementary_scripts/toymodel_habituation_recognition.ipynb` also includes simulations of the BioPCA model in the toy 2D background.
- `supplementary_scripts/convergence_time_scales_non-gaussian.ipynb` runs sample simulations to compare convergence times to analytical predictions in the IBCM and BioPCA models.
- `supplementary_scripts/run_ibcm_convergence_turbulent.py` runs multiple IBCM habituation simulations to measure convergence as a function of various model and background parameters.
- `supplementary_scripts/run_biopca_convergence_turbulent.py` does the same for BioPCA.
- `supplementary_scripts/convergence_ibcm_turbulent.ipynb` runs example simulations and plots the full results of the above scripts for the analysis of convergence time in the IBCM and BioPCA models; `run_biopca_convergence_turbulent.py` and `run_ibcm_convergence_turbulent.py` must be run first.
- `supplementary_scripts/check_gaussian_noise_robustness.ipynb` runs example simulations of the different habituation models in the presence of OSN noise.
- `supplementary_scripts/run_performance_noise.py` and `supplementary_scripts/analyze_noise_results.py` launch and analyze multiple simulations to test habituation and new odor recognition in the presence of OSN noise.
- `supplementary_scripts/run_performance_w_norm_choice.py` and `supplementary_scripts/analyze_w_norm_results.py` run and analyze multiple habituation and new odor recognition tests for various choices of $W$ learning rates based on different $L^p$ norms in the cost function.
- `supplementary_scripts/turbulent_habituation_test_w_norms.ipynb` runs sample simulations with alternate $L^p, L^q$ norms in the $W$ update rule.
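As a rough sketch of what an $L^p$-based $W$ update can look like (a generic gradient step on an $L^p$ error cost; the variable names and delta-rule form are assumptions for illustration, not the repository's exact update rule):

```python
import numpy as np

def w_update_lp(W, x, y_target, p=2.0, eta=0.1):
    """One gradient step on the cost (1/p) * sum_i |e_i|**p, e = y_target - W @ x.
    p = 2 recovers the usual least-squares delta rule. Names are illustrative."""
    e = y_target - W @ x
    grad_e = np.sign(e) * np.abs(e) ** (p - 1.0)  # derivative of (1/p)|e|^p w.r.t. e
    return W + eta * np.outer(grad_e, x)

W = np.zeros((2, 3))
x = np.array([1.0, 0.5, -1.0])
y = np.array([1.0, -2.0])
W2 = w_update_lp(W, x, y, p=2.0)  # here equals 0.1 * outer(y, x)
```

Larger $p$ weights big errors more heavily in the update; $p$ closer to 1 makes the step size nearly independent of the error magnitude.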
`run_performance_dimensionality.py` and `analyze_dimensionality_results.py`, used for Fig. 5, generate the full results shown in Fig. S12.
- `supplementary_scripts/correlated_odors_turbulent.ipynb` runs sample habituation and odor recognition simulations for different strengths of correlation between a pair of background odor concentrations.
- `supplementary_scripts/run_performance_correlation.py` and `supplementary_scripts/analyze_correlation_results.py` assess habituation and odor recognition performance across multiple simulations for various correlation strengths between a pair of odors.
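A minimal way to draw a pair of concentration signals with a prescribed correlation $\rho$ (a generic Gaussian sketch via a Cholesky-style mixing; the repository's backgrounds use their own stochastic processes):

```python
import numpy as np

def correlated_pair(n, rho, rng):
    """n samples of two unit-variance Gaussian signals with correlation rho.
    Generic illustration only, not the repository's background process."""
    z = rng.normal(size=(2, n))
    x1 = z[0]
    x2 = rho * z[0] + np.sqrt(1.0 - rho ** 2) * z[1]  # mix to get corr(x1, x2) = rho
    return x1, x2

rng = np.random.default_rng(2)
x1, x2 = correlated_pair(100_000, 0.8, rng)
```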
The code for Fig. 6 also generates the supplementary results for strong OSN nonlinearity shown in Fig. S14:
- `nonlinear_osn_turbulent_illustrations.ipynb` for sample simulations at different $\epsilon$ values.
- `run_performance_nl_osn.py` and `analyze_nl_osn_results.py` to assess performance as a function of $\epsilon$, across multiple simulation seeds for each $\epsilon$, for different new odor concentrations.
- `supplementary_scripts/si2019_hill_tanh_distribution_fits.ipynb` fits the empirical odor affinity distribution.
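For orientation, a generic Hill-type saturating OSN activation of the kind fitted in that notebook can be sketched as follows (the parameters here are illustrative placeholders, not the values fitted to the Si et al., 2019 data):

```python
import numpy as np

def hill_response(c, K=1.0, n=1.0, r_max=1.0):
    """Generic Hill-type OSN activation: roughly linear for weak stimuli (~ c / K),
    half-maximal at c = K, saturating at r_max. Illustrative parameters only."""
    c = np.asarray(c, dtype=float)
    return r_max * c ** n / (K ** n + c ** n)

r = hill_response(np.array([0.0, 1.0, 100.0]))  # zero, half-max, near saturation
```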
- `supplementary_scripts/nonlinear_adaptation_osn_turbulent.ipynb` generates an example simulation with OSN adaptation.
- `supplementary_scripts/run_adaptation_performance_tests.py` is an all-in-one script that runs multiple habituation simulations and assesses new odor recognition performance with OSN adaptation (results in Fig. S15).
- `supplementary_scripts/si2019_hill_tanh_distribution_fits.ipynb` fits the empirical odor affinity distribution also used for the nonlinear (but not adapting) OSNs in Figs. 6 and S14.
- `supplementary_scripts/run_performance_lambda.py` and `supplementary_scripts/analyze_lambda_results.py` run and analyze a range of habituation simulations on the same background for different values of the $M$ weights scaling parameter, $\Lambda$.
Implementation of the habituation models and of the background processes.
- `average_sub`, `biopca`, `ideal`, `ibcm`: implementations of the different habituation models and functions to integrate them numerically.
- `tagging`: odor tag (Kenyon cell layer) computation.
- `ibcm_analytics`, `pca_analytics`: analytical results for the IBCM and BioPCA networks.
- `backgrounds`, `distribs`: implementations of different stochastic background processes.
- `nonlin_adapt_osn`: background generation and update functions for the nonlinear OSN response model and OSN adaptation.
- `checktools`: functions to test different parts of the models and code.
Functions used by the main scripts to run numerical simulations: parallel processing, saving results to disk, etc.
- `analysis`: functions used by the main `analyze...` scripts to process simulation results.
- `habituation_recognition`: functions to set up and launch parallel simulations of the various habituation models.
- `habituation_recognition_lambda`: simulation functions to re-run the same simulation for various $\Lambda$ scales.
- `habituation_recognition_nonlin_osn`: functions to set up and launch multiple simulations with nonlinear OSNs.
- `idealized_recognition`: functions to compute ideal/optimal habituation model outputs, especially those that would have occurred in existing (saved) simulations.
- `plotting`: auxiliary plotting functions.
Scripts to test different parts of the numerical implementation of habituation models and background processes.
Code to produce the final main and supplementary figures from the simulation results generated by main and supplementary scripts.