+++ {"part": "key-points"}
**Key Points:**
- We present a simple yet powerful generative computational model for large-scale brain dynamics
- Based on theory of artificial attractor neural networks emerging from first principles of self-organization
- Model dynamics accurately reconstruct several characteristics of resting state brain dynamics and confirm theoretical predictions of emergent attractor self-orthogonalization
- Our model captures both task-induced and pathological changes in brain activity
- fcANNs offer a simple and interpretable computational alternative to conventional descriptive analyses of brain function
+++ {"part": "abstract"}
Understanding large-scale brain dynamics is a grand challenge in neuroscience.
We propose functional connectivity-based Attractor Neural Networks (fcANNs), a theoretically-inspired model of macro-scale brain dynamics, simulating recurrent activity flow among brain regions based on first principles of self-organization. An fcANN is neither optimized to mimic certain brain characteristics, nor trained to solve specific tasks; its weights are simply initialized with empirical functional connectivity values.
In the fcANN framework, brain dynamics are understood in relation to attractor states: neurobiologically meaningful activity configurations that minimize the free energy of the system.
Analyses of 7 distinct datasets demonstrate that fcANNs can accurately reconstruct and predict brain dynamics under a wide range of conditions, including resting and task states and brain disorders.
By establishing a mechanistic link between connectivity and activity, fcANNs offer a simple and interpretable computational alternative to conventional descriptive analyses of brain function. Being a generative framework, fcANNs can yield mechanistic insights and hold potential to uncover novel treatment targets.
+++
## Introduction
Brain function is characterized by the continuous activation and deactivation of anatomically distributed neuronal
populations {cite:p}`buzsaki2006rhythms`.
Irrespective of the presence or absence of explicit stimuli, brain regions appear to work in concert, giving rise to rich and spatiotemporally complex fluctuations {cite:p}`bassett2017network`.
These fluctuations are neither random nor stationary over time {cite:p}`liu2013time; zalesky2014time`; they organize around large-scale gradients {cite:p}`margulies2016situating; huntenburg2018large` and exhibit quasi-periodic properties, with a limited number of recurring patterns often termed "brain states" {cite:p}`greene2023everyone; vidaurre2017brain; liu2013time`.
A wide variety of descriptive techniques have been previously employed to characterize whole-brain dynamics {cite:p}`smith2012temporally; vidaurre2017brain; liu2013time; chen2018human`.
These efforts have provided accumulating evidence not only for the existence of dynamic brain states but also for their clinical significance {cite:p}`hutchison2013dynamic; barttfeld2015signature; meer2020movie`.
However, the underlying driving forces remain elusive due to the descriptive nature of such studies.
:::{figure} figures/concept.png
:name: concept
**Functional connectivity-based attractor neural networks as models of macro-scale brain dynamics.** <br/>
**A** Free energy minimizing artificial neural networks {cite:p}`10.48550/ARXIV.2505.22749` are a form of recurrent stochastic artificial neural networks that, similarly to classical Hopfield networks {cite:p}`hopfield1982neural; koiran1994dynamics`, can serve as content-addressable ("associative") memory systems. More generally, through the learning rule emerging from local free energy minimization, the weights of these networks will encode a global internal model of the external world, as represented by the external inputs (or internal biases) presented to the network. The priors of this internal generative model are represented by the attractor states of the network, which, as a special consequence of free energy minimization, will tend to be orthogonal to each other. During stochastic inference (local free energy minimization), the network samples from the posterior that combines these priors with the previous brain states (also encompassing incoming stimuli), akin to Markov chain Monte Carlo (MCMC) sampling.
**B** In accordance with this theoretical framework, we consider regions of the brain as nodes of a free energy minimizing artificial neural network. Instead of initializing the network with the structural wiring of the brain or training it to solve specific tasks, we set its weights empirically, using information about the interregional "activity flow" across regions, as estimated via functional brain connectivity. Applying the inference rule of our framework, which displays strong analogies with the relaxation rule of Hopfield networks and the activity flow principle that links activity to connectivity in brain networks, results in a generative computational model of macro-scale brain dynamics that we term the functional connectivity-based (stochastic) attractor neural network (fcANN).
**C** The proposed computational framework assigns a free energy level, a probability density and a trajectory of least action towards an attractor state to any brain activation pattern, and predicts changes of the corresponding dynamics in response to alterations in activity and/or connectivity. Throughout the present paper, we will illustrate these concepts with the aid of a low-dimensional embedding of the state-space, which we will refer to as the fcANN projection.
:::

In study 1, we investigated the convergence process of the functional connectivity-based fcANN and contrasted it with a null model based on permuted variations of the connectome (while retaining the symmetry of the matrix). This null model preserves the sparseness and the degree distribution of the connectome, but destroys its topological structure (e.g. clusteredness). We found that the topology of the original (unpermuted) functional brain connectome makes it significantly better suited to function as an attractor network than the permuted null model.
While the original connectome-based fcANN converged to an attractor state in fewer than 150 iterations in more than 50% of the cases, the null model did not reach convergence in more than 98% of the cases, even after 10,000 iterations ({numref}`attractors`G, {numref}`Supplementary Figure %s <si_convergence>`). This result was robustly observed, independent of the inverse temperature parameter $\beta$.
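The relaxation and null-model comparison described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the paper's implementation: the tanh update rule and the convergence criterion are assumptions, and the simple weight-permutation null shown here preserves the weight distribution and symmetry but, unlike the paper's null model, not necessarily the exact degree distribution.

```python
import numpy as np

def relax(W, s0, beta=0.04, max_iter=10000, tol=1e-8):
    """Hopfield-style deterministic relaxation: iterate s <- tanh(beta * W @ s)
    until the state stops changing (a fixed point, i.e. an attractor) or
    max_iter is reached. Returns the final state and the iteration count."""
    s = s0.copy()
    for it in range(max_iter):
        s_new = np.tanh(beta * (W @ s))
        if np.linalg.norm(s_new - s) < tol:
            return s_new, it  # converged to an attractor
        s = s_new
    return s, max_iter  # did not converge within max_iter

def permuted_null(W, rng):
    """Null connectome: permute the upper-triangular weights and mirror them,
    preserving the weight distribution and symmetry while destroying topology
    (a simplified stand-in for the paper's degree-preserving null)."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    Wp = np.zeros_like(W)
    Wp[iu] = rng.permutation(W[iu])
    return Wp + Wp.T
```

For a toy two-node network `W = [[0, 1], [1, 0]]` with a sufficiently large `beta`, `relax` quickly settles into one of two mirror-image fixed points, while a permuted large connectome typically wanders much longer, mirroring the convergence contrast reported above.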
We set the temperature parameter for the rest of the paper to a value providing the fastest convergence ($\beta=0.04$, median iterations: 107), resulting in 4 distinct attractor states. The primary motivation for selecting $\beta=0.04$ was to reduce the computational burden for further analyses. However, as attractor states emerge in a nested fashion with increasing temperature (i.e. the basin of a new attractor state is fully contained within the basin of a previous one), we expect that the results of the following analyses would be qualitatively similar, albeit more detailed, with higher $\beta$ values.
We optimized the noise parameter $\sigma$ of the stochastic relaxation procedure over 8 different $\sigma$ values, spaced logarithmically between $\sigma=0.1$ and $1$, so that the similarity (the distribution of timeframes over the attractor basins) between the empirical data and the fcANN-generated data is maximized. We contrasted this similarity with two null models ({numref}`attractors`H). First, we generated null data as random draws from a multivariate normal distribution with covariance matrix set to the functional connectome's covariance matrix (partial correlation-based connectivity estimates). This serves as a baseline for generating data that optimally matches the empirical data in terms of distribution and spatial autocorrelation, as based on information on the underlying covariance structure (and given Gaussian assumptions), but without any mechanistic model of the generative process, i.e. without modelling the non-linear and non-Gaussian effects and temporal autocorrelations stemming from recurrent activity flow. We found that the fcANN only reached multistability with $\sigma>0.19$, and it provided a more accurate reconstruction of the real data than the null model for $\sigma=0.37$ and $\sigma=0.52$ (p=0.007 and 0.015, $\chi^2$ dissimilarity: 11.16 and 21.57, respectively).
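For illustration, the covariance-matched Gaussian null and a $\chi^2$-style dissimilarity between basin-occupancy histograms might look as follows. This is a hedged sketch: the exact dissimilarity variant and normalization used in the paper are not specified here, so this particular formula is an assumption.

```python
import numpy as np

def gaussian_null(fc_cov, n_frames, rng):
    """Covariance-matched null: draw surrogate timeframes from a multivariate
    normal whose covariance equals the functional connectome's covariance.
    Matches distribution and spatial covariance, but contains no recurrent,
    non-linear generative dynamics."""
    n_regions = fc_cov.shape[0]
    return rng.multivariate_normal(np.zeros(n_regions), fc_cov, size=n_frames)

def chi2_dissimilarity(p, q, eps=1e-12):
    """Symmetric chi-square dissimilarity between two occupancy histograms
    (one common variant; the paper's exact formula may differ)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```

In this sketch, `gaussian_null` plays the role of the first null model, and `chi2_dissimilarity` compares the empirical and model-generated distributions of timeframes over attractor basins.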
With our second null model, we investigated whether the fcANN-reconstructed data is more similar to the empirical data than synthetic data with identical spatial autocorrelation structure (generated by spatial phase randomization of the original volumes, see [Methods](#evaluation-resting-state-dynamics)).
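The phase-randomization idea behind this second null model can be conveyed with a 1-D Fourier sketch. The actual procedure operates on 3-D volumes; this simplified version only illustrates the principle of preserving the amplitude spectrum (and hence the autocorrelation structure) while scrambling phases.

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate signal with the same autocorrelation as x: keep the FFT
    amplitude spectrum, randomize the phases. 1-D sketch of the volumetric
    spatial phase randomization referenced in the Methods."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    phases[0] = np.angle(X[0])  # keep the DC component (the mean) unchanged
    if len(x) % 2 == 0:
        phases[-1] = np.angle(X[-1])  # keep the Nyquist bin real for even n
    X_surrogate = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(X_surrogate, n=len(x))
```

A surrogate produced this way matches the original map's spatial smoothness but carries none of its specific activation topography, making it a suitable reference for the similarity comparison described above.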
The discovered attractor states demonstrate high replicability (mean Pearson's correlation 0.93) across the discovery dataset (study 1) and two independent replication datasets ([study 2 and 3](tab-samples), {numref}`rest-validity`C). Moreover, they were found to be significantly more robust to noise added to the connectome than nodal strength scores (used as a reference, see {numref}`Supplementary Figure %s <si_noise_robustness_weights>` for details).
Further analysis in study 1 showed that connectome-based attractor models accurately reconstructed multiple characteristics of true resting-state data.
First, the two axes (first two PCs) of the fcANN projection accounted for a substantial amount of variance in the real resting-state fMRI data in study 1 (mean $R^2=0.399$) and generalized well to out-of-sample data (study 2, mean $R^2=0.396$) ({numref}`rest-validity`E). The explained variance of the fcANN projection significantly exceeded that of the first two PCs derived directly from the real resting-state fMRI data itself ($R^2=0.37$ and $0.364$ for in- and out-of-sample analyses). PCA, by identifying variance-heavy orthogonal directions, aims to explain the highest amount of variance possible in the data (under the assumption of Gaussian conditionals). While empirical attractors are closely aligned with the PCs (i.e. eigenvectors of the inverse covariance matrix), the alignment is only approximate. Here we quantified whether attractor states are a better fit to the unseen data than the PCs. Due to the otherwise strong PC-attractor correspondence, this is expected to be only a small improvement. However, it provides important evidence for the validity of our framework, as it shows that attractors are not just a complementary, perhaps "noisier" variety of the PCs, but a "substrate" that generalizes better to unseen data than the PCs themselves.
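The comparison between the projection axes and data-derived PCs amounts to an out-of-sample variance-explained computation, which can be sketched as below. The helper and its name are ours, not the paper's; it orthonormalizes an arbitrary two-axis basis and measures how much variance the reconstruction from that basis captures.

```python
import numpy as np

def r2_of_projection(X, basis):
    """Fraction of variance in timeframes X (frames x regions) explained when
    X is projected onto, and reconstructed from, the given basis (k x regions),
    e.g. the two fcANN projection axes or the first two PCs."""
    B = np.linalg.qr(basis.T)[0]     # orthonormalize: regions x k
    Xc = X - X.mean(axis=0)          # center the data
    X_hat = (Xc @ B) @ B.T           # project and reconstruct
    ss_res = np.sum((Xc - X_hat) ** 2)
    ss_tot = np.sum(Xc ** 2)
    return 1.0 - ss_res / ss_tot
```

By construction, the first two PCs of the same data maximize this quantity in-sample; the interesting comparison, as in the analysis above, is how an alternative basis (the attractor-derived axes) fares on held-out data.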
Second, during stochastic relaxation, the fcANN model was found to spend approximately three-quarters of the time in the basins of the first two attractor states and one-quarter in the basins of the second pair of attractor states (approximately equally distributed between pairs). We observed similar temporal occupancies in the real data ({numref}`rest-validity`D, left column), statistically significant against two different null models ({numref}`Supplementary Figure %s <si_state_occupancy_null_models>`). Fine-grained details of the bimodal distribution observed in the real resting-state fMRI data were also convincingly reproduced by the fcANN model ({numref}`rest-validity`F and {numref}`attractors`D, second column).
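Temporal occupancy of attractor basins, as used above, can be estimated by relaxing each empirical timeframe to its attractor and tallying basin membership. The sketch below uses a deterministic tanh relaxation and a toy two-node network; names and parameter values are illustrative only ($\beta=0.04$ is the paper's value for the real connectome, whereas the toy network needs a larger $\beta$ to be multistable).

```python
import numpy as np

def basin_occupancy(frames, W, beta=0.04, max_iter=5000, tol=1e-8, match_tol=1e-3):
    """Relax each timeframe to a fixed point of s <- tanh(beta * W @ s) and
    return the discovered attractors plus the fraction of frames per basin.
    A sketch of the basin-assignment idea, not the paper's implementation."""
    attractors, counts = [], []
    for frame in frames:
        s = np.asarray(frame, dtype=float)
        for _ in range(max_iter):
            s_next = np.tanh(beta * (W @ s))
            if np.linalg.norm(s_next - s) < tol:
                break
            s = s_next
        s = s_next
        # match against previously found attractors, or register a new one
        for i, a in enumerate(attractors):
            if np.linalg.norm(s - a) < match_tol:
                counts[i] += 1
                break
        else:
            attractors.append(s)
            counts.append(1)
    return attractors, np.array(counts) / len(frames)

# toy example: a symmetric two-node network with two mirror-image attractors
W_toy = np.array([[0.0, 1.0], [1.0, 0.0]])
frames_toy = [[1.0, 1.0], [0.9, 1.1], [-1.0, -1.0]]
attractors, occupancy = basin_occupancy(frames_toy, W_toy, beta=2.0)
```

In the toy example, the first two frames relax to the same positive attractor and the third to its mirror image, yielding occupancies of two-thirds and one-third, analogous to the empirical occupancy histograms discussed above.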
Third, the theoretical integration of the fcANN model with the FEP-ANN framework positions our work within a broader scientific program that seeks to understand the brain as a self-organizing, information-processing system governed by fundamental physical and computational principles. The empirical validation of attractor orthogonality represents a crucial step toward establishing this unified framework for understanding brain function across scales and contexts.
The proposed approach is not without limitations. First, it does not incorporate information about anatomical connectivity and does not explicitly model biophysical details. Thus, in its present form, the model is not suitable for studying structure-function coupling and cannot yield mechanistic explanations of (altered) polysynaptic connections at the level of biophysical detail.
Nevertheless, our approach showcases that many characteristics of brain dynamics, like multistability, temporal autocorrelations, states and gradients, can be explained and predicted by a very simple nonlinear phenomenological model.
Second, our model assumes a stationary connectome, which seems to contradict notions of dynamic connectivity. However, while the underlying FEP-ANN framework focuses on the long-term steady-state distribution of the system, it also naturally incorporates multistable fluctuations and the related dynamic connectivity changes through the stochastic relaxation dynamics. This is in line with the notion of "latent functional connectivity": an intrinsic brain network architecture built up from connectivity properties that are persistent across brain states {cite:p}`https://doi.org/10.1162/netn_a_00234`.
In this initial work, we presented the simplest possible implementation of the fcANN concept. It is clear that the presented analyses exploit only a small proportion of the richness of the full state-space dynamics reconstructed by the fcANN model.