Commit 0c04cf9

committed
revision #1
1 parent b576c38 commit 0c04cf9
File tree

4 files changed (+12, -25 lines changed)


docs/01-paper.md

Lines changed: 1 addition & 1 deletion
@@ -196,7 +196,7 @@ Inference arises from minimizing free energy with respect to the states $\sigma$
```{math}
:label: fep-update
- \mathbb{E}_{q}[\sigma_i] = L(b_q) = \underbrace{ L \left( \underbrace{ b_i}_{\textit{bias}} + \underbrace{\sum_{j \ne i} J_{ij} \sigma_j}_{\textit{weighted input}} \right) }_{ \textit{sigmoid (Langevin)} }
+ \mathbb{E}_{q}[\sigma_i] = L(b_q) = \underbrace{ L \left( \underbrace{ b_i}_{\textit{bias}} + \underbrace{\sum_{j \ne i} J_{ij} \sigma_j}_{\textit{local potential}} \right) }_{ \textit{sigmoid (Langevin)} }
```

where $L$ is a sigmoidal activation function (a Langevin function in our case). This rule dictates that each unit updates its activity stochastically, based on a weighted sum of the activity of other units, plus its own intrinsic bias. See {cite:p}`10.48550/ARXIV.2505.22749` and [](#Supplementary-Information-2) for a detailed derivation of the inference dynamics.
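For illustration, the single-unit update in Eq. [](#fep-update) can be sketched in a few lines of numpy, assuming the classical Langevin function L(x) = coth(x) - 1/x as the sigmoid; the function names here are hypothetical, not part of the paper's codebase:

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, a smooth odd sigmoid on (-1, 1)."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)  # placeholder value to avoid division by zero
    # near zero, use the Taylor expansion L(x) ~ x/3
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def expected_activity(i, sigma, J, b):
    """E_q[sigma_i]: bias plus weighted input from all other units, squashed by L."""
    local_potential = b[i] + J[i] @ sigma - J[i, i] * sigma[i]  # exclude j == i
    return langevin(local_potential)
```

For example, a unit with zero bias and unit net input settles at an expected activity of L(1), roughly 0.31.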

docs/02-methods.md

Lines changed: 6 additions & 13 deletions
@@ -144,36 +144,29 @@ Next, in all datasets, we estimated study‑level mean connectivity matrices by
### fcANN inference (FEP-ANN) and update rules

- Our fcANN instantiates the inference dynamics of free-energy-minimizing attractor neural networks (FEP-ANNs) at the macro-scale. Each node represents a brain region with continuous activity \( \boldsymbol{a} = (a_1,\dots,a_m) \), and couplings are given by the symmetrized matrix \( \boldsymbol{J} \) (see Functional connectome). Unless noted otherwise, biases are zero (\(\boldsymbol{b}=\boldsymbol{0}\)).
+ Our fcANN instantiates the inference dynamics of free-energy-minimizing attractor neural networks (FEP-ANNs) at the macro-scale. Each node represents a brain region with continuous activity $\boldsymbol{\sigma} = (\sigma_1,\dots,\sigma_m)$, and couplings are given by the symmetrized matrix $\boldsymbol{J}$ (see Functional connectome). Unless noted otherwise, biases are zero ($\boldsymbol{b}=\boldsymbol{0}$).

Deterministic inference. In the noise‑free symmetric case, activities are updated by repeatedly applying a sigmoidal nonlinearity to the weighted input

```{math}
:label: fep-deterministic-update
- \boldsymbol{a}^{(t+1)} = S\!\left( \beta\, \boldsymbol{J}\, \boldsymbol{a}^{(t)} \right),
+ \boldsymbol{\sigma}^{(t+1)} = S\!\left( \beta\, \boldsymbol{J}\, \boldsymbol{\sigma}^{(t)} \right),
```

- where \(S\) is a smooth odd sigmoid (we used \(\tanh\) as a practical, fast surrogate for the Langevin function) and \(\beta\) is the inverse temperature (precision) scaling the couplings. Iterations monotonically decrease a Lyapunov (free energy) function equivalent—up to an additive constant—to
-
- ```{math}
- :label: energy-function
- E(\boldsymbol{a}) = - \tfrac{1}{2}\, \boldsymbol{a}^\top \boldsymbol{J}\, \boldsymbol{a} + \boldsymbol{a}^\top \boldsymbol{b},
- ```
-
- and therefore converge to a local free‑energy minimum without any external optimizer. Thus, convergence does not require any optimization procedure with an external optimizer. Instead, it arises as the fixed point of repeated local inference updates, which implement gradient descent on free energy in the deterministic symmetric case (see main text).
+ where $S$ is a smooth odd sigmoid (we used $\tanh$ as a practical, fast surrogate for the Langevin function) and $\beta$ is the inverse temperature (precision) scaling the couplings. Because the inference rule was derived as gradient descent on free energy, iterations monotonically decrease the free energy and converge to a local free‑energy minimum. Convergence therefore requires no external optimizer; it arises as the fixed point of repeated local inference updates in the deterministic symmetric case (see main text).
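The deterministic update can be sketched in plain numpy. This is an illustrative toy, not the paper's implementation: the random symmetric matrix merely stands in for the empirical connectome, and all names are hypothetical.

```python
import numpy as np

def deterministic_inference(J, sigma0, beta=0.05, tol=1e-8, max_iter=10_000):
    """Iterate sigma <- tanh(beta * J @ sigma) until successive states differ
    by less than tol; the resulting fixed point is an attractor state."""
    sigma = np.asarray(sigma0, dtype=float).copy()
    for n_iter in range(1, max_iter + 1):
        updated = np.tanh(beta * (J @ sigma))
        if np.max(np.abs(updated - sigma)) < tol:
            return updated, n_iter
        sigma = updated
    return sigma, max_iter

# toy symmetric coupling matrix standing in for the functional connectome
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 10))
J = (W + W.T) / 2.0
np.fill_diagonal(J, 0.0)
attractor, n_iter = deterministic_inference(J, rng.standard_normal(10), beta=0.1)
```

With small $\beta$ the map is a contraction and relaxes quickly to a single attractor; larger $\beta$ values yield multiple nontrivial attractors, mirroring the 2–8 attractor states reported for $\beta$ between 0.035 and 0.060.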
Stochastic (Langevin‑style) inference. For generative modeling of dynamics, we adopt a slight variation of the FEP‑ANN inference rule: starting from the deterministic update above, we add zero‑mean Gaussian noise directly to the post‑activation state (Langevin‑style)

```{math}
:label: langevin-update
- \boldsymbol{a}^{(t+1)} = S\!\left( \beta\, \boldsymbol{J}\, \boldsymbol{a}^{(t)} \right) + \boldsymbol{\epsilon}^{(t)}, \quad \boldsymbol{\epsilon}^{(t)} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 \boldsymbol{I}).
+ \boldsymbol{\sigma}^{(t+1)} = S\!\left( \beta\, \boldsymbol{J}\, \boldsymbol{\sigma}^{(t)} \right) + \boldsymbol{\omega}^{(t)}, \quad \boldsymbol{\omega}^{(t)} \sim \mathcal{N}(\boldsymbol{0}, \epsilon^2 \boldsymbol{I}).
```

This explicit additive Gaussian noise differs from the continuous‑Bernoulli noise implied by the theoretical derivation but aligns with common Langevin formulations and was empirically robust. We use deterministic updates to identify attractors and study convergence; we use stochastic updates as a generative model of multistable dynamics.
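A minimal sketch of the stochastic update in Eq. [](#langevin-update), again with hypothetical names and an arbitrary noise level chosen for illustration:

```python
import numpy as np

def stochastic_inference(J, sigma0, beta=0.05, noise_sd=0.3, n_steps=1_000, seed=0):
    """Langevin-style relaxation: the deterministic tanh update plus additive
    zero-mean Gaussian noise on the post-activation state."""
    rng = np.random.default_rng(seed)
    sigma = np.asarray(sigma0, dtype=float).copy()
    states = np.empty((n_steps, sigma.size))
    for t in range(n_steps):
        sigma = np.tanh(beta * (J @ sigma)) + rng.normal(0.0, noise_sd, size=sigma.size)
        states[t] = sigma
    return states  # one sampled activity pattern per iteration
```

Setting `noise_sd` to zero recovers the deterministic update; with noise, the sampled trajectory can be projected and binned in the same way as empirical fMRI time series.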
### fcANN convergence and attractors

- We investigated convergence under the deterministic update (Eq. [](#fep-deterministic-update)) by contrasting iterations-to-convergence of the empirical fcANN against a permutation-based null. The null was constructed by randomly permuting the upper triangle of \(\boldsymbol{J}\) and reflecting it to preserve symmetry (destroying topology while preserving weight distribution). For each of 1,000 permutations, we initialized both models with the same random state and counted iterations to convergence. Statistical significance of faster convergence in the empirical connectome was assessed via a one-sided Wilcoxon signed-rank test on paired iteration counts (1,000 pairs), testing whether the empirical connectome converges in fewer iterations than its permuted counterpart. We repeated this procedure across inverse-temperature values \(\beta \in \{0.035, 0.040, 0.045, 0.050, 0.055, 0.060\}\) (yielding 2–8 attractor states). See {numref}`Supplementary Figure %s <si_convergence>` for detailed results.
+ We investigated convergence under the deterministic update (Eq. [](#fep-deterministic-update)) by contrasting iterations-to-convergence of the empirical fcANN against a permutation-based null. The null was constructed by randomly permuting the upper triangle of $\boldsymbol{J}$ and reflecting it to preserve symmetry (destroying topology while preserving weight distribution). For each of 1,000 permutations, we initialized both models with the same random state and counted iterations to convergence. Statistical significance of faster convergence in the empirical connectome was assessed via a one-sided Wilcoxon signed-rank test on paired iteration counts (1,000 pairs), testing whether the empirical connectome converges in fewer iterations than its permuted counterpart. We repeated this procedure across inverse-temperature values $\beta \in \{0.035, 0.040, 0.045, 0.050, 0.055, 0.060\}$ (yielding 2–8 attractor states). See {numref}`Supplementary Figure %s <si_convergence>` for detailed results.
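The permutation-based null described above can be sketched as follows (an illustrative reimplementation with hypothetical names; the one-sided test on the paired iteration counts would then use, e.g., `scipy.stats.wilcoxon` with `alternative="less"`):

```python
import numpy as np

def permuted_null(J, rng):
    """Randomly permute the upper triangle of J and mirror it across the
    diagonal: the weight distribution is preserved, the topology destroyed."""
    m = J.shape[0]
    iu = np.triu_indices(m, k=1)
    J_null = np.zeros_like(J)
    J_null[iu] = rng.permutation(J[iu])
    return J_null + J_null.T  # symmetric, zero diagonal

# toy symmetric connectome and one permuted counterpart
rng = np.random.default_rng(42)
W = rng.standard_normal((8, 8))
J = (W + W.T) / 2.0
np.fill_diagonal(J, 0.0)
J_null = permuted_null(J, rng)
```

Both the empirical and the permuted matrix are then run from the same random initial state, and iterations to convergence are compared across the 1,000 permutations.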
### fcANN projection
@@ -212,7 +205,7 @@ First, runs 1, 3 and 7, investigating the passive experience and the down- and u
To further highlight the difference between the task and rest conditions, a "flow analysis" was performed to investigate the dynamic trajectory differences between the conditions rest and pain. The analysis method was identical to the flow analysis of the resting state data ([](#evaluation-resting-state-dynamics)). First, we calculated the direction in the projection plane between each successive TR during the rest conditions (a vector on the fcANN projection plane for each TR transition). Next, we obtained two-dimensional binned means for the x and y coordinates of these transition vectors (pooled across all participants), calculated over a two-dimensional grid of 100×100 uniformly distributed bins in the [-6,6] range (arbitrary units) and applied Gaussian smoothing with $\sigma=5$ bins.
The same procedure was repeated for the pain condition and the difference in the mean directions between the two conditions was visualized as "streamplots" (using Python's matplotlib). We used the same approach to quantify the difference in characteristic state transition trajectories between the up- and downregulation conditions. The empirically estimated trajectory differences (from real fMRI data) were contrasted to the trajectory differences predicted by the fcANN model from study 1. The similarity between real and simulated flow maps was quantified with Pearson's correlation coefficient (two-sided), and significance was assessed via permutation testing (1,000 permutations) by randomly swapping condition labels within participants.
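The binned-mean step of the flow analysis can be sketched with plain numpy (an illustrative reimplementation, not the original code; the subsequent Gaussian smoothing with $\sigma=5$ bins could be done with, e.g., `scipy.ndimage.gaussian_filter`):

```python
import numpy as np

def flow_map(points, bins=100, extent=(-6.0, 6.0)):
    """Binned mean transition vectors on the projection plane.
    points: (T, 2) array of successive projected states (one TR per row)."""
    xy = points[:-1]                 # state at TR t
    uv = np.diff(points, axis=0)     # transition vector from TR t to t+1
    edges = np.linspace(extent[0], extent[1], bins + 1)
    ix = np.clip(np.digitize(xy[:, 0], edges) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(xy[:, 1], edges) - 1, 0, bins - 1)
    counts = np.zeros((bins, bins))
    sums = np.zeros((bins, bins, 2))
    np.add.at(counts, (ix, iy), 1.0)  # unbuffered accumulation per bin
    np.add.at(sums, (ix, iy), uv)
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / counts[..., None]  # NaN where a bin has no transitions
```

Per-condition maps computed this way are smoothed, subtracted (e.g., pain minus rest), visualized as streamplots, and compared to the fcANN-generated difference via Pearson correlation.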
- To obtain fcANN-simulated state transitions in resting conditions, we used the stochastic relaxation procedure ({numref}`hopfield-update-matrix-stochastic`), with $\mathbf{\mu}$ set zero.
+ To obtain fcANN-simulated state transitions in resting conditions, we used the stochastic relaxation procedure (Eq. [](#langevin-update)), with $\boldsymbol{\mu}$ set to zero.
To simulate the effect of pain-related activation on large-scale brain dynamics, we set $\mu_i$ during the stochastic relaxation procedure to a value representing pain-elicited activity in region $i$. The region-wise activations were obtained by calculating the parcel-level mean activations from the meta-analytic pain activation map from {cite:p}`zunhammer2021meta`, which contained Hedges' g effect sizes from an individual participant-level meta-analysis of 20 pain studies, encompassing a total of n=603 participants. The whole activation map was scaled with five different values ranging from $10^{-3}$ to $10^{-1}$, spaced logarithmically, to investigate various signal-to-noise scenarios.
We obtained the activity patterns of $10^5$ iterations from this stochastic relaxation procedure and calculated the state transition trajectories with the same approach used with the empirical data.
Next, we calculated the fcANN-generated difference between the rest and pain conditions and compared it to the actual difference through a permutation test with 1,000 permutations, randomly swapping the conditions within each participant in the real data and using Pearson's correlation coefficient between the real (permuted) and fcANN-generated flow maps as the test statistic.

docs/03-supplement.md

Lines changed: 2 additions & 6 deletions
@@ -331,15 +331,11 @@ Therefore, **equilibrium (detailed balance) is possible only when the coupling i
>
> For Continuous–Bernoulli on $[-1,1]$, $h(x) = 1/2$ on the support ⇒ $\log h(x) - \log h(\sigma_i) = 0$, so:
>
- > $$
- > \Phi(\sigma^{(i,x)}) - \Phi(\sigma) = \kappa(x - \sigma_i)
- > $$
+ > $$\Phi(\sigma^{(i,x)}) - \Phi(\sigma) = \kappa(x - \sigma_i)$$
>
> Differentiating at $x=\sigma_i$ yields
>
- > $$
- > \partial_{\sigma_i}\Phi(\sigma) = \kappa_i(\sigma_{-i}) = b_i + \sum_{k\neq i} J_{ik} \sigma_k
- > $$
+ > $$ \partial_{\sigma_i}\Phi(\sigma) = \kappa_i(\sigma_{-i}) = b_i + \sum_{k\neq i} J_{ik} \sigma_k $$
### 4. Non-equilibrium Steady State (NESS)

docs/README.md

Lines changed: 3 additions & 5 deletions
@@ -1,14 +1,12 @@
- # Functional connectome-based Hopfield Neural Networks
+ # Functional connectivity-based Attractor Neural Networks

```{image} figures/concept.png
:alt: paper
:align: left
```

- The fcHNN framework offers a simple, interpretable computational alternative to conventional descriptive analyses of brain function.
- - [x] Map your activation patterns and contrasts on the fcHNN-projection and understand how they relate to brain attractors
- - [x] Analyze activity and connectivity in the same framework
- - [x] Predict brain dynamics and its alterations due to tasks, stimuli or brain disorders as a change in fcHNN attractor dynamics
+ Functional connectivity-based Attractor Neural Networks (fcANNs) offer a simple, interpretable computational alternative to conventional descriptive analyses of brain function.
+ In this theoretically inspired computational framework, large-scale brain dynamics are understood in relation to attractor states: neurobiologically meaningful activity configurations that minimize the free energy of the system.
### Read more
