
Commit a1784eb

update of Axon overview page with initial framing according to search principles: nicely motivates everything.
1 parent: 125fdec

8 files changed: +39 −37 lines changed


citedrefs.json

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.

content/axon.md

Lines changed: 18 additions & 14 deletions
Large diffs are not rendered by default.

content/constraint-satisfaction.md

Lines changed: 5 additions & 9 deletions
@@ -5,19 +5,15 @@ bibfile = "ccnlab.json"
 
 **Constraint satisfaction** is one of the most important concepts for understanding the power of [[bidirectional connectivity]] in the [[neocortex]], and is central to the promise of the [[Axon]] approach, which is one of the few neural network models to incorporate extensive bidirectional connectivity in a way that tames its wild side while retaining its many benefits.
 
-The constraint satisfaction problem (CSP) is defined as finding a set of values for N variables that satisfy a set of constraints defined over those variables ([[@Tsang14]]). A classic example is the N-queens problem, where you need to place N chess queens on a board such that no two queens should threaten each other. Another such problem is the _travelling salesman problem_ (TSP), analyzed by [[@^HopfieldTank85]] using a bidirectionally-connected Hopfield network, which involves finding a minimum distance route between _N_ cities.
+{id="figure_hopfield" style="height:30em"}
+![The bidirectionally-connected network from Hopfield & Tank (1985) solving the traveling salesman problem (TSP). Cities are represented as rows (A-J) and the position of each city in a path is represented by columns. The network's solution in panel **(d)** is the city order DHIFGEAJCB (i.e., city D has the first position active, H is next, etc.). The synaptic weights in the network encode how close each city is to the others, and there are inhibitory connections within the same column and row, so that each city is only represented once, and each position only has one city. An iterative _settling_ process updates the neuron activities (shown by the size of the square), across panels a-d. This process involves computing the gradients of each unit relative to the others, producing efficient search.](media/fig_hopfield_tank_85_tsp.png)
 
-Thus, the CSP is essentially a problem of [[search]]ing over all possible states to find the one(s) that best fit the set of constraints imposed. As the number of states increases, the number of possible states explodes exponentially due to the [[curse of dimensionality]],
-
-A bidirectionally-connected neural network can implement this search process in a highly efficient manner, by integrating all of the constraints in a single gradient-based computational step over _dedicated-parallel_ representations, effectively performing a _stochastic gradient descent_ process over possible solution states. This is essentially the same strategy used in [[error-backpropagation]] learning, to search over the high-dimensional space of possible representations, as explained in [[search]].
+The constraint satisfaction problem (CSP) is defined as finding a set of values for N variables that satisfy a set of constraints defined over those variables ([[@Tsang14]]). A classic example is the N-queens problem, where you need to place N chess queens on a board such that no two queens threaten each other. Another such problem is the _travelling salesman problem_ (TSP), analyzed by [[@^HopfieldTank85]] using a bidirectionally-connected Hopfield network, which involves finding a minimum distance route between _N_ cities ([[#figure_hopfield]]).
 
-<!--- TODO: old: -->
-
-In the context of the [[curse of dimensionality]] problem, bidirectional connectivity enables the network to rapidly converge through parallel processing on a set of representations that satisfy constraints communicated throughout the network, where each active neuron contributes its own constraint, and is in turn subject to the constraints from the other neurons. Given the _small world_ nature of neural connectivity, each neuron is effectively only a few synapses away from any other neuron, so effectively any constraint anywhere can be felt anywhere else in the network, in principle.
+Thus, the CSP is essentially a problem of [[search]]ing over all possible states to find the one(s) that best fit the set of constraints imposed. As the number of variables increases, the number of possible states explodes exponentially due to the [[curse of dimensionality]].
 
-This parallel, distributed constraint satisfaction process is effectively performing a massive search over the entire space of possible ways of representing the current combination of external and internal constraints, in a way that would otherwise be impossible in any kind of more serial process, due to the combinatorial explosion of possible such representations.
+A bidirectionally-connected neural network can implement this search process in a highly efficient manner, by integrating all of the constraints in a single gradient-based computational step over _dedicated-parallel_ representations, effectively performing a _stochastic gradient descent_ process over possible solution states. This is essentially the same strategy used in [[error-backpropagation]] learning to search over the high-dimensional space of possible representations, as explained in [[search]]. Mathematically, it is effectively a process of [[error backpropagation#backpropagation to activations]].
 
 Purely feedforward networks do not adapt their representations dynamically as they process the current set of inputs, and instead just generate a representation in one sweep, based on the learned weights. Thus, they are not optimizing these representations to find the most _satisfying_ way of interpreting the current situation. By contrast, the iterative back-and-forth interactions among bidirectionally-connected neurons end up optimizing the active representation, which then provides the basis for subsequent learning.
 
-Mathematically, it is effectively a process of [[error backpropagation#backpropagation to activations]], where the error gradient that is otherwise used to drive learning is actually used to drive the activation state of the network, and then this activation state, via the [[temporal derivative]] learning principle, drives learning according to this gradient.
 
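To make the settling process described in the figure caption and CSP paragraph above concrete, here is a minimal Go sketch of Hopfield & Tank-style settling on a tiny TSP. This is an illustration under stated assumptions, not the Axon implementation or the exact 1985 equations: the penalty strengths (pa, pb, pc, pd), sigmoid gain, step size, random seed, and deterministic gradient descent are simplified, untuned choices, meant only to show every unit updating in parallel down a shared energy gradient.

// Hypothetical Hopfield & Tank (1985)-style settling on a tiny TSP.
// v[city][pos] is the activity of the unit coding "city is at pos".
// Each unit descends the gradient of a shared energy that penalizes
// constraint violations (row, column, total count) plus tour length.
// Parameters are untuned assumptions: the sketch illustrates the
// parallel dynamics rather than guaranteeing a valid tour.
package main

import (
	"fmt"
	"math"
	"math/rand"
)

const n = 5 // number of cities = number of tour positions

// penalty strengths and step size: assumed values, not from the paper
const pa, pb, pc, pd, dt = 500.0, 500.0, 200.0, 50.0, 1e-6

func main() {
	rng := rand.New(rand.NewSource(1))
	var xs, ys [n]float64
	for i := 0; i < n; i++ {
		xs[i], ys[i] = rng.Float64(), rng.Float64()
	}
	var dist [n][n]float64 // the "weights": how close each city is to the others
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			dist[i][j] = math.Hypot(xs[i]-xs[j], ys[i]-ys[j])
		}
	}

	var u, v [n][n]float64 // net input and sigmoidal activity per unit
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			u[i][j] = 0.02*rng.Float64() - 0.01 // small random start
		}
	}

	for step := 0; step < 100000; step++ {
		total := 0.0
		for c := 0; c < n; c++ {
			for p := 0; p < n; p++ {
				v[c][p] = 0.5 * (1 + math.Tanh(50*u[c][p]))
				total += v[c][p]
			}
		}
		for c := 0; c < n; c++ {
			for p := 0; p < n; p++ {
				g := 0.0
				for q := 0; q < n; q++ { // inhibition within a row:
					if q != p { // a city gets only one position
						g += pa * v[c][q]
					}
				}
				for d := 0; d < n; d++ { // inhibition within a column:
					if d != c { // a position gets only one city
						g += pb * v[d][p]
					}
				}
				g += pc * (total - n) // exactly n units should be active
				for d := 0; d < n; d++ { // distance to tour neighbors
					g += pd * dist[c][d] * (v[d][(p+1)%n] + v[d][(p+n-1)%n])
				}
				u[c][p] += dt * (-u[c][p] - g) // parallel gradient step
			}
		}
	}
	for p := 0; p < n; p++ { // read out the most active city per position
		best := 0
		for c := 1; c < n; c++ {
			if v[c][p] > v[best][p] {
				best = c
			}
		}
		fmt.Printf("position %d: city %c\n", p, 'A'+best)
	}
}

The key property, per the added paragraphs above, is that all n×n units compute their gradients simultaneously from the current activity state, so a constraint anywhere in the network is felt everywhere on every step.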

media/fig_hopfield_tank_85_tsp.png — 41.7 KB (binary image file added; not rendered)
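The "backpropagation to activations" link in the added text above can be sketched the same way: take the gradient that error backpropagation would normally apply to the weights and apply it to the activation state instead. The toy network below (two hidden units, fixed weights, squared error, all values illustrative assumptions, not the Axon equations) settles its hidden activities toward a state that satisfies the output target.

// Sketch of backpropagation to activations: the error gradient drives
// the activity state rather than the weights. Sizes and values are
// assumptions for illustration only.
package main

import (
	"fmt"
	"math"
)

func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

func main() {
	w := [2]float64{1.5, -2.0} // fixed hidden-to-output weights
	target := 1.0              // top-down constraint on the output
	h := [2]float64{0.2, 0.8}  // initial hidden state (e.g., a feedforward guess)
	lr := 0.5

	for step := 0; step < 50; step++ {
		y := sigmoid(w[0]*h[0] + w[1]*h[1])
		dnet := (y - target) * y * (1 - y) // dLoss/dnet for squared error
		for i := range h {
			// dLoss/dh[i] = dnet * w[i]: update the ACTIVATION, not w[i]
			h[i] -= lr * dnet * w[i]
			h[i] = math.Max(0, math.Min(1, h[i])) // keep activity in [0,1]
		}
	}
	fmt.Printf("settled hidden state %.2f -> output %.2f\n",
		h, sigmoid(w[0]*h[0]+w[1]*h[1]))
}

A purely feedforward pass would compute y once from the initial h and stop; it is the iterative loop over the activation gradient that lets the representation itself adapt, which is the contrast drawn in the "Purely feedforward networks" paragraph of the diff.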

content/references.md

Lines changed: 2 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -1044,6 +1044,8 @@
10441044

10451045
<p id="ParnaudeauBolkanKellendonk18">Parnaudeau, S., Bolkan, S.S., & Kellendonk, C. (2018). The mediodorsal thalamus: An essential partner of the prefrontal cortex for cognition. <i>Biological Psychiatry, 83</i>, 648-656. <a href="http://www.sciencedirect.com/science/article/pii/S0006322317321935">http://www.sciencedirect.com/science/article/pii/S0006322317321935</a><a href="http://doi.org/10.1016/j.biopsych.2017.11.008"> http://doi.org/10.1016/j.biopsych.2017.11.008</a></p>
10461046

1047+
<p id="Pashler94">Pashler, H. (1994). Dual-task interference in simple tasks: data and theory. <i>Psychological bulletin, 116</i>, 220-244. <a href="http://www.ncbi.nlm.nih.gov/pubmed/7972591">http://www.ncbi.nlm.nih.gov/pubmed/7972591</a></p>
1048+
10471049
<p id="PateriaSubagdjaTanEtAl21">Pateria, S., Subagdja, B., Tan, A., & Quek, C. (2021). Hierarchical Reinforcement Learning: A Comprehensive Survey. <i>ACM Comput. Surv., 54</i>, 109:1–109:35. <a href="https://dl.acm.org/doi/10.1145/3453160">https://dl.acm.org/doi/10.1145/3453160</a><a href="http://doi.org/10.1145/3453160"> http://doi.org/10.1145/3453160</a></p>
10481050

10491051
<p id="PauliHazyOReilly12">Pauli, W.M., Hazy, T.E., & O'Reilly, R.C. (2012). Expectancy, ambiguity, and behavioral flexibility: separable and complementary roles of the orbital frontal cortex and amygdala in processing reward expectancies. <i>Journal of Cognitive Neuroscience, 24</i>, 351-366. <a href="http://www.ncbi.nlm.nih.gov/pubmed/22004047">http://www.ncbi.nlm.nih.gov/pubmed/22004047</a></p>

go.mod

Lines changed: 4 additions & 4 deletions
@@ -3,11 +3,11 @@ module github.com/compcogneuro/web
 go 1.23.4
 
 require (
-	cogentcore.org/core v0.3.13-0.20251014114320-b1d9c0ba7526
-	cogentcore.org/lab v0.1.3-0.20251009131026-b81fa706d621
+	cogentcore.org/core v0.3.13-0.20251104151908-1427c267da44
+	cogentcore.org/lab v0.1.3-0.20251014144642-a12de9e660c7
 	github.com/cogentcore/yaegi v0.0.0-20250622201820-b7838bdd95eb
-	github.com/emer/axon/v2 v2.0.0-dev0.2.58.0.20251009132128-45f2eea74684
-	github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250917165214-89adea4c1b2c
+	github.com/emer/axon/v2 v2.0.0-dev0.2.73
+	github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20251017224053-7004cc176576
 )
 
 require (

go.sum

Lines changed: 8 additions & 8 deletions
@@ -1,7 +1,7 @@
-cogentcore.org/core v0.3.13-0.20251014114320-b1d9c0ba7526 h1:di/mlkmv3RgLw1uXjLMdqDoMxyc7vKcCB1iagm05LTk=
-cogentcore.org/core v0.3.13-0.20251014114320-b1d9c0ba7526/go.mod h1:eDHnTCy1sBhAKN9NPsSCnBW3VAnwQBNA9nbGMo9r+Xs=
-cogentcore.org/lab v0.1.3-0.20251009131026-b81fa706d621 h1:0Itxb8CjZQ02jCMpMRinWfOWKYOHJoJ3MMbqL8o534o=
-cogentcore.org/lab v0.1.3-0.20251009131026-b81fa706d621/go.mod h1:rDUbYdRbrWdyVWeXJgMKaqJ71gyGtIiKQ71Iv7V72Og=
+cogentcore.org/core v0.3.13-0.20251104151908-1427c267da44 h1:jHo4ddxpACmAjveDCe9Q/csxdzbN4v5TO1hb3sPORJk=
+cogentcore.org/core v0.3.13-0.20251104151908-1427c267da44/go.mod h1:eDHnTCy1sBhAKN9NPsSCnBW3VAnwQBNA9nbGMo9r+Xs=
+cogentcore.org/lab v0.1.3-0.20251014144642-a12de9e660c7 h1:il1qQnBsNNZEEuxtCgvbczAbuefo7vriDtL3DQRsHkQ=
+cogentcore.org/lab v0.1.3-0.20251014144642-a12de9e660c7/go.mod h1:0fJ6n1CfFj1ijpUq6zCx9UsQpx5lBMkDsVEmnCy238A=
 github.com/Bios-Marcel/wastebasket/v2 v2.0.3 h1:TkoDPcSqluhLGE+EssHu7UGmLgUEkWg7kNyHyyJ3Q9g=
 github.com/Bios-Marcel/wastebasket/v2 v2.0.3/go.mod h1:769oPCv6eH7ugl90DYIsWwjZh4hgNmMS3Zuhe1bH6KU=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
@@ -42,10 +42,10 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
 github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
-github.com/emer/axon/v2 v2.0.0-dev0.2.58.0.20251009132128-45f2eea74684 h1:S8kKBXD+PVI3xXWHHG2Xp5hKomw43qH1ZWZ8PXWpL28=
-github.com/emer/axon/v2 v2.0.0-dev0.2.58.0.20251009132128-45f2eea74684/go.mod h1:KgzxTXD4WecOf1ODiCqRRK4xdffe3F94e5+M4GFrc5A=
-github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250917165214-89adea4c1b2c h1:p381v/q/s0OtlRuV54b8BD8JULzrU3cIJz8InHuINwQ=
-github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250917165214-89adea4c1b2c/go.mod h1:YKMUU2BIT2qt8+IFjf5agjyKswPaJP2AvNSTKn6txlc=
+github.com/emer/axon/v2 v2.0.0-dev0.2.73 h1:BR8v0uHCWNzSAd9X8m5jcVId4Nj2bzZpU5zNZNdzCug=
+github.com/emer/axon/v2 v2.0.0-dev0.2.73/go.mod h1:pLtkvHIr9pTTXEPZmLsHumuflX2DZEA0SwziCCr+qzo=
+github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20251017224053-7004cc176576 h1:KYoWtwk8ReiC/fLCXD9OHLjH5eT3+QrhhztP6zSS9dA=
+github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20251017224053-7004cc176576/go.mod h1:fF4MgcBMmTgQ+in6H0TxKHnw3uoF/Jk+LFAxEJalYio=
 github.com/ericchiang/css v1.3.0 h1:e0vS+vpujMjtT3/SYu7qTHn1LVzXWcLCCDjlfq3YlLY=
 github.com/ericchiang/css v1.3.0/go.mod h1:sVSdL+MFR9Q4cKJMQzpIkHIDOLiK+7Wmjjhq7D+MubA=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=

sims/stability/stability.go

Lines changed: 1 addition & 1 deletion
@@ -279,7 +279,7 @@ func (ss *Sim) ConfigLoops() {
 		AddLevelIncr(Trial, ss.Config.Run.Trials, 1).
 		AddLevel(Cycle, cycles)
 
-	axon.LooperStandard(ls, ss.Net, ss.NetViewUpdater, cycles-plusPhase, cycles-1, Cycle, Trial, Train)
+	axon.LooperStandard(ls, ss.Net, ss.NetViewUpdater, cycles-plusPhase, Cycle, Trial, Train)
 
 	ls.Stacks[Test].OnInit.Add("Init", func() { ss.Init() })
 
