
Commit 9bc1b68

update documentation
1 parent df8053e commit 9bc1b68

4 files changed (+53, -46 lines)


docs/core_concept/brainpy_dynamical_system.ipynb

Lines changed: 46 additions & 12 deletions
@@ -21,7 +21,7 @@
  {
  "cell_type": "markdown",
  "source": [
- "BrainPy supports models in brain simulation and brain-inspired computing.\n",
+ "BrainPy supports modeling in brain simulation and brain-inspired computing.\n",
  "\n",
  "All these supports are based on one common concept: **Dynamical System** via ``brainpy.DynamicalSystem``.\n",
  "\n",
@@ -71,7 +71,26 @@
  {
  "cell_type": "markdown",
  "source": [
- "All models used in brain simulation and brain-inspired computing is ``DynamicalSystem``.\n",
+ "All models used in brain simulation and brain-inspired computing are instances of ``DynamicalSystem``.\n"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "```{note}\n",
+ "``DynamicalSystem`` is a subclass of ``BrainPyObject``. Therefore it supports the use of [object-oriented transformations](./brainpy_transform_concept.ipynb), as stated in the previous tutorial.\n",
+ "```"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
  "\n",
  "A ``DynamicalSystem`` defines the updating rule of the model at single time step.\n",
  "\n",
@@ -117,14 +136,20 @@
  "\n",
  "First, *all ``DynamicalSystem`` should implement ``.update()`` function*, which receives two arguments:\n",
  "\n",
+ "```python\n",
+ "class YourModel(bp.DynamicalSystem):\n",
+ "  def update(self, s, x):\n",
+ "    pass\n",
+ "```\n",
+ "\n",
  "- `s` (or named as others): A dict, to indicate shared arguments across all nodes/layers in the network, like\n",
  " - the current time ``t``, or\n",
  " - the current running index ``i``, or\n",
  " - the current time step ``dt``, or\n",
  " - the current phase of training or testing ``fit=True/False``.\n",
  "- `x` (or named as others): The individual input for this node/layer.\n",
  "\n",
- "We call `s` as shared arguments because they are shared and same for all nodes/layers at current time step. On the contrary, different nodes/layers have different input `x`."
+ "We call `s` the shared arguments because they are the same for all nodes/layers at the current time step. In contrast, different nodes/layers receive different inputs `x`."
  ],
  "metadata": {
  "collapsed": false
@@ -180,7 +205,7 @@
  " def update(self, s, x):\n",
  " # define how the model states update\n",
  " # according to the external input\n",
- " t, dt = s.get('t'), s.get('dt', bm.dt)\n",
+ " t, dt = s.get('t'), s.get('dt')\n",
  " V = self.integral(self.V, t, x, dt=dt)\n",
  " spike = V >= self.V_th\n",
  " self.V.value = bm.where(spike, self.V_rest, V)\n",
@@ -198,9 +223,9 @@
  "\n",
  "Second, **explicitly consider which computing mode your ``DynamicalSystem`` supports**.\n",
  "\n",
- "Brain simulation usually constructs models without batching dimension (we refer to it as *non-batching mode*, as seen in above LIF model), while brain-inspired computation trains models with a batch of data (*batching mode* or *training mode*).\n",
+ "Brain simulation usually builds models without a batching dimension (we refer to it as *non-batching mode*, as seen in the above LIF model), while brain-inspired computing trains models with a batch of data (*batching mode* or *training mode*).\n",
  "\n",
- "So, to write a model applicable to the abroad applications in brain simulation and brain-inspired computing, you need to consider which mode your model supports, one of them, or both of them."
+ "So, to write a model applicable to a broad range of applications in brain simulation and brain-inspired computing, you need to consider which mode(s) your model supports: one of them, or both."
  ],
  "metadata": {
  "collapsed": false
@@ -213,13 +238,13 @@
  "\n",
  "When considering the computing mode, we can program a general LIF model for brain simulation and brain-inspired computing.\n",
  "\n",
- "To overcome the non-differential property of the spike in the LIF model for brain simulation, for the code\n",
+ "To overcome the non-differentiability of the spike in the LIF model used for brain simulation, i.e., the line\n",
  "\n",
  "```python\n",
  "spike = V >= self.V_th\n",
  "```\n",
  "\n",
- "LIF models used in brain-inspired computing calculate the spiking state using the surrogate gradient function, i.e., replacing the backward gradient with a smooth function, like\n",
+ "LIF models used in brain-inspired computing calculate the spiking state using a surrogate gradient function. That is, we replace the backward gradient of the spike with a smooth function, such as\n",
  "\n",
  "$$\n",
  "g'(x) = \\frac{1}{(\\alpha * |x| + 1.) ^ 2}\n",
@@ -289,6 +314,15 @@
  "collapsed": false
  }
  },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The following code snippet utilizes the LIF model to build an E/I balanced network ``EINet``, which is a classical network model in brain simulation."
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
  {
  "cell_type": "code",
  "execution_count": 21,
@@ -327,7 +361,7 @@
  {
  "cell_type": "markdown",
  "source": [
- "Here the ``EINet`` defines an E/I balanced network which is a classical network model in brain simulation. The following ``AINet`` utilizes the LIF model to construct a model for AI training."
+ "Moreover, our LIF model can also be used in brain-inspired computing scenarios. The following ``AINet`` uses the LIF model to construct a model for AI training."
  ],
  "metadata": {
  "collapsed": false
@@ -389,7 +423,7 @@
  "source": [
  "### 1. ``brainpy.math.for_loop``\n",
  "\n",
- "``for_loop`` is a structural control flow API which runs a function with the looping over the inputs.\n",
+ "``for_loop`` is a structural control-flow API that runs a function by looping over the inputs. Moreover, this API just-in-time compiles the looping process into machine code.\n",
  "\n",
  "Suppose we have 200 time steps with the step size of 0.1, we can run the model with:"
  "],
@@ -430,9 +464,9 @@
  {
  "cell_type": "markdown",
  "source": [
- "### 2. ``brainpy.DSRunner`` and ``brainpy.DSTrainer``\n",
+ "### 2. ``brainpy.DSRunner``\n",
  "\n",
- "Another way to run the model in BrainPy is using the structural running object ``DSRunner`` and ``DSTrainer``. They provide more flexible way to monitoring the variables in a ``DynamicalSystem``.\n"
+ "Another way to run the model in BrainPy is to use the structural running objects ``DSRunner`` and ``DSTrainer``. They provide a more flexible way to monitor the variables in a ``DynamicalSystem``. For details, please refer to the [DSRunner tutorial](../tutorial_simulation/simulation_dsrunner.ipynb).\n"
  ],
  "metadata": {
  "collapsed": false

docs/core_concept/brainpy_transform_concept.ipynb

Lines changed: 0 additions & 25 deletions
@@ -556,31 +556,6 @@
  "![](./imgs/grad_with_loss.png)"
  ]
  },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To inspect what OO transformations currently BrainPy supports, you can use"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": "['grad',\n 'vector_grad',\n 'jacobian',\n 'jacrev',\n 'jacfwd',\n 'hessian',\n 'make_loop',\n 'make_while',\n 'make_cond',\n 'cond',\n 'ifelse',\n 'for_loop',\n 'while_loop',\n 'jit',\n 'to_object']"
- },
- "execution_count": 19,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "bm.object_transform.__all__"
- ]
- },
  {
  "cell_type": "markdown",
  "metadata": {},

docs/quickstart/analysis.ipynb

Lines changed: 2 additions & 2 deletions
@@ -339,7 +339,7 @@
  "execution_count": 8,
  "outputs": [],
  "source": [
- "class GJCoupledFHN(bp.dyn.DynamicalSystem):\n",
+ "class GJCoupledFHN(bp.DynamicalSystem):\n",
  " def __init__(self, num=4, method='exp_auto'):\n",
  " super(GJCoupledFHN, self).__init__()\n",
  "\n",
@@ -421,7 +421,7 @@
  "\n",
  "# simulation with an input\n",
  "Iext = bm.asarray([0., 0., 0., 0.6])\n",
- "runner = bp.dyn.DSRunner(model, monitors=['V'], inputs=['Iext', Iext])\n",
+ "runner = bp.DSRunner(model, monitors=['V'], inputs=['Iext', Iext])\n",
  "runner.run(300.)\n",
  "\n",
  "# visualization\n",

docs/quickstart/training.ipynb

Lines changed: 5 additions & 7 deletions
@@ -983,20 +983,18 @@
  " self.num_out = num_out\n",
  "\n",
  " # neuron groups\n",
- " self.i = bp.neurons.InputGroup(num_in, mode=bp.modes.training)\n",
- " self.r = bp.neurons.LIF(num_rec, tau=10, V_reset=0, V_rest=0, V_th=1., mode=bp.modes.training)\n",
- " self.o = bp.neurons.LeakyIntegrator(num_out, tau=5, mode=bp.modes.training)\n",
+ " self.i = bp.neurons.InputGroup(num_in)\n",
+ " self.r = bp.neurons.LIF(num_rec, tau=10, V_reset=0, V_rest=0, V_th=1.)\n",
+ " self.o = bp.neurons.LeakyIntegrator(num_out, tau=5)\n",
  "\n",
  " # synapse: i->r\n",
  " self.i2r = bp.synapses.Exponential(self.i, self.r, bp.conn.All2All(), tau=10.,\n",
  " output=bp.synouts.CUBA(target_var=None),\n",
- " g_max=bp.init.KaimingNormal(scale=20.),\n",
- " mode=bp.modes.training)\n",
+ " g_max=bp.init.KaimingNormal(scale=20.))\n",
  " # synapse: r->o\n",
  " self.r2o = bp.synapses.Exponential(self.r, self.o, bp.conn.All2All(), tau=10.,\n",
  " output=bp.synouts.CUBA(target_var=None),\n",
- " g_max=bp.init.KaimingNormal(scale=20.),\n",
- " mode=bp.modes.training)\n",
+ " g_max=bp.init.KaimingNormal(scale=20.))\n",
  "\n",
  " # whole model\n",
  " self.model = bp.Sequential(self.i, self.i2r, self.r, self.r2o, self.o)\n",
