|
21 | 21 | { |
22 | 22 | "cell_type": "markdown", |
23 | 23 | "source": [ |
24 | | - "BrainPy supports models in brain simulation and brain-inspired computing.\n", |
| 24 | + "BrainPy supports modeling in brain simulation and brain-inspired computing.\n", |
25 | 25 | "\n", |
26 | 26 | "All these features are based on one common concept: **Dynamical System** via ``brainpy.DynamicalSystem``.\n", |
27 | 27 | "\n", |
|
71 | 71 | { |
72 | 72 | "cell_type": "markdown", |
73 | 73 | "source": [ |
74 | | - "All models used in brain simulation and brain-inspired computing is ``DynamicalSystem``.\n", |
| 74 | + "All models used in brain simulation and brain-inspired computing are ``DynamicalSystem``.\n" |
| 75 | + ], |
| 76 | + "metadata": { |
| 77 | + "collapsed": false |
| 78 | + } |
| 79 | + }, |
| 80 | + { |
| 81 | + "cell_type": "markdown", |
| 82 | + "source": [ |
| 83 | + "```{note}\n", |
| 84 | + "``DynamicalSystem`` is a subclass of ``BrainPyObject``. Therefore it supports the use of [object-oriented transformations](./brainpy_transform_concept.ipynb) as stated in the previous tutorial.\n", |
| 85 | + "```" |
| 86 | + ], |
| 87 | + "metadata": { |
| 88 | + "collapsed": false |
| 89 | + } |
| 90 | + }, |
| 91 | + { |
| 92 | + "cell_type": "markdown", |
| 93 | + "source": [ |
75 | 94 | "\n", |
76 | 95 | "A ``DynamicalSystem`` defines the updating rule of the model at a single time step.\n", |
77 | 96 | "\n", |
|
117 | 136 | "\n", |
118 | 137 | "First, *every ``DynamicalSystem`` should implement an ``.update()`` function*, which receives two arguments:\n", |
119 | 138 | "\n", |
| 139 | + "```\n", |
| 140 | + "class YourModel(bp.DynamicalSystem):\n", |
| 141 | + " def update(self, s, x):\n", |
| 142 | + " pass\n", |
| 143 | + "```\n", |
| 144 | + "\n", |
120 | 145 | "- `s` (or any other name): A dict of shared arguments across all nodes/layers in the network, such as\n", |
121 | 146 | " - the current time ``t``, or\n", |
122 | 147 | " - the current running index ``i``, or\n", |
123 | 148 | " - the current time step ``dt``, or\n", |
124 | 149 | " - the current phase of training or testing ``fit=True/False``.\n", |
125 | 150 | "- `x` (or any other name): The individual input for this node/layer.\n", |
126 | 151 | "\n", |
127 | | - "We call `s` as shared arguments because they are shared and same for all nodes/layers at current time step. On the contrary, different nodes/layers have different input `x`." |
| 152 | + "We call `s` the shared arguments because they are the same for all nodes/layers at the current time step. In contrast, different nodes/layers receive different inputs `x`." |
128 | 153 | ], |
129 | 154 | "metadata": { |
130 | 155 | "collapsed": false |
|
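To make the shared-argument convention concrete, here is a framework-free sketch in plain Python (the class names `Scale` and `Shift` are hypothetical stand-ins for `DynamicalSystem` subclasses, used for illustration only): every node receives the *same* shared dict `s`, but its *own* input `x`.

```python
# Plain-Python sketch of the shared-arguments convention (no BrainPy needed).
# Every node receives the SAME shared dict `s`, but its OWN input `x`.

class Scale:
    def __init__(self, factor):
        self.factor = factor

    def update(self, s, x):
        # `s` carries values shared by all nodes at this time step, e.g. `dt`.
        dt = s['dt']
        return x * self.factor * dt


class Shift:
    def __init__(self, offset):
        self.offset = offset

    def update(self, s, x):
        return x + self.offset


def run_step(nodes, shared, inputs):
    """Call every node with the same shared dict but an individual input."""
    return [node.update(shared, x) for node, x in zip(nodes, inputs)]


nodes = [Scale(2.0), Shift(1.0)]
shared = {'t': 0.0, 'i': 0, 'dt': 0.1, 'fit': False}
outs = run_step(nodes, shared, inputs=[3.0, 3.0])
```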
180 | 205 | " def update(self, s, x):\n", |
181 | 206 | " # define how the model states update\n", |
182 | 207 | " # according to the external input\n", |
183 | | - " t, dt = s.get('t'), s.get('dt', bm.dt)\n", |
| 208 | + " t, dt = s.get('t'), s.get('dt')\n", |
184 | 209 | " V = self.integral(self.V, t, x, dt=dt)\n", |
185 | 210 | " spike = V >= self.V_th\n", |
186 | 211 | " self.V.value = bm.where(spike, self.V_rest, V)\n", |
|
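The update rule above can be mimicked in plain Python without BrainPy's integrators. The sketch below uses an explicit Euler step and hypothetical parameter values (`tau=10.0`, `V_th=1.0`, etc.); BrainPy's own LIF relies on `self.integral` built from an ODE integrator, so this is for intuition only.

```python
# Plain-Python sketch of one LIF update step (assumed parameter values;
# BrainPy uses its own ODE integrators instead of this hand-written Euler step).

def lif_step(V, x, dt=0.1, tau=10.0, V_rest=0.0, V_th=1.0):
    """Advance the membrane potential one time step, then apply spike/reset."""
    # Euler integration of  dV/dt = (-(V - V_rest) + x) / tau
    V = V + dt * (-(V - V_rest) + x) / tau
    spike = V >= V_th            # same thresholding rule as in the tutorial
    V = V_rest if spike else V   # reset to the resting potential after a spike
    return V, spike


# Drive the neuron with a constant input until it spikes.
V, spiked = 0.0, False
for _ in range(100):
    V, spiked = lif_step(V, x=2.0)
    if spiked:
        break
```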
198 | 223 | "\n", |
199 | 224 | "Second, **explicitly consider which computing mode your ``DynamicalSystem`` supports**.\n", |
200 | 225 | "\n", |
201 | | - "Brain simulation usually constructs models without batching dimension (we refer to it as *non-batching mode*, as seen in above LIF model), while brain-inspired computation trains models with a batch of data (*batching mode* or *training mode*).\n", |
| 226 | + "Brain simulation usually builds models without a batching dimension (we refer to this as *non-batching mode*, as in the above LIF model), while brain-inspired computing trains models with a batch of data (*batching mode* or *training mode*).\n", |
202 | 227 | "\n", |
203 | | - "So, to write a model applicable to the abroad applications in brain simulation and brain-inspired computing, you need to consider which mode your model supports, one of them, or both of them." |
| 228 | + "So, to write a model applicable to the broad range of applications in brain simulation and brain-inspired computing, you need to consider which mode your model supports: one of them, or both." |
204 | 229 | ], |
205 | 230 | "metadata": { |
206 | 231 | "collapsed": false |
|
213 | 238 | "\n", |
214 | 239 | "By taking the computing mode into account, we can program a general LIF model for both brain simulation and brain-inspired computing.\n", |
215 | 240 | "\n", |
216 | | - "To overcome the non-differential property of the spike in the LIF model for brain simulation, for the code\n", |
| 241 | + "To overcome the non-differentiable property of the spike in the LIF model for brain simulation, i.e., the line\n", |
217 | 242 | "\n", |
218 | 243 | "```python\n", |
219 | 244 | "spike = V >= self.V_th\n", |
220 | 245 | "```\n", |
221 | 246 | "\n", |
222 | | - "LIF models used in brain-inspired computing calculate the spiking state using the surrogate gradient function, i.e., replacing the backward gradient with a smooth function, like\n", |
| 247 | + "LIF models used in brain-inspired computing calculate the spiking state using a surrogate gradient function. Usually, we replace the backward gradient of the spike with a smooth function, such as\n", |
223 | 248 | "\n", |
224 | 249 | "$$\n", |
225 | 250 | "g'(x) = \frac{1}{(\alpha |x| + 1)^2}\n", |
|
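The idea behind this surrogate can be sketched in plain Python: the forward pass keeps the hard Heaviside spike, while the backward pass would use the smooth gradient $g'(x) = 1/(\alpha|x|+1)^2$ evaluated at the distance from threshold. The value `alpha=1.0` here is an assumed example; BrainPy ships its own ready-made surrogate gradient functions, so this is illustration only.

```python
# Sketch of the surrogate-gradient idea (alpha=1.0 is an assumed example value).

def spike_forward(v, v_th=1.0):
    """Forward pass: the non-differentiable Heaviside spike."""
    return 1.0 if v >= v_th else 0.0


def spike_surrogate_grad(v, v_th=1.0, alpha=1.0):
    """Backward pass: the smooth replacement g'(x) = 1 / (alpha*|x| + 1)^2,
    evaluated at the distance from threshold, x = v - v_th."""
    x = v - v_th
    return 1.0 / (alpha * abs(x) + 1.0) ** 2
```

The surrogate gradient peaks at the threshold (`g'(0) = 1`) and decays smoothly on either side, which is what lets gradient-based training propagate a learning signal through the spike.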
289 | 314 | "collapsed": false |
290 | 315 | } |
291 | 316 | }, |
| 317 | + { |
| 318 | + "cell_type": "markdown", |
| 319 | + "source": [ |
| 320 | + "The following code snippet utilizes the LIF model to build an E/I balanced network ``EINet``, which is a classical network model in brain simulation." |
| 321 | + ], |
| 322 | + "metadata": { |
| 323 | + "collapsed": false |
| 324 | + } |
| 325 | + }, |
292 | 326 | { |
293 | 327 | "cell_type": "code", |
294 | 328 | "execution_count": 21, |
|
327 | 361 | { |
328 | 362 | "cell_type": "markdown", |
329 | 363 | "source": [ |
330 | | - "Here the ``EINet`` defines an E/I balanced network which is a classical network model in brain simulation. The following ``AINet`` utilizes the LIF model to construct a model for AI training." |
| 364 | + "Moreover, our LIF model can also be used in brain-inspired computing scenarios. The following ``AINet`` uses the LIF model to construct a model for AI training." |
331 | 365 | ], |
332 | 366 | "metadata": { |
333 | 367 | "collapsed": false |
|
389 | 423 | "source": [ |
390 | 424 | "### 1. ``brainpy.math.for_loop``\n", |
391 | 425 | "\n", |
392 | | - "``for_loop`` is a structural control flow API which runs a function with the looping over the inputs.\n", |
| 426 | + "``for_loop`` is a structural control-flow API that runs a function by looping over the inputs. Moreover, this API just-in-time compiles the looping process into machine code.\n", |
393 | 427 | "\n", |
394 | 428 | "Suppose we have 200 time steps with a step size of 0.1; we can run the model with:" |
395 | 429 | ], |
|
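The semantics of such a loop (minus the JIT compilation) can be written down in a few lines of plain Python. The function name `for_loop_reference` below is illustrative, not BrainPy's actual implementation: it simply applies a step function to each element of the inputs and collects the outputs.

```python
# Framework-free reference for what a `for_loop`-style API computes:
# apply a step function to each input in sequence, collecting the outputs.
# (BrainPy's real `for_loop` additionally JIT-compiles the loop.)

def for_loop_reference(step_fn, xs):
    """Call step_fn on each element of xs and stack the results in order."""
    return [step_fn(x) for x in xs]


# e.g. 200 time steps with a step size of 0.1, as in the tutorial
dt = 0.1
times = [i * dt for i in range(200)]
outs = for_loop_reference(lambda t: t * 2.0, times)
```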
430 | 464 | { |
431 | 465 | "cell_type": "markdown", |
432 | 466 | "source": [ |
433 | | - "### 2. ``brainpy.DSRunner`` and ``brainpy.DSTrainer``\n", |
| 467 | + "### 2. ``brainpy.DSRunner``\n", |
434 | 468 | "\n", |
435 | | - "Another way to run the model in BrainPy is using the structural running object ``DSRunner`` and ``DSTrainer``. They provide more flexible way to monitoring the variables in a ``DynamicalSystem``.\n" |
| 469 | + "Another way to run the model in BrainPy is to use the structural running objects ``DSRunner`` and ``DSTrainer``. They provide a more flexible way to monitor the variables in a ``DynamicalSystem``. For details, users should refer to the [DSRunner tutorial](../tutorial_simulation/simulation_dsrunner.ipynb).\n" |
436 | 470 | ], |
437 | 471 | "metadata": { |
438 | 472 | "collapsed": false |
|