
Commit f52c86a

Author: Alexander Ororbia
Commit message: revised metrics/plotting neurocog docs
Parent: 13cfbd4

File tree: 2 files changed (+14, -52 lines)


docs/tutorials/neurocog/metrics.md

Lines changed: 9 additions & 33 deletions
@@ -1,26 +1,11 @@
 # Metrics and Measurement Functions
 
-Inside of `ngclearn.utils.metric_utils`, ngc-learn offers metrics and measurement
-utility functions that can be quite useful when building neurocognitive models using
-ngc-learn's node-and-cables system for specific tasks. While this utilities
-sub-module will not always contain every possible function you might need,
-given that measurements are often dependent on the task the experimenter wants
-to conduct, there are several commonly-used ones drawn from machine intelligence
-and computational neuroscience that are (jit-i-fied) in-built to ngc-learn you
-can readily use.
-In this small lesson, we will briefly examine two examples of importing such
-functions and examine what they do.
+Inside of `ngclearn.utils.metric_utils`, ngc-learn offers metrics and measurement utility functions that can be quite useful when building neurocognitive models using ngc-learn's node-and-cables system for specific tasks. While this utilities sub-module will not always contain every possible function you might need, given that measurements often depend on the task the experimenter wants to conduct, there are several commonly-used (jit-i-fied) functions drawn from machine intelligence and computational neuroscience built into ngc-learn that you can readily use.
+In this small lesson, we will briefly walk through importing two such functions and examine what they do.
 
 ## Measuring Task-Level Quantities
 
-For many tasks that you might be interested in, a useful measurement
-is the performance of the model in some supervised learning context. For example,
-you might want to measure a model's accuracy on a classification task. To do so,
-assuming we have some model outputs extracted from a model that you have constructed
-elsewhere -- say a matrix of scores `Y_scores` -- and a target set of predictions
-that you are testing against -- such as `Y_labels` (in one-hot binary encoded form )
--- then you can write some code to compute the accuracy, mean squared error (MSE),
-and categorical log likelihood (Cat-NLL), like so:
+For many tasks that you might be interested in, a useful measurement is the performance of the model in some supervised learning context. For example, you might want to measure a model's accuracy on a classification task. Assuming you have some outputs extracted from a model constructed elsewhere -- say a matrix of scores `Y_scores` -- and a target set of predictions to test against -- such as `Y_labels` (in one-hot binary encoded form) -- you can write some code to compute the accuracy, mean squared error (MSE), and categorical negative log likelihood (Cat-NLL), like so:
 
 ```python
 from jax import numpy as jnp
@@ -55,24 +40,18 @@ and you should obtain the following in I/O like so:
 > Cat-NLL = 4.003
 ```
 
-Notice that we imported the utility function `softmax` from
-`ngclearn.utils.model_utils` to convert our raw theoretical model scores to
-probability values so that using `measure_CatNLL()` makes sense (as this
-assumes the model scores are normalized probability values).
+Notice that we imported the utility function `softmax` from `ngclearn.utils.model_utils` to convert our raw model scores to probability values so that using `measure_CatNLL()` makes sense (as this function assumes the model scores are normalized probability values).
 
 ## Measuring Some Model Statistics
 
-In some cases, you might be interested in measuring certain statistics
-related to aspects of a model that you construct. For example, you might have
-collected a (binary) spike train produced by one of the internal neuronal layers
-of your ngc-learn-simulated spiking neural network and want to compute the
-firing rates and Fano factors associated with each neuron. Doing so with
-ngc-learn utility functions would entail writing something like:
+In some cases, you might be interested in measuring certain statistics related to properties of a model that you construct. For example, you might have collected a (binary) spike train produced by one of the internal neuronal layers of your ngc-learn-simulated spiking neural network and want to compute the firing rates and Fano factors associated with each neuron. Doing so with ngc-learn utility functions would entail writing something like:
 
 ```python
 from jax import numpy as jnp
 from ngclearn.utils.metric_utils import measure_fanoFactor, measure_firingRate
 
+## let's create a synthetic spike train for 3 neurons (one per column)
 spikes = jnp.asarray([[0., 0., 0.],
                       [0., 0., 1.],
                       [0., 1., 0.],
@@ -92,6 +71,7 @@ spikes = jnp.asarray([[0., 0., 0.],
                       [0., 1., 0.],
                       [0., 0., 1.]], dtype=jnp.float32)
 
+## measure the firing rates and Fano factors of the 3 neurons
 fr = measure_firingRate(spikes, preserve_batch=True)
 fano = measure_fanoFactor(spikes, preserve_batch=True)
 
@@ -106,8 +86,4 @@ which should result in the following to be printed to I/O:
 > Fano Factor = [[0.8888888 0.77777773 0.55555546]]
 ```
 
-The Fano factor is a useful secondary statistic for characterizing the
-variable of a neuronal spike train -- as we see in the measurement above,
-the first and second neurons have a higher Fano factor (given they are
-more irregular in their spiking patterns) whereas the third neuron is far more
-regular in its spiking pattern and thus has a lower Fano factor.
+The Fano factor is a useful secondary statistic for characterizing the variability of a neuronal spike train -- as we see in the measurement above, the first and second neurons have a higher Fano factor (given they are more irregular in their spiking patterns) whereas the third neuron is far more regular in its spiking pattern and thus has a lower Fano factor.
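
The hunk above elides the body of the first metrics code block, so the following is a minimal, self-contained sketch of the task-level workflow the tutorial describes. `softmax` and `measure_CatNLL` appear in the tutorial text itself; `measure_ACC` and `measure_MSE` are assumed names for the accuracy and MSE helpers in `ngclearn.utils.metric_utils`, so verify them against the installed API before use.

```python
from jax import numpy as jnp
from ngclearn.utils.model_utils import softmax
## NOTE: measure_ACC / measure_MSE are assumed helper names; check them
## against your installed version of ngclearn.utils.metric_utils
from ngclearn.utils.metric_utils import measure_ACC, measure_MSE, measure_CatNLL

## hypothetical raw model scores for four samples over three classes
Y_scores = jnp.asarray([[2.0, 0.5, -1.0],
                        [0.1, 1.7, 0.3],
                        [-0.4, 0.2, 2.2],
                        [1.1, 1.0, 0.2]])
## matching target labels in one-hot binary encoded form
Y_labels = jnp.asarray([[1., 0., 0.],
                        [0., 1., 0.],
                        [0., 0., 1.],
                        [0., 1., 0.]])

Y_probs = softmax(Y_scores) ## convert raw scores to probability values
acc = measure_ACC(Y_probs, Y_labels) ## fraction of correct argmax predictions
mse = measure_MSE(Y_probs, Y_labels) ## mean squared error against one-hot targets
nll = measure_CatNLL(Y_probs, Y_labels) ## expects normalized probability values
print("Acc = {} MSE = {} Cat-NLL = {}".format(acc, mse, nll))
```

As a cross-check on the second example, the Fano factor is conventionally the variance of the spike counts divided by their mean, so `jnp.var(spikes, axis=0, keepdims=True) / jnp.mean(spikes, axis=0, keepdims=True)` should roughly reproduce the per-neuron values that `measure_fanoFactor` reports above (assuming it follows this standard definition).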

docs/tutorials/neurocog/plotting.md

Lines changed: 5 additions & 19 deletions
@@ -1,32 +1,20 @@
 # Plotting and Visualization
 
-While writing one's own custom task-specific matplotlib visualization code
-might be needed for specific experimental setups, there are several useful tools
-already in-built to ngc-learn, organized under the package sub-directory
-`ngclearn.utils.viz`, including utilities for generating raster plots and
-synaptic receptive field views (useful for biophysical models such as spiking
-neural networks) as well as t-SNE plots of model latent codes. While the other
-lesson/tutorials demonstrate some of these useful routines (e.g., raster plots
-for spiking neuronal cells), in this small lesson, we will demonstrate how to
-produce a t-SNE plot using ngc-learn's in-built tool.
+While writing one's own custom task-specific matplotlib visualization code might be needed for specific experimental setups, there are several useful tools already built into ngc-learn, organized under the package sub-directory `ngclearn.utils.viz`, including utilities for generating raster plots and synaptic receptive field views (useful for biophysical models such as spiking neural networks) as well as t-SNE plots of model latent codes. While other lessons/tutorials demonstrate some of these useful routines (e.g., raster plots for spiking neuronal cells), in this small lesson, we will demonstrate how to produce a t-SNE plot using ngc-learn's in-built tool.
 
 ## Generating a t-SNE Plot
 
-Let's say you have a labeled five-dimensional (5D) dataset -- which we will
-synthesize artificially in this lesson from an "unobserved" trio of multivariate
-Gaussians -- and wanted to visualize these "model outputs" and their
-corresponding labels in 2D via ngc-learn's in-built t-SNE.
+Let's say you have a labeled five-dimensional (5D) dataset -- which we will artificially synthesize in this lesson from an "unobserved" trio of multivariate Gaussians -- and you want to visualize these "model outputs" and their corresponding labels in 2D via ngc-learn's in-built t-SNE.
 
-The following bit of Python code will do this for you (including the artificial
-data generator):
+The following bit of Python code will do this for you (including setting up the data generator):
 
 ```python
 from jax import numpy as jnp, random
 from ngclearn.utils.viz.dim_reduce import extract_tsne_latents, plot_latents
 
 dkey = random.PRNGKey(1234)
 
-def gen_data(dkey, N): ## artificial data generator (or proxy model)
+def gen_data(dkey, N): ## data generator (or proxy stochastic data generating process)
     mu1 = jnp.asarray([[2.1, 3.2, 0.6, -4., -2.]])
     cov1 = jnp.eye(5) * 0.78
     mu2 = jnp.asarray([[-1.8, 0.2, -0.1, 1.99, 1.56]])
@@ -59,6 +47,4 @@ which should produce a plot, i.e., `codes.jpg`, similar to the one below:
 
 <img src="../../images/tutorials/neurocog/simple_codes.jpg" width="400" />
 
-In this example scenario, we see that we can successfully map the 5D model output
-data to a plottable 2D space, facilitating some level of downstream qualitative
-interpretation of the model.
+In this example scenario, we see that we can successfully map the 5D model output data to a plottable 2D space, facilitating some level of downstream qualitative interpretation of the model.
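
The hunks above elide the rest of the t-SNE example, so below is a sketch of how the shown pieces plausibly fit together end to end. The sampling logic, the third Gaussian's mean, and the exact signatures of `extract_tsne_latents` and `plot_latents` are assumptions; consult `ngclearn.utils.viz.dim_reduce` for the actual API.

```python
from jax import numpy as jnp, random
from ngclearn.utils.viz.dim_reduce import extract_tsne_latents, plot_latents

dkey = random.PRNGKey(1234)

def gen_data(dkey, N): ## data generator (or proxy stochastic data generating process)
    ## the first two means match the tutorial; mu3 is an illustrative placeholder
    mu1 = jnp.asarray([[2.1, 3.2, 0.6, -4., -2.]])
    mu2 = jnp.asarray([[-1.8, 0.2, -0.1, 1.99, 1.56]])
    mu3 = jnp.asarray([[0.2, -2.4, 2.9, 0.5, 1.1]]) ## assumed third cluster mean
    keys = random.split(dkey, 3)
    X, Y = [], []
    for i, mu in enumerate([mu1, mu2, mu3]):
        ## isotropic covariance (jnp.eye(5) * 0.78) sampled via scaled standard normals
        X.append(mu + random.normal(keys[i], (N, 5)) * jnp.sqrt(0.78))
        Y.append(jnp.ones((N, 1)) * i) ## integer cluster labels
    return jnp.concatenate(X, axis=0), jnp.concatenate(Y, axis=0)

X, Y = gen_data(dkey, N=100) ## 100 samples per cluster
codes = extract_tsne_latents(X) ## map the 5D points down to 2D t-SNE latents
plot_latents(codes, Y, plot_fname="codes.jpg") ## assumed keyword for the output file
```

If the installed `plot_latents` takes a different filename argument, only the final line needs to change; the latent-extraction step is independent of the plotting call.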
