`docs/tutorials/neurocog/metrics.md`

# Metrics and Measurement Functions
Inside `ngclearn.utils.metric_utils`, ngc-learn offers metric and measurement utility functions that can be quite useful when building neurocognitive models with ngc-learn's node-and-cables system for specific tasks. While this utilities sub-module will not always contain every function you might need -- measurements often depend on the task the experimenter wants to conduct -- it does provide several commonly-used ones, drawn from machine intelligence and computational neuroscience, that are built into ngc-learn (in jit-i-fied form) for you to readily use.

In this small lesson, we will briefly walk through two examples of importing such functions and examine what they do.
## Measuring Task-Level Quantities
For many tasks that you might be interested in, a useful measurement is the performance of the model in some supervised learning context. For example, you might want to measure a model's accuracy on a classification task. To do so, assume you have some model outputs extracted from a model constructed elsewhere -- say a matrix of scores `Y_scores` -- and a set of target labels that you are testing against -- such as `Y_labels` (in one-hot binary encoded form). You can then write some code to compute the accuracy, mean squared error (MSE), and categorical negative log likelihood (Cat-NLL), like so:
```python
from jax import numpy as jnp
...
```

and you should obtain the following in I/O like so:

```
...
> Cat-NLL = 4.003
```
Notice that we imported the utility function `softmax` from `ngclearn.utils.model_utils` to convert our raw model scores to probability values so that using `measure_CatNLL()` makes sense (as this function assumes the model scores are normalized probability values).
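For reference, here is a minimal hand-computed sketch of these three quantities on a made-up toy batch, written with plain `jax.numpy` plus the `softmax` utility noted above; the hand-written formulas are illustrative stand-ins for what ngc-learn's in-built measurement routines (such as `measure_CatNLL()`) compute and are not the tutorial's original snippet:

```python
from jax import numpy as jnp
from ngclearn.utils.model_utils import softmax

## toy batch of model outputs: 4 samples, 3 classes
Y_scores = jnp.array([[ 2.0, 0.5, -1.0],
                      [ 0.1, 1.5,  0.3],
                      [-0.5, 0.2,  2.2],
                      [ 1.0, 1.1,  0.9]])
Y_labels = jnp.array([[1., 0., 0.],  ## one-hot targets
                      [0., 1., 0.],
                      [0., 0., 1.],
                      [1., 0., 0.]])

Y_probs = softmax(Y_scores) ## map raw scores to normalized probabilities

## accuracy: fraction of samples whose arg-max score matches the arg-max label
acc = jnp.mean(jnp.argmax(Y_scores, axis=1) == jnp.argmax(Y_labels, axis=1))
## mean squared error between predicted probabilities and one-hot targets
mse = jnp.mean(jnp.sum((Y_probs - Y_labels) ** 2, axis=1))
## categorical negative log likelihood of the correct class
cat_nll = -jnp.mean(jnp.sum(Y_labels * jnp.log(Y_probs + 1e-8), axis=1))

print("Acc = {:.3f}".format(float(acc)))
print("MSE = {:.3f}".format(float(mse)))
print("Cat-NLL = {:.3f}".format(float(cat_nll)))
```

Swapping the toy arrays for your own `Y_scores`/`Y_labels`, and the hand-rolled lines for the corresponding `metric_utils` calls, yields the same style of printout shown above.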
## Measuring Some Model Statistics
In some cases, you might be interested in measuring certain statistics related to properties of a model that you construct. For example, you might have collected a (binary) spike train produced by one of the internal neuronal layers of your ngc-learn-simulated spiking neural network and want to compute the firing rates and Fano factors associated with each neuron. Doing so with ngc-learn utility functions would entail writing something like:
```python
from jax import numpy as jnp
from ngclearn.utils.metric_utils import measure_fanoFactor, measure_firingRate
## let's create a fake synthetic spike train for 3 neurons (one per column)
...
```

The Fano factor is a useful secondary statistic for characterizing the variability of a neuronal spike train -- as we see in the measurement above, the first and second neurons have a higher Fano factor (given they are more irregular in their spiking patterns) whereas the third neuron is far more regular in its spiking pattern and thus has a lower Fano factor.
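As a concrete toy illustration of what these two statistics capture, here is a minimal sketch that computes them by hand with plain `jax.numpy`; the spike matrix and the four-step counting window below are made-up assumptions for illustration, while `measure_firingRate` and `measure_fanoFactor` are the in-built ways to obtain such quantities in ngc-learn (their exact call signatures are not reproduced here):

```python
from jax import numpy as jnp

## toy binary spike train: 12 time steps (rows) for 3 neurons (columns)
spikes = jnp.array([[1., 1., 1.],
                    [0., 0., 0.],
                    [0., 1., 1.],
                    [1., 0., 0.],
                    [0., 0., 1.],
                    [0., 1., 0.],
                    [1., 0., 1.],
                    [0., 0., 0.],
                    [0., 1., 1.],
                    [1., 0., 0.],
                    [0., 0., 1.],
                    [0., 1., 0.]])

## firing rate per neuron = total spikes emitted / number of time steps
rates = jnp.mean(spikes, axis=0)

## Fano factor per neuron = variance / mean of spike counts, taken over
## (assumed) non-overlapping counting windows of 4 time steps each
win_size = 4
counts = spikes.reshape(-1, win_size, spikes.shape[1]).sum(axis=1) ## (3 windows, 3 neurons)
fano = jnp.var(counts, axis=0) / jnp.mean(counts, axis=0)

print("Firing rates:", rates)
print("Fano factors:", fano)
```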

`docs/tutorials/neurocog/plotting.md`

# Plotting and Visualization
While you may need to write your own custom, task-specific matplotlib visualization code for particular experimental setups, several useful tools are already built into ngc-learn, organized under the package sub-directory `ngclearn.utils.viz`, including utilities for generating raster plots and synaptic receptive field views (useful for biophysical models such as spiking neural networks) as well as t-SNE plots of model latent codes. Other lessons/tutorials demonstrate some of these routines (e.g., raster plots for spiking neuronal cells); in this small lesson, we will demonstrate how to produce a t-SNE plot using ngc-learn's in-built tool.
## Generating a t-SNE Plot
Let's say you have a labeled five-dimensional (5D) dataset -- which we will artificially synthesize in this lesson from an "unobserved" trio of multivariate Gaussians -- and that you want to visualize these "model outputs" and their corresponding labels in 2D via ngc-learn's in-built t-SNE tool.
The following bit of Python code will do this for you (including setting up the data generator):
```python
from jax import numpy as jnp, random
from ngclearn.utils.viz.dim_reduce import extract_tsne_latents, plot_latents
dkey = random.PRNGKey(1234)
def gen_data(dkey, N): ## data generator (or proxy stochastic data generating process)
    ...
```

In this example scenario, we see that we can successfully map the 5D model output data to a plottable 2D space, facilitating some level of downstream qualitative interpretation of the model.
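To make the truncated snippet above concrete, here is a minimal end-to-end sketch under stated assumptions: the three-Gaussian generator below is an illustrative stand-in for the tutorial's own `gen_data`, and `extract_tsne_latents` / `plot_latents` are called positionally (their exact keyword arguments, e.g., for naming the output plot file or formatting labels, are not shown here and may differ):

```python
from jax import numpy as jnp, random
from ngclearn.utils.viz.dim_reduce import extract_tsne_latents, plot_latents

dkey = random.PRNGKey(1234)

def gen_data(dkey, N): ## illustrative stand-in for the tutorial's generator
    ## draw N points per class from three 5D Gaussians with different means
    keys = random.split(dkey, 3)
    means = jnp.array([[2., 0., 0., 0., 0.],
                       [0., 2., 0., 0., 0.],
                       [0., 0., 2., 0., 0.]])
    X = jnp.concatenate(
        [means[i] + 0.35 * random.normal(keys[i], (N, 5)) for i in range(3)], axis=0)
    Y = jnp.concatenate([jnp.full((N,), i) for i in range(3)], axis=0) ## integer class labels
    return X, Y

X, Y = gen_data(dkey, N=200)     ## 600 samples in 5D across 3 classes
codes = extract_tsne_latents(X)  ## project the 5D vectors down to 2D with t-SNE
plot_latents(codes, Y)           ## scatter the 2D codes, colored by label (label format assumed)
```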