|
Given my lack of writing ability, this introduction will largely be a clone of PyTorch's introduction document. It is also a good chance to see how well Etaler is doing at its tensor operators.

# A quick introduction to Etaler

At Etaler's core are Tensors. They are matrices generalized to more than two dimensions, sometimes called N-dimensional arrays. We'll see how they are used in depth later. For now, let's look at what we can do with tensors.

```C++
// Etaler.hpp packs most of the core headers together
#include <Etaler/Etaler.hpp>
#include <Etaler/Algorithms/SpatialPooler.hpp>
#include <Etaler/Encoders/Scalar.hpp>
using namespace et;

#include <iostream>
using namespace std;
```

|
10 | | -Declare a HTM layer. |
| 18 | +## Creating Tensors |
| 19 | + |
| 20 | +Tensors can be created from pointers holding the data and an appropriate shape. |
11 | 21 |
```C++
int d[] = {1, 2, 3, 4, 5, 6, 7, 8};
Tensor v = Tensor({8}, d);
cout << v << endl;

// Create a matrix
Tensor m = Tensor({2, 4}, d);
cout << m << endl;

// Create a 2x2x2 tensor
Tensor t = Tensor({2, 2, 2}, d);
cout << t << endl;
```

Out

```
{ 1, 2, 3, 4, 5, 6, 7, 8}
{{ 1, 2, 3, 4},
 { 5, 6, 7, 8}}
{{{ 1, 2},
  { 3, 4}},

 {{ 5, 6},
  { 7, 8}}}
```

Just like a vector is a list of scalars, a matrix is a list of vectors, and a 3D tensor is a list of matrices. Think of it like this: indexing into a 3D tensor gives you a matrix, indexing into a matrix gives you a vector, and indexing into a vector gives you a scalar.

Just for clarification: when I say "Tensor" I mean an `et::Tensor` object.

```C++
// Indexing into v will give you a scalar
cout << v[{0}] << endl;

// Scalars are special as you can convert them into native C++ types
cout << v[{0}].item<int>() << endl;

// Indexing into a matrix gives you a vector
cout << m[{0}] << endl;

// And indexing into a 3D tensor gives you a matrix
cout << t[{0}] << endl;
```

Out

```
{ 1}
1
{ 1, 2, 3, 4}
{{ 1, 2},
 { 3, 4}}
```

You can also create tensors of other types. The element type, as you can see, is whatever the pointer points to. To create a floating-point tensor, just pass a pointer to a floating-point array to `Tensor()`.

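For instance, a minimal sketch (assuming the element type really is deduced from the pointed-to type, as described above; the values here are made up):

```C++
// Constructing from a float array should give a floating-point tensor
float f[] = {1.5f, 2.5f, 3.5f, 4.5f};
Tensor fv = Tensor({4}, f);
cout << fv << endl;
```
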
You can also create a tensor of zeros with a supplied shape using `zeros()`.

```C++
Tensor x = zeros({3, 4, 5}, DType::Float);
cout << x << endl;
```

Out

```
{{{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}},

 {{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}},

 {{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}}}
```

## Operations on Tensors

You can operate on tensors in the ways you would expect.

```C++
Tensor x = ones({3});
Tensor y = constant({3}, 7);
Tensor z = x + y;
cout << z << endl;
```

Out

```
{ 8, 8, 8}
```

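Other element-wise arithmetic should work much the same way; a rough sketch, assuming the usual operators are overloaded like `+` is:

```C++
// Assumed: subtraction and multiplication behave element-wise, like the addition above
cout << (y - x) << endl;
cout << (x * y) << endl;
```
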
One helpful operation that we will make use of later is concatenation.

```C++
// By default, the cat/concat/concatenate function works along the first axis (axis 0)
Tensor x_1 = zeros({2, 5});
Tensor y_1 = zeros({3, 5});
Tensor z_1 = cat({x_1, y_1});
cout << z_1 << endl;

// Concatenate columns:
Tensor x_2 = zeros({2, 3});
Tensor y_2 = zeros({2, 5});
Tensor z_2 = cat({x_2, y_2}, 1);
cout << z_2 << endl;
```

Out

```
{{ 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0}}
{{ 0, 0, 0, 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0, 0, 0, 0}}
```

## Reshaping Tensors

Use the `reshape` method to reshape a tensor. Unlike PyTorch, which uses `view()` to reshape tensors, `view()` in Etaler works like the one in [xtensor](https://github.com/xtensor-stack/xtensor): it performs indexing.

```C++
Tensor x = zeros({2, 3, 4});
cout << x << endl;
cout << x.reshape({2, 12}) << endl; // Reshape to 2 rows, 12 columns
```

Out

```
{{{ 0, 0, 0, 0},
  { 0, 0, 0, 0},
  { 0, 0, 0, 0}},

 {{ 0, 0, 0, 0},
  { 0, 0, 0, 0},
  { 0, 0, 0, 0}}}
{{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}
```

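As with most tensor libraries, the total number of elements presumably has to stay the same across a reshape; a `{2, 3, 4}` tensor holds 24 elements, so any shape with 24 elements should be acceptable. A quick sketch under that assumption:

```C++
// 2*3*4 == 4*6 == 24 elements, so this reshape should also be valid
cout << x.reshape({4, 6}) << endl;
```
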
## HTM Algorithms

The HTM algorithms are, as the name indicates, implementations of HTM-related algorithms. They are the sole purpose Etaler exists.

Without going too deep: typically the first thing we do in HTM is encode values into SDRs. SDRs are sparse binary tensors, i.e. every element is either 1 or 0 and most of them are 0s.

```C++
Tensor x = encoder::scalar(/*value=*/0.3,
                           /*min_val=*/0.f,
                           /*max_val=*/5.f);
cout << x << endl;
```

Out

```
{ 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
```

The [Spatial Pooler](https://numenta.com/neuroscience-research/research-publications/papers/htm-spatial-pooler-neocortical-algorithm-for-online-sparse-distributed-coding/) is a very commonly used layer in HTM. Through an unsupervised learning process, it can extract patterns from the SDRs fed to it.

```C++
SpatialPooler sp(/*input_shape=*/{32}, /*output_shape=*/{64});
// Run (i.e. perform inference with) the SpatialPooler
Tensor y = sp.compute(x);
// Print the result as a boolean tensor
cout << y.cast(DType::Bool) << endl;
// If you want the SpatialPooler to learn from the data, call the learn() function
sp.learn(x, y);
```

Out

```
{ 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
```

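In practice you would call `compute()` and `learn()` repeatedly over a stream of inputs. A minimal sketch, reusing only the calls shown above (the value range and step size here are made up for illustration):

```C++
// Encode a stream of scalar values and let the Spatial Pooler learn each SDR
for(float v = 0.f; v < 5.f; v += 0.5f) {
    Tensor in = encoder::scalar(/*value=*/v, /*min_val=*/0.f, /*max_val=*/5.f);
    Tensor out = sp.compute(in);
    sp.learn(in, out);
}
```
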
After training, you might want to save the Spatial Pooler's weights for use in the future.

```C++
auto states = sp.states();
save(states, "sp.cereal");
```
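
To restore the weights later, something along these lines should work; treat the function names below as assumptions and check Etaler's serialization headers for the exact API:

```C++
// Hypothetical restore sketch: load() and loadState() are assumed names
SpatialPooler sp2(/*input_shape=*/{32}, /*output_shape=*/{64});
auto restored = load("sp.cereal");
sp2.loadState(restored);
```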