
Commit 36d00f0

Merge pull request #107 from marty1885/master: Update Introduction
2 parents ab45cab + 73875b9

4 files changed: +195 −17 lines


docs/source/Introduction.md (184 additions & 13 deletions)
For lack of writing ability, this introduction will be a clone of PyTorch's introduction document. This is also a good chance to see how well Etaler is doing with its tensor operators.

# A quick introduction to Etaler

At Etaler's core are Tensors. They are matrices generalized to more than 2 dimensions, also called N-dimensional arrays in some places. We'll see how they are used in depth later. For now, let's look at what we can do with tensors.

```C++
// Etaler.hpp packs most of the core headers together
#include <Etaler/Etaler.hpp>
#include <Etaler/Algorithms/SpatialPooler.hpp>
#include <Etaler/Encoders/Scalar.hpp>
using namespace et;

#include <iostream>
using namespace std;
```

## Creating Tensors

Tensors can be created from a pointer holding the data and an appropriate shape.

```C++
int d[] = {1, 2, 3, 4, 5, 6, 7, 8};
Tensor v = Tensor({8}, d);
cout << v << endl;

// Create a matrix
Tensor m = Tensor({2, 4}, d);
cout << m << endl;

// Create a 2x2x2 tensor
Tensor t = Tensor({2, 2, 2}, d);
cout << t << endl;
```

Out

```
{ 1, 2, 3, 4, 5, 6, 7, 8}
{{ 1, 2, 3, 4},
 { 5, 6, 7, 8}}
{{{ 1, 2},
  { 3, 4}},

 {{ 5, 6},
  { 7, 8}}}
```

Just like a vector is a list of scalars, a matrix is a list of vectors, and a 3D tensor is a list of matrices. Think of it like this: indexing into a 3D tensor gives you a matrix, indexing into a matrix gives you a vector, and indexing into a vector gives you a scalar.

Just for clarification: when I say "Tensor", I mean an `et::Tensor` object.
```C++
// Indexing into v will give you a scalar
cout << v[{0}] << endl;

// Scalars are special, as you can convert them into native C++ types
cout << v[{0}].item<int>() << endl;

// Indexing into a matrix gives you a vector
cout << m[{0}] << endl;

// And indexing into a 3D tensor gives you a matrix
cout << t[{0}] << endl;
```

Out

```
{ 1}
1
{ 1, 2, 3, 4}
{{ 1, 2},
 { 3, 4}}
```
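Why does indexing the first axis of a 3D tensor hand back a whole matrix? Dense tensors are typically stored as one flat, row-major buffer, so the sub-matrix at index `i` is just a contiguous slice of that buffer. A minimal standalone sketch of the idea (this is an illustration, not Etaler's actual implementation):

```cpp
#include <array>
#include <cstddef>

// Return a pointer to the 2x2 matrix at index i of a 2x2x2 tensor
// stored as one flat row-major buffer. Hypothetical helper for
// illustration only; Etaler's internals may differ.
inline const int* matrix_at(const std::array<int, 8>& buf, std::size_t i)
{
    const std::size_t matrix_size = 2 * 2; // rows * cols of each sub-matrix
    return buf.data() + i * matrix_size;   // contiguous slice, no copy
}
```

For the buffer `{1, 2, 3, 4, 5, 6, 7, 8}`, `matrix_at(buf, 1)` points at the second matrix, `{{5, 6}, {7, 8}}`.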
You can also create tensors of other types. The default, as you can see, is whatever type the pointer points to. To create floating-point tensors, just point to a floating-point array when calling `Tensor()`.

You can also create a tensor of zeros with a supplied shape using `zeros()`.

```C++
Tensor x = zeros({3, 4, 5}, DType::Float);
cout << x << endl;
```

Out

```
{{{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}},

 {{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}},

 {{ 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0}}}
```
## Operations on Tensors

You can operate on tensors in the ways you would expect.

```C++
Tensor x = ones({3});
Tensor y = constant({3}, 7);
Tensor z = x + y;
cout << z << endl;
```

Out

```
{ 8, 8, 8}
```
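Conceptually, an element-wise operator like `+` visits corresponding elements of two same-shaped operands. A standalone sketch of that idea over flat buffers (illustrative only; Etaler dispatches such operations to a compute backend):

```cpp
#include <cstddef>
#include <vector>

// Element-wise addition of two same-shaped tensors stored as flat
// row-major buffers. Sketch for illustration, not Etaler's code.
inline std::vector<int> add(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + b[i];
    return out;
}
```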
One helpful operation that we will make use of later is concatenation.

```C++
// By default, the cat/concat/concatenate function works on the first axis
Tensor x_1 = zeros({2, 5});
Tensor y_1 = zeros({3, 5});
Tensor z_1 = cat({x_1, y_1});
cout << z_1 << endl;

// Concatenate columns:
Tensor x_2 = zeros({2, 3});
Tensor y_2 = zeros({2, 5});
Tensor z_2 = cat({x_2, y_2}, 1);
cout << z_2 << endl;
```

Out

```
{{ 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0}}
{{ 0, 0, 0, 0, 0, 0, 0, 0},
 { 0, 0, 0, 0, 0, 0, 0, 0}}
```
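For row-major storage, concatenating along the first axis is just appending one buffer after the other; concatenating along axis 1 has to interleave the rows of both operands. A standalone sketch of the axis-1 case (illustration only, not Etaler's implementation):

```cpp
#include <cstddef>
#include <vector>

// Concatenate two row-major 2D tensors along axis 1 (columns).
// Both must have the same number of rows. Illustrative sketch only.
inline std::vector<int> cat_axis1(const std::vector<int>& a, std::size_t a_cols,
                                  const std::vector<int>& b, std::size_t b_cols,
                                  std::size_t rows)
{
    std::vector<int> out;
    out.reserve(rows * (a_cols + b_cols));
    for (std::size_t r = 0; r < rows; ++r) {
        // Copy row r of a, then row r of b.
        out.insert(out.end(), a.begin() + r * a_cols, a.begin() + (r + 1) * a_cols);
        out.insert(out.end(), b.begin() + r * b_cols, b.begin() + (r + 1) * b_cols);
    }
    return out;
}
```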
149+
150+
## Reshaping Tensors
151+
Use the `reshape` method to reshape a tensor. Unlike PyTorch using `view()` to reshapce tensots. view in Etaler works like the one in [xtensor](https://github.com/xtensor-stack/xtensor); it performs indexing.
152+
153+
154+
```C++
155+
Tensor x = zeros({2, 3, 4});
156+
cout << x << endl;
157+
cout << x.reshape({2, 12}) << endl; // Reshape to 2 rows, 12 columns
158+
```
159+
160+
Out
161+
162+
```
163+
{{{ 0, 0, 0, 0},
164+
{ 0, 0, 0, 0},
165+
{ 0, 0, 0, 0}},
166+
167+
{{ 0, 0, 0, 0},
168+
{ 0, 0, 0, 0},
169+
{ 0, 0, 0, 0}}}
170+
{{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
171+
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}
172+
```
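The key intuition behind reshaping a dense, row-major tensor is that only the shape metadata changes; the flat buffer of elements stays exactly as it is (provided the element counts match). A toy standalone sketch of that idea (the `ToyTensor` type is hypothetical, not Etaler's `Tensor` class):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// A toy tensor: reshape only replaces the shape metadata; the flat
// row-major buffer is left untouched. Hypothetical illustration.
struct ToyTensor {
    std::vector<int> data;
    std::vector<std::size_t> shape;

    ToyTensor reshape(std::vector<std::size_t> new_shape) const
    {
        auto count = [](const std::vector<std::size_t>& s) {
            return std::accumulate(s.begin(), s.end(), std::size_t{1},
                                   std::multiplies<std::size_t>());
        };
        assert(count(new_shape) == count(shape)); // element counts must match
        return ToyTensor{data, std::move(new_shape)};
    }
};
```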
## HTM Algorithms

The HTM algorithms, as the name indicates, implement HTM-related functionality. They are the sole reason Etaler exists.

Without getting too deep: typically the first thing we do in HTM is encode values into SDRs. SDRs are sparse binary tensors, i.e. the elements in the tensor are either 1 or 0, and most of them are 0.

```C++
Tensor x = encoder::scalar(/*value=*/0.3,
                           /*min_val=*/0.f,
                           /*max_val=*/5.f);
cout << x << endl;
```

Out

```
{ 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
```
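The output above is a run of consecutive active bits whose position encodes the value. A standalone sketch of that classic HTM-style scalar-encoder idea follows; the defaults (32 output bits, 4 active bits) and the placement formula are assumptions for illustration, not necessarily Etaler's exact encoder:

```cpp
#include <cstddef>
#include <vector>

// Classic HTM-style scalar encoder sketch: map a value in
// [min_val, max_val] to `active_bits` consecutive 1s within `length`
// bits. Parameters and formula are illustrative assumptions only.
inline std::vector<int> scalar_encode(float value, float min_val, float max_val,
                                      std::size_t length = 32,
                                      std::size_t active_bits = 4)
{
    std::vector<int> sdr(length, 0);
    const std::size_t start = static_cast<std::size_t>(
        (value - min_val) / (max_val - min_val) * (length - active_bits));
    for (std::size_t i = 0; i < active_bits; ++i)
        sdr[start + i] = 1;
    return sdr;
}
```

Under these assumptions, `scalar_encode(0.3f, 0.f, 5.f)` activates bits 1 through 4 and leaves the rest 0, so most of the tensor is 0, which is exactly the sparsity property an SDR needs.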
[Spatial Pooler](https://numenta.com/neuroscience-research/research-publications/papers/htm-spatial-pooler-neocortical-algorithm-for-online-sparse-distributed-coding/) is a very commonly used layer in HTM. Through an unsupervised learning process, it can extract patterns from the SDRs fed to it.

```C++
SpatialPooler sp(/*input_shape=*/{32}, /*output_shape=*/{64});
// Run (i.e. perform inference with) the SpatialPooler.
Tensor y = sp.compute(x);
cout << y.cast(DType::Bool) << endl;
// If you want the SpatialPooler to learn from the data, call the `learn()` function.
sp.learn(x, y);
```

Out

```
{ 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
```

After training, you might want to save the Spatial Pooler's weights for future use.

```C++
auto states = sp.states();
save(states, "sp.cereal");
```
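Conceptually, a Spatial Pooler scores every output cell by its overlap with the input SDR and lets only the top-k scoring cells activate (global inhibition), which is why the output above is also sparse. A heavily simplified standalone sketch of just that inhibition step (connection topology, boosting, and learning are omitted; this is not Etaler's implementation):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Simplified global-inhibition step of a Spatial Pooler: each output
// cell's overlap with the input SDR is its score, and the top k cells
// become active (k >= 1). Illustrative sketch only; connections,
// boosting, and learning are omitted.
inline std::vector<int> top_k_activation(const std::vector<int>& overlaps,
                                         std::size_t k)
{
    // Find the k-th largest overlap score to use as the firing threshold.
    std::vector<int> sorted = overlaps;
    std::nth_element(sorted.begin(), sorted.begin() + (k - 1), sorted.end(),
                     std::greater<int>());
    const int threshold = sorted[k - 1];

    std::vector<int> active(overlaps.size(), 0);
    std::size_t chosen = 0;
    for (std::size_t i = 0; i < overlaps.size() && chosen < k; ++i) {
        if (overlaps[i] >= threshold) {
            active[i] = 1;
            ++chosen;
        }
    }
    return active;
}
```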

docs/source/PythonBindings.md (9 additions & 2 deletions)

Then you can call Etaler functions from Python.

```Python
s = et.Shape()
[s.push_back(v) for v in [2, 2]] # HACK: ROOT doesn't support initializer lists.
t = et.ones(s)
print(t)
```

## PyEtaler

The official Python binding, [PyEtaler](https://guthub.com/etaler/pyetaler), is currently a work in progress. We recommend using ROOT to bind from Python until PyEtaler leaves WIP.

```
>>> from etaler import et
>>> et.ones([2, 2])
{{ 1, 1},
 { 1, 1}}
```

docs/source/Tensor.md (1 addition & 1 deletion)

Unlike PyTorch and NumPy, Etaler does not support the legacy broadcasting rule. It doesn't allow tensors with different shapes but the same number of elements to broadcast together.

```C++
// This would get you a warning in PyTorch and works in NumPy,
// but not in Etaler.
a = ones({4})
```

docs/source/index.rst (1 addition & 1 deletion)

`Contribution` moves out of the user-facing toctree and into the DEVELOPER ZONE toctree:

   Tensor
   PythonBindings
   UsingWithClingROOT
   GUI

.. toctree::
   :caption: DEVELOPER ZONE
   :maxdepth: 2

   Contribution
   Backends
   DeveloperNotes
   OpenCLBackend
