
Commit 265b075

Merge pull request #41 from pakozm/devel: Release v0.3.0-beta, from Devel branch
2 parents: b207366 + 293170b

File tree

269 files changed: +4752 / -3347 lines


CHANGELIST.md

Lines changed: 88 additions & 47 deletions

```diff
@@ -1,82 +1,123 @@
 ChangeList
 ==========
 
-Master branch unstable release
-------------------------------
+Master branch release
+---------------------
 
-- Added `matrix.abs`.
-- Solved bug at method `matrix::best_span_iterator::setAtWindow`. Becaose of It
-  the method didn't works when the matrix was a sub-matrix (slice) of other
-  matrix.
-- Added `matrix.join` method.
-- Added PCA-GS algorithm for efficient computation of PCA (iterative algorithm),
+v0.3.0-beta release
+-------------------
+
+### API Changes
+
+- Added `loss` parameter to `trainable.supervised_trainer` methods.
+- Added `optimizer` parameter to `trainable.supervised_trainer` methods.
+- Added `ann.optimizer` package, which implements the weight update based on
+  the weight gradient, so the ANN components only compute gradients. This
+  allows implementing different optimization methods (such as "Conjugate
+  Gradient" or "Line Search Back-Propagation") with the same gradient
+  computation.
+- The loss function `ann.loss` API has been changed; the loss computation is
+  now done in two separate steps:
+  - `matrix = loss:compute_loss(output,target)`: returns a matrix with the
+    loss of every pair of patterns, allowing several loss computations
+    without accumulating them.
+  - `matrix = loss:accum_loss(matrix)`: accumulates the loss in the internal
+    state of the class.
+- Added a new version of the `loss` function which computes the mean and
+  sample variance of the loss. Besides, the loss computation is done with
+  doubles, being more accurate than before.
+- The `replacement` parameter in SDAE doesn't force the `on_the_fly`
+  parameter; they are independent.
+- SDAE training has been changed in order to allow the use of Lua datasets.
+- Replaced `cpp_class_binding_extension` with the `class_extension` function,
+  adding support for Lua classes besides C++ binded classes.
+- Modified `class` and `class_instance` functions to be more homogeneous with
+  the C++ binding.
+- Added support for loading and saving GZipped matrices from C++, so the
+  functions `matrix.savefile` and `matrix.loadfile` (and their counterparts
+  for complex numbers, double, int32, and char) were removed. The methods
+  `matrix.fromFilename` and `matrix.toFilename` accept the '.gz' extension.
+
+### Packages rename
+
+- Changed package `sdae` to `ann.autoencoders`.
+- Changed package `loss_functions` to `ann.loss`.
+- Split the `mathcore` package into `mathcore` and `complex` packages.
+- Renamed the `math` package to `mathcore` to avoid the collision with the
+  Lua standard math library.
+
+### New features
+
+- April-ANN is deployed as a standalone executable and as a shared library
+  for Lua 5.2.
+- Modified `lua.h` to incorporate the GIT commit number in the disclaimer.
+- Added Lua autocompletion when readline is available.
+- Implemented a SignalHandler class in C++.
+- Added `signal.register` and `signal.receive` functions to Lua.
+- Added to `matrix` the methods `map`, `contiguous`, `join`, `abs`, `tan`,
+  `atan`, `atanh`, `sinh`, `asin`, `asinh`, `cosh`, `acos`, `acosh`,
+  `fromMMap`, `toMMap`, `div`, `max`, `min`.
+- Added the `iterator` class, a wrapper around Lua iterators which provides
+  a more natural interface for functional programming procedures such as
+  `map`, `filter`, `apply`, or `reduce`.
+- Added methods `iterate`, `field`, and `select` to the `iterator` Lua class.
+- `table.insert` returns the table, which is useful for reduction operations.
+- Added a `table` method to the `iterator` class.
+- Added naive `L1_norm` regularization.
+- Added `dataset.clamp`.
+- Added the `mathcore.set_mmap_allocation` function, which allows forcing
+  the use of mmap for `matrix` memory allocation.
+- Added `ann.components.slice`.
+- Added the GS-PCA algorithm for efficient computation of PCA (iterative
+  algorithm): the
 `stats.iterative_pca` Lua function.
-- Added `fromMMap` and `toMMap` for `matrix` class, currently only with floats.
 - Added basic MapReduce implementation in Lua.
 - Added `stats.correlation.pearson` Lua class.
 - Added `stats.bootstrap_resampling` function.
-- Added method `iterate` to iterator Lua class.
-- Modified `lua.h` to incorporate the GIT commit number in the disclaimer.
-- `table.insert` returns the table, which is useful for reduction operations.
-- Added `table` method to `iterator` class.
-- Added a new version of `loss` function, which computes mean and
-  sample variance of the loss. Besides, the loss computation is done
-  with doubles, being more accurated than before.
-- Added `loss` parameter to `trainable.supervised_trainer` methods.
 - Added `math.add`, `math.sub`, `math.mul`, `math.div` functions.
-- Methods `field` and `select` added to `iterator` class.
-- Added `div` method to `matrix`.
-- Added `signal.register` and `signal.receive` functions to Lua.
-- Implemented SignalHandler class in C++.
 - `trainable` and `ann.mlp.all_all` use the `matrix:to_lua_string()`
   method.
 - Added the method `to_lua_string()` to all matrix types; it produces a
   Lua chunk which is loadable and produces a matrix.
-- Added `iterator` class, which is a wrapper around Lua iterators, but
-  provides a more natural interface with functional programming procedures
-  as `map`, `filter`, `apply`, or `reduce`.
 - Added serialization to `parallel_foreach`, allowing it to produce outputs
   which can be loaded by the caller process.
 - Declared the `luatype` function as global; it wasn't.
-- Added BIND_STRING_CONSTANT to luabind, so it is possible to export C string
-  constants to Lua.
-- Removed warning of clang about unused variables, adding a new macro
-  `UNUSED_VARIABLE(x)` defined in the header `utils/c_src/unused_variable.h`.
 - Added `iterable_map` and `multiple_ipairs` functions to the Lua utilities.
-- `replacement` parameter in SDAE doesn't force `on_the_fly` parameter, they are
-  independent.
-- SDAE training has been changed in order to allow the use of LUA datasets,
-  improving-
-- Solved bugs at Matrix template constructor which affects to `rewrap` lua
-  method, and to select method, which affects to `select` lua method.
-- Configured Lua package path to be in /usr/ instead of /usr/local/. It is
-  the default place in Ubuntu.
-- Replaced `cpp_class_binding_extension` by `class_extension` function,
-  adding Lua classes support besides to CPP binded classes.
-- Modified `class` and `class_instance` functions to be more homogeneous
-  with C++ binding.
 - Added SubAndDivNormalizationDataSet, which applies a subtraction and a
   division of the feature vectors.
 - Added stepDataset.
+
+### Bugs removed
+
+- Solved bug at `luabind_template.cc`, which introduced spurious segmentation
+  faults due to Lua execution of garbage collection in the middle of a
+  `lua_pushClassName`.
 - Solved bug at glob function.
+- Solved bug at matrix iterators operator=.
+- Solved bug at the method `matrix::best_span_iterator::setAtWindow`;
+  because of it, the method didn't work when the matrix was a sub-matrix
+  (slice) of another matrix.
+- Solved bugs at the Matrix template constructor which affected the `rewrap`
+  Lua method, and at the select method, which affected the `select` Lua
+  method.
 - Added binarizer::init() to a binded static_constructor; it is needed to
   execute init() before decoding/encoding double numbers, because of
   endianness.
 - Solved bug at constString when extracting double numbers in binary format.
-- Added max and min methods over a given dimension for `matrix`.
 - MacOS compilation problems solved.
+- Solved problems with CUDA; it is working again.
+- Dynamic loading of C modules is working now.
+
+### C/C++ code changes
+
+- Added BIND_STRING_CONSTANT to luabind, so it is possible to export C
+  string constants to Lua.
+- Removed a clang warning about unused variables, adding a new macro
+  `UNUSED_VARIABLE(x)` defined in the header `utils/c_src/unused_variable.h`.
 - Matrix fromString and toString Lua methods have been improved to
   write/read directly from the Lua string buffer, so the memory footprint
   has been reduced.
 - The C++ routines to write and read files are generalized to work with
   streams, under the BufferedStream template, and instantiated to FILE and
   gzFile formats.
 - Added a sanity check to cross-entropy and multi-class cross-entropy loss
   functions, to detect the use of non-logarithmic outputs.
-- Solved problems with CUDA, it is working again.
-- Dynamic loading of C modules is working now.
-- Added support for GZipped matrices load and save from C++, so functions
-  `matrix.savefile` and `matrix.loadfile` (and its correspondence for complex
-  numbers, double, int32, and char) were removed. Methods `matrix.fromFilename`
-  and `matrix.toFilename` accept '.gz' extension.
 
 v0.2.1-beta release
 ------------------
```
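The two-step loss API described in the changelog above can be sketched in a few lines of Lua. This is a hypothetical fragment, not part of the commit: it requires April-ANN, and `thenet`, `output`, and `target` are placeholders standing in for an already-built component and its output/target matrices; only the call shape of `compute_loss` and `accum_loss` is taken from the changelog.

```lua
-- Hypothetical sketch of the v0.3.0-beta two-step loss API (requires
-- April-ANN; `thenet`, `output`, and `target` are placeholders).
local loss = ann.loss.mse(thenet:get_output_size())

-- Step 1: compute a per-pattern loss matrix; the internal accumulator is
-- untouched, so several computations can be done without accounting them.
local per_pattern = loss:compute_loss(output, target)

-- Step 2: accumulate the given loss matrix into the object's internal state.
loss:accum_loss(per_pattern)
```

Separating the two steps is what lets callers inspect or discard individual loss computations before deciding which ones to accumulate.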

DISCLAIMER.lua

Lines changed: 4 additions & 0 deletions

```diff
@@ -0,0 +1,4 @@
+return {
+  '"April-ANN v" APRILANN_VERSION_MAJOR "." APRILANN_VERSION_MINOR "." APRILANN_VERSION_RELEASE "-beta COMMIT " TOSTRING(GIT_COMMIT) " Copyright (C) 2012-2013 DSIC-UPV, CEU-UCH"',
+  '"This program comes with ABSOLUTELY NO WARRANTY; for details see LICENSE.txt.\\nThis is free software, and you are welcome to redistribute it\\nunder certain conditions; see LICENSE.txt for details."',
+}
```

EXAMPLES/digits.lua

Lines changed: 6 additions & 3 deletions

```diff
@@ -53,10 +53,13 @@ trainer:randomize_weights{
   inf = -1,
   sup = 1,
 }
+trainer:set_option("learning_rate", 0.01)
+trainer:set_option("momentum", 0.01)
+trainer:set_option("weight_decay", 1e-05)
+-- bias has weight_decay of ZERO
+trainer:set_layerwise_option("b.", "weight_decay", 0)
+
 trainer:save("jarl.net", "binary")
-thenet:set_option("learning_rate", 0.01)
-thenet:set_option("momentum", 0.01)
-thenet:set_option("weight_decay", 1e-05)
 
 training_data = {
   input_dataset = train_input,
```

EXAMPLES/xor.lua

Lines changed: 4 additions & 3 deletions

```diff
@@ -7,9 +7,10 @@ trainer:randomize_weights{
   inf = -0.1,
   sup = 0.1 }
 
-thenet:set_option("learning_rate", 8.0)
-thenet:set_option("momentum", 0.5)
-thenet:set_option("weight_decay", 1e-05)
+trainer:set_option("learning_rate", 8.0)
+trainer:set_option("momentum", 0.5)
+trainer:set_option("weight_decay", 1e-05)
+trainer:set_layerwise_option("b.*", "weight_decay", 0.0)
 
 m_xor = matrix.fromString[[
 4 3
```
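The example diffs in this commit all follow the same migration: training hyper-parameters move from the ANN component (`thenet:set_option`) to the trainer. A minimal sketch of the new pattern follows; it is illustrative, requires April-ANN, and the concrete option values are arbitrary:

```lua
-- Hypothetical sketch of the v0.3.0-beta option API (requires April-ANN).
local thenet  = ann.mlp.all_all.generate("2 inputs 2 logistic 1 logistic")
local trainer = trainable.supervised_trainer(thenet)
trainer:build()
-- Options are now set on the trainer, not on the network component:
trainer:set_option("learning_rate", 0.01)
trainer:set_option("momentum", 0.01)
trainer:set_option("weight_decay", 1e-05)
-- Bias weights (matched by the "b.*" name pattern) get zero weight decay:
trainer:set_layerwise_option("b.*", "weight_decay", 0.0)
```

Moving options onto the trainer matches the new `ann.optimizer` design, where components only compute gradients and the optimizer owns the update hyper-parameters.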

README.md

Lines changed: 54 additions & 1 deletion

```diff
@@ -78,8 +78,61 @@ You need to especify the `-I` option to the compiler, and all the extra_libs stu
 Exists one build file for each possible target: build_release.lua, build_debug.lua, build_mkl_release.lua,
 build_mkl_debug.lua, ... and so on.
 
+The binary will be generated at `bin/april-ann`, which incorporates the Lua 5.2
+interpreter and works without any dependency on Lua. Besides, a shared library
+will be generated at `lib/aprilann.so`, so it is possible to use `require` from
+Lua to load April-ANN in a standard Lua 5.2 interpreter.
+
 ENJOY!
 
+Installation
+------------
+
+The installation is done by executing:
+
+```
+$ sudo make install
+```
+
+This procedure copies the binary to a system location in `/usr` (or
+`/opt/local` for Mac OS X via MacPorts). The shared library is copied to the
+default Lua directory, so that it can be loaded with the `require` function.
+
+Use
+---
+
+- You can execute the standalone binary:
+
+```
+$ april-ann
+April-ANN v0.2.1-beta COMMIT 920 Copyright (C) 2012-2013 DSIC-UPV, CEU-UCH
+This program comes with ABSOLUTELY NO WARRANTY; for details see LICENSE.txt.
+This is free software, and you are welcome to redistribute it
+under certain conditions; see LICENSE.txt for details.
+Lua 5.2.2 Copyright (C) 1994-2013 Lua.org, PUC-Rio
+> print "Hello World!"
+Hello World!
+```
+
+- It is possible to use April-ANN as a Lua module, loading only the packages
+  you need (i.e. `require("aprilann.matrix")`), or loading the full library
+  (`require("aprilann")`). **Be careful**: the April-ANN modules don't follow
+  the Lua guidelines and have side effects, because they declare tables,
+  functions, and other values in the Lua GLOBALS table:
+
+```
+$ lua
+Lua 5.2.2 Copyright (C) 1994-2013 Lua.org, PUC-Rio
+> require "aprilann.matrix"
+> require "aprilann"
+April-ANN v0.2.1-beta COMMIT 920 Copyright (C) 2012-2013 DSIC-UPV, CEU-UCH
+This program comes with ABSOLUTELY NO WARRANTY; for details see LICENSE.txt.
+This is free software, and you are welcome to redistribute it
+under certain conditions; see LICENSE.txt for details.
+> print "Hello World!"
+Hello World!
+```
+
 Citation
 --------
 
@@ -123,7 +176,7 @@ Includes these sources
 - MersenneTwister: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html
 - Median filter from Simon Perreault: http://nomis80.org/ctmf.html
 - RuningStat class for efficient and stable computation of mean and variance: http://www.johndcook.com/standard_deviation.html
-
+- Lua autocompletion rlcompleter release 2, by rthomas: https://github.com/rrthomas/lua-rlcompleter
 
 Wiki documentation
 ------------------
```
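Complementing the interactive transcript in the README diff, a standalone script could load April-ANN from a stock Lua 5.2 interpreter roughly as below. This is a sketch under assumptions: it presumes `aprilann.so` is installed where `require` can find it, and uses the `matrix(rows, cols)` constructor shape seen in this commit's test scripts.

```lua
-- Hypothetical usage sketch; requires an installed April-ANN shared library.
require "aprilann.matrix"     -- load only the matrix package
-- April-ANN declares its tables as globals, so `matrix` is now available:
local m = matrix(200, 10)     -- same constructor shape as in learndigits.lua
print(type(m))
```

Because the modules populate the global table rather than returning a value, the result of `require` itself is not what you use; the globals are.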

TEST/digitos/learndigits.lua

Lines changed: 7 additions & 4 deletions

```diff
@@ -72,6 +72,13 @@ thenet = ann.mlp.all_all.generate(description)
 trainer = trainable.supervised_trainer(thenet,
                                        ann.loss.multi_class_cross_entropy(10))
 trainer:build()
+
+trainer:set_option("learning_rate", 0.04)
+trainer:set_option("momentum", 0.02)
+trainer:set_option("weight_decay", 1e-05)
+-- bias has weight_decay of ZERO
+trainer:set_layerwise_option("b.", "weight_decay", 0)
+
 trainer:randomize_weights{
   random = aleat,
   inf = inf,
@@ -126,10 +133,6 @@ datosvalidar = {
   bunch_size = bunch_size,
 }
 
-thenet:set_option("learning_rate", 0.04)
-thenet:set_option("momentum", 0.02)
-thenet:set_option("weight_decay", 1e-05)
-
 -- data used to store an output matrix of the network
 msalida = matrix(200,10)
 dsalida = dataset.matrix(msalida, {
```

TEST/digitos/test.lua

Lines changed: 7 additions & 3 deletions

```diff
@@ -74,13 +74,17 @@ val_output = dataset.matrix(m2,
 
 thenet = ann.mlp.all_all.generate(description)
 if util.is_cuda_available() then thenet:set_use_cuda(true) end
-thenet:set_option("learning_rate", learning_rate)
-thenet:set_option("momentum", momentum)
-thenet:set_option("weight_decay", weight_decay)
 trainer = trainable.supervised_trainer(thenet,
                                        ann.loss.multi_class_cross_entropy(10),
                                        bunch_size)
 trainer:build()
+
+trainer:set_option("learning_rate", learning_rate)
+trainer:set_option("momentum", momentum)
+trainer:set_option("weight_decay", weight_decay)
+-- bias has weight_decay of ZERO
+trainer:set_layerwise_option("b.", "weight_decay", 0)
+
 trainer:randomize_weights{
   random = weights_random,
   inf = inf,
```

TEST/xor/xor-with-sparse-input.lua

Lines changed: 3 additions & 3 deletions

```diff
@@ -41,10 +41,10 @@ end
 -----------------------------------------------------------
 
 net_component=ann.mlp.all_all.generate("2 inputs 2 logistic 1 logistic")
-net_component:set_option("learning_rate", learning_rate)
-net_component:set_option("momentum", momentum)
-net_component:set_option("weight_decay", weight_decay)
 trainer=trainable.supervised_trainer(net_component)
+trainer:set_option("learning_rate", learning_rate)
+trainer:set_option("momentum", momentum)
+trainer:set_option("weight_decay", weight_decay)
 trainer:build()
 trainer:set_loss_function(ann.loss.mse(net_component:get_output_size()))
 load_initial_weights(trainer.weights_table)
```

TEST/xor/xor.lua

Lines changed: 8 additions & 3 deletions

```diff
@@ -39,11 +39,12 @@ end
 -----------------------------------------------------------
 
 net_component=ann.mlp.all_all.generate("2 inputs 2 logistic 1 logistic")
-net_component:set_option("learning_rate", learning_rate)
-net_component:set_option("momentum", momentum)
-net_component:set_option("weight_decay", weight_decay)
 trainer=trainable.supervised_trainer(net_component)
 trainer:build()
+trainer:set_option("learning_rate", learning_rate)
+trainer:set_option("momentum", momentum)
+trainer:set_option("weight_decay", weight_decay)
+trainer:set_layerwise_option("b.*", "weight_decay", 0.0)
 trainer:set_loss_function(ann.loss.mse(net_component:get_output_size()))
 load_initial_weights(trainer.weights_table)
 
@@ -135,6 +136,10 @@ net_component:pop()
 net_component:push( ann.components.actf.log_logistic() )
 trainer=trainable.supervised_trainer(net_component)
 trainer:build()
+trainer:set_option("learning_rate", learning_rate)
+trainer:set_option("momentum", momentum)
+trainer:set_option("weight_decay", weight_decay)
+trainer:set_layerwise_option("b.*", "weight_decay", 0.0)
 trainer:set_loss_function(ann.loss.cross_entropy(net_component:get_output_size()))
 trainer:save("ll.net", "ascii")
 load_initial_weights(trainer.weights_table)
```

VERSION.lua

Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+return {
+  '-DAPRILANN_VERSION_MAJOR=\'"0"\'',
+  '-DAPRILANN_VERSION_MINOR=\'"3"\'',
+  '-DAPRILANN_VERSION_RELEASE=\'"0"\'',
+}
```

0 commit comments