Commit 0bafbaa

Upgrade ensmallen 1.16.0 (#13)
* Upgrade to ensmallen 1.16.0
* Bump package version
* Add new release notes
* Bump cran comments
* Address return type issue identified in mlpack/ensmallen#123
* Ensure const reference
* Fix another return type issue (mlpack/ensmallen#126)
1 parent 0096379 commit 0bafbaa

90 files changed: 2914 additions, 237 deletions


ChangeLog

Lines changed: 12 additions & 0 deletions

@@ -1,3 +1,15 @@
+2019-08-09  James Balamuta  <[email protected]>
+
+   * DESCRIPTION (Version, Date): Release 1.16.0
+
+   * NEWS.md: Update for Ensmallen release 1.16.0
+
+   * inst/include/ensmallen_bits: Upgraded to Ensmallen 1.16.0
+   * inst/include/ensmallen.hpp: ditto
+
+   * inst/include/ensmallen_bits/pso/pso.hpp: Fixed return type issue detected
+     by gcc 9.
+
 2019-05-20  James Balamuta  <[email protected]>

    * DESCRIPTION (Version, Date): Release 1.15.0

DESCRIPTION

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 Package: RcppEnsmallen
 Title: Header-Only C++ Mathematical Optimization Library for 'Armadillo'
-Version: 0.1.15.0.1
+Version: 0.1.16.0.1
 Authors@R: c(
     person("James Joseph", "Balamuta", email = "[email protected]",
            role = c("aut", "cre", "cph"),

NEWS.md

Lines changed: 11 additions & 0 deletions

@@ -1,3 +1,14 @@
+# RcppEnsmallen 0.1.16.0.1
+
+- Upgraded to ensmallen release 1.16.0 "Loud Alarm Clock" (2019-08-09)
+    - Add option to avoid computing exact objective at the end of the optimization
+      ([#109](https://github.com/mlpack/ensmallen/pull/109)).
+    - Fix handling of curvature for BigBatchSGD ([#118](https://github.com/mlpack/ensmallen/pull/118)).
+    - Reduce runtime of tests ([#118](https://github.com/mlpack/ensmallen/pull/118)).
+    - Introduce local-best particle swarm optimization, `LBestPSO`, for
+      unconstrained optimization problems ([#86](https://github.com/mlpack/ensmallen/pull/86)).
+    - Fix return type error in `PSO` ([#123](https://github.com/mlpack/ensmallen/pull/123))
+
 # RcppEnsmallen 0.1.15.0.1

 - Upgraded to ensmallen release 1.15.0 "Wrong Side Of The Road" (2019-05-14)
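
The `LBestPSO` optimizer mentioned in the NEWS entry above is gradient-free: a problem class only needs an `Evaluate()` method. The following is a minimal C++ sketch (not part of this commit) of how the newly bundled optimizer could be used; the `SquaredDistance` objective and the default-constructed swarm settings are illustrative assumptions, not library code.

// Minimal sketch of ens::LBestPSO on a toy objective (illustrative only).
#include <ensmallen.hpp>
#include <iostream>

// f(x) = ||x - 2||^2, minimized at x = (2, 2). PSO needs no gradient.
class SquaredDistance
{
 public:
  double Evaluate(const arma::mat& x)
  {
    return arma::accu(arma::square(x - 2.0));
  }
};

int main()
{
  SquaredDistance f;
  arma::mat coordinates(2, 1, arma::fill::randu);  // random starting point

  ens::LBestPSO pso;  // default swarm size, bounds, and iteration budget
  const double bestObjective = pso.Optimize(f, coordinates);

  coordinates.print("coordinates found by LBestPSO:");
  std::cout << "objective: " << bestObjective << std::endl;
  return 0;
}

Because RcppEnsmallen is header-only, downstream packages that declare `LinkingTo: RcppEnsmallen` pick up `ens::LBestPSO` through the include added to inst/include/ensmallen.hpp below, with no extra linking step.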

cran-comments.md

Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 ## Test environments

-* local OS X install, R 3.6.0
-* ubuntu 14.04 (on travis-ci), R 3.6.0
+* local OS X install, R 3.6.1
+* ubuntu 14.04 (on travis-ci), R 3.6.1
 * win-builder (devel and release)

 ## R CMD check results

inst/include/ensmallen.hpp

Lines changed: 1 addition & 0 deletions

@@ -83,6 +83,7 @@
 #include "ensmallen_bits/lbfgs/lbfgs.hpp"
 #include "ensmallen_bits/padam/padam.hpp"
 #include "ensmallen_bits/parallel_sgd/parallel_sgd.hpp"
+#include "ensmallen_bits/pso/pso.hpp"
 #include "ensmallen_bits/rmsprop/rmsprop.hpp"

 #include "ensmallen_bits/sa/sa.hpp"

inst/include/ensmallen_bits/ada_delta/ada_delta.hpp

Lines changed: 9 additions & 1 deletion

@@ -66,6 +66,8 @@ class AdaDelta
    *        function is visited in linear order.
    * @param resetPolicy If true, parameters are reset before every Optimize
    *        call; otherwise, their values are retained.
+   * @param exactObjective Calculate the exact objective (Default: estimate the
+   *        final objective obtained on the last pass over the data).
    */
   AdaDelta(const double stepSize = 1.0,
            const size_t batchSize = 32,
@@ -74,7 +76,8 @@ class AdaDelta
            const size_t maxIterations = 100000,
            const double tolerance = 1e-5,
            const bool shuffle = true,
-           const bool resetPolicy = true);
+           const bool resetPolicy = true,
+           const bool exactObjective = false);

   /**
    * Optimize the given function using AdaDelta. The given starting point will
@@ -128,6 +131,11 @@ class AdaDelta
   //! Modify whether or not the individual functions are shuffled.
   bool& Shuffle() { return optimizer.Shuffle(); }

+  //! Get whether or not the actual objective is calculated.
+  bool ExactObjective() const { return optimizer.ExactObjective(); }
+  //! Modify whether or not the actual objective is calculated.
+  bool& ExactObjective() { return optimizer.ExactObjective(); }
+
   //! Get whether or not the update policy parameters
   //! are reset before Optimize call.
   bool ResetPolicy() const { return optimizer.ResetPolicy(); }

inst/include/ensmallen_bits/ada_delta/ada_delta_impl.hpp

Lines changed: 4 additions & 2 deletions

@@ -26,15 +26,17 @@ inline AdaDelta::AdaDelta(const double stepSize,
                           const size_t maxIterations,
                           const double tolerance,
                           const bool shuffle,
-                          const bool resetPolicy) :
+                          const bool resetPolicy,
+                          const bool exactObjective) :
     optimizer(stepSize,
               batchSize,
               maxIterations,
               tolerance,
               shuffle,
               AdaDeltaUpdate(rho, epsilon),
               NoDecay(),
-              resetPolicy)
+              resetPolicy,
+              exactObjective)
 { /* Nothing to do. */ }

 } // namespace ens
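
The `exactObjective` flag threaded through here (and, in identical form, through `AdaGrad` and `Adam` below) controls whether `Optimize()` reports the exact final objective or the cheaper estimate accumulated on the last pass over the data. A minimal sketch follows, assuming a hypothetical `MeanSquaredError` class that implements ensmallen's separable-function interface (`NumFunctions()`, `Shuffle()`, batch `Evaluate()`/`Gradient()`); the `rho = 0.95` and `epsilon = 1e-6` arguments mirror the library defaults rather than anything shown in this diff.

#include <ensmallen.hpp>
#include <iostream>

// Hypothetical separable objective: sum_i ||x - d_i||^2 over a small dataset.
class MeanSquaredError
{
 public:
  MeanSquaredError() : data(arma::randn<arma::mat>(2, 10)) { }

  size_t NumFunctions() const { return data.n_cols; }
  void Shuffle() { /* visitation order does not matter for this toy problem */ }

  // Objective over the batch of functions [i, i + batchSize).
  double Evaluate(const arma::mat& x, const size_t i, const size_t batchSize)
  {
    double objective = 0.0;
    for (size_t j = i; j < i + batchSize; ++j)
      objective += arma::accu(arma::square(x - data.col(j)));
    return objective;
  }

  // Gradient of the same batch.
  void Gradient(const arma::mat& x,
                const size_t i,
                arma::mat& gradient,
                const size_t batchSize)
  {
    gradient.zeros(x.n_rows, x.n_cols);
    for (size_t j = i; j < i + batchSize; ++j)
      gradient += 2.0 * (x - data.col(j));
  }

 private:
  arma::mat data;
};

int main()
{
  MeanSquaredError f;
  arma::mat coordinates(2, 1, arma::fill::randu);

  // New last constructor argument per this commit: exactObjective = false
  // requests the estimate accumulated on the final pass over the data.
  ens::AdaDelta optimizer(1.0, 5, 0.95, 1e-6, 10000, 1e-5,
                          true,    // shuffle
                          true,    // resetPolicy
                          false);  // exactObjective
  const double estimated = optimizer.Optimize(f, coordinates);

  // The new accessor switches to computing the exact final objective instead.
  optimizer.ExactObjective() = true;
  const double exact = optimizer.Optimize(f, coordinates);

  std::cout << "estimated: " << estimated << ", exact: " << exact << std::endl;
  return 0;
}

Since `ExactObjective()` returns a reference, the flag can also be flipped after construction, mirroring the existing `Shuffle()` and `ResetPolicy()` accessors.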

inst/include/ensmallen_bits/ada_grad/ada_grad.hpp

Lines changed: 9 additions & 1 deletion

@@ -64,14 +64,17 @@ class AdaGrad
    *        function is visited in linear order.
    * @param resetPolicy If true, parameters are reset before every Optimize
    *        call; otherwise, their values are retained.
+   * @param exactObjective Calculate the exact objective (Default: estimate the
+   *        final objective obtained on the last pass over the data).
    */
   AdaGrad(const double stepSize = 0.01,
           const size_t batchSize = 32,
           const double epsilon = 1e-8,
           const size_t maxIterations = 100000,
           const double tolerance = 1e-5,
           const bool shuffle = true,
-          const bool resetPolicy = true);
+          const bool resetPolicy = true,
+          const bool exactObjective = false);

   /**
    * Optimize the given function using AdaGrad. The given starting point will
@@ -119,6 +122,11 @@ class AdaGrad
   //! Modify whether or not the individual functions are shuffled.
   bool& Shuffle() { return optimizer.Shuffle(); }

+  //! Get whether or not the actual objective is calculated.
+  bool ExactObjective() const { return optimizer.ExactObjective(); }
+  //! Modify whether or not the actual objective is calculated.
+  bool& ExactObjective() { return optimizer.ExactObjective(); }
+
   //! Get whether or not the update policy parameters
   //! are reset before Optimize call.
   bool ResetPolicy() const { return optimizer.ResetPolicy(); }

inst/include/ensmallen_bits/ada_grad/ada_grad_impl.hpp

Lines changed: 4 additions & 2 deletions

@@ -23,15 +23,17 @@ inline AdaGrad::AdaGrad(const double stepSize,
                         const size_t maxIterations,
                         const double tolerance,
                         const bool shuffle,
-                        const bool resetPolicy) :
+                        const bool resetPolicy,
+                        const bool exactObjective) :
     optimizer(stepSize,
               batchSize,
               maxIterations,
               tolerance,
               shuffle,
               AdaGradUpdate(epsilon),
               NoDecay(),
-              resetPolicy)
+              resetPolicy,
+              exactObjective)
 { /* Nothing to do. */ }

 } // namespace ens

inst/include/ensmallen_bits/adam/adam.hpp

Lines changed: 9 additions & 1 deletion

@@ -88,6 +88,8 @@ class AdamType
    *        function is visited in linear order.
    * @param resetPolicy If true, parameters are reset before every Optimize
    *        call; otherwise, their values are retained.
+   * @param exactObjective Calculate the exact objective (Default: estimate the
+   *        final objective obtained on the last pass over the data).
    */
   AdamType(const double stepSize = 0.001,
            const size_t batchSize = 32,
@@ -97,7 +99,8 @@ class AdamType
            const size_t maxIterations = 100000,
            const double tolerance = 1e-5,
            const bool shuffle = true,
-           const bool resetPolicy = true);
+           const bool resetPolicy = true,
+           const bool exactObjective = false);

   /**
    * Optimize the given function using Adam. The given starting point will be
@@ -155,6 +158,11 @@ class AdamType
   //! Modify whether or not the individual functions are shuffled.
   bool& Shuffle() { return optimizer.Shuffle(); }

+  //! Get whether or not the actual objective is calculated.
+  bool ExactObjective() const { return optimizer.ExactObjective(); }
+  //! Modify whether or not the actual objective is calculated.
+  bool& ExactObjective() { return optimizer.ExactObjective(); }
+
   //! Get whether or not the update policy parameters
   //! are reset before Optimize call.
   bool ResetPolicy() const { return optimizer.ResetPolicy(); }
