Commit 42927e2

Merge pull request #129 from JuliaOpt/dev-nightly ("Dev nightly")
2 parents 71f9df7 + 1539126

6 files changed (+262, -50 lines)

doc/conf.py

Lines changed: 3 additions & 3 deletions
@@ -31,7 +31,7 @@
 # ones.
 extensions = [
     'sphinx.ext.coverage',
-    'sphinx.ext.mathjax',
+    'sphinx.ext.mathjax',
 ]

 # Add any paths that contain templates here, relative to this directory.
@@ -56,9 +56,9 @@
 # built documents.
 #
 # The short X.Y version.
-version = '0.1'
+version = '0.2'
 # The full version, including alpha/beta/rc tags.
-release = '0.1'
+release = '0.2.0'

 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.

doc/index.rst

Lines changed: 28 additions & 6 deletions
@@ -7,13 +7,18 @@
 StochDynamicProgramming
 =======================

-This package implements the Stochastic Dual Dynamic Programming (SDDP) algorithm with Julia. It relies upon MathProgBase_ and JuMP_.
+This package implements the Stochastic Dual Dynamic Programming (SDDP)
+algorithm with Julia. It relies upon MathProgBase_ and JuMP_, and is compatible
+with both Julia v0.4 and v0.5.

 A complete overview of this algorithm could be found here_.

-At the moment the plan is to create a type such that :
+Description of SDDP
+^^^^^^^^^^^^^^^^^^^

-- you can fix linear time :math:`t` cost function (then convex piecewise linear)
+At the moment:
+
+- you can fix linear or quadratic time :math:`t` cost function (then convex piecewise linear)
 - you can fix linear dynamic (with a physical state :math:`x` and a control :math:`u`)
 - the scenarios are created by assuming that the noise :math:`\xi_t` is independent in time, each given by a table (value, probability)

@@ -27,19 +32,34 @@ Then it is standard SDDP :

 Once solved the SDDP model should be able to :

-- give the lower bound on the cost
+- give the lower bound on the cost and its evolution along iterations
 - simulate trajectories to evaluate expected cost
 - give the optimal control given current time state and noise


+Supported features
+^^^^^^^^^^^^^^^^^^
+
+.. .. table:
+.. ====== =========== ===============
+.. Solver Is working? Quadratic costs
+.. ====== =========== ===============
+.. Linear Cost Yes No
+.. Quadratic Cost Yes Yes
+.. Integer controls
+.. Quadratic regularization
+.. Cuts pruning
+
+
+Ongoing developments
+^^^^^^^^^^^^^^^^^^^^
+
 We have a lot of ideas to implement further :

 - spatial construction (i.e : defining stock one by one and then their interactions)
 - noise as AR (eventually with fitting on historic datas)
 - convex solvers
 - refined stopping rules
-- cut pruning
-- paralellization



@@ -49,7 +69,9 @@ Contents:
 .. toctree::
     install
     quickstart
+    sddp_api
     tutorial
+    quickstart_sdp
     install_windows

 .. _MathProgBase: http://mathprogbasejl.readthedocs.org/en/latest/
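For readers skimming this diff, the recursion that the package's SDDP algorithm approximates can be stated as the standard Bellman equation (background only, not part of the commit; the dynamic :math:`f`, control set :math:`U_t`, and zero terminal condition are our notational assumptions):

```latex
% Standard SDDP background: dynamic programming recursion whose value
% functions V_t are approximated from below by piecewise-linear cuts.
V_t(x_t) = \min_{u_t \in U_t} \; \mathbb{E}_{\xi_t}
    \left[ C(x_t, u_t, \xi_t) + V_{t+1}\bigl(f(x_t, u_t, \xi_t)\bigr) \right],
\qquad V_T \equiv 0
```

SDDP builds an outer (lower) polyhedral approximation of each :math:`V_t` by alternating forward simulations and backward cut-generation passes, which is why the docs above speak of a lower bound that improves along iterations.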

doc/install.rst

Lines changed: 9 additions & 9 deletions
@@ -32,18 +32,18 @@ Refer to the documentation of JuMP_ to get another solver and interface it with
 The following solvers have been tested:

 .. table:
-    ====== ===========
-    Solver Is working?
-    ====== ===========
-    Clp    Yes
-    CPLEX  Yes
-    Gurobi Yes
-    GLPK   **No**
-    ====== ===========
+    ====== =========== ===============
+    Solver Is working? Quadratic costs
+    ====== =========== ===============
+    Clp    Yes         No
+    CPLEX  Yes         Yes
+    Gurobi Yes         Yes
+    GLPK   **No**      **No**
+    ====== =========== ===============

 Run Unit-Tests
 --------------
-To run unit-tests::
+To run unit-tests (depend upon `FactCheck.jl`)::

     $ julia test/runtests.jl

doc/quickstart.rst

Lines changed: 41 additions & 25 deletions
@@ -4,29 +4,33 @@
 SDDP: Step-by-step example
 ====================

-This page gives a short introduction to the interface of this package. It explains the resolution with SDDP of a classical example: the management of a dam over one year with random inflow.
+This page gives a short introduction to the interface of this package. It
+explains the resolution with SDDP of a classical example: the management of a
+dam over one year with random inflow.

 Use case
 ========
-In the following, :math:`x_t` will denote the state and :math:`u_t` the control at time :math:`t`.
-We will consider a dam, whose dynamic is:
+In the following, :math:`x_t` denotes the state and :math:`u_t` the control at time :math:`t`.
+We consider a dam, whose dynamic is:

 .. math::
     x_{t+1} = x_t - u_t + w_t

-At time :math:`t`, we have a random inflow :math:`w_t` and we choose to turbine a quantity :math:`u_t` of water.
+At time :math:`t`, we have a random inflow :math:`w_t` and we choose to turbine
+a quantity :math:`u_t` of water.

-The turbined water is used to produce electricity, which is being sold at a price :math:`c_t`. At time :math:`t` we gain:
+The turbined water is used to produce electricity, which is being sold at a
+price :math:`c_t`. At time :math:`t` we gain:

 .. math::
     C(x_t, u_t, w_t) = c_t \times u_t

 We want to minimize the following criterion:

 .. math::
-    J = \underset{x, u}{\min} \sum_{t=0}^{T-1} C(x_t, u_t, w_t)
+    J = \underset{x, u}{\min} \mathbb{E} \;\left[ \sum_{t=0}^{T-1} C(x_t, u_t, w_t) \right]

-We will assume that both states and controls are bounded:
+We assume that both states and controls are bounded:

 .. math::
     x_t \in [0, 100], \qquad u_t \in [0, 7]
@@ -35,7 +39,8 @@ We will assume that both states and controls are bounded:
 Problem definition in Julia
 ===========================

-We will consider 52 time steps as we want to find optimal value functions for one year::
+We consider 52 time steps as we want to find optimal value functions every week
+during one year::

     N_STAGES = 52

@@ -44,7 +49,7 @@ and we consider the following initial position::

     X0 = [50]

-Note that X0 is a vector.
+Note that `X0` is a vector.

 Dynamic
 ^^^^^^^
@@ -59,7 +64,8 @@ We write the dynamic (which return a vector)::
 Cost
 ^^^^

-we store evolution of costs :math:`c_t` in an array `COSTS`, and we define the cost function (which return a float)::
+We store evolution of costs :math:`c_t` in an array `COSTS`, and we define
+the cost function (which return a float)::

     function cost_t(t, x, u, w)
         return COSTS[t] * u[1]
@@ -68,7 +74,7 @@ we store evolution of costs :math:`c_t` in an array `COSTS`, and we define the c
 Noises
 ^^^^^^

-Noises are defined in an array of Noiselaw. This type defines a discrete probability.
+Noises are defined in an array of `Noiselaw`. This type defines a discrete probability.


 For instance, if we want to define a uniform probability with size :math:`N= 10`, such that:
@@ -80,7 +86,7 @@ we write::

     N = 10
     proba = 1/N*ones(N) # uniform probabilities
-    xi_support = collect(linspace(1,N,N))
+    xi_support = collect(linspace(1, N, N))
    xi_law = NoiseLaw(xi_support, proba)


@@ -92,7 +98,7 @@ Thus, we could define a different probability laws for each time :math:`t`. Here
 Bounds
 ^^^^^^

-We add bounds over the state and the control::
+Both state and control are bounded:

     s_bounds = [(0, 100)]
     u_bounds = [(0, 7)]
@@ -103,7 +109,7 @@ Problem definition

 As our problem is purely linear, we instantiate::

-    spmodel = LinearDynamicLinearCostSPmodel(N_STAGES,u_bounds,X0,cost_t,dynamic,xi_laws)
+    spmodel = LinearSPModel(N_STAGES, u_bounds, X0, cost_t, dynamic, xi_laws)

 We add the state bounds to the model afterward::

@@ -119,42 +125,52 @@ First, we need to use a LP solver::

     using Clp
     SOLVER = ClpSolver()

-Clp is automatically installed during package installation. To install different solvers on your machine, refer to the JuMP_ documentation.
+Clp is automatically installed during packages' installation. To install
+different solvers on your machine, refer to the JuMP_ documentation.

-Once the solver installed, we define SDDP algorithm parameters::
+Once the solver is installed, we define SDDP parameters::

     forwardpassnumber = 2 # number of forward pass
-    sensibility = 0. # admissible gap between upper and lower bound
+    gap = 0. # admissible gap between upper and lower bound
     max_iter = 20 # maximum number of iterations

-    paramSDDP = SDDPparameters(SOLVER, forwardpassnumber, sensibility, max_iter)
+    paramSDDP = SDDPparameters(SOLVER,
+                               passnumber=forwardpassnumber,
+                               gap=gap,
+                               max_iterations=max_iter)


 Now, we solve the problem by computing Bellman values::

-    V, pbs = solve_SDDP(spmodel, paramSDDP, 10) # display information every 10 iterations
+    V, pbs, stats = solve_SDDP(spmodel, paramSDDP, 10) # display information every 10 iterations

-:code:`V` is an array storing the value functions, and :code:`pbs` a vector of JuMP.Model storing each value functions as a linear problem.
+:code:`V` is an array storing the value functions, and :code:`pbs` a vector of
+JuMP.Model storing each value functions as a linear problem.
+:code:`stats` is an object which stores a few informations about the convergence
+of SDDP (execution time, evolution of upper and lower bounds, number of calls to
+solver, etc.).

-We have an exact lower bound given by :code:`V` with the function::
+The exact lower bound is given by the function::

     lb_sddp = StochDynamicProgramming.get_lower_bound(spmodel, paramSDDP, V)


 Find optimal controls
 =====================

-Once Bellman functions are computed, we can control our system over assessments scenarios, without assuming knowledge of the future.
+Once Bellman functions are computed, we can control our system over
+assessments scenarios, without assuming knowledge of the future.

 We build 1000 scenarios according to the laws stored in :code:`xi_laws`::

-    scenarios = StochDynamicProgramming.simulate_scenarios(xi_laws,1000)
+    scenarios = StochDynamicProgramming.simulate_scenarios(xi_laws, 1000)

 We compute 1000 simulations of the system over these scenarios::

-    costsddp, stocks = forward_simulations(spmodel, paramSDDP, V, pbs, scenarios)
+    costsddp, stocks = forward_simulations(spmodel, paramSDDP, pbs, scenarios)

-:code:`costsddp` returns the costs for each scenario, and :code:`stocks` the evolution of each stock along time, for each scenario.
+:code:`costsddp` returns the costs for each scenario, and :code:`stocks`
+the evolution of stocks along time, for each scenario.

 .. _JuMP: http://jump.readthedocs.io/en/latest/installation.html#coin-or-clp-and-cbc
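As a cross-check of the dam model described in doc/quickstart.rst, the dynamic and stage cost are simple enough to roll forward by hand. The sketch below re-implements them in Python (the package itself is Julia; all names here, and the naive constant-turbine policy, are illustrative and not part of the package API): it iterates :math:`x_{t+1} = x_t - u_t + w_t` under the bounds :math:`x_t \in [0, 100]`, :math:`u_t \in [0, 7]` and accumulates :math:`c_t u_t`.

```python
import random

# Illustrative re-implementation of the quickstart's dam model (Python sketch,
# hypothetical names -- not the Julia package's API).
N_STAGES = 52            # weekly stages over one year
X_MIN, X_MAX = 0.0, 100.0  # state bounds: x_t in [0, 100]
U_MAX = 7.0              # control bound: u_t in [0, 7]

def simulate(x0, policy, inflows, costs):
    """Roll x_{t+1} = x_t - u_t + w_t forward, accumulating c_t * u_t."""
    x, total = float(x0), 0.0
    for t in range(N_STAGES):
        u = min(policy(t, x), U_MAX, x)   # feasible turbined quantity
        total += costs[t] * u             # stage cost c_t * u_t
        x = min(max(x - u + inflows[t], X_MIN), X_MAX)  # bounded dynamic
    return total, x

random.seed(0)
inflows = [random.uniform(1.0, 10.0) for _ in range(N_STAGES)]  # random w_t
costs = [-1.0] * N_STAGES  # negative cost: turbining earns the selling price
total, x_final = simulate(50, lambda t, x: U_MAX, inflows, costs)
```

Where the constant policy above always turbines as much as possible, SDDP instead picks :math:`u_t` as the minimizer of the current stage cost plus the cut approximation of the next value function, which is what `forward_simulations` evaluates over the assessment scenarios.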
0 commit comments