
Commit 55081e1 (parent: 7665569)

[DEV] Update sphinx documentation

File tree: 4 files changed, +287 -10 lines

doc/index.rst

Lines changed: 7 additions & 10 deletions
@@ -3,8 +3,9 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.

-StochDynamicProgramming Index
-=============================
+=======================
+StochDynamicProgramming
+=======================

 This package implements the Stochastic Dual Dynamic Programming (SDDP) algorithm with Julia. It relies upon MathProgBase_ and JuMP_.

@@ -43,18 +44,14 @@ We have a lot of ideas to implement further :


 Contents:
+=========

 .. toctree::
-   :maxdepth: 2
+   install
+   quickstart
+   tutorial

 .. _MathProgBase: http://mathprogbasejl.readthedocs.org/en/latest/
 .. _here: http://www2.isye.gatech.edu/people/faculty/Alex_Shapiro/ONS-FR.pdf
 .. _JuMP: http://jump.readthedocs.org/en/latest/

-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
-
doc/install.rst

Lines changed: 49 additions & 0 deletions
.. _install:

==================
Installation guide
==================


StochDynamicProgramming installation
------------------------------------

To install StochDynamicProgramming::

    julia> Pkg.clone("https://github.com/leclere/StochDynamicProgramming.jl")

Once the package is installed, you can import it in the REPL::

    julia> using StochDynamicProgramming

Install a linear programming solver
-----------------------------------

SDDP needs a linear programming solver to run.

Refer to the documentation of JuMP_ to get a solver and interface it with SDDP.

The following solvers have been tested:

====== ===========
Solver Is working?
====== ===========
Clp    Yes
CPLEX  Yes
Gurobi Yes
GLPK   **No**
====== ===========

Run unit tests
--------------

To run the unit tests::

    $ julia test/runtests.jl

.. _JuMP: http://jump.readthedocs.org/en/latest/installation.html#getting-solvers

doc/quickstart.rst

Lines changed: 154 additions & 0 deletions
.. _quickstart:

====================
Step-by-step example
====================

This page gives a short introduction to the interface of this package. It explains how to solve a classical example with SDDP: the management of a dam over one year with a random inflow.

Use case
========
In the following, :math:`x_t` denotes the state and :math:`u_t` the control at time :math:`t`.
We consider a dam whose dynamics are:

.. math::
    x_{t+1} = x_t - u_t + w_t

At time :math:`t`, we have a random inflow :math:`w_t` and we choose to turbine a quantity :math:`u_t` of water.

The turbined water is used to produce electricity, which is sold at a price :math:`c_t`. At time :math:`t` we gain:

.. math::
    C(x_t, u_t, w_t) = c_t \times u_t

We want to minimize the following criterion:

.. math::
    J = \underset{x, u}{\min} \sum_{t=0}^{T-1} C(x_t, u_t, w_t)

We assume that both states and controls are bounded:

.. math::
    x_t \in [0, 100], \qquad u_t \in [0, 7]

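Before setting the problem up in the package, the model above can be sandboxed in plain Julia. The sketch below is illustrative only: the helper `simulate_dam`, the clamping of the state to its bounds, and the constant inflow are assumptions made for this example, not part of StochDynamicProgramming.

```julia
# Standalone sketch of the dam model above (does not use StochDynamicProgramming).
# Dynamics: x_{t+1} = x_t - u_t + w_t, with x_t kept in [0, 100].

function simulate_dam(x0, controls, inflows; xmin=0.0, xmax=100.0)
    x = float(x0)
    trajectory = [x]
    for (u, w) in zip(controls, inflows)
        x = clamp(x - u + w, xmin, xmax)   # apply the dynamics, enforce state bounds
        push!(trajectory, x)
    end
    return trajectory
end

# Turbine 7 units at every stage, with a deterministic inflow of 5:
traj = simulate_dam(50.0, fill(7.0, 52), fill(5.0, 52))
```

With these inputs the stock decreases by 2 per stage until it hits the lower bound, which illustrates why the state bounds matter in the optimization.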
Problem definition in Julia
===========================

We consider 52 time steps, as we want to find optimal value functions over one year::

    N_STAGES = 52

and we consider the following initial position::

    X0 = [50]

Dynamic
^^^^^^^

We write the dynamics, matching the equation above::

    function dynamic(t, x, u, xi)
        return [x[1] - u[1] + xi[1]]
    end

Cost
^^^^

We store the evolution of the prices :math:`c_t` in an array `COSTS`, and we define the cost function::

    function cost_t(t, x, u, w)
        return COSTS[t] * u[1]
    end

Noises
^^^^^^

Noises are defined in an array of :code:`NoiseLaw`. This type defines a discrete probability distribution.

For instance, to define a uniform distribution of size :math:`N = 10`, such that:

.. math::
    \mathbb{P} \left(X_i = i \right) = \dfrac{1}{N} \qquad \forall i \in 1 .. N

we write::

    N = 10
    proba = 1/N*ones(N) # uniform probabilities
    xi_support = collect(linspace(1, N, N))
    xi_law = NoiseLaw(xi_support, proba)

Thus, we could define a different probability law for each time :math:`t`. Here, we suppose that the law is constant over time, so we build the following vector::

    xi_laws = NoiseLaw[xi_law for t in 1:N_STAGES-1]

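As a sanity check, a discrete law like the one above can be reproduced in a few lines of plain Julia. :code:`DiscreteLaw` and :code:`expectation` below are hypothetical stand-ins written for this sketch; the package's :code:`NoiseLaw` type is richer.

```julia
# Minimal stand-in for a discrete probability law (illustrative only;
# not the package's NoiseLaw type).
struct DiscreteLaw
    support::Vector{Float64}
    proba::Vector{Float64}
    function DiscreteLaw(support, proba)
        @assert length(support) == length(proba)
        @assert isapprox(sum(proba), 1.0)   # probabilities must sum to one
        new(float.(support), float.(proba))
    end
end

# Expectation of the law: sum of support points weighted by probabilities.
expectation(law::DiscreteLaw) = sum(law.support .* law.proba)

# Uniform law on {1, ..., 10}, as in the quickstart:
N = 10
law = DiscreteLaw(collect(1:N), fill(1/N, N))
```

For the uniform law on :math:`1..10`, the expectation is :math:`5.5`, which gives a quick check that the support/probability pair is consistent.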
Bounds
^^^^^^

We add bounds on the state and the control::

    s_bounds = [(0, 100)]
    u_bounds = [(0, 7)]

Problem definition
^^^^^^^^^^^^^^^^^^

As our problem is purely linear, we instantiate::

    spmodel = LinearDynamicLinearCostSPmodel(N_STAGES, u_bounds, X0, cost_t, dynamic, xi_laws)

Solver
^^^^^^
We define an SDDP solver for our problem.

First, we need an LP solver::

    using Clp
    SOLVER = ClpSolver()

Clp is automatically installed along with the package. To install another solver on your machine, refer to the JuMP_ documentation.

Once the solver is installed, we define the parameters of the SDDP algorithm::

    forwardpassnumber = 2 # number of forward passes
    sensibility = 0.      # admissible gap between upper and lower bounds
    max_iter = 20         # maximum number of iterations

    paramSDDP = SDDPparameters(SOLVER, forwardpassnumber, sensibility, max_iter)

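The role of these parameters can be pictured through the stopping test they control. The sketch below assumes a simple gap-or-iteration-count rule; :code:`should_stop` is a made-up name, and the package's actual stopping criterion may differ.

```julia
# Illustrative stopping test for an SDDP loop: stop when the gap between
# upper and lower bounds falls below `sensibility`, or after `max_iter`
# iterations (names mirror the parameters above; logic is a sketch).
function should_stop(upper_bound, lower_bound, iteration; sensibility=0.0, max_iter=20)
    gap = abs(upper_bound - lower_bound)
    return gap <= sensibility || iteration >= max_iter
end
```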
Now we compute the Bellman values::

    V, pbs = solve_SDDP(spmodel, paramSDDP, 10) # display information every 10 iterations

:code:`V` is an array storing the value functions, and :code:`pbs` is a vector of :code:`JuMP.Model` storing each value function as a linear problem.

We can estimate the lower bound given by :code:`V` with::

    lb_sddp = StochDynamicProgramming.get_lower_bound(spmodel, paramSDDP, V)

Find optimal control over given scenarios
=========================================

Once the Bellman functions are computed, we can control our system over assessment scenarios.

We build 1000 scenarios according to the laws stored in :code:`xi_laws`::

    scenarios = StochDynamicProgramming.simulate_scenarios(xi_laws, 1000)

and run 1000 simulations over these scenarios::

    costsddp, stocks = forward_simulations(spmodel, paramSDDP, V, pbs, scenarios)

:code:`costsddp` contains the cost along each scenario, and :code:`stocks` the evolution of each stock over time.

.. _JuMP: http://jump.readthedocs.io/en/latest/installation.html#coin-or-clp-and-cbc

doc/tutorial.rst

Lines changed: 77 additions & 0 deletions
========
Tutorial
========

This page gives an overview of the functions implemented in the package.

In the following, :code:`model` denotes an :code:`SPModel` instance storing the definition of a stochastic problem, and :code:`param` an :code:`SDDPparameters` instance storing the parameters specified for SDDP. See quickstart_ for more information about these two objects.

Work with PolyhedralFunction
============================

Get Bellman value at a given point
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We can estimate the Bellman value at a given position :code:`xt` with a :code:`PolyhedralFunction` :code:`Vt`::

    vx = get_bellman_value(model, param, t, Vt, xt)


Get optimal control at a given point
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To get the optimal control at a given point :code:`xt` and for a given noise realization :code:`xi`::

    get_control(model, param, lpproblem, t, xt, xi)

where :code:`lpproblem` is the linear problem storing the evaluation of the Bellman function at time :math:`t`.

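A polyhedral function represents a value function as the maximum of a family of affine cuts. As a standalone illustration, evaluating such an approximation at a point amounts to taking that maximum; the helper :code:`polyhedral_value` and the cut data below are made up for this sketch and are not the package's API.

```julia
# Evaluate a polyhedral approximation V(x) = max_k (lambda_k * x + beta_k)
# in the one-dimensional case (illustrative cut data).
function polyhedral_value(lambdas::Vector{Float64}, betas::Vector{Float64}, x::Float64)
    return maximum(lambdas[k] * x + betas[k] for k in eachindex(lambdas))
end

lambdas = [-1.0, 0.0, 1.0]   # cut slopes
betas   = [ 0.0, 1.0, 0.0]   # cut intercepts
v = polyhedral_value(lambdas, betas, 2.0)
```

In the package, :code:`get_bellman_value` solves a linear program instead, which also handles multidimensional states and constraints, but the underlying object is the same max-of-cuts approximation.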

Save and load pre-computed cuts
===============================

We suppose that we have computed Bellman functions with SDDP. These functions are stored in a vector of :code:`PolyhedralFunction` denoted :code:`V`.

These functions can be stored in a text file with::

    StochDynamicProgramming.dump_polyhedral_functions("yourfile.dat", V)

and loaded back with::

    Vdump = StochDynamicProgramming.read_polyhedral_functions("yourfile.dat")


Build LP Models with PolyhedralFunctions
========================================

We can build a vector of :code:`JuMP.Model` from a vector of :code:`PolyhedralFunction` to perform simulations. For this, use::

    problems = StochDynamicProgramming.hotstart_SDDP(model, param, V)


SDDP hotstart
=============

If cuts are already available, we can hotstart SDDP by calling the overloaded :code:`solve_SDDP`::

    V, pbs = solve_SDDP(model, params, 0, V)


Cuts pruning
============

The longer SDDP runs, the more cuts are stored. It is sometimes useful to delete cuts that are useless for the computation of the approximated Bellman functions.

To clean a single :code:`PolyhedralFunction` :code:`Vt`::

    Vt = exact_prune_cuts(model, params, Vt)

To clean a vector of :code:`PolyhedralFunction` :code:`V`::

    prune_cuts!(model, params, V)

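The idea behind pruning can be illustrated in plain Julia: keep a cut only if it is the dominant one at some test point. The sketch below uses a heuristic grid test with a made-up helper :code:`prune_on_grid`; the package's :code:`exact_prune_cuts` uses an exact test instead.

```julia
# Grid-based cut pruning sketch: a cut survives only if it attains the
# maximum at some point of a test grid (heuristic, illustrative only).
function prune_on_grid(lambdas::Vector{Float64}, betas::Vector{Float64}, grid)
    active = falses(length(lambdas))
    for x in grid
        values = [lambdas[k] * x + betas[k] for k in eachindex(lambdas)]
        active[argmax(values)] = true       # mark the dominant cut at x
    end
    return lambdas[active], betas[active]
end

lambdas = [1.0, 1.0, -1.0]
betas   = [0.0, -5.0, 0.0]   # second cut is dominated everywhere by the first
l2, b2 = prune_on_grid(lambdas, betas, -10.0:1.0:10.0)
```

Here the dominated cut :math:`x - 5` is never active, so it is removed while the two useful cuts survive.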
.. _quickstart: quickstart.html
