# StochDynamicProgramming


| **Documentation** | **Build Status** | **Social** |
|:-----------------:|:----------------:|:----------:|
| | [![Build Status][build-img]][build-url] | [![Gitter][gitter-img]][gitter-url] |
| [![][docs-stable-img]][docs-stable-url] | [![Codecov branch][codecov-img]][codecov-url] | [<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Discourse_logo.png/799px-Discourse_logo.png" width="64">][discourse-url] |

This is a Julia package for optimizing controlled stochastic dynamic systems
in discrete time. It offers three methods of resolution:

- *Stochastic Dual Dynamic Programming* (SDDP) algorithm.
- *Extensive formulation*.
- *Stochastic Dynamic Programming*.

It is built on top of [JuMP](https://github.com/JuliaOpt/JuMP.jl).

StochDynamicProgramming asks the user to explicitly provide the cost `c(t, x, u, w)` and
dynamics `f(t, x, u, w)` functions. Note that the package was developed back
in 2016, and some parts of its API are not idiomatic Julia.
For other implementations of the SDDP algorithm in Julia, we recommend
having a look at these two packages:

* [SDDP.jl](https://github.com/odow/SDDP.jl)
* [StructDualDynProg.jl](https://github.com/JuliaStochOpt/StructDualDynProg.jl)

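As a rough sketch of what those two user-supplied functions can look like, consider a hypothetical one-dimensional inventory problem. All names and numbers below are illustrative assumptions, not the package's API:

```julia
# Stage cost c(t, x, u, w): an ordering cost on the control plus a
# holding cost on the stock (illustrative coefficients).
cost(t, x, u, w) = 2.0 * u[1] + 0.5 * x[1]

# Dynamics f(t, x, u, w): next stock = stock + order - random demand.
dynamics(t, x, u, w) = [x[1] + u[1] - w[1]]
```

These two functions are then handed to the model constructor together with the noise distributions and the bounds on states and controls.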
## What problems does this package solve?

StochDynamicProgramming targets problems with:

- Stage-wise independent discrete noise
- Linear dynamics
- Linear or convex piecewise linear costs

Extensions to non-linear formulations are under development.

### Why SDDP?

SDDP is a dynamic programming algorithm relying on cutting planes. The algorithm requires convexity
of the value function but does not discretize the state space. Its complexity is linear in the
number of stages, and it can accommodate higher-dimensional state spaces than standard dynamic
programming. The algorithm returns an exact lower bound, an estimated upper bound, and approximate
optimal control strategies.

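To illustrate the cutting-plane idea (a hand-written sketch, not the package's internals): SDDP approximates the convex value function from below by the pointwise maximum of affine cuts collected during its iterations.

```julia
# Hypothetical (slope, intercept) pairs standing in for cuts that SDDP
# would compute; the values here are illustrative only.
cuts = [(-1.0, 0.0), (0.0, -1.0), (1.0, -3.0)]

# Polyhedral lower approximation of the value function at state x:
# the pointwise maximum of the affine cuts.
V_lower(x) = maximum(a * x + b for (a, b) in cuts)
```

Each iteration adds a new cut at the current trial state, tightening the lower bound without ever discretizing `x`.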
### Why Extensive formulation?

An extensive formulation approach consists in representing the stochastic problem as a deterministic
one and then calling a standard deterministic solver. It is mainly usable in a linear
setting. Computational complexity is exponential in the number of stages.

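The exponential growth can be made concrete: with `S` possible noise outcomes per stage and `T` stages, the deterministic equivalent carries one copy of the decision variables per scenario (the helper below is purely illustrative):

```julia
# Number of scenarios, hence of variable copies, in the deterministic
# equivalent: S outcomes per stage over a horizon of T stages.
n_scenarios(S, T) = S^T
```

Even a modest instance with 2 outcomes per stage and 20 stages already requires over a million scenarios, which is why the approach is restricted to short horizons.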
### Why Stochastic Dynamic Programming?

Dynamic Programming is a standard tool to solve stochastic optimal control problems with
independent noise. The method requires discretizing the state space, and its
complexity is exponential in the dimension of the state space.

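To make the recursion concrete, here is a minimal, self-contained sketch of the backward pass on a discretized scalar state grid. Everything below is illustrative (it assumes `dynamics` maps back onto the grid) and is not the package's API:

```julia
# Backward dynamic programming on a discretized state grid.
# V[t][x] is the expected cost-to-go from grid state x at stage t.
function backward_dp(T, states, controls, noises, probs, cost, dynamics)
    V = [Dict(x => 0.0 for x in states) for _ in 1:T+1]
    for t in T:-1:1, x in states
        # Minimize, over controls, the expected stage cost plus the
        # expected cost-to-go at the next state, averaged over the noise.
        V[t][x] = minimum(
            sum(p * (cost(t, x, u, w) + V[t+1][dynamics(t, x, u, w)])
                for (w, p) in zip(noises, probs))
            for u in controls)
    end
    return V
end

# Illustrative inventory instance: ordering cost 2u, holding cost x,
# demand w in {0, 1, 2}; the dynamics clamp the stock onto the grid 0:5.
cost(t, x, u, w) = 2u + x
dynamics(t, x, u, w) = clamp(x + u - w, 0, 5)
V = backward_dp(3, 0:5, 0:2, [0, 1, 2], [0.3, 0.4, 0.3], cost, dynamics)
```

Note that the loop visits every point of the state grid at every stage, which is exactly the discretization cost that SDDP avoids.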
## Installation

StochDynamicProgramming is a registered Julia package.
To install the package, open Julia and enter

```julia
julia> ]
pkg> add StochDynamicProgramming
```


## Usage

IJulia Notebooks are provided to explain how this package works.
A first example on a two dams valley is available [here](http://nbviewer.jupyter.org/github/leclere/StochDP-notebooks/blob/master/notebooks/damsvalley.ipynb).