@@ -91,7 +91,7 @@ Thus, we could define a different probability law for each time :math:`t`. Here
Bounds
^^^^^^

- We could add bounds over the state and the control::
+ We add bounds over the state and the control::

    s_bounds = [(0, 100)]
    u_bounds = [(0, 7)]
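+
+ As a hypothetical illustration (this tutorial's example has a single stock; these values are made up), a problem with two stocks and two controls would take one :code:`(min, max)` pair per dimension::
+
+     s_bounds = [(0, 100), (0, 50)]
+     u_bounds = [(0, 7), (0, 4)]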
@@ -100,7 +100,7 @@ We could add bounds over the state and the control::
Problem definition
^^^^^^^^^^^^^^^^^^

- As our problem is purely linear, we could instantiate::
+ As our problem is purely linear, we instantiate::

    spmodel = LinearDynamicLinearCostSPmodel(N_STAGES, u_bounds, X0, cost_t, dynamic, xi_laws)

@@ -114,9 +114,9 @@ First, we need to use an LP solver::
    using Clp
    SOLVER = ClpSolver()

- Clp is automatically installed during package installation. To install the solver on your machine, refer to the JuMP_ documentation.
+ Clp is automatically installed during package installation. To install different solvers on your machine, refer to the JuMP_ documentation.
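+
+ For instance, assuming the Cbc package is installed (a hedged sketch; :code:`CbcSolver` is the solver object exported by Cbc.jl under the same interface)::
+
+     using Cbc
+     SOLVER = CbcSolver()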

- Once the solver installed, we could define the parameters of the SDDP algorithm::
+ Once the solver is installed, we define the SDDP algorithm parameters::

    forwardpassnumber = 2 # number of forward passes
    sensibility = 0. # admissible gap between upper and lower bound
@@ -125,31 +125,31 @@ Once the solver installed, we could define the parameters of the SDDP algorithm:
    paramSDDP = SDDPparameters(SOLVER, forwardpassnumber, sensibility, max_iter)


- Now, we could compute Bellman values::
+ Now, we solve the problem by computing Bellman values::

    V, pbs = solve_SDDP(spmodel, paramSDDP, 10) # display information every 10 iterations

:code:`V` is an array storing the value functions, and :code:`pbs` a vector of JuMP.Model storing each value function as a linear problem.
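+
+ As a quick sanity check (a hedged sketch, assuming one value function per stage)::
+
+     println(length(V))   # expected to equal N_STAGES
+     println(typeof(pbs)) # a vector of JuMP.Model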

- We could estimate the lower bound given by :code:`V` with the function::
+ We obtain the exact lower bound given by :code:`V` with the function::

    lb_sddp = StochDynamicProgramming.get_lower_bound(spmodel, paramSDDP, V)


- Find optimal control over given scenarios
- =========================================
+ Find optimal controls
+ =====================

- Once Bellman functions are computed, we could control our system over assessments scenarios.
+ Once Bellman functions are computed, we can control our system over assessment scenarios, without assuming knowledge of the future.

We build 1000 scenarios according to the laws stored in :code:`xi_laws`::

    scenarios = StochDynamicProgramming.simulate_scenarios(xi_laws, 1000)

- And we could compute 1000 simulations over these scenarios::
+ We compute 1000 simulations of the system over these scenarios::

    costsddp, stocks = forward_simulations(spmodel, paramSDDP, V, pbs, scenarios)

- :code:`costsddp` returns the value of costs along each scenario, and :code:`stocks` the evolution of each stock along time.
+ :code:`costsddp` holds the cost of each scenario, and :code:`stocks` the evolution of each stock over time, for each scenario.
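+
+ To check consistency (a hedged sketch; :code:`mean` is Julia's built-in average), we can compare the Monte Carlo average of the simulated costs with the exact lower bound obtained earlier::
+
+     println("Lower bound:         ", lb_sddp)
+     println("Mean simulated cost: ", mean(costsddp))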

.. _JuMP: http://jump.readthedocs.io/en/latest/installation.html#coin-or-clp-and-cbc