Commit 668521e

improve docstrings and add docs
1 parent 8b6e5f9 commit 668521e

11 files changed: +421 -81 lines changed

docs/Project.toml

Lines changed: 2 additions & 0 deletions

```toml
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
```

docs/make.jl

Lines changed: 16 additions & 0 deletions

```julia
using Documenter

push!(LOAD_PATH, "../src")
using ApplicationDrivenLearning

makedocs(;
    modules=[ApplicationDrivenLearning],
    doctest=false,
    clean=true,
    sitename="ApplicationDrivenLearning.jl",
    authors="Giovanni Amorim, Joaquim Garcia",
    pages=[
        "Home" => "index.md",
        "API Reference" => "reference.md"
    ]
)
```

docs/src/index.md

Lines changed: 78 additions & 0 deletions

# ApplicationDrivenLearning.jl Documentation

## Introduction

ApplicationDrivenLearning.jl is a Julia package for training time series models with the application driven learning framework, which connects the final cost of the optimization problem to the predictive model's parameters in order to obtain the best model for a given application.

## Usage

```julia
import Pkg
Pkg.add("ApplicationDrivenLearning")  # not working yet! clone the repo instead

using ApplicationDrivenLearning
using JuMP, Flux, HiGHS  # needed for the macros, Chain/Dense and the solver below

## Single power plant problem

# data
X = reshape([1 1], (2, 1))
Y = reshape([0 2], (2, 1))

# main model and policy / forecast variables
model = ApplicationDrivenLearning.Model()
@variables(model, begin
    z, ApplicationDrivenLearning.Policy
    θ, ApplicationDrivenLearning.Forecast
end)

# plan model
@variables(ApplicationDrivenLearning.Plan(model), begin
    c1 >= 0
    c2 >= 0
end)
@constraints(ApplicationDrivenLearning.Plan(model), begin
    c1 >= 100 * (θ.plan - z.plan)
    c2 >= 20 * (z.plan - θ.plan)
end)
@objective(ApplicationDrivenLearning.Plan(model), Min, 10*z.plan + c1 + c2)

# assess model
@variables(ApplicationDrivenLearning.Assess(model), begin
    c1 >= 0
    c2 >= 0
end)
@constraints(ApplicationDrivenLearning.Assess(model), begin
    c1 >= 100 * (θ.assess - z.assess)
    c2 >= 20 * (z.assess - θ.assess)
end)
@objective(ApplicationDrivenLearning.Assess(model), Min, 10*z.assess + c1 + c2)

# basic setting
set_optimizer(model, HiGHS.Optimizer)
set_silent(model)

# forecast model
nn = Chain(Dense(1 => 1; bias=false))
ApplicationDrivenLearning.set_forecast_model(model, nn)

# training and getting solution
solution = ApplicationDrivenLearning.train!(
    model,
    X,
    Y,
    ApplicationDrivenLearning.Options(
        ApplicationDrivenLearning.NelderMeadMode
    )
)
print(solution.params)
```

## Installation

This package is **not yet** registered; if you want to use or test the code, clone this repo and include the source code from the `src` directory.

## Contributing

* PRs such as adding new models and fixing bugs are very welcome!
* For nontrivial changes, you'll probably want to discuss the changes in an issue first.
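
Stepping back from the diff: the usage example above encodes the framework's training objective. Schematically (notation mine, not from the package docs), training chooses forecast-model parameters by downstream cost rather than by forecast error:

$$
\min_{\phi}\ \frac{1}{N}\sum_{t=1}^{N} C\big(z^\star(f_\phi(x_t)),\, y_t\big)
\qquad\text{with}\qquad
z^\star(\theta) \in \arg\min_{z}\ c(z, \theta),
$$

where $f_\phi$ is the Flux forecast model, $z^\star(\theta)$ is the plan-model decision given forecast $\theta$, and $C$ is the assess-model cost evaluated at the realized outcome $y_t$.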

docs/src/reference.md

Lines changed: 69 additions & 0 deletions

# [API](@id API)

This section documents the ApplicationDrivenLearning API.

## Constructors

```@docs
Model
PredictiveModel
Plan
Assess
```

## JuMP variable types

```@docs
Policy
Forecast
```

## Structs

```@docs
ApplicationDrivenLearning.Options
ApplicationDrivenLearning.Solution
```

## Modes

```@docs
ApplicationDrivenLearning.NelderMeadMode
ApplicationDrivenLearning.GradientMode
ApplicationDrivenLearning.NelderMeadMPIMode
ApplicationDrivenLearning.GradientMPIMode
ApplicationDrivenLearning.BilevelMode
```

## Attribute getters and setters

```@docs
ApplicationDrivenLearning.plan_policy_vars
ApplicationDrivenLearning.assess_policy_vars
ApplicationDrivenLearning.plan_forecast_vars
ApplicationDrivenLearning.assess_forecast_vars
ApplicationDrivenLearning.set_forecast_model
ApplicationDrivenLearning.extract_params
ApplicationDrivenLearning.apply_params
```

### Flux attribute getters and setters

```@docs
ApplicationDrivenLearning.extract_flux_params
ApplicationDrivenLearning.fix_flux_params_single_model
ApplicationDrivenLearning.fix_flux_params_multi_model
ApplicationDrivenLearning.has_params
ApplicationDrivenLearning.apply_gradient!
```

## Other functions

```@docs
forecast
compute_cost
train!
ApplicationDrivenLearning.build_plan_model_forecast_params
ApplicationDrivenLearning.build_assess_model_policy_constraint
ApplicationDrivenLearning.build
```

src/ApplicationDrivenLearning.jl

Lines changed: 25 additions & 2 deletions

```diff
@@ -11,7 +11,11 @@ import Base.*, Base.+
 include("flux_utils.jl")
 include("predictive_model.jl")

-# special variable types
+"""
+    Policy{T}
+
+Policy variable type that holds plan and assess variables.
+"""
 struct Policy{T}
     plan::T
     assess::T
@@ -20,6 +24,11 @@ end
 +(p1::Policy, p2::Policy) = Policy(p1.plan + p2.plan, p1.assess + p2.assess)
 *(c::Number, p::Policy) = Policy(c*p.plan, c*p.assess)

+"""
+    Forecast{T}
+
+Forecast variable type that holds plan and assess variables.
+"""
 struct Forecast{T}
     plan::T
     assess::T
@@ -28,7 +37,12 @@ end
 +(p1::Forecast, p2::Forecast) = Forecast(p1.plan + p2.plan, p1.assess + p2.assess)
 *(c::Number, p::Forecast) = Forecast(c*p.plan, c*p.assess)

-# main model
+"""
+    Model <: JuMP.AbstractModel
+
+Create an empty ApplicationDrivenLearning.Model with empty plan and assess models,
+missing forecast model and default settings.
+"""
 mutable struct Model <: JuMP.AbstractModel
     plan::JuMP.Model
     assess::JuMP.Model
@@ -100,6 +114,11 @@ function set_forecast_model(
     model.forecast = forecast
 end

+"""
+    forecast(model, X)
+
+Return forecast model output for given input.
+"""
 function forecast(model::Model, X::AbstractMatrix)
     return model.forecast(X)
 end
@@ -159,7 +178,11 @@ include("optimizers/nelder_mead_mpi.jl")
 include("optimizers/gradient_mpi.jl")
 include("optimizers/bilevel.jl")

+"""
+    train!(model, X, y, options)

+Train model using given data and options.
+"""
 function train!(
     model::Model,
     X::Matrix{<:Real},
```

src/flux_utils.jl

Lines changed: 20 additions & 0 deletions

```diff
@@ -1,10 +1,20 @@
 using Flux

+"""
+    extract_flux_params(model)
+
+Extract the parameters of a Flux model (Flux.Chain or Flux.Dense) into a single vector.
+"""
 function extract_flux_params(model::Union{Flux.Chain, Flux.Dense})
     θ = Flux.params(model)
     return reduce(vcat, [vec(p) for p in θ])
 end

+"""
+    fix_flux_params_single_model(model, θ)
+
+Return model after fixing the parameters from an adequate vector of parameters.
+"""
 function fix_flux_params_single_model(model::Union{Flux.Chain, Flux.Dense}, θ::Vector{<:Real})
     i = 1
     for p in Flux.params(model)
@@ -15,6 +25,11 @@ function fix_flux_params_single_model(model::Union{Flux.Chain, Flux.Dense}, θ::
     return model
 end

+"""
+    fix_flux_params_multi_model(models, θ)
+
+Return iterable of models after fixing the parameters from an adequate vector of parameters.
+"""
 function fix_flux_params_multi_model(
     models,
     θ::Vector{<:Real}
@@ -30,6 +45,11 @@ function fix_flux_params_multi_model(
     return models
 end

+"""
+    has_params(layer)
+
+Check if a Flux layer has parameters.
+"""
 function has_params(layer)
     try
         # Attempt to get parameters; if it works and isn't empty, return true
```
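
To make the contract of these helpers concrete, here is a sketch (my illustration, not the package's code) of the flatten/restore round trip that `extract_flux_params` and `fix_flux_params_single_model` implement:

```julia
using Flux

# Tiny model: Dense(2 => 3) has a 3×2 weight and length-3 bias,
# Dense(3 => 1) has a 1×3 weight and length-1 bias → 13 parameters total.
m = Chain(Dense(2 => 3), Dense(3 => 1))

# Flatten every parameter array into one vector (what extract_flux_params returns).
θ = reduce(vcat, [vec(p) for p in Flux.params(m)])
@assert length(θ) == 13

# Write the same vector back in the same order; extract-then-fix is a no-op.
let i = 1
    for p in Flux.params(m)
        p .= reshape(θ[i:i+length(p)-1], size(p))
        i += length(p)
    end
end
@assert θ == reduce(vcat, [vec(p) for p in Flux.params(m)])
```

Because both helpers iterate `Flux.params` in the same order, the flat vector is a faithful coordinate system for optimizers such as Nelder-Mead that only understand vectors.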

src/jump.jl

Lines changed: 10 additions & 1 deletion

```diff
@@ -80,11 +80,20 @@ function JuMP.add_variable(
     return forecast
 end

-# plan and assess models
+"""
+    Plan(model::ApplicationDrivenLearning.Model)
+
+Create a reference to the plan model of an application driven learning model.
+"""
 function Plan(model::Model)
     return model.plan::JuMP.Model
 end

+"""
+    Assess(model::ApplicationDrivenLearning.Model)
+
+Create a reference to the assess model of an application driven learning model.
+"""
 function Assess(model::Model)
     return model.assess::JuMP.Model
 end
```

src/options.jl

Lines changed: 71 additions & 0 deletions

```diff
@@ -1,11 +1,82 @@
 abstract type AbstractOptimizationMode end

+"""
+    BilevelMode <: AbstractOptimizationMode
+
+Used to solve the application driven learning training problem as a bilevel
+optimization problem by using the BilevelJuMP.jl package.
+
+# Parameters
+- `optimizer::Function` is equivalent to `solver` in BilevelJuMP.BilevelModel.
+- `silent::Bool` is equivalent to `silent` in BilevelJuMP.BilevelModel.
+- `mode::Union{Nothing, BilevelJuMP.BilevelMode}` is equivalent to `mode` in BilevelJuMP.BilevelModel.
+"""
 struct BilevelMode <: AbstractOptimizationMode end
+
+"""
+    NelderMeadMode <: AbstractOptimizationMode
+
+Used to solve the application driven learning training problem using the Nelder-Mead
+method implementation from the Optim.jl package.
+
+# Parameters
+- `initial_simplex` is the initial simplex of solutions to be applied.
+- `parameters` are the parameters to be applied to the Nelder-Mead method.
+"""
 struct NelderMeadMode <: AbstractOptimizationMode end
+
+"""
+    GradientMode <: AbstractOptimizationMode
+
+Used to solve the application driven learning training problem using gradient-based
+optimization.
+
+# Parameters
+- `rule` is the optimiser object to be used in the gradient optimization process.
+- `epochs` is the number of epochs to be used in the gradient optimization process.
+- `batch_size` is the batch size to be used in the gradient optimization process.
+- `verbose` is a flag indicating whether to print the training process.
+- `compute_cost_every` is the epoch frequency for computing the cost and evaluating the best solution.
+- `time_limit` is the time limit for the training process.
+"""
 struct GradientMode <: AbstractOptimizationMode end
+
+"""
+    NelderMeadMPIMode <: AbstractOptimizationMode
+
+MPI implementation of NelderMeadMode.
+"""
 struct NelderMeadMPIMode <: AbstractOptimizationMode end
+
+"""
+    GradientMPIMode <: AbstractOptimizationMode
+
+MPI implementation of GradientMode.
+"""
 struct GradientMPIMode <: AbstractOptimizationMode end

+"""
+    Options(mode; params...)
+
+Options struct to hold the optimization mode and mode parameters.
+
+# Example
+```julia
+options = Options(
+    GradientMode;
+    rule=Flux.RMSProp(0.01),
+    epochs=100,
+    batch_size=10
+)
+```
+"""
 struct Options
     mode
     params::Dict{Symbol, Any}
```
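
As a standalone sketch of this design (the convenience constructor below is my assumption, inferred from the `Options(mode; params...)` signature, not the package's verbatim code), the struct simply pairs a mode type with a keyword dictionary that each optimizer backend can consume:

```julia
# Mirrors the struct at the end of the diff: a mode plus keyword parameters.
struct Options
    mode
    params::Dict{Symbol,Any}
end

# Assumed convenience constructor: capture keyword arguments into the Dict.
Options(mode; params...) = Options(mode, Dict{Symbol,Any}(params))

opts = Options(:GradientMode; epochs=100, batch_size=10)
@assert opts.mode == :GradientMode
@assert opts.params[:epochs] == 100
```

Storing parameters as a `Dict{Symbol,Any}` keeps `Options` mode-agnostic: each mode's docstring above documents which keys it reads.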
