Commit 7b733e5
update documentation
1 parent 418cd13 commit 7b733e5

6 files changed: +334 −4 lines changed

.github/workflows/CI.yml

Lines changed: 19 additions & 0 deletions

@@ -36,3 +36,22 @@ jobs:
       - uses: codecov/codecov-action@v3
         with:
           files: lcov.info
+
+  docs:
+    name: Documentation
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: julia-actions/setup-julia@v1
+        with:
+          version: '1'
+      - run: |
+          julia --project=docs -e '
+            using Pkg
+            Pkg.develop(PackageSpec(path=pwd()))
+            Pkg.instantiate()
+            include("docs/make.jl")'
+        env:
+          JULIA_PKG_SERVER: ""
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}

docs/make.jl

Lines changed: 5 additions & 1 deletion

@@ -14,7 +14,11 @@ makedocs(;
         edit_link="main",
         assets=String[],
     ),
-    pages=["Home" => "index.md"],
+    pages=["Home" => "index.md",
+        "Arrow" => "arrow.md",
+        "Parameter Type" => "parametertype.md",
+        "API" => "api.md",
+    ],
 )

 deploydocs(; repo="github.com/andrewrosemberg/LearningToOptimize.jl", devbranch="main")

docs/src/api.md

Lines changed: 9 additions & 0 deletions

# API

## LearningToOptimize

<!-- ```@index
``` -->

```@autodocs
Modules = [LearningToOptimize]
```

docs/src/arrow.md

Lines changed: 65 additions & 0 deletions

### **Reading from and Compressing Arrow Files**

#### **Introduction**
The package provides tools for working with Arrow files, allowing users to store and retrieve large datasets efficiently.

```julia
output_file = joinpath(save_path, "$(case_name)_output_$(batch_id)")
recorder = Recorder{filetype}(output_file)
successful_solves = solve_batch(problem_iterator, recorder)
```

#### **Compressing Arrow Files**

Since appending data to Arrow files is slow and inefficient, each instance of data is stored in a separate file. Therefore, in this case, the output files will look like this:

```
<case_name>_output_<batch_id>_<instance_1_id>.arrow
<case_name>_output_<batch_id>_<instance_2_id>.arrow
...
<case_name>_output_<batch_id>_<instance_n_id>.arrow
```
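These per-instance files can also be loaded and concatenated manually. A minimal sketch using Arrow.jl and DataFrames.jl; the `save_path`, `case_name`, and `batch_id` values are hypothetical placeholders, not part of the package API:

```julia
using Arrow, DataFrames

# Hypothetical placeholder values for illustration
save_path = "data"
case_name = "case"
batch_id = "batch_1"

# Collect all per-instance output files belonging to this batch
files = filter(
    f -> startswith(f, "$(case_name)_output_$(batch_id)") && endswith(f, ".arrow"),
    readdir(save_path),
)

# Read each file and vertically concatenate into a single DataFrame
df = reduce(vcat, [DataFrame(Arrow.Table(joinpath(save_path, f))) for f in files])
```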
`LearningToOptimize.jl` supports compressing batches of Arrow files for streamlined storage and retrieval.

Use the `LearningToOptimize.compress_batch_arrow` function to compress a batch of Arrow files into a single file. This reduces disk usage and simplifies file management.

**Function Signature**:
```julia
LearningToOptimize.compress_batch_arrow(
    save_path,
    case_name;
    keyword_all = "output",
    batch_id = string(batch_id),
    # keyword_any = [string(batch_id)]
)
```

- **Arguments**:
  - `save_path`: Path to save the compressed file.
  - `case_name`: Name of the case or batch.
  - `keyword_all`: Filter files containing this keyword (default: `"output"`).
  - `batch_id`: Identifier for the batch of files.
  - `keyword_any`: Array of keywords to further filter files.

The compressed file will be saved as `<case_name>_output_<batch_id>.arrow`.

#### **Reading Arrow Files**
Arrow files can be read using Julia's Arrow library, which provides a tabular interface for data access.

**Example**:
```julia
using Arrow

# Read compressed Arrow file
data = Arrow.Table("<case_name>_output_<batch_id>.arrow")

# Access data as a DataFrame
using DataFrames
df = DataFrame(data)

println("DataFrame content:")
println(df)
```

docs/src/index.md

Lines changed: 149 additions & 3 deletions

```@meta
CurrentModule = LearningToOptimize
```

```@raw html
<div style="width:100%; height:150px;border-width:4px;border-style:solid;padding-top:25px;
border-color:#000;border-radius:10px;text-align:center;background-color:#99DDFF;
color:#000">
<h3 style="color: black;">Star us on GitHub!</h3>
<a class="github-button" href="https://github.com/andrewrosemberg/LearningToOptimize.jl" data-icon="octicon-star" data-size="large" data-show-count="true" aria-label="Star andrewrosemberg/LearningToOptimize.jl on GitHub" style="margin:auto">Star</a>
<script async defer src="https://buttons.github.io/buttons.js"></script>
</div>
```

# LearningToOptimize

Documentation for [LearningToOptimize](https://github.com/andrewrosemberg/LearningToOptimize.jl).

A learning-to-optimize (LearningToOptimize) package that provides basic functionality to help fit proxy models for optimization.

## Installation

```julia
] add LearningToOptimize
```

# Flowchart Summary

![flowchart](docs/L2O.png)

## Generate Dataset
This package provides a basic way of generating a dataset of solutions to an optimization problem, by varying the parameter values of the problem and recording the results.

### The Problem Iterator

The user first needs to define a problem iterator:

```julia
# The problem to iterate over
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
p = @variable(model, p in MOI.Parameter(1.0)) # The parameter (defined using POI)
@constraint(model, cons, x + p >= 3)
@objective(model, Min, 2x)

# The parameter values
parameter_values = Dict(p => collect(1.0:10.0))

# The iterator
problem_iterator = ProblemIterator(parameter_values)
```

The parameter values of the problem iterator can be saved with:

```julia
save(problem_iterator, "input_file", CSVFile)
```

which creates the following CSV:

| id | p    |
|----|------|
| 1  | 1.0  |
| 2  | 2.0  |
| 3  | 3.0  |
| 4  | 4.0  |
| 5  | 5.0  |
| 6  | 6.0  |
| 7  | 7.0  |
| 8  | 8.0  |
| 9  | 9.0  |
| 10 | 10.0 |

P.S.: For illustration purposes, the ids are shown here as integers, but in reality they are generated as UUIDs.

### The Recorder

Then choose which values to record:

```julia
# CSV recorder to save the optimal primal and dual decision values
recorder = Recorder{CSVFile}("output_file.csv", primal_variables=[x], dual_variables=[cons])

# Finally solve all problems described by the iterator
solve_batch(problem_iterator, recorder)
```

which creates the following CSV:

| id | x    | dual_cons |
|----|------|-----------|
| 1  | 2.0  | 2.0       |
| 2  | 1.0  | 2.0       |
| 3  | -0.0 | 2.0       |
| 4  | -1.0 | 2.0       |
| 5  | -2.0 | 2.0       |
| 6  | -3.0 | 2.0       |
| 7  | -4.0 | 2.0       |
| 8  | -5.0 | 2.0       |
| 9  | -6.0 | 2.0       |
| 10 | -7.0 | 2.0       |

P.S.: The same applies to the ids here.

Similarly, there is also the option to save the dataset in Arrow files:

```julia
recorder = Recorder{ArrowFile}("output_file.arrow", primal_variables=[x], dual_variables=[cons])
```

## Learning proxies

To train models that forecast optimization solutions from parameter values, one option is to use Flux.jl:

```julia
using CSV, DataFrames, Flux

# Read input and output data
input_data = CSV.read("input_file.csv", DataFrame)
output_data = CSV.read("output_file.csv", DataFrame)

# Separate input and output variables
output_variables = output_data[!, Not(:id)]
input_features = innerjoin(input_data, output_data[!, [:id]], on = :id)[!, Not(:id)] # keep only successful solves

# Define the model
model = Chain(
    Dense(size(input_features, 2), 64, relu),
    Dense(64, 32, relu),
    Dense(32, size(output_variables, 2)),
)

# Define the loss function
loss(x, y) = Flux.mse(model(x), y)

# Convert the data to matrices
input_features = Matrix(input_features)'
output_variables = Matrix(output_variables)'

# Define the optimizer
optimizer = Flux.ADAM()

# Train the model
Flux.train!(loss, Flux.params(model), [(input_features, output_variables)], optimizer)

# Make predictions
predictions = model(input_features)
```
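To sanity-check the fit, the trained proxy's error can be evaluated on the same data. A minimal sketch, reusing the `model`, `input_features`, and `output_variables` defined in the block above:

```julia
# Mean squared error of the trained proxy on the training set
train_mse = Flux.mse(model(input_features), output_variables)
println("Training MSE: ", train_mse)
```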
## Coming Soon

Future features:
- ML objectives that penalize infeasible predictions;
- Warm-start from predicted solutions.

<!-- ```@index
``` -->

<!-- ```@autodocs
Modules = [LearningToOptimize]
``` -->

docs/src/parametertype.md

Lines changed: 87 additions & 0 deletions

### **Parameter Types for Optimization Problems**

#### **Introduction**
When working with optimization problems, `LearningToOptimize.jl` supports multiple parameterization strategies. These strategies influence the behavior and performance of the solver and allow flexibility in retrieving information such as dual variables.

---

#### **Comparison**

| **Parameter Type** | **Supported Problems** | **Dual Support** | **Performance** | **Notes** |
|--------------------------|------------------------|------------------|-----------------|----------------------------------|
| `JuMPParameterType` | All | Yes | Moderate | General-purpose; slower. |
| `JuMPNLPParameterType` | NLP only | No | Fast | Optimized for NLP problems. |
| `POIParameterType` (default) | Linear, conic | Yes | Fast | Requires POI-wrapped solvers. |

---

#### **Supported Parameter Types**

1. **`POIParameterType` (Default)**:
   - **Description**:
     - Extends `MOI.Parameter` for linear and conic problems.
     - Compatible with solvers wrapped using `ParametricOptInterface`.
     - Supports fetching duals w.r.t. parameters.
   - **Limitations**:
     - Not compatible with nonlinear solvers.
   - **Usage Example**:
     Default behavior when using `ProblemIterator` without specifying `param_type`.

2. **`JuMPParameterType`**:
   - **Description**:
     - Adds a variable as a parameter with an additional constraint during `solve_batch`.
     - Slower than the other types, but supports fetching duals w.r.t. the parameter.
     - Compatible with all problem types.
   - **Usage Example**:
     ```julia
     using JuMP, HiGHS, LearningToOptimize

     model = JuMP.Model(HiGHS.Optimizer)
     @variable(model, x)
     p = @variable(model, _p)
     @constraint(model, cons, x + _p >= 3)
     @objective(model, Min, 2x)

     num_p = 10
     problem_iterator = ProblemIterator(
         Dict(p => collect(1.0:num_p));
         param_type = LearningToOptimize.JuMPParameterType,
     )

     recorder = Recorder{ArrowFile}("output.arrow"; primal_variables = [x], dual_variables = [cons])
     solve_batch(problem_iterator, recorder)
     ```
   - **Advantages**:
     - Works with all solvers and problem types.
     - Duals w.r.t. parameters are available.

3. **`JuMPNLPParameterType`**:
   - **Description**:
     - Utilizes MOI's internal parameter structure.
     - Optimized for speed but limited to nonlinear programming (NLP) problems.
     - Does not support fetching duals w.r.t. parameters.
   - **Usage Example**:
     ```julia
     using JuMP, Ipopt, LearningToOptimize

     model = JuMP.Model(Ipopt.Optimizer)
     @variable(model, x)
     p = @variable(model, _p in MOI.Parameter(1.0))
     @constraint(model, cons, x + _p >= 3)
     @objective(model, Min, 2x)

     num_p = 10
     problem_iterator = ProblemIterator(
         Dict(p => collect(1.0:num_p));
         param_type = LearningToOptimize.JuMPNLPParameterType,
     )

     recorder = Recorder{ArrowFile}("output.arrow"; primal_variables = [x], dual_variables = [cons])
     solve_batch(problem_iterator, recorder)
     ```
   - **Advantages**:
     - Fast and efficient for NLP problems.
     - No external wrappers required.
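For completeness, the default `POIParameterType` path has no code example above; a minimal sketch, mirroring the quickstart example elsewhere in these docs (assumes `ParametricOptInterface` and `HiGHS` are installed):

```julia
using JuMP, HiGHS, LearningToOptimize
import ParametricOptInterface as POI

# A POI-wrapped solver enables the default POIParameterType
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
p = @variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x + p >= 3)
@objective(model, Min, 2x)

# param_type defaults to POIParameterType, so none needs to be passed
problem_iterator = ProblemIterator(Dict(p => collect(1.0:10.0)))
recorder = Recorder{ArrowFile}("output.arrow"; primal_variables = [x], dual_variables = [cons])
solve_batch(problem_iterator, recorder)
```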
