```@meta
CurrentModule = LearningToOptimize
```

```@raw html
<div style="width:100%; height:150px;border-width:4px;border-style:solid;padding-top:25px;
    border-color:#000;border-radius:10px;text-align:center;background-color:#99DDFF;
    color:#000">
    <h3 style="color: black;">Star us on GitHub!</h3>
    <a class="github-button" href="https://github.com/andrewrosemberg/LearningToOptimize.jl" data-icon="octicon-star" data-size="large" data-show-count="true" aria-label="Star andrewrosemberg/LearningToOptimize.jl on GitHub" style="margin:auto">Star</a>
    <script async defer src="https://buttons.github.io/buttons.js"></script>
</div>
```

# LearningToOptimize

Documentation for [LearningToOptimize](https://github.com/andrewrosemberg/LearningToOptimize.jl).

A learning-to-optimize (LearningToOptimize) package that provides basic functionality to help fit proxy models for optimization.

## Installation

```julia
] add LearningToOptimize
```

## Flowchart Summary

<!-- flowchart image -->

## Generate Dataset
This package provides a basic way of generating a dataset of solutions to an optimization problem: vary the values of the problem's parameters and record the corresponding solutions.

### The Problem Iterator

The user first needs to define a problem iterator:

```julia
using JuMP, HiGHS
using LearningToOptimize
import ParametricOptInterface as POI

# The problem to iterate over
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
p = @variable(model, p in MOI.Parameter(1.0)) # The parameter (defined using POI)
@constraint(model, cons, x + p >= 3)
@objective(model, Min, 2x)

# The parameter values
parameter_values = Dict(p => collect(1.0:10.0))

# The iterator
problem_iterator = ProblemIterator(parameter_values)
```

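For larger datasets, the parameter values can be sampled rather than enumerated. A minimal sketch, assuming the model above; the sample count and the uniform range are illustrative:

```julia
# Sample 1000 parameter values uniformly from [1, 10] (illustrative choices)
parameter_values = Dict(p => 1.0 .+ 9.0 .* rand(1000))
problem_iterator = ProblemIterator(parameter_values)
```
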
The parameter values of the problem iterator can be saved simply by calling:

```julia
save(problem_iterator, "input_file", CSVFile)
```

This creates the following CSV:

| id | p    |
|----|------|
| 1  | 1.0  |
| 2  | 2.0  |
| 3  | 3.0  |
| 4  | 4.0  |
| 5  | 5.0  |
| 6  | 6.0  |
| 7  | 7.0  |
| 8  | 8.0  |
| 9  | 9.0  |
| 10 | 10.0 |

P.S.: For illustration purposes, I have represented the ids here as integers, but in reality they are generated as UUIDs.

### The Recorder

Then choose which values to record:

```julia
# CSV recorder to save the optimal primal and dual decision values
recorder = Recorder{CSVFile}("output_file.csv", primal_variables=[x], dual_variables=[cons])

# Finally solve all problems described by the iterator
solve_batch(problem_iterator, recorder)
```

This creates the following CSV:

| id | x    | dual_cons |
|----|------|-----------|
| 1  | 2.0  | 2.0       |
| 2  | 1.0  | 2.0       |
| 3  | -0.0 | 2.0       |
| 4  | -1.0 | 2.0       |
| 5  | -2.0 | 2.0       |
| 6  | -3.0 | 2.0       |
| 7  | -4.0 | 2.0       |
| 8  | -5.0 | 2.0       |
| 9  | -6.0 | 2.0       |
| 10 | -7.0 | 2.0       |

P.S.: The same note about the ids being UUIDs applies here.

Similarly, there is also the option to save the dataset as Arrow files:

```julia
recorder = Recorder{ArrowFile}("output_file.arrow", primal_variables=[x], dual_variables=[cons])
```

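The recorded Arrow data can be read back with Arrow.jl and DataFrames.jl. A minimal sketch, assuming the recorder wrote a single `output_file.arrow` file:

```julia
using Arrow, DataFrames

# Load the recorded outputs back into a DataFrame
output_data = DataFrame(Arrow.Table("output_file.arrow"))
```
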
## Learning Proxies

To train models that forecast optimization solutions from parameter values, one option is to use the package Flux.jl:

```julia
using CSV, DataFrames, Flux

# Read input and output data
input_data = CSV.read("input_file.csv", DataFrame)
output_data = CSV.read("output_file.csv", DataFrame)

# Separate input and output variables
output_variables = output_data[!, Not(:id)]
input_features = innerjoin(input_data, output_data[!, [:id]], on = :id)[!, Not(:id)] # keep only successfully solved instances

# Define the model
model = Chain(
    Dense(size(input_features, 2), 64, relu),
    Dense(64, 32, relu),
    Dense(32, size(output_variables, 2))
)

# Define the loss function
loss(x, y) = Flux.mse(model(x), y)

# Convert the data to matrices (Flux expects samples as columns)
input_features = Matrix(input_features)'
output_variables = Matrix(output_variables)'

# Define the optimizer
optimizer = Flux.ADAM()

# Train the model (a single gradient step; see the loop below)
Flux.train!(loss, Flux.params(model), [(input_features, output_variables)], optimizer)

# Make predictions
predictions = model(input_features)
```

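A single `Flux.train!` call as above performs only one gradient step; in practice you would iterate for several epochs. A minimal sketch, with an illustrative epoch count:

```julia
# Run 100 full-batch gradient steps (epoch count is illustrative)
for epoch in 1:100
    Flux.train!(loss, Flux.params(model), [(input_features, output_variables)], optimizer)
end
```
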
## Coming Soon

Future features:
 - ML objectives that penalize infeasible predictions (a rough sketch follows below);
 - Warm-start from predicted solutions.
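
As a rough illustration of the first item, here is a hand-rolled sketch (not a package feature) of a training loss that penalizes violations of the toy constraint `x + p >= 3` from the example above; the penalty weight `λ` and the assumption that the first model output predicts `x` are illustrative:

```julia
using Statistics: mean

# Hypothetical sketch, not a package feature: penalize predicted x values
# that violate the constraint x + p >= 3 of the toy problem above.
λ = 10.0  # penalty weight (illustrative)
function penalized_loss(inputs, targets)
    preds = model(inputs)                            # row 1 assumed to predict x
    violation = relu.(3 .- preds[1:1, :] .- inputs)  # positive where x + p < 3
    return Flux.mse(preds, targets) + λ * mean(violation)
end
```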

<!-- ```@index

``` -->

<!-- ```@autodocs
Modules = [LearningToOptimize]
``` -->