README.md

A Learning to Optimize (LearningToOptimize) package that provides basic functionality to help fit proxy models for parametric optimization problems.

Have a look at our sister [Hugging Face organization](https://huggingface.co/LearningToOptimize) for datasets, pre-trained models, and benchmarks.
# Background

Parametric optimization problems arise when certain elements (e.g., coefficients, constraints) may vary according to problem parameters. A general form of a parameterized convex optimization problem is

$$
\begin{aligned}
\min_{y} \quad & f(y; \theta) \\
\text{s.t.} \quad & g_i(y; \theta) \leq 0, \quad i = 1, \ldots, m
\end{aligned}
$$

where $\theta$ is the parameter.
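
For concreteness, here is a minimal JuMP sketch of such a parametric problem, with the parameter modeled as an `MOI.Parameter`; the specific variables, objective, and constraint are illustrative only and not tied to this package's API.

```julia
using JuMP
import MathOptInterface as MOI

# Build a tiny parametric LP: θ is declared as an MOI.Parameter, so the same
# model can later be re-solved for many values of θ.
model = Model()
@variable(model, y >= 0)
@variable(model, θ in MOI.Parameter(1.0))  # the problem parameter
@constraint(model, y >= θ)
@objective(model, Min, 2y)
```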
**Learning to Optimize (L2O)** is an emerging paradigm where machine learning models *learn* to solve optimization problems efficiently. This approach is also known as **optimization proxies** or **amortized optimization**.

In more technical terms, **amortized optimization** seeks to learn a function $f_\theta(x)$ that maps problem parameters $x$ to solutions $y$ that (approximately) minimize a given objective function subject to constraints. Modern methods leverage techniques like **differentiable optimization layers**, **input-convex neural networks**, or constraint-enforcing architectures (e.g., [DC3](https://openreview.net/pdf?id=0Ow8_1kM5Z)) to ensure that the learned proxy solutions are both feasible and performant. By coupling the solver and the model in an **end-to-end** pipeline, these approaches let the training objective directly reflect downstream metrics, improving speed and reliability.
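
As a toy illustration of this amortized setup (not part of this package's API), the sketch below fits a small Flux network to map sampled parameters to pre-computed solutions; the layer sizes and synthetic data are placeholder assumptions.

```julia
using Flux

# A tiny optimization proxy: an MLP mapping problem parameters to solutions.
num_params, num_outputs = 10, 5
proxy = Chain(Dense(num_params => 64, relu), Dense(64 => num_outputs))

# Pretend training set: columns of X are parameter instances, columns of Y the
# corresponding pre-computed optimal solutions.
X = rand(Float32, num_params, 1_000)
Y = rand(Float32, num_outputs, 1_000)

opt_state = Flux.setup(Adam(1f-3), proxy)
for epoch in 1:100
    grads = Flux.gradient(m -> Flux.mse(m(X), Y), proxy)
    Flux.update!(opt_state, proxy, grads[1])
end
```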
Recent advances also focus on **trustworthy** or **certifiable** proxies, where constraint satisfaction or performance bounds are guaranteed. This is crucial in domains like energy systems or manufacturing, where infeasible solutions can have large penalties or safety concerns. Overall, learning-based optimization frameworks aim to combine the advantages of ML (data-driven generalization) with the rigor of mathematical programming (constraint handling and optimality).

Instead of defining parameter instances manually, one may sample parameter values using pre-defined samplers (e.g., `scaled_distribution_sampler`, `box_sampler`) or define a custom sampler. Samplers are functions that take a vector of parameters of type `MOI.Parameter` and return a matrix of parameter values.
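
As a rough sketch of what a user-defined sampler could look like (the exact expected signature and the parameters × samples output orientation are assumptions; check the package docstrings), one might write:

```julia
import MathOptInterface as MOI

# Hypothetical custom sampler: perturb each parameter's nominal value by up to ±20%.
function perturbation_sampler(parameters::Vector{MOI.Parameter{Float64}}, num_samples::Int = 100)
    nominal = [p.value for p in parameters]  # each MOI.Parameter stores its nominal value
    return nominal .* (0.8 .+ 0.4 .* rand(length(nominal), num_samples))
end
```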
The easiest way to go from a problem definition to sampled and saved parameter values is to use the `general_sampler` function:
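
A hypothetical invocation might look like the following; the file name and keyword arguments here are illustrative assumptions rather than the documented signature, so consult the `general_sampler` docstring for the actual interface:

```julia
# Hypothetical call of `general_sampler`; argument names are assumptions.
general_sampler(
    "problem.mof.json";            # problem file readable by JuMP
    samplers = [box_sampler],      # one or more of the pre-defined samplers
)
```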
### The Recorder

Then choose what values to record:

```julia
# CSV recorder to save the optimal primal and dual decision values
# (illustrative completion: the constructor, keyword names, and the `x` / `cons`
#  references to previously defined variables and constraints are assumptions)
recorder = Recorder{CSVFile}("output.csv"; primal_variables = [x], dual_variables = [cons])
```