Commit 1c47508

Update README.md
1 parent 61eec0b commit 1c47508

File tree: 1 file changed (+8, -8 lines)

README.md: 8 additions & 8 deletions
@@ -6,7 +6,7 @@
 </div>
 </div>
 
-Learning to optimize (LearningToOptimize) package that provides basic functionalities to help fit proxy models for parametric optimization problems.
+Learning to Optimize (LearningToOptimize) package that provides basic functionalities to help fit proxy models for parametric optimization problems.
 
 Have a look at our sister [HugginFace Organization](https://huggingface.co/LearningToOptimize), for datasets, pre-trained models and benchmarks.
 
@@ -21,7 +21,7 @@ Have a look at our sister [HugginFace Organization](https://huggingface.co/Learn
 
 # Background
 
-Parametric optimization problems arise in scenarios where certain elements (e.g., coefficients, constraints) may vary according to problem parameters. A general form of a parameterized convex optimization problem is
+Parametric optimization problems arise when certain elements (e.g., coefficients, constraints) may vary according to problem parameters. A general form of a parameterized convex optimization problem is
 
 $$
 \begin{aligned}

@@ -31,11 +31,11 @@ $$
 \end{aligned}
 $$
 
-where $ \theta $ is the parameter.
+where $\theta$ is the parameter.
 
-**Learning to Optimize (L2O)** is an emerging paradigm where machine learning models *learn* to solve optimization problems efficiently. This approach is also known as using **optimization proxies** or **amortized optimization**.
+**Learning to Optimize (L2O)** is an emerging paradigm where machine learning models *learn* to solve optimization problems efficiently. This approach is also known as **optimization proxies** or **amortized optimization**.
 
-In more technical terms, **amortized optimization** seeks to learn a function \\( f_\theta(x) \\) that maps problem parameters \\( x \\) to solutions \\( y \\) that (approximately) minimize a given objective function subject to constraints. Modern methods leverage techniques like **differentiable optimization layers**, **input-convex neural networks**, or constraint-enforcing architectures (e.g., [DC3](https://openreview.net/pdf?id=0Ow8_1kM5Z)) to ensure that the learned proxy solutions are both feasible and performant. By coupling the solver and the model in an **end-to-end** pipeline, these approaches let the training objective directly reflect downstream metrics, improving speed and reliability.
+In more technical terms, **amortized optimization** seeks to learn a function $f_\theta(x)$ that maps problem parameters $x$ to solutions $y$ that (approximately) minimize a given objective function subject to constraints. Modern methods leverage techniques like **differentiable optimization layers**, **input-convex neural networks**, or constraint-enforcing architectures (e.g., [DC3](https://openreview.net/pdf?id=0Ow8_1kM5Z)) to ensure that the learned proxy solutions are both feasible and performant. By coupling the solver and the model in an **end-to-end** pipeline, these approaches let the training objective directly reflect downstream metrics, improving speed and reliability.
 
 Recent advances also focus on **trustworthy** or **certifiable** proxies, where constraint satisfaction or performance bounds are guaranteed. This is crucial in domains like energy systems or manufacturing, where infeasible solutions can have large penalties or safety concerns. Overall, learning-based optimization frameworks aim to combine the advantages of ML (data-driven generalization) with the rigor of mathematical programming (constraint handling and optimality).
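
For concreteness, a parametric problem of this kind can be written directly in JuMP. The toy model below is an assumed example (it uses HiGHS as the solver and is not part of the package); it sweeps the parameter and records the optimal solution, which is exactly the mapping a learned proxy is trained to approximate.

```julia
# Illustrative toy problem (assumes a recent JuMP and HiGHS are installed):
# minimize 2y subject to y >= θ, where θ is the problem parameter.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, θ in Parameter(1.0))   # parameter with initial value 1.0
@variable(model, y >= 0)
@constraint(model, y >= θ)
@objective(model, Min, 2y)

# Sweeping θ traces out the mapping θ -> y*(θ) that a learned proxy approximates.
for val in (0.5, 1.0, 2.0)
    set_parameter_value(θ, val)
    optimize!(model)
    println("θ = $val  =>  y* = ", value(y))
end
```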

@@ -100,7 +100,7 @@ problem_iterator = load("input_file.csv", CSVFile)
 
 ### Samplers
 
-Instead of defining parameter instances manually, one may sample parameter values using pre-defined samplers - e.g. `scaled_distribution_sampler`, `box_sampler`- or define their own sampler. Samplers are functions that take a vector of parameter of type `MOI.Parameter` and return a matrix of parameter values.
+Instead of defining parameter instances manually, one may sample parameter values using pre-defined samplers - e.g. `scaled_distribution_sampler`, `box_sampler`- or define their own sampler. Samplers are functions that take a vector of parameters of type `MOI.Parameter` and return a matrix of parameter values.
 
 The easiest way to go from problem definition, sampling parameter values and saving them is to use the `general_sampler` function:
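
The sampler contract described above, a function that takes the vector of `MOI.Parameter` variables and returns a matrix of parameter values, can also be satisfied by a hand-written sampler. The sketch below is hypothetical; the function name, keyword argument, and matrix orientation are assumptions for illustration, not the package's built-in `box_sampler` or `general_sampler`.

```julia
using JuMP

# Hypothetical custom sampler: perturb each parameter's current value uniformly
# within ±width, returning a (num_parameters × num_samples) matrix of values.
# Name, keyword, and orientation are illustrative assumptions.
function uniform_box_sampler(parameters::Vector{VariableRef}, num_samples::Integer; width = 0.1)
    base = parameter_value.(parameters)      # current values of the parameters
    lo = base .* (1 - width)
    hi = base .* (1 + width)
    return lo .+ (hi .- lo) .* rand(length(parameters), num_samples)
end
```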

@@ -120,7 +120,7 @@ It loads the underlying model from a passed `file` that works with JuMP's `read_
 
 ### The Recorder
 
-Then chose what values to record:
+Then choose what values to record:
 
 ```julia
 # CSV recorder to save the optimal primal and dual decision values
@@ -155,7 +155,7 @@ recorder = Recorder{ArrowFile}("output_file.arrow", primal_variables=[x], dual_v
 
 ## Learning proxies
 
-In order to train models to be able to forecast optimization solutions from parameter values, one option is to use the package Flux.jl:
+To train models to be able to forecast optimization solutions from parameter values, one option is to use the package Flux.jl:
 
 ```julia
 using CSV, DataFrames, Flux
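
As a rough illustration of that training step, the sketch below fits a small Flux MLP to recorded data. The file names, the assumed row-by-row alignment of inputs and outputs, and the network architecture are illustrative choices, not the package's prescribed pipeline.

```julia
using CSV, DataFrames, Flux

# Assumed inputs: parameter samples in "input_file.csv" and recorded optimal
# solutions in "output_file.csv", aligned row-by-row (illustrative assumption).
inputs  = CSV.read("input_file.csv", DataFrame)
outputs = CSV.read("output_file.csv", DataFrame)

X = Float32.(Matrix(inputs))'    # features: one column per problem instance
Y = Float32.(Matrix(outputs))'   # targets: one column per problem instance

# Small MLP proxy mapping parameter values to (approximate) optimal solutions.
proxy = Chain(Dense(size(X, 1) => 64, relu), Dense(64 => size(Y, 1)))
opt_state = Flux.setup(Adam(1e-3), proxy)

# Plain full-batch gradient descent on the mean squared error.
for _ in 1:200
    grads = Flux.gradient(m -> Flux.mse(m(X), Y), proxy)
    Flux.update!(opt_state, proxy, grads[1])
end
```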
