@@ -210,143 +210,143 @@ md"[^OptProx]"
210 210
211 211 """
212 212
213- # ╔═╡ 45275d44-e268-43cb-8156-feecd916a6da
214- @htl """
215- <div style="
216- border:1px solid #ccc;
217- border-radius:6px;
218- padding:1rem;
219- font-size:0.9rem;
220- max-width:760px;
221- line-height:1.45;
222- ">
223-
224- <!-- ─────────────────────── header ─────────────────────── -->
225- <h2 style="margin-top:0">LearningToOptimize Organization</h2>
226-
227- <p>
228- <strong>LearningToOptimize (L2O)</strong> is a collection of open-source tools
229- focused on the emerging paradigm of <em>amortized optimization</em>—using machine-learning
230- methods to accelerate traditional constrained-optimization solvers.
231- <em>L2O is a work-in-progress; existing functionality is considered experimental and may
232- change.</em>
233- </p>
234-
235- <!-- ─────────────────── repositories table ──────────────── -->
236- <h3>Open-Source Repositories</h3>
237-
238- <table style="border-collapse:collapse;width:100%">
239- <tbody>
240- <tr>
241- <td style="padding:4px 6px;vertical-align:top;">
242- <a href="https://github.com/LearningToOptimize/LearningToOptimize.jl"
243- target="_blank">LearningToOptimize.jl</a>
244- </td>
245- <td style="padding:4px 6px;">
246- Flagship Julia package that wraps data generation, training loops and evaluation
247- utilities for fitting surrogate models to parametric optimization problems.
248- </td>
249- </tr>
250-
251- <tr>
252- <td style="padding:4px 6px;vertical-align:top;">
253- <a href="https://github.com/andrewrosemberg/DecisionRules.jl"
254- target="_blank">DecisionRules.jl</a>
255- </td>
256- <td style="padding:4px 6px;">
257- Build decision rules for multistage stochastic programs, as proposed in
258- <a href="https://arxiv.org/pdf/2405.14973" target="_blank"><em>Efficiently
259- Training Deep-Learning Parametric Policies using Lagrangian Duality</em></a>.
260- </td>
261- </tr>
262-
263- <tr>
264- <td style="padding:4px 6px;vertical-align:top;">
265- <a href="https://github.com/LearningToOptimize/L2OALM.jl"
266- target="_blank">L2OALM.jl</a>
267- </td>
268- <td style="padding:4px 6px;">
269- Implementation of the primal-dual learning method <strong>ALM</strong>,
270- introduced in
271- <a href="https://ojs.aaai.org/index.php/AAAI/article/view/25520" target="_blank">
272- <em>Self-Supervised Primal-Dual Learning for Constrained Optimization</em></a>.
273- </td>
274- </tr>
275-
276- <tr>
277- <td style="padding:4px 6px;vertical-align:top;">
278- <a href="https://github.com/LearningToOptimize/L2ODLL.jl"
279- target="_blank">L2ODLL.jl</a>
280- </td>
281- <td style="padding:4px 6px;">
282- Implementation of the dual learning method <strong>DLL</strong>,
283- proposed in
284- <a href="https://neurips.cc/virtual/2024/poster/94146" target="_blank">
285- <em>Dual Lagrangian Learning for Conic Optimization</em></a>.
286- </td>
287- </tr>
288-
289- <tr>
290- <td style="padding:4px 6px;vertical-align:top;">
291- <a href="https://github.com/LearningToOptimize/L2ODC3.jl"
292- target="_blank">L2ODC3.jl</a>
293- </td>
294- <td style="padding:4px 6px;">
295- Implementation of the primal learning method <strong>DC3</strong>, as described in
296- <a href="https://openreview.net/forum?id=V1ZHVxJ6dSS" target="_blank">
297- <em>DC3: A Learning Method for Optimization with Hard Constraints</em></a>.
298- </td>
299- </tr>
300-
301- <tr>
302- <td style="padding:4px 6px;vertical-align:top;">
303- <a href="https://github.com/LearningToOptimize/BatchNLPKernels.jl"
304- target="_blank">BatchNLPKernels.jl</a>
305- </td>
306- <td style="padding:4px 6px;">
307- GPU kernels that evaluate objectives, Jacobians and Hessians for
308- <strong>batches</strong> of
309- <a href="https://github.com/exanauts/ExaModels.jl" target="_blank">ExaModels</a>,
310- useful when defining loss functions for large-batch ML predictions.
311- </td>
312- </tr>
313-
314- <tr>
315- <td style="padding:4px 6px;vertical-align:top;">
316- <a href="https://github.com/LearningToOptimize/BatchConeKernels.jl"
317- target="_blank">BatchConeKernels.jl</a>
318- </td>
319- <td style="padding:4px 6px;">
320- GPU kernels for batched cone operations (projections, distances, etc.),
321- enabling advanced architectures such as repair layers.
322- </td>
323- </tr>
324-
325- <tr>
326- <td style="padding:4px 6px;vertical-align:top;">
327- <a href="https://github.com/LearningToOptimize/LearningToControlClass"
328- target="_blank">LearningToControlClass</a>
329- </td>
330- <td style="padding:4px 6px;">
331- Course repository for <em>Special Topics on Optimal Control & Learning</em>
332- (Fall 2025, Georgia Tech).
333- </td>
334- </tr>
335- </tbody>
336- </table>
337-
338- <!-- ─────────────── datasets and weights ──────────────── -->
339- <h3 style="margin-top:1.25rem;">Open Datasets and Weights</h3>
340-
341- <p>
342- The
343- <a href="https://huggingface.co/LearningToOptimize" target="_blank">
344- LearningToOptimize 🤗 Hugging Face organization</a>
345- hosts datasets and pre-trained weights that can be used with L2O packages.
346- </p>
347-
348- </div>
349- """
213+ # # ╔═╡ 45275d44-e268-43cb-8156-feecd916a6da
214+ # @htl """
215+ # <div style="
216+ # border:1px solid #ccc;
217+ # border-radius:6px;
218+ # padding:1rem;
219+ # font-size:0.9rem;
220+ # max-width:760px;
221+ # line-height:1.45;
222+ # ">
223+
224+ # <!-- ─────────────────────── header ─────────────────────── -->
225+ # <h2 style="margin-top:0">LearningToOptimize Organization</h2>
226+
227+ # <p>
228+ # <strong>LearningToOptimize (L2O)</strong> is a collection of open-source tools
229+ # focused on the emerging paradigm of <em>amortized optimization</em>—using machine-learning
230+ # methods to accelerate traditional constrained-optimization solvers.
231+ # <em>L2O is a work-in-progress; existing functionality is considered experimental and may
232+ # change.</em>
233+ # </p>
234+
235+ # <!-- ─────────────────── repositories table ──────────────── -->
236+ # <h3>Open-Source Repositories</h3>
237+
238+ # <table style="border-collapse:collapse;width:100%">
239+ # <tbody>
240+ # <tr>
241+ # <td style="padding:4px 6px;vertical-align:top;">
242+ # <a href="https://github.com/LearningToOptimize/LearningToOptimize.jl"
243+ # target="_blank">LearningToOptimize.jl</a>
244+ # </td>
245+ # <td style="padding:4px 6px;">
246+ # Flagship Julia package that wraps data generation, training loops and evaluation
247+ # utilities for fitting surrogate models to parametric optimization problems.
248+ # </td>
249+ # </tr>
250+
251+ # <tr>
252+ # <td style="padding:4px 6px;vertical-align:top;">
253+ # <a href="https://github.com/andrewrosemberg/DecisionRules.jl"
254+ # target="_blank">DecisionRules.jl</a>
255+ # </td>
256+ # <td style="padding:4px 6px;">
257+ # Build decision rules for multistage stochastic programs, as proposed in
258+ # <a href="https://arxiv.org/pdf/2405.14973" target="_blank"><em>Efficiently
259+ # Training Deep-Learning Parametric Policies using Lagrangian Duality</em></a>.
260+ # </td>
261+ # </tr>
262+
263+ # <tr>
264+ # <td style="padding:4px 6px;vertical-align:top;">
265+ # <a href="https://github.com/LearningToOptimize/L2OALM.jl"
266+ # target="_blank">L2OALM.jl</a>
267+ # </td>
268+ # <td style="padding:4px 6px;">
269+ # Implementation of the primal-dual learning method <strong>ALM</strong>,
270+ # introduced in
271+ # <a href="https://ojs.aaai.org/index.php/AAAI/article/view/25520" target="_blank">
272+ # <em>Self-Supervised Primal-Dual Learning for Constrained Optimization</em></a>.
273+ # </td>
274+ # </tr>
275+
276+ # <tr>
277+ # <td style="padding:4px 6px;vertical-align:top;">
278+ # <a href="https://github.com/LearningToOptimize/L2ODLL.jl"
279+ # target="_blank">L2ODLL.jl</a>
280+ # </td>
281+ # <td style="padding:4px 6px;">
282+ # Implementation of the dual learning method <strong>DLL</strong>,
283+ # proposed in
284+ # <a href="https://neurips.cc/virtual/2024/poster/94146" target="_blank">
285+ # <em>Dual Lagrangian Learning for Conic Optimization</em></a>.
286+ # </td>
287+ # </tr>
288+
289+ # <tr>
290+ # <td style="padding:4px 6px;vertical-align:top;">
291+ # <a href="https://github.com/LearningToOptimize/L2ODC3.jl"
292+ # target="_blank">L2ODC3.jl</a>
293+ # </td>
294+ # <td style="padding:4px 6px;">
295+ # Implementation of the primal learning method <strong>DC3</strong>, as described in
296+ # <a href="https://openreview.net/forum?id=V1ZHVxJ6dSS" target="_blank">
297+ # <em>DC3: A Learning Method for Optimization with Hard Constraints</em></a>.
298+ # </td>
299+ # </tr>
300+
301+ # <tr>
302+ # <td style="padding:4px 6px;vertical-align:top;">
303+ # <a href="https://github.com/LearningToOptimize/BatchNLPKernels.jl"
304+ # target="_blank">BatchNLPKernels.jl</a>
305+ # </td>
306+ # <td style="padding:4px 6px;">
307+ # GPU kernels that evaluate objectives, Jacobians and Hessians for
308+ # <strong>batches</strong> of
309+ # <a href="https://github.com/exanauts/ExaModels.jl" target="_blank">ExaModels</a>,
310+ # useful when defining loss functions for large-batch ML predictions.
311+ # </td>
312+ # </tr>
313+
314+ # <tr>
315+ # <td style="padding:4px 6px;vertical-align:top;">
316+ # <a href="https://github.com/LearningToOptimize/BatchConeKernels.jl"
317+ # target="_blank">BatchConeKernels.jl</a>
318+ # </td>
319+ # <td style="padding:4px 6px;">
320+ # GPU kernels for batched cone operations (projections, distances, etc.),
321+ # enabling advanced architectures such as repair layers.
322+ # </td>
323+ # </tr>
324+
325+ # <tr>
326+ # <td style="padding:4px 6px;vertical-align:top;">
327+ # <a href="https://github.com/LearningToOptimize/LearningToControlClass"
328+ # target="_blank">LearningToControlClass</a>
329+ # </td>
330+ # <td style="padding:4px 6px;">
331+ # Course repository for <em>Special Topics on Optimal Control & Learning</em>
332+ # (Fall 2025, Georgia Tech).
333+ # </td>
334+ # </tr>
335+ # </tbody>
336+ # </table>
337+
338+ # <!-- ─────────────── datasets and weights ──────────────── -->
339+ # <h3 style="margin-top:1.25rem;">Open Datasets and Weights</h3>
340+
341+ # <p>
342+ # The
343+ # <a href="https://huggingface.co/LearningToOptimize" target="_blank">
344+ # LearningToOptimize 🤗 Hugging Face organization</a>
345+ # hosts datasets and pre-trained weights that can be used with L2O packages.
346+ # </p>
347+
348+ # </div>
349+ # """
350 350
351 351 # ╔═╡ c08f511e-b91d-4d17-a286-96469c31568a
352 352 md" ## Example: Robotic Arm Manipulation"
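Editor's note: the cell commented out above describes amortized optimization, i.e. training a machine-learning model to approximate the map from problem parameters to optimal solutions so that a single forward pass replaces a solver call. Below is a minimal, self-contained sketch of that idea in Julia using Flux. It deliberately does not use the LearningToOptimize.jl API (which this diff does not show); the toy parametric problem, its closed-form solve, the network size, and the training settings are all illustrative assumptions.

using Flux

# Toy parametric problem: minimize_x  (x - θ)' * (x - θ).  The minimizer is
# x*(θ) = θ, chosen so the "solver" is trivial and the sketch stays self-contained.
solve(θ) = θ

# Offline phase: sample parameters and record the corresponding solutions.
θs   = [rand(Float32, 4) for _ in 1:1024]
data = [(θ, solve(θ)) for θ in θs]

# Surrogate: a small MLP that amortizes the solve into one forward pass.
model     = Chain(Dense(4 => 32, relu), Dense(32 => 4))
opt_state = Flux.setup(Adam(1e-3), model)

for epoch in 1:50
    for (θ, xstar) in data
        # Regression loss against the recorded optimal solution.
        grads = Flux.gradient(m -> Flux.mse(m(θ), xstar), model)
        Flux.update!(opt_state, model, grads[1])
    end
end

# Online phase: the trained surrogate replaces the solver call.
θ_new = rand(Float32, 4)
x_hat = model(θ_new)   # approximate solution, no optimization solve performed

Packages such as LearningToOptimize.jl are described above as wrapping the data-generation, training, and evaluation steps behind higher-level utilities; this sketch only illustrates the underlying workflow.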