where \f$x \in \mathbb{R}^n\f$ is the optimization variable. The objective function is defined by a positive semidefinite matrix \f$H(\theta) \in \mathcal{S}^n_+\f$ and a vector \f$g(\theta) \in \mathbb{R}^n\f$. The linear constraints are defined by the equality-constraint matrix \f$A(\theta) \in \mathbb{R}^{n_\text{eq} \times n}\f$, the inequality-constraint matrix \f$C(\theta) \in \mathbb{R}^{n_\text{in} \times n}\f$, and the vectors \f$b \in \mathbb{R}^{n_\text{eq}}\f$, \f$l(\theta) \in \mathbb{R}^{n_\text{in}}\f$, and \f$u(\theta) \in \mathbb{R}^{n_\text{in}}\f$, where \f$b_i \in \mathbb{R}\f$ for all \f$i = 1,...,n_\text{eq}\f$, while \f$l_i \in \mathbb{R} \cup \{ -\infty \}\f$ and \f$u_i \in \mathbb{R} \cup \{ +\infty \}\f$ for all \f$i = 1,...,n_\text{in}\f$.
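For concreteness, a QP of this form can be assembled and solved with ProxSuite's dense backend. The following is a minimal sketch with randomly generated (hypothetical) problem data, using the `proxsuite.proxqp.dense.QP` API:

\code{.py}
import numpy as np
import proxsuite

n, n_eq, n_in = 3, 1, 2  # hypothetical problem dimensions

# Positive semidefinite objective H = M M^T and linear term g.
M = np.random.randn(n, n)
H = M @ M.T
g = np.random.randn(n)

# Equality constraints A x = b and two-sided inequalities l <= C x <= u.
A = np.random.randn(n_eq, n)
b = A @ np.random.randn(n)  # b chosen in the range space of A, so A x = b is feasible
C = np.random.randn(n_in, n)
l = -np.ones(n_in)
u = np.ones(n_in)

qp = proxsuite.proxqp.dense.QP(n, n_eq, n_in)
qp.init(H, g, A, b, C, l, u)
qp.solve()
print(qp.results.x)  # primal solution x*
\endcode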
We provide in the file `qplayer_sudoku.py` an example that shows how to train an LP layer in two different settings: (i) we learn only the equality-constraint matrix \f$A\f$, or (ii) we learn both \f$A\f$ and \f$b\f$ at the same time, such that \f$b\f$ lies structurally in the range space of \f$A\f$. Setting (i) is harder since, a priori, the fixed right-hand side does not ensure that the QP is feasible. Yet this learning procedure is more structured and, for some problems, can produce better predictions more quickly (i.e., in fewer epochs).
The differentiable QP layer is implemented in \ref proxsuite.torch.qplayer.QPFunction.
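For instance, the layer can be dropped into a standard PyTorch autograd graph. The sketch below uses hypothetical shapes and data; the `structural_feasibility` keyword (which we believe toggles the structurally feasible mode discussed above) is an assumption to verify against the \ref proxsuite.torch.qplayer.QPFunction reference.

\code{.py}
import torch
from proxsuite.torch.qplayer import QPFunction

# Hypothetical dimensions: n variables, n_eq equalities, n_in inequalities.
n, n_eq, n_in = 4, 2, 3

# Batched QP data; in a real model these tensors would come from upstream
# layers. Here only A requires gradients, for brevity.
Q = torch.eye(n).unsqueeze(0)                 # (1, n, n), PSD objective
p = torch.zeros(1, n)
A = torch.randn(1, n_eq, n, requires_grad=True)
b = torch.zeros(1, n_eq)                      # zero lies in the range of any A
G = torch.randn(1, n_in, n)
l = -torch.ones(1, n_in)
u = torch.ones(1, n_in)

# The structural_feasibility flag is an assumption to check against the API.
layer = QPFunction(structural_feasibility=True)
x, *_ = layer(Q, p, A, b, G, l, u)            # keep only the primal solution

loss = x.pow(2).sum()
loss.backward()                               # gradients flow back into A
print(A.grad.shape)
\endcode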
\section QPLayerCite How to cite QPLayer?
If you are using QPLayer for your work, we encourage you to cite the related paper.
The paper is publicly available in HAL ([ref 04133055](https://inria.hal.science/hal-04133055/file/QPLayer_Preprint.pdf)).