
Commit a4b66c0

Merge pull request #1 from floriankozikowski/elasticnet
Docstring update for ElasticNet in SparseLogisticRegression (completes #244)
2 parents fc6bc21 + 1d75bb2 commit a4b66c0

2 files changed: +16 −7 lines changed

doc/changes/0.4.rst

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Version 0.4 (in progress)
 - Add support and tutorial for positive coefficients to :ref:`Group Lasso Penalty <skglm.penalties.WeightedGroupL2>` (PR: :gh:`221`)
 - Check compatibility with datafit and penalty in solver (PR :gh:`137`)
 - Add support to weight samples in the quadratic datafit :ref:`Weighted Quadratic Datafit <skglm.datafit.WeightedQuadratic>` (PR: :gh:`258`)
-
+- Add support for ElasticNet regularization (`penalty="l1_plus_l2"`) to :ref:`SparseLogisticRegression <skglm.SparseLogisticRegression>` (PR: :gh:`244`)
 
 Version 0.3.1 (2023/12/21)
 --------------------------

skglm/estimators.py

Lines changed: 15 additions & 6 deletions
@@ -959,19 +959,28 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim
 
     The optimization objective for sparse Logistic regression is:
 
-    .. math:: 1 / n_"samples" sum_(i=1)^(n_"samples") log(1 + exp(-y_i x_i^T w))
-    + alpha ||w||_1
+    .. math::
+        \frac{1}{n_{\text{samples}}} \sum_{i=1}^{n_{\text{samples}}}
+        \log\left(1 + \exp(-y_i x_i^T w)\right)
+        + \alpha \cdot \left( \text{l1_ratio} \cdot \|w\|_1 +
+        (1 - \text{l1_ratio}) \cdot \|w\|_2^2 \right)
+
+    By default, ``l1_ratio=1.0`` corresponds to Lasso (pure L1 penalty).
+    When ``0 < l1_ratio < 1``, the penalty is a convex combination of L1 and L2
+    (i.e., ElasticNet). ``l1_ratio=0.0`` corresponds to Ridge (pure L2), but note
+    that pure Ridge is not typically used with this class.
 
     Parameters
     ----------
     alpha : float, default=1.0
         Regularization strength; must be a positive float.
 
     l1_ratio : float, default=1.0
-        The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``. For
-        ``l1_ratio = 0`` the penalty is an L2 penalty. ``For l1_ratio = 1`` it
-        is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a
-        combination of L1 and L2.
+        The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``.
+        Only used when ``penalty="l1_plus_l2"``.
+        For ``l1_ratio = 0`` the penalty is an L2 penalty.
+        For ``l1_ratio = 1`` it is an L1 penalty.
+        For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2.
 
     tol : float, optional
         Stopping criterion for the optimization.
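The objective in the updated docstring can be checked numerically. Below is a minimal NumPy sketch of that formula; the function name `sparse_logreg_objective` is a hypothetical helper for illustration, not part of skglm's API (skglm's solvers operate on their own datafit/penalty objects).

```python
import numpy as np

def sparse_logreg_objective(X, y, w, alpha=1.0, l1_ratio=1.0):
    """Logistic loss plus the ElasticNet penalty from the docstring formula:

    1/n * sum_i log(1 + exp(-y_i x_i^T w))
        + alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) * ||w||_2^2)
    """
    losses = np.log1p(np.exp(-y * (X @ w)))  # elementwise logistic loss
    penalty = alpha * (l1_ratio * np.abs(w).sum()
                       + (1 - l1_ratio) * (w ** 2).sum())
    return losses.mean() + penalty

# Tiny sanity check: at w = 0 the penalty vanishes and each loss term is log(2).
X = np.array([[1.0, 2.0], [-1.0, 0.5]])
y = np.array([1.0, -1.0])
w0 = np.zeros(2)
print(np.isclose(sparse_logreg_objective(X, y, w0, alpha=0.1, l1_ratio=0.5),
                 np.log(2)))
```

With `l1_ratio=1.0` (the default) the penalty term reduces to `alpha * ||w||_1`, matching the pure-Lasso case described in the docstring.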
