
Commit 75b92cc

Docstring update for L2 penalty in SparseLogisticRegression (scikit-learn-contrib#281)
1 parent 7d274d8 commit 75b92cc

File tree

2 files changed: +15 -7 lines changed

doc/changes/0.4.rst

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ Version 0.4 (in progress)
 - Add support and tutorial for positive coefficients to :ref:`Group Lasso Penalty <skglm.penalties.WeightedGroupL2>` (PR: :gh:`221`)
 - Check compatibility with datafit and penalty in solver (PR :gh:`137`)
 - Add support to weight samples in the quadratic datafit :ref:`Weighted Quadratic Datafit <skglm.datafit.WeightedQuadratic>` (PR: :gh:`258`)
+- Add support for ElasticNet regularization (`penalty="l1_plus_l2"`) to :ref:`SparseLogisticRegression <skglm.SparseLogisticRegression>` (PR: :gh:`244`)

 Version 0.3.1 (2023/12/21)
 --------------------------
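For context, a hedged usage sketch of the ElasticNet option this changelog entry refers to (not part of the commit). The ``l1_ratio`` keyword is taken from the docstring diff below; whether a separate ``penalty="l1_plus_l2"`` argument must also be passed should be verified against the released skglm API, and the data here is synthetic.

import numpy as np
from skglm import SparseLogisticRegression

# Synthetic binary problem with labels in {-1, 1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = np.where(X[:, 0] + 0.5 * rng.standard_normal(100) > 0, 1, -1)

# Assumed call: l1_ratio=0.7 mixes L1 and L2 (ElasticNet);
# the default l1_ratio=1.0 keeps the pure L1 (Lasso-type) penalty.
clf = SparseLogisticRegression(alpha=0.01, l1_ratio=0.7).fit(X, y)
print(clf.coef_)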

skglm/estimators.py

Lines changed: 14 additions & 6 deletions
@@ -959,19 +959,27 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstimator):

     The optimization objective for sparse Logistic regression is:

-    .. math:: 1 / n_"samples" sum_(i=1)^(n_"samples") log(1 + exp(-y_i x_i^T w))
-        + alpha ||w||_1
+    .. math::
+        1 / n_"samples" sum_(i=1)^(n_"samples") log(1 + exp(-y_i x_i^T w))
+        + tt"l1_ratio" xx alpha ||w||_1
+        + (1 - tt"l1_ratio") xx alpha/2 ||w||_2 ^ 2
+
+    By default, ``l1_ratio=1.0`` corresponds to Lasso (pure L1 penalty).
+    When ``0 < l1_ratio < 1``, the penalty is a convex combination of L1 and L2
+    (i.e., ElasticNet). ``l1_ratio=0.0`` corresponds to Ridge (pure L2), but note
+    that pure Ridge is not typically used with this class.

     Parameters
     ----------
     alpha : float, default=1.0
         Regularization strength; must be a positive float.

     l1_ratio : float, default=1.0
-        The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``. For
-        ``l1_ratio = 0`` the penalty is an L2 penalty. ``For l1_ratio = 1`` it
-        is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a
-        combination of L1 and L2.
+        The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``.
+        Only used when ``penalty="l1_plus_l2"``.
+        For ``l1_ratio = 0`` the penalty is an L2 penalty.
+        For ``l1_ratio = 1`` it is an L1 penalty.
+        For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2.

     tol : float, optional
         Stopping criterion for the optimization.
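For readers of this diff, a small NumPy sketch (not part of the commit) that evaluates the objective exactly as the updated docstring writes it. The function name, argument order, and label convention (y_i in {-1, 1}) are ours.

import numpy as np

def sparse_logreg_objective(w, X, y, alpha=1.0, l1_ratio=1.0):
    """Objective from the updated docstring: mean logistic loss
    + l1_ratio * alpha * ||w||_1 + (1 - l1_ratio) * alpha/2 * ||w||_2^2."""
    n_samples = X.shape[0]
    # np.logaddexp(0, z) computes log(1 + exp(z)) in a numerically stable way.
    datafit = np.logaddexp(0, -y * (X @ w)).sum() / n_samples
    l1_term = l1_ratio * alpha * np.abs(w).sum()
    l2_term = (1 - l1_ratio) * alpha / 2 * np.sum(w ** 2)
    return datafit + l1_term + l2_term

With l1_ratio=1.0 (the default) the last term vanishes and the objective reduces to the previous L1-only formula shown on the removed lines.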
