doc/changes/0.4.rst (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@ Version 0.4 (in progress)
- Add support and tutorial for positive coefficients to :ref:`Group Lasso Penalty <skglm.penalties.WeightedGroupL2>` (PR: :gh:`221`)
- Check compatibility with datafit and penalty in solver (PR :gh:`137`)
- Add support to weight samples in the quadratic datafit :ref:`Weighted Quadratic Datafit <skglm.datafit.WeightedQuadratic>` (PR: :gh:`258`)

- Add support for ElasticNet regularization (``penalty="l1_plus_l2"``) to :ref:`SparseLogisticRegression <skglm.SparseLogisticRegression>` (PR: :gh:`244`)

Version 0.3.1 (2023/12/21)
--------------------------
skglm/estimators.py (20 changes: 14 additions & 6 deletions)
@@ -959,19 +959,27 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim

The optimization objective for sparse Logistic regression is:

.. math:: 1 / n_"samples" sum_(i=1)^(n_"samples") log(1 + exp(-y_i x_i^T w))
+ alpha ||w||_1
.. math::
1 / n_"samples" \sum_{i=1}^{n_"samples"} log(1 + exp(-y_i x_i^T w))
+ tt"l1_ratio" xx alpha ||w||_1
+ (1 - tt"l1_ratio") xx alpha/2 ||w||_2 ^ 2

By default, ``l1_ratio=1.0`` corresponds to Lasso (pure L1 penalty).
When ``0 < l1_ratio < 1``, the penalty is a convex combination of L1 and L2
(i.e., ElasticNet). ``l1_ratio=0.0`` corresponds to Ridge (pure L2), but note
that pure Ridge is not typically used with this class.

Parameters
----------
alpha : float, default=1.0
Regularization strength; must be a positive float.

l1_ratio : float, default=1.0
The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``. For
``l1_ratio = 0`` the penalty is an L2 penalty. For ``l1_ratio = 1`` it
is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a
combination of L1 and L2.
The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``.
Only used when ``penalty="l1_plus_l2"``.
For ``l1_ratio = 0`` the penalty is an L2 penalty.
For ``l1_ratio = 1`` it is an L1 penalty.
For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2.

tol : float, optional
Stopping criterion for the optimization.
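
A minimal usage sketch (not part of the diff above) of the ElasticNet-penalized ``SparseLogisticRegression`` described in the docstring, assuming the ``l1_ratio`` keyword introduced by this PR; ``alpha`` and ``tol`` are existing parameters, and the synthetic data comes from scikit-learn::

    import numpy as np
    from sklearn.datasets import make_classification
    from skglm import SparseLogisticRegression

    # Synthetic binary classification problem.
    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    # l1_ratio=1.0 (default) gives the pure L1 objective; 0 < l1_ratio < 1
    # mixes L1 and L2 as in the objective shown in the docstring.
    clf = SparseLogisticRegression(alpha=0.01, l1_ratio=0.7, tol=1e-6)
    clf.fit(X, y)

    # The L1 part keeps the coefficient vector sparse.
    print("non-zero coefficients:", np.count_nonzero(clf.coef_))

With ``l1_ratio=0.7``, the penalty applied to ``w`` is ``0.7 * alpha * ||w||_1 + 0.3 * (alpha / 2) * ||w||_2^2``, matching the objective in the updated docstring.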