@@ -959,12 +959,12 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim
 
 The optimization objective for sparse Logistic regression is:
 
-    .. math::
+    .. math::
        \frac{1}{n_{\text{samples}}} \sum_{i=1}^{n_{\text{samples}}}
        \log\left(1 + \exp(-y_i x_i^T w)\right)
        + \alpha \cdot \left( \text{l1_ratio} \cdot \|w\|_1 +
        (1 - \text{l1_ratio}) \cdot \|w\|_2^2 \right)
-
+
 By default, ``l1_ratio=1.0`` corresponds to Lasso (pure L1 penalty).
 When ``0 < l1_ratio < 1``, the penalty is a convex combination of L1 and L2
 (i.e., ElasticNet). ``l1_ratio=0.0`` corresponds to Ridge (pure L2), but note
@@ -977,9 +977,9 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim
 
 l1_ratio : float, default=1.0
     The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``.
-    Only used when ``penalty="l1_plus_l2"``.
-    For ``l1_ratio = 0`` the penalty is an L2 penalty.
-    ``For l1_ratio = 1`` it is an L1 penalty.
+    Only used when ``penalty="l1_plus_l2"``.
+    For ``l1_ratio = 0`` the penalty is an L2 penalty.
+    For ``l1_ratio = 1`` it is an L1 penalty.
     For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2.
 
 tol : float, optional
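The penalized objective in the docstring above can be sketched directly in NumPy. This is an illustrative helper, not part of the class API: `sparse_logreg_objective` is a hypothetical name, and it assumes labels encoded as ``y ∈ {-1, +1}``, matching the ``-y_i x_i^T w`` margin in the formula.

```python
import numpy as np

def sparse_logreg_objective(w, X, y, alpha=1.0, l1_ratio=1.0):
    """Elastic-net-penalized logistic loss (illustrative sketch).

    Mirrors the docstring formula term by term:
    mean log-loss + alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) * ||w||_2^2).
    Assumes y takes values in {-1, +1}.
    """
    n_samples = X.shape[0]
    margins = y * (X @ w)
    # log(1 + exp(-m)) computed stably as logaddexp(0, -m)
    loss = np.logaddexp(0.0, -margins).sum() / n_samples
    penalty = alpha * (l1_ratio * np.abs(w).sum()
                       + (1.0 - l1_ratio) * (w ** 2).sum())
    return loss + penalty
```

With ``l1_ratio=1.0`` (the default) the penalty term reduces to ``alpha * ||w||_1``, i.e. the pure-L1 Lasso case described above; at ``w = 0`` the objective is just the log-loss, ``log(2)``.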