
Commit 54ec416

add metatags for better SEO and change some wordings
1 parent 9df1640 commit 54ec416

11 files changed (+68, -14 lines)

doc/api.rst

Lines changed: 3 additions & 0 deletions
@@ -1,5 +1,8 @@
 .. _api_documentation:
 
+.. meta::
+    :description: Browse the skglm API documentation covering estimators (Lasso, ElasticNet, Cox), penalties (L1, SCAD, MCP), datafits (Logistic, Poisson), and optimized solvers.
+
 =================
 API Documentation
 =================
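A note on the directive this commit adds throughout the docs: ``.. meta::`` is rendered by docutils into plain ``<meta>`` tags in the built page's ``<head>``. A minimal sketch for checking that rendering locally; it assumes only that ``docutils`` is installed, and how the ``og:`` fields come out may depend on extensions such as ``sphinxext-opengraph``, which this commit does not show:

.. code-block:: python

    # Render a snippet containing the ``.. meta::`` directive with docutils
    # and grep the HTML output for the resulting <meta> tag.
    from docutils.core import publish_string

    rst = """\
    .. meta::
       :description: Browse the skglm API documentation.
    """

    html = publish_string(rst, writer_name="html5").decode("utf-8")
    print([line for line in html.splitlines() if 'name="description"' in line])
    # expected, roughly:
    # ['<meta content="Browse the skglm API documentation." name="description" />']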

doc/contribute.rst

Lines changed: 4 additions & 0 deletions
@@ -1,5 +1,9 @@
 .. _contribute:
 
+.. meta::
+    :description: Contribute to skglm by reporting bugs, suggesting features, or submitting pull requests. Join us in making skglm even better!
+    :og:title: Contribute to skglm
+
 Contribute to ``skglm``
 =======================
 

doc/getting_started.rst

Lines changed: 8 additions & 5 deletions
@@ -1,5 +1,8 @@
 .. _getting_started:
 
+.. meta::
+    :description: Learn how to fit Lasso and custom GLM estimators with skglm, a modular Python library compatible with scikit-learn. Includes examples and code snippets.
+
 ===============
 Getting started
 ===============
@@ -31,7 +34,7 @@ Fitting a Lasso estimator
 -------------------------
 
 Let's start by generating a toy dataset and splitting it into train and test sets.
-For that, we will use ``scikit-learn``
+For that, we will use ``scikit-learn``
 `make_regression <https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_regression.html#sklearn.datasets.make_regression>`_
 
 .. code-block:: python
@@ -42,7 +45,7 @@ For that, we will use ``scikit-learn``
 
     # generate toy data
     X, y = make_regression(n_samples=100, n_features=1000)
-
+
     # split data
     X_train, X_test, y_train, y_test = train_test_split(X, y)
 
@@ -52,7 +55,7 @@ Then let's fit ``skglm`` :ref:`Lasso <skglm.Lasso>` estimator and prints its sco
 
     # import estimator
     from skglm import Lasso
-
+
     # init and fit
     estimator = Lasso()
     estimator.fit(X_train, y_train)
@@ -63,7 +66,7 @@ Then let's fit ``skglm`` :ref:`Lasso <skglm.Lasso>` estimator and prints its sco
 
 .. note::
 
-    The first fit after importing ``skglm`` has an overhead as ``skglm`` uses `Numba <https://numba.pydata.org/>`_.
+    The first fit after importing ``skglm`` has an overhead as ``skglm`` uses `Numba <https://numba.pydata.org/>`_.
     The subsequent fits will achieve top speed since Numba compilation is cached.
 
 ``skglm`` has several other ``scikit-learn`` compatible estimators.
@@ -138,7 +141,7 @@ and pass it to ``GeneralizedLinearEstimator``. Explore the list of supported :re
 
 .. important::
 
-    It is possible to create your own datafit and penalties. Check the tutorials on :ref:`how to add a custom datafit <how_to_add_custom_datafit>`
+    It is possible to create your own datafit and penalties. Check the tutorials on :ref:`how to add a custom datafit <how_to_add_custom_datafit>`
     and :ref:`how to add a custom penalty <how_to_add_custom_penalty>`.
 
 
doc/index.rst

Lines changed: 28 additions & 3 deletions
@@ -3,17 +3,26 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
+.. meta::
+    :og:title: skglm: Fast, Scalable & Flexible Regularized GLMs and Sparse Modeling for Python
+    :description: skglm is the fastest, most modular Python library for regularized GLMs—fully scikit-learn compatible for advanced statistical modeling.
+    :og:image: _static/images/logo.svg
+    :og:url: https://contrib.scikit-learn.org/skglm/
+    :keywords: Generalized Linear Models, GLM, scikit-learn, Lasso, ElasticNet, Cox, modular, efficient, regularized
+
 =========
 ``skglm``
 =========
-*A fast and modular scikit-learn replacement for regularized GLMs*
+*Fast and Flexible Generalized Linear Models for Python*
 
 --------
 
 
 ``skglm`` is a Python package that offers **fast estimators** for regularized Generalized Linear Models (GLMs)
-that are **100% compatible with** ``scikit-learn``. It is **highly flexible** and supports a wide range of GLMs.
-You get to choose from ``skglm``'s already-made estimators or **customize your own** by combining the available datafits and penalties.
+that are **100% compatible with** ``scikit-learn``. It is **highly flexible** and supports a wide range of GLMs,
+designed to tackle high-dimensional data and scalable machine learning problems.
+Whether you choose from our ready-made estimators or **customize your own** using a modular combination of datafits and penalties,
+skglm delivers performance and flexibility for both academic research and production environments.
 
 Get a hands-on glimpse on ``skglm`` through the :ref:`Getting started page <getting_started>`.
 
@@ -79,6 +88,22 @@ It is also available on conda-forge and can be installed using, for instance:
     $ conda install -c conda-forge skglm
 
 With ``skglm`` installed, get started with the package via the :ref:`Getting started section <getting_started>`.
+
+Applications and Use Cases
+--------------------------
+
+``skglm`` drives impactful solutions across diverse sectors with its fast, modular approach to regularized GLMs and sparse modeling. Some examples include:
+
+.. list-table::
+    :widths: 20 80
+
+    * - **Healthcare:**
+      - Enhance clinical trial analytics and early biomarker discovery by efficiently analyzing high-dimensional biological data with features like Cox regression modeling.
+    * - **Finance:**
+      - Conduct transparent and interpretable risk modeling with scalable, robust sparse regression across vast datasets.
+    * - **Energy:**
+      - Optimize real-time electricity forecasting and load analysis by processing large time-series datasets for predictive maintenance and anomaly detection.
+
 Other advanced topics and use-cases are covered in :ref:`Tutorials <tutorials>`.
 
 .. it is mandatory to keep the toctree here although it doesn't show up in the page

doc/tutorials/add_datafit.rst

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 :orphan:
 
 .. _how_to_add_custom_datafit:
+.. meta::
+    :description: Tutorial on creating and implementing a custom datafit in skglm. Step-by-step guide includes deriving gradients, Hessians, and an example with Poisson datafit.
 
 How to add a custom datafit
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
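For readers skimming the commit: the pattern this tutorial builds toward is passing a datafit (built-in or custom) to ``GeneralizedLinearEstimator`` together with a penalty, as mentioned in the getting-started hunks above. A minimal sketch of that pattern with built-in classes; the class and module names follow skglm's public API, while the solver choice and hyperparameter values are illustrative assumptions:

.. code-block:: python

    import numpy as np

    from skglm import GeneralizedLinearEstimator
    from skglm.datafits import Poisson
    from skglm.penalties import MCPenalty
    from skglm.solvers import ProxNewton

    # toy count data for a Poisson model
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 100))
    y = rng.poisson(lam=1.0, size=50).astype(float)

    # pair the Poisson datafit with the MCP penalty; ProxNewton is chosen
    # since the Poisson gradient is not globally Lipschitz
    estimator = GeneralizedLinearEstimator(
        datafit=Poisson(),
        penalty=MCPenalty(alpha=0.01, gamma=3.0),
        solver=ProxNewton(),
    )
    estimator.fit(X, y)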

doc/tutorials/add_penalty.rst

Lines changed: 3 additions & 0 deletions
@@ -2,6 +2,9 @@
 
 .. _how_to_add_custom_penalty:
 
+.. meta::
+    :description: Step-by-step tutorial on adding custom penalties in skglm. Covers implementation details, proximal operators, and optimality conditions using the L1 penalty.
+
 How to add a custom penalty
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
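Since the description above leads with proximal operators, here is the textbook closed form for the L1 case, the penalty the tutorial uses as its running example. This standalone sketch is the standard soft-thresholding formula, not a copy of skglm's internal implementation:

.. code-block:: python

    import numpy as np

    def prox_l1(x, tau):
        # prox of tau * ||.||_1, element-wise soft-thresholding:
        # shrink each entry toward zero by tau, clipping at zero
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    # entries with |x| <= tau collapse to exactly zero
    print(prox_l1(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), tau=1.0))
    # [-1. -0.  0.  0.  2.]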

doc/tutorials/alpha_max.rst

Lines changed: 3 additions & 0 deletions
@@ -1,5 +1,8 @@
 .. _alpha_max:
 
+.. meta::
+    :description: Tutorial explaining the critical regularization strength (alpha_max) in skglm. Learn conditions for zero solutions in L1-regularized optimization problems.
+
 ==========================================================
 Critical regularization strength above which solution is 0
 ==========================================================
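For the Lasso case the tutorial covers, the critical value has the well-known closed form ``alpha_max = ||X.T @ y||_inf / n_samples`` for the objective ``(1/2n) ||y - Xw||^2 + alpha * ||w||_1``. A quick numerical check; this is the standard result for the quadratic datafit without an intercept, so ``fit_intercept=False`` is assumed below to make the formula apply verbatim:

.. code-block:: python

    import numpy as np

    from skglm import Lasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 300))
    y = rng.standard_normal(100)

    # critical regularization strength for the Lasso objective
    alpha_max = np.max(np.abs(X.T @ y)) / X.shape[0]

    # at alpha >= alpha_max, every coefficient should be exactly 0
    estimator = Lasso(alpha=alpha_max, fit_intercept=False).fit(X, y)
    print(np.count_nonzero(estimator.coef_))  # expected: 0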

doc/tutorials/cox_datafit.rst

Lines changed: 9 additions & 6 deletions
@@ -1,10 +1,13 @@
 .. _maths_cox_datafit:
 
+.. meta::
+    :description: Detailed mathematical guide on Cox datafit implementation in skglm, covering Breslow and Efron estimates. Useful for survival analysis.
+
 ==============================
 Mathematics behind Cox datafit
 ==============================
 
-This tutorial presents the mathematics behind the Cox datafit using both the Breslow and Efron estimates.
+This tutorial presents the mathematics behind the Cox datafit using both the Breslow and Efron estimates.
 
 
 Problem setup
@@ -95,7 +98,7 @@ where the division and the square operations are performed element-wise.
 The Hessian, as it is, is costly to evaluate because of the right-hand side term.
 In particular, the latter involves :math:`\mathcal{O}(n^3)` operations. We overcome this limitation by using a diagonal upper bound on the Hessian.
 
-We construct such an upper bound by noticing that
+We construct such an upper bound by noticing that
 
 #. the function :math:`F` is convex and hence :math:`\nabla^2 F(u)` is positive semi-definite
 #. the second term is positive semi-definite
@@ -127,10 +130,10 @@ We can define :math:`y_{i_1}, \ldots, y_{i_m}` the unique times, assumed to be i
 
 .. math::
     :label: def-H
-
+
     H_{y_{i_l}} = \{ i \ | \ s_i = 1 \ ;\ y_i = y_{i_l} \}
     ,
-
+
 the set of uncensored observations with the same time :math:`y_{i_l}`.
 
 Again, we refer to the expression of the negative log-likelihood according to the Efron estimate [`2`_, Section 6, equation (6.7)] to get the datafit formula
@@ -139,7 +142,7 @@ Again, we refer to the expression of the negative log-likelihood according to Ef
     :label: efron-estimate
 
     l(\beta) = \frac{1}{n} \sum_{l=1}^{m} (
-        \sum_{i \in H_{i_l}} - \langle x_i, \beta \rangle
+        \sum_{i \in H_{i_l}} - \langle x_i, \beta \rangle
         + \sum_{i \in H_{i_l}} \log(\sum_{y_j \geq y_{i_l}} e^{\langle x_j, \beta \rangle} - \frac{\#(i) - 1}{ |H_{i_l} |}\sum_{j \in H_{i_l}} e^{\langle x_j, \beta \rangle}))
     ,
@@ -158,7 +161,7 @@ On the other hand, the minus term within :math:`\log` can be rewritten as a line
 
 .. math::
 
-    - \frac{\#(i) - 1}{| H_{i_l} |}\sum_{j \in H_{i_l}} e^{\langle x_j, \beta \rangle}
+    - \frac{\#(i) - 1}{| H_{i_l} |}\sum_{j \in H_{i_l}} e^{\langle x_j, \beta \rangle}
     = \sum_{j=1}^{n} -\frac{\#(i) - 1}{| H_{i_l} |} \ \mathbb{1}_{j \in H_{i_l}} \ e^{\langle x_j, \beta \rangle}
     = \sum_{j=1}^n a_{i,j} e^{\langle x_j, \beta \rangle}
     = \langle a_i, e^{\mathbf{X}\beta} \rangle
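For orientation, the Efron expression above refines the Breslow estimate discussed earlier on this page. In the page's notation, with :math:`s_i` the censoring indicator, the Breslow form is the standard Cox negative log partial likelihood; it is quoted here from the textbook result for context, not from the diff:

.. math::

    l(\beta) = \frac{1}{n} \sum_{i=1}^{n} s_i \left( - \langle x_i, \beta \rangle
        + \log \sum_{y_j \geq y_i} e^{\langle x_j, \beta \rangle} \right)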

doc/tutorials/intercept.rst

Lines changed: 3 additions & 0 deletions
@@ -1,5 +1,8 @@
 .. _maths_unpenalized_intercept:
 
+.. meta::
+    :description: In-depth guide on intercept handling in skglm solvers. Covers mathematical derivations, gradient updates, Lipschitz constants, and examples for quadratic, logistic, and Huber datafits.
+
 Computation of the intercept
 ============================
 
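As a concrete instance of the gradient updates the description mentions: for the quadratic datafit ``(1/2n) ||y - Xw - b||^2``, the gradient with respect to the unpenalized intercept ``b`` is minus the mean residual, so a unit gradient step is simply a mean correction. A standalone sketch of that single fact; it is standard calculus, not a copy of skglm's solver code:

.. code-block:: python

    import numpy as np

    def intercept_step(X, y, w, b):
        # d/db of (1/2n) ||y - Xw - b||^2 is -mean(y - Xw - b); the Lipschitz
        # constant in b is 1, so a unit step re-centers b on the mean residual
        residual = y - X @ w - b
        return b + residual.mean()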

doc/tutorials/prox_nn_group_lasso.rst

Lines changed: 2 additions & 0 deletions
@@ -1,4 +1,6 @@
 .. _prox_nn_group_lasso:
+.. meta::
+    :description: Detailed tutorial on deriving the proximity operator and subdifferential for the positive group Lasso penalty in skglm. Includes mathematical proofs and examples.
 
 ===================================
 Details on the Positive Group Lasso
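The closed form that derivation arrives at is compact enough to state here: for a single group, the prox of ``lam * ||.||_2`` plus a nonnegativity constraint is a projection onto the nonnegative orthant followed by block soft-thresholding. A sketch of that formula, stated from the standard derivation and meant as illustration rather than skglm's implementation:

.. code-block:: python

    import numpy as np

    def prox_pos_group_lasso(x, lam):
        # prox of lam * ||.||_2 + indicator(. >= 0) over one group:
        # clip negatives to zero, then shrink the block norm by lam
        u = np.maximum(x, 0.0)
        norm = np.linalg.norm(u)
        if norm <= lam:
            return np.zeros_like(u)
        return (1.0 - lam / norm) * u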
