-r"""
-This module contains a set of easily-interpretable error measures of the
-relative information capacity of feature space `F` with respect to feature
-space `F'`. The methods returns a value between 0 and 1, where 0 means that
-`F` and `F'` are completey distinct in terms of linearly-decodable
-information, and where 1 means that `F'` is contained in `F`. All methods
-are implemented as the root mean-square error for the regression of the
-feature matrix `X_F'` (or sometimes called `Y` in the doc) from `X_F` (or
-sometimes called `X` in the doc) for transformations with different
-constraints (linear, orthogonal, locally-linear). By default a custom 2-fold
-cross-validation :py:class:`skosmo.linear_model.RidgeRegression2FoldCV` is
-used to ensure the generalization of the transformation and efficiency of
-the computation, since we deal with a multi-target regression problem.
-Methods were applied to compare different forms of featurizations through
-different hyperparameters and induced metrics and kernels [Goscinski2021]_ .
+"""
+This module contains a set of easily-interpretable error measures of the relative
+information capacity of feature space `F` with respect to feature space `F'`. The
+methods return a value between 0 and 1, where 0 means that `F'` is contained in `F`
+(perfect linear reconstruction) and 1 means that `F` and `F'` are completely distinct
+in terms of linearly-decodable information. All methods are implemented as the root
+mean-square error for the regression of the feature matrix `X_F'` (also called `Y` in
+the doc) from `X_F` (also called `X` in the doc) under transformations with different
+constraints (linear, orthogonal, locally linear). By default, the custom 2-fold
+cross-validation :py:class:`skosmo.linear_model.RidgeRegression2FoldCV` is used to
+ensure both the generalization of the transformation and the efficiency of the
+computation, since this is a multi-target regression problem. These measures were used
+to compare featurizations obtained with different hyperparameters and induced metrics
+and kernels [Goscinski2021]_.
+
+These reconstruction measures are available:
+
+* :ref:`GRE-api` (GRE) computes the amount of linearly-decodable information
+  recovered through a global linear reconstruction.
+* :ref:`GRD-api` (GRD) computes the amount of distortion contained in a global linear
+  reconstruction.
+* :ref:`LRE-api` (LRE) computes the amount of decodable information recovered through
+  a local linear reconstruction for the k-nearest neighborhood of each sample.
 """

 from ._reconstruction_measures import (
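The global reconstruction error described in the docstring can be sketched in a few lines of NumPy. This is a minimal illustration only: it uses an ordinary least-squares fit on a single random train/test split instead of the package's `RidgeRegression2FoldCV`, and the function name and split scheme are assumptions for the example, not the package's API.

```python
import numpy as np


def global_reconstruction_error(X, Y, train_frac=0.5, seed=0):
    """Normalized RMSE of linearly reconstructing Y (`X_F'`) from X (`X_F`),
    evaluated out of sample so the linear map must generalize."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = int(len(X) * train_frac)
    tr, te = idx[:split], idx[split:]
    # center with training statistics only, to avoid test-set leakage
    x_mean, y_mean = X[tr].mean(axis=0), Y[tr].mean(axis=0)
    # least-squares stand-in for the package's ridge regression (assumption)
    W, *_ = np.linalg.lstsq(X[tr] - x_mean, Y[tr] - y_mean, rcond=None)
    residual = (Y[te] - y_mean) - (X[te] - x_mean) @ W
    # normalize by the scale of Y so 0 means Y is fully linearly decodable
    return float(np.sqrt(np.mean(residual**2) / np.mean((Y[te] - y_mean) ** 2)))


rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Y_contained = X @ rng.standard_normal((5, 3))  # F' linearly contained in F
Y_distinct = rng.standard_normal((200, 3))     # F' unrelated to F
print(global_reconstruction_error(X, Y_contained))  # near 0: fully reconstructed
print(global_reconstruction_error(X, Y_distinct))   # near 1: nothing decodable
```

A locally linear variant (LRE) would apply the same fit-and-score step within the k-nearest neighborhood of each sample rather than once globally.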