In some cases, data represent high-dimensional measurements of some phenomenon of interest.

* powerless: As dimensionality and correlation increase, it becomes harder and harder to isolate the contribution of each variable, meaning that conditional inference is ill-posed.
This is illustrated in the above example, where the Desparsified Lasso struggles
to identify relevant features::

    n_samples = 100
    shape = (40, 40)
    n_features = shape[1] * shape[0]
    roi_size = 4  # size of the edge of the four predictive regions

    # generating the data
    from hidimstat._utils.scenario import multivariate_simulation_spatial

    # keyword names assumed to match the parameters defined above
    X_init, y, beta, epsilon = multivariate_simulation_spatial(
        n_samples=n_samples, shape=shape, roi_size=roi_size
    )

As discussed earlier, feature grouping is a meaningful solution to deal with such cases: it reduces the number of features to condition on, and generally also decreases the level of correlation between features (XXX see grouping section).
As hinted in [Meinshausen XXX], an efficient way to deal with such configurations is to take the per-group average of the features: this leads to a *reduced design*. After inference, all the features in a given group obtain the p-value of the group representative. When the inference engine is the Desparsified Lasso, the resulting method is called Clustered Desparsified Lasso, or **CluDL**.
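
The following is a minimal sketch of the reduced-design idea, using
``sklearn.cluster.FeatureAgglomeration`` to form the groups and plain OLS
t-tests as a stand-in for the Desparsified Lasso engine (the data and names
are illustrative, not hidimstat's API)::

    import numpy as np
    import statsmodels.api as sm
    from sklearn.cluster import FeatureAgglomeration

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 1600))  # high-dimensional design
    y = X[:, :4].sum(axis=1) + rng.normal(size=100)

    # 1. group the features and build the reduced design (per-group averages)
    ward = FeatureAgglomeration(n_clusters=50)
    X_reduced = ward.fit_transform(X)  # shape (100, 50)

    # 2. inference on the 50-column reduced design
    #    (OLS t-tests here, standing in for the Desparsified Lasso)
    fit = sm.OLS(y, sm.add_constant(X_reduced)).fit()
    pval_clusters = np.asarray(fit.pvalues)[1:]  # drop the intercept

    # 3. every feature inherits the p-value of its group representative
    pval_features = pval_clusters[ward.labels_]  # shape (1600,)

Only 50 tests are performed, yet each of the 1600 features receives a p-value.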
The issue is that very high-dimensional data (biological, images, etc.) do not have any canonical grouping structure. Hence, one has to rely on a grouping obtained from the data, typically with a clustering technique. However, the resulting clusters bring some undesirable randomness: fitting the clustering on slightly different data would lead to different clusters. Since there is no globally optimal clustering, the wiser solution is to *average* the results across clusterings. Since it may not be a good idea to average p-values, an alternative *ensembling* or *aggregation* strategy is used instead. When the inference engine is the Desparsified Lasso, the resulting method is called Ensemble of Clustered Desparsified Lasso, or **EnCluDL**.
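
Schematically, assuming the p-values obtained from each clustering are stacked
in an array of shape ``(n_clusterings, n_features)``, the quantile aggregation
of Meinshausen et al. (2009) is one standard way to combine them::

    import numpy as np

    def quantile_aggregation(pvals, gamma=0.5):
        """Aggregate p-values across clusterings (Meinshausen et al., 2009)."""
        # gamma-quantile of the corrected p-values, clipped to [0, 1]
        return np.minimum(1.0, np.quantile(pvals, gamma, axis=0) / gamma)

    # e.g. combine 25 runs of the CluDL sketch above, each fitted on a
    # different clustering (placeholder p-values for illustration)
    rng = np.random.default_rng(0)
    pvals_runs = rng.uniform(size=(25, 1600))
    pvals_agg = quantile_aggregation(pvals_runs)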
Example
-------
.. topic:: **Full example**
    See the following example for a full file running the analysis: