@@ -448,45 +448,45 @@ def _initialize_components(n_components, input, y=None, init='auto',
        The input labels (or None if there are no labels).

    init : string or numpy array, optional (default='auto')
        Initialization of the linear transformation. Possible options are
        'auto', 'pca', 'lda', 'identity', 'random', and a numpy array of shape
        (n_features_a, n_features_b).

        'auto'
            Depending on ``n_components``, the most reasonable initialization
            will be chosen. If ``n_components <= n_classes``, we use 'lda'
            (see the description of the 'lda' init), as it uses label
            information. If not, but
            ``n_components < min(n_features, n_samples)``, we use 'pca', as
            it projects data onto meaningful directions (those of higher
            variance). Otherwise, we just use 'identity'.

        'pca'
            ``n_components`` principal components of the inputs passed
            to :meth:`fit` will be used to initialize the transformation.
            (See `sklearn.decomposition.PCA`.)

        'lda'
            ``min(n_components, n_classes)`` most discriminative
            components of the inputs passed to :meth:`fit` will be used to
            initialize the transformation. (If ``n_components > n_classes``,
            the rest of the components will be zero.) (See
            `sklearn.discriminant_analysis.LinearDiscriminantAnalysis`.)
            This initialization is possible only if `has_classes == True`.

        'identity'
            The identity matrix. If ``n_components`` is strictly smaller than
            the dimensionality of the inputs passed to :meth:`fit`, the
            identity matrix will be truncated to the first ``n_components``
            rows.

        'random'
            The initial transformation will be a random array of shape
            `(n_components, n_features)`. Each value is sampled from the
            standard normal distribution.

        numpy array
            n_features_b must match the dimensionality of the inputs passed
            to :meth:`fit`, and n_features_a must be less than or equal to
            that. If ``n_components`` is not None, n_features_a must match it.

    verbose : bool
        Whether to print the details of the initialization or not.
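The 'auto' selection rule described above can be sketched as follows. This is a minimal illustration under stated assumptions; `choose_auto_init` is a hypothetical helper, not the library's actual internals:

```python
def choose_auto_init(n_components, n_features, n_samples, n_classes=None):
    """Pick an init strategy following the documented 'auto' rule."""
    # 'lda' needs label information and n_components <= n_classes
    if n_classes is not None and n_components <= n_classes:
        return 'lda'
    # 'pca' when a genuine dimensionality reduction is requested
    if n_components < min(n_features, n_samples):
        return 'pca'
    # otherwise fall back to the (possibly truncated) identity
    return 'identity'
```

For instance, with 10 features, 100 samples, and 3 classes, asking for 2 components would select 'lda', while asking for 5 would select 'pca'.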
@@ -606,26 +606,26 @@ def _initialize_metric_mahalanobis(input, init='identity', random_state=None,
        The input samples (can be tuples or regular samples).

    init : string or numpy array, optional (default='identity')
        Specification for the matrix to initialize. Possible options are
        'identity', 'covariance', 'random', and a numpy array of shape
        (n_features, n_features).

        'identity'
            An identity matrix of shape (n_features, n_features).

        'covariance'
            The (pseudo-)inverse covariance matrix (raises an error if the
            covariance matrix is not definite and `strict_pd == True`).

        'random'
            A random positive definite (PD) matrix of shape
            `(n_features, n_features)`, generated using
            `sklearn.datasets.make_spd_matrix`.

        numpy array
            A PSD matrix (or strictly PD if `strict_pd == True`) of shape
            (n_features, n_features), that will be used as such to
            initialize the metric, or set the prior.
629629
630630 random_state : int or `numpy.RandomState` or None, optional (default=None)
631631 A pseudo random number generator object or a seed for it if int. If
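The three string options above can be sketched as below. This is an illustrative assumption, not the library's code: the function name and the tiny diagonal ridge are inventions here, and the real 'random' option uses `sklearn.datasets.make_spd_matrix` rather than the hand-rolled construction shown:

```python
import numpy as np

def init_mahalanobis_sketch(X, init='identity', random_state=None):
    """Illustrative sketch of the 'identity'/'covariance'/'random' options."""
    n_features = X.shape[1]
    if init == 'identity':
        return np.eye(n_features)
    if init == 'covariance':
        # (pseudo-)inverse covariance; pinv tolerates rank deficiency
        return np.linalg.pinv(np.cov(X, rowvar=False))
    if init == 'random':
        # A @ A.T is PSD; the small ridge makes it strictly PD
        rng = np.random.RandomState(random_state)
        A = rng.randn(n_features, n_features)
        return A @ A.T + 1e-10 * np.eye(n_features)
    raise ValueError(f"unknown init: {init!r}")
```

All three branches return a symmetric (n_features, n_features) matrix, which is what a Mahalanobis metric or prior requires.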