@@ -33,9 +33,9 @@ function MMI.fit(model::PCA, verbosity::Int, X)
         tvar=MS.var(fitresult),
         mean=copy(MS.mean(fitresult)),
         principalvars=copy(MS.principalvars(fitresult)),
-        # no need to copy here as a new copy is created
+        # no need to copy here as a new copy is created
         # for each function call
-        loadings = MS.loadings(fitresult)
+        loadings = MS.loadings(fitresult)
     )
     return fitresult, cache, report
 end
@@ -281,13 +281,32 @@ end
 MMI.fitted_params(::ICA, fr) = (projection=copy(fr.W), mean = copy(MS.mean(fr)))
 
 
+# # PACKAGE METADATA
+
+metadata_pkg.(
+    [
+        PCA,
+        KernelPCA,
+        ICA,
+        PPCA,
+        FactorAnalysis,
+    ],
+    name = "MultivariateStats",
+    uuid = "6f286f6a-111f-5878-ab1e-185364afe411",
+    url = "https://github.com/JuliaStats/MultivariateStats.jl",
+    license = "MIT",
+    julia = true,
+    is_wrapper = false
+)
+
+
 # # DOCUMENT STRINGS
 
 """
 
 $(MMI.doc_header(PCA))
 
-Principal component analysis learns a linear projection onto a lower dimensional space
+Principal component analysis learns a linear projection onto a lower dimensional space
 while preserving most of the initial variance seen in the training data.
 
 # Training data
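
The `metadata_pkg.(...)` call added above registers identical package-level metadata for all five model types in a single broadcast. As a rough illustration only (not part of this changeset), the corresponding MLJModelInterface traits could then be queried as below, assuming the standard trait names `package_name`, `package_license`, `is_pure_julia` and `is_wrapper`, and that the interface module defining `MMI` and the model types is loaded:

```julia
# Illustrative sketch: query the traits that metadata_pkg is expected to set.
for T in [PCA, KernelPCA, ICA, PPCA, FactorAnalysis]
    @assert MMI.package_name(T) == "MultivariateStats"
    @assert MMI.package_license(T) == "MIT"
    @assert MMI.is_pure_julia(T)    # set via `julia = true`
    @assert !MMI.is_wrapper(T)      # set via `is_wrapper = false`
end
```
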
@@ -351,7 +370,7 @@ The fields of `fitted_params(mach)` are:
 
 The fields of `report(mach)` are:
 
-- `indim`: Dimension (number of columns) of the training data and new data to be
+- `indim`: Dimension (number of columns) of the training data and new data to be
   transformed.
 
 - `outdim = min(n, indim, maxoutdim)` is the output dimension; here `n` is the number of
@@ -365,11 +384,11 @@ The fields of `report(mach)` are:
 
 - `mean`: The mean of the untransformed training data, of length `indim`.
 
-- `principalvars`: The variance of the principal components. An AbstractVector of
+- `principalvars`: The variance of the principal components. An AbstractVector of
   length `outdim`.
 
-- `loadings`: The model's loadings, weights for each variable used when calculating
-  principal components. A matrix of size (`indim`, `outdim`) where `indim` and
+- `loadings`: The model's loadings, weights for each variable used when calculating
+  principal components. A matrix of size (`indim`, `outdim`) where `indim` and
   `outdim` are as defined above.
 
 # Examples
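
The `report(mach)` fields documented above are easiest to read off an actual fit. A minimal sketch, not taken from this changeset, assuming MLJ is installed, the model is registered in the MLJ registry under `pkg=MultivariateStats`, and the bundled `@load_iris` dataset is used:

```julia
using MLJ

PCA = @load PCA pkg=MultivariateStats verbosity=0
X, _ = @load_iris                # a 4-column table of Continuous features

mach = fit!(machine(PCA(maxoutdim=2), X), verbosity=0)
r = report(mach)

r.indim           # 4: columns of the training data
r.outdim          # 2 == min(n, indim, maxoutdim)
r.principalvars   # AbstractVector of length outdim
r.loadings        # indim × outdim matrix of per-variable weights
```
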
@@ -443,8 +462,8 @@ Train the machine using `fit!(mach, rows=...)`.
   returned by `transform`, reconstruct a table, having the same number of columns as the
   original training data `X`, that transforms to `Xsmall`. Mathematically,
   `inverse_transform` is a right-inverse for the PCA projection map, whose image is
-  orthogonal to the kernel of that map. In particular, if
-  `Xsmall = transform(mach, Xnew)`, then `inverse_transform(Xsmall)` is only an
+  orthogonal to the kernel of that map. In particular, if
+  `Xsmall = transform(mach, Xnew)`, then `inverse_transform(Xsmall)` is only an
   approximation to `Xnew`.
 
 # Fitted parameters
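
To make the right-inverse remark above concrete, here is a continuation of the earlier illustrative PCA sketch (same assumptions: `mach` fitted on the iris features with `maxoutdim=2`):

```julia
Xsmall  = transform(mach, X)               # project onto 2 principal components
Xapprox = inverse_transform(mach, Xsmall)  # back to a 4-column table

# Xapprox only approximates X: variance orthogonal to the retained components
# is discarded. Projecting the reconstruction again recovers Xsmall, which is
# the right-inverse property described above.
transform(mach, Xapprox)                   # ≈ Xsmall
```
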
@@ -512,10 +531,10 @@ Train the machine using `fit!(mach, rows=...)`.
 
 # Hyper-parameters
 
-- `outdim::Int=0`: The number of independent components to recover, set automatically
+- `outdim::Int=0`: The number of independent components to recover, set automatically
   if `0`.
 
-- `alg::Symbol=:fastica`: The algorithm to use (only `:fastica` is supported at the
+- `alg::Symbol=:fastica`: The algorithm to use (only `:fastica` is supported at the
   moment).
 
 - `fun::Symbol=:tanh`: The approximate neg-entropy function, one of `:tanh`, `:gaus`.
@@ -527,17 +546,17 @@ Train the machine using `fit!(mach, rows=...)`.
 - `tol::Real=1e-6`: The convergence tolerance for change in the unmixing matrix W.
 
 - `mean::Union{Nothing, Real, Vector{Float64}}=nothing`: mean to use; if nothing (default),
-  centering is computed and applied; if zero, no centering; otherwise a vector of means
+  centering is computed and applied; if zero, no centering; otherwise a vector of means
   can be passed.
 
-- `winit::Union{Nothing,Matrix{<:Real}}=nothing`: Initial guess for the unmixing matrix
-  `W`: either an empty matrix (for random initialization of `W`), a matrix of size
-  `m × k` (if `do_whiten` is true), or a matrix of size `m × k`. Here `m` is the number
+- `winit::Union{Nothing,Matrix{<:Real}}=nothing`: Initial guess for the unmixing matrix
+  `W`: either an empty matrix (for random initialization of `W`), a matrix of size
+  `m × k` (if `do_whiten` is true), or a matrix of size `m × k`. Here `m` is the number
   of components (columns) of the input.
 
 # Operations
 
-- `transform(mach, Xnew)`: Return the component-separated version of input `Xnew`, which
+- `transform(mach, Xnew)`: Return the component-separated version of input `Xnew`, which
   should have the same scitype as `X` above.
 
 # Fitted parameters
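
A toy blind-source-separation sketch tying together the ICA hyper-parameters and the `transform` operation documented above; the data and variable names are illustrative and not taken from this changeset, and the model is assumed to be registered under `pkg=MultivariateStats`:

```julia
using MLJ

ICA = @load ICA pkg=MultivariateStats verbosity=0

# Two independent sources mixed into four observed signals (toy data):
t = range(0, 8, length=500)
S = [sin.(2t) sign.(cos.(3t))]    # 500 × 2 matrix of sources
X = MLJ.table(S * rand(2, 4))     # 500 × 4 table of mixed signals

model = ICA(outdim=2, fun=:tanh, tol=1e-6)   # hyper-parameters as documented above
mach  = fit!(machine(model, X), verbosity=0)

W = transform(mach, X)   # component-separated version of X, one column per recovered source
```
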
@@ -552,7 +571,7 @@ The fields of `fitted_params(mach)` are:
 
 The fields of `report(mach)` are:
 
-- `indim`: Dimension (number of columns) of the training data and new data to be
+- `indim`: Dimension (number of columns) of the training data and new data to be
   transformed.
 
 - `outdim`: Dimension of transformed data.
@@ -606,8 +625,8 @@
 $(MMI.doc_header(FactorAnalysis))
 
 Factor analysis is a linear-Gaussian latent variable model that is closely related to
-probabilistic PCA. In contrast to the probabilistic PCA model, the covariance of
-the conditional distribution of the observed variable given the latent variable is diagonal
+probabilistic PCA. In contrast to the probabilistic PCA model, the covariance of
+the conditional distribution of the observed variable given the latent variable is diagonal
 rather than isotropic.
 
 # Training data
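
For reference, the distinction drawn above can be stated in one line (standard formulation, not text from this changeset); here W, μ, Ψ and σ² denote the usual loading matrix, mean, noise covariance and noise variance:

```latex
x \mid z \;\sim\; \mathcal{N}(W z + \mu,\; \Psi),
\qquad
\Psi_{\mathrm{FA}} = \operatorname{diag}(\psi_1, \dots, \psi_d) \ \text{(diagonal)},
\qquad
\Psi_{\mathrm{PPCA}} = \sigma^2 I \ \text{(isotropic)}.
```
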
@@ -666,7 +685,7 @@ The fields of `fitted_params(mach)` are:
 
 The fields of `report(mach)` are:
 
-- `indim`: Dimension (number of columns) of the training data and new data to be
+- `indim`: Dimension (number of columns) of the training data and new data to be
   transformed.
 
 - `outdim`: Dimension of transformed data (number of factors).
@@ -677,7 +696,7 @@ The fields of `report(mach)` are:
 
 - `mean`: The mean of the untransformed training data, of length `indim`.
 
-- `loadings`: The factor loadings. A matrix of size (`indim`, `outdim`) where
+- `loadings`: The factor loadings. A matrix of size (`indim`, `outdim`) where
   `indim` and `outdim` are as defined above.
 
 # Examples
@@ -752,7 +771,7 @@ Train the machine using `fit!(mach, rows=...)`.
   of columns as the original training data `X`, that transforms to `Xsmall`.
   Mathematically, `inverse_transform` is a right-inverse for the PCA projection
   map, whose image is orthogonal to the kernel of that map. In particular, if
-  `Xsmall = transform(mach, Xnew)`, then `inverse_transform(Xsmall)` is only an
+  `Xsmall = transform(mach, Xnew)`, then `inverse_transform(Xsmall)` is only an
   approximation to `Xnew`.
 
 # Fitted parameters
@@ -767,14 +786,14 @@ The fields of `fitted_params(mach)` are:
 
 The fields of `report(mach)` are:
 
-- `indim`: Dimension (number of columns) of the training data and new data to be
+- `indim`: Dimension (number of columns) of the training data and new data to be
   transformed.
 
 - `outdim`: Dimension of transformed data.
 
 - `tvar`: The variance of the components.
 
-- `loadings`: The model's loadings matrix. A matrix of size (`indim`, `outdim`) where
+- `loadings`: The model's loadings matrix. A matrix of size (`indim`, `outdim`) where
   `indim` and `outdim` are as defined above.
 
 # Examples