Ju-Chi Yu edited this page Jun 20, 2019
Welcome to the restinginpca wiki!
Last update: June 20th, 2019
- Normalization - Goal: don't let any single subject stand out, but keep the differences between sessions
 - Should normalization be done within subject across sessions (keeping the magnitude of the correlations across sessions)? In that case, the double-centered(mat)/first-eigenvalue style of normalization won't work.
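The MFA-style subject weighting discussed here can be sketched as follows (a minimal sketch, assuming each subject's sessions are stacked into one data table; the names and shapes are illustrative, not the package's API):

```python
# MFA-style subject normalization: divide each subject's table by its
# first singular value so that no single subject dominates the analysis.
import numpy as np

def mfa_normalize(tables):
    """Weight each subject's table by 1 / (its first singular value)."""
    normalized = []
    for X in tables:
        sv1 = np.linalg.svd(X, compute_uv=False)[0]  # largest singular value
        normalized.append(X / sv1)
    return normalized

rng = np.random.default_rng(0)
tables = [rng.normal(size=(10, 6)) for _ in range(3)]  # 3 toy subjects
out = mfa_normalize(tables)
```

After this weighting, every subject's table has a first singular value of 1, so each subject contributes comparably to the first dimension of the compromise.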
Styles of normalization (10 sessions from subjects 1-10):
1. Center the columns
2. Center and normalize the columns
3. MFA-normalize subjects without normalizing the columns
4. MFA-normalize subjects after normalizing the columns
5. MFA-normalize networks without normalizing the columns
6. MFA-normalize networks after normalizing the columns
7. HMFA-normalize both subjects and networks without normalizing the columns
8. HMFA-normalize both subjects and networks after normalizing the columns
9. Double-center the correlation matrix + MFA-normalize subjects
10. Double-center the correlation matrix + MFA-normalize subjects after centering the columns
- Analysis with only the networks that are common across subjects
  - Project the networks that are missing in some of the subjects as supplementary elements
  - Double-center the correlation matrix and then apply styles 9 and 10
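The double-centering step used in styles 9 and 10 can be sketched like this (the matrix below is random, just to show the operation):

```python
# Double centering: subtract the row means and the column means, then
# add back the grand mean, so every row and column sums to zero.
import numpy as np

def double_center(R):
    """Double-center a square matrix R."""
    return (R
            - R.mean(axis=0, keepdims=True)
            - R.mean(axis=1, keepdims=True)
            + R.mean())

rng = np.random.default_rng(1)
R = np.corrcoef(rng.normal(size=(6, 20)))  # toy 6 x 6 correlation matrix
D = double_center(R)
```

This removes row and column offsets while preserving the pattern of pairwise differences, which is why it can then be combined with the MFA-style subject weighting.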
- Speed up the bootstrap function
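One common way to speed up a bootstrap is to draw all resample indices in a single call and use vectorized indexing instead of a Python loop; a sketch with hypothetical data shapes (not the package's actual bootstrap function):

```python
# Vectorized bootstrap of the mean: draw every resample's indices at
# once, then average along the resample axis with one NumPy call.
import numpy as np

def bootstrap_means(data, n_boot=1000, seed=0):
    """data: (n_obs, n_vars). Returns (n_boot, n_vars) bootstrap means."""
    rng = np.random.default_rng(seed)
    n_obs = data.shape[0]
    idx = rng.integers(0, n_obs, size=(n_boot, n_obs))  # all resamples at once
    # data[idx] has shape (n_boot, n_obs, n_vars); average over axis 1
    return data[idx].mean(axis=1)

rng = np.random.default_rng(2)
data = rng.normal(size=(50, 4))  # e.g., 50 sessions x 4 edges
boot = bootstrap_means(data, n_boot=200)
```

The memory cost grows with `n_boot * n_obs`, so for very large tables the resamples may need to be drawn in chunks.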
- We might want to use the coefficient of variation (CV), the standard deviation (std), and the mean to show the dispersion.
   _Note: the CV combines the information of the std and the mean._
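A small sketch of that dispersion summary (the helper name is illustrative): the CV is the std divided by the mean, so it folds both quantities into one number.

```python
# Dispersion summary: mean, sample standard deviation, and their ratio,
# the coefficient of variation (CV = std / mean).
import numpy as np

def dispersion(x):
    """Return (mean, std, CV) of a 1-D array."""
    m = float(np.mean(x))
    s = float(np.std(x, ddof=1))  # sample std
    return m, s, s / m

m, s, cv = dispersion(np.array([2.0, 4.0, 4.0, 6.0]))
```

Note that the CV is only meaningful when the mean is well away from zero, which may matter for double-centered data.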
- If we plot the mean factor scores of the different edges between regions according to their contributions, and compute the contribution of each mean by summing all the contributions within an edge, we could end up with two types of significant mean factor scores:
   1. Means whose factor scores vary so much that they contribute from both ends of a component, leaving the mean close to the origin.
   2. Means whose factor scores all contribute to the component in the same direction, leaving the mean far from the origin.
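The two cases can be illustrated with made-up factor scores for two hypothetical edges on a single component, taking each score's contribution as its squared value over the component's total sum of squares:

```python
# Toy factor scores on one component, 4 sessions per edge (made-up data).
import numpy as np

edge_a = np.array([-3.0, 3.0, -2.5, 2.5])  # spread across both ends (case 1)
edge_b = np.array([2.0, 2.5, 3.0, 2.8])    # all on the same side (case 2)

all_scores = np.concatenate([edge_a, edge_b])
total_ss = (all_scores ** 2).sum()

# Summed contribution of each edge = sum of squared scores / total SS
contrib = {name: (f ** 2).sum() / total_ss
           for name, f in [("edge A", edge_a), ("edge B", edge_b)]}
means = {"edge A": edge_a.mean(), "edge B": edge_b.mean()}
```

Both edges contribute heavily to the component, but edge A's mean sits at the origin while edge B's mean is far from it, which is exactly why a large summed contribution alone cannot distinguish the two cases.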