Summary
Demonstrate the usability/stability of NiMARE's CBMA estimators and provide soft recommendations
for users. This work builds on:
- http://dx.doi.org/10.1016/j.neuroimage.2008.12.039
- http://dx.doi.org/10.1016/j.neuroimage.2016.04.072
- https://doi.org/10.1101/048249
- and more to be added
Additional details
- systematic comparison of CBMA estimators with simulated and real data
Next steps
- decide on the null data generation process (the uniform-mask option is sketched below)
- spatial distribution options:
- choose voxels uniformly from a (gray matter) mask (essentially copy the empirical generation method)
- pull randomly from a probabilistic map (Eickhoff 2016)
- create an advanced model (possibly a Gaussian mixture model) to account for the spatial distribution of coordinates?
- choose number of participants to simulate
- choose number of foci to simulate
- choose number of study contrasts to include in simulated meta-analysis
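A minimal sketch of the uniform-mask option, assuming nilearn's MNI152 gray-matter mask is an acceptable sampling space (the mask loader, seed, and focus/study counts are placeholders, not decided values):

```python
import nibabel as nib
import numpy as np
from nilearn.datasets import load_mni152_gm_mask

rng = np.random.default_rng(seed=0)

# Binary gray-matter mask; any other mask image could be substituted here.
gm_mask = load_mni152_gm_mask()
in_mask_ijk = np.argwhere(gm_mask.get_fdata() > 0)


def sample_null_foci(n_foci):
    """Draw voxel indices uniformly from the mask and convert to world (mm) coordinates."""
    ijk = in_mask_ijk[rng.choice(len(in_mask_ijk), size=n_foci, replace=False)]
    return nib.affines.apply_affine(gm_mask.affine, ijk)


# Placeholder counts: 10 foci for each of 20 simulated study contrasts.
null_foci_per_study = [sample_null_foci(10) for _ in range(20)]
```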
- compare empirical and analytic estimation on null data to test false positive rates (sketched below)
- (use the analytic null for further analyses if it correlates well with the empirical null)
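A sketch of that comparison, assuming the sampled null foci have been assembled into a NiMARE `Dataset` called `null_dataset`; the `null_method` option names follow recent NiMARE releases and may differ in older versions:

```python
import numpy as np
from nimare.meta.cbma.ale import ALE

# Analytic (approximate) vs. empirical (Monte Carlo) null distributions.
ale_analytic = ALE(null_method="approximate")
ale_empirical = ALE(null_method="montecarlo", n_iters=10000)

result_analytic = ale_analytic.fit(null_dataset)
result_empirical = ale_empirical.fit(null_dataset)

# Correlate the voxel-wise z maps; a high correlation would justify using
# the cheaper analytic null for the remaining analyses.
z_analytic = result_analytic.get_map("z", return_type="array")
z_empirical = result_empirical.get_map("z", return_type="array")
print(np.corrcoef(z_analytic, z_empirical)[0, 1])
```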
- how should we determine kernel size/how many kernel sizes should we test over?
- Select studies where we have group level statistical maps:
- Datasets available:
- naturalistic imaging
- pain dataset
- others?
- another consideration: to ensure IBMA reaches a consistent result for comparison with CBMA, treat individual statistical maps as group maps.
- select several IBMA methods to compare to CBMA results (the candidates are sketched below)
- Which methods should we use?
- Fishers: does not take into account variances
- Stouffers: does not take into account variances
- WeightedLeastSquares
- DerSimonianLaird
- Hedges: gave poor results using the naturalistic imaging datasets
- SampleSizeBasedLikelihood
- VarianceBasedLikelihood
- PermutedOLS
- What thresholds should be applied to the results?
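A sketch of running the candidate estimators side by side, assuming the group-level maps are already in a NiMARE `Dataset` (called `image_dataset` here); the class names match `nimare.meta.ibma`, and constructor options are left at their defaults:

```python
from nimare.meta import ibma

# Candidate image-based estimators to compare against CBMA.
ibma_estimators = {
    "Fishers": ibma.Fishers(),
    "Stouffers": ibma.Stouffers(),
    "WeightedLeastSquares": ibma.WeightedLeastSquares(),
    "DerSimonianLaird": ibma.DerSimonianLaird(),
    "Hedges": ibma.Hedges(),
    "SampleSizeBasedLikelihood": ibma.SampleSizeBasedLikelihood(),
    "VarianceBasedLikelihood": ibma.VarianceBasedLikelihood(),
    "PermutedOLS": ibma.PermutedOLS(),
}

# Fit each estimator and keep the z maps for the later comparison step.
ibma_z_maps = {
    name: est.fit(image_dataset).get_map("z", return_type="array")
    for name, est in ibma_estimators.items()
}
```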
- convert the images to coordinate datasets (see the conversion sketch below):
- what parameters should be chosen/varied?
- min distance between clusters? (default 8 mm)
- stat threshold (z=3.1?)
- cluster_threshold
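A sketch of the conversion, assuming NiMARE's `ImagesToCoordinates` transformer (available in recent releases; parameter names may differ by version), with the parameters above as the ones to vary:

```python
from nimare.transforms import ImagesToCoordinates

# Values here are starting points to vary, not recommendations.
img2coord = ImagesToCoordinates(
    z_threshold=3.1,         # statistic threshold for peak detection
    min_distance=8.0,        # minimum distance (mm) between reported peaks
    cluster_threshold=None,  # optional cluster-extent threshold (voxels)
)
coordinate_dataset = img2coord.transform(image_dataset)
```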
- run ALE, KDA, and MKDA on the generated coordinate datasets (see the sketch below)
- choices on kernel size?
- ALE kernel can be decided based on sample size...
- should the output be thresholded at multiple levels (e.g., 0.01, 0.001)?
- should there be FDR/FWER corrections applied to the output?
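A sketch of the CBMA runs on the derived coordinate dataset; the kernel radii are placeholders, and the `kernel__r` and corrector options follow recent NiMARE releases:

```python
from nimare.correct import FDRCorrector, FWECorrector
from nimare.meta.cbma.ale import ALE
from nimare.meta.cbma.mkda import KDA, MKDADensity

cbma_estimators = {
    "ALE": ALE(),                       # kernel FWHM derived from sample size by default
    "MKDA": MKDADensity(kernel__r=10),  # 10 mm radius is a placeholder to vary
    "KDA": KDA(kernel__r=10),
}

fwe = FWECorrector(method="montecarlo", n_iters=10000)
fdr = FDRCorrector(method="indep", alpha=0.05)

cbma_results = {}
for name, estimator in cbma_estimators.items():
    result = estimator.fit(coordinate_dataset)
    cbma_results[name] = {
        "uncorrected": result,
        "fwe": fwe.transform(result),
        "fdr": fdr.transform(result),
    }
```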
- compare CBMA maps to IBMA maps (candidate metrics are sketched below)
- what metric(s) should we choose?
- Dice similarity
- Correlation
- "True" Positive Rate
- (which voxels were statistically significant with IBMA and CBMA)
- False Positive Rate
- (which voxels were statistically non-significant with IBMA, but significant with CBMA)
- Should we "hold out" some of the data to see if the results (which estimator with what parameters is most like IBMA) generalize to new data?
- the IBMA data will contain positive and negative values, whereas the CBMA analyses will only contain positive values (which may represent either positive or negative statistical peaks); we probably want to restrict the comparison to positive values, or compare similarity to positive and negative values separately.
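A sketch of the candidate metrics, assuming `ibma_z`/`cbma_z` are voxel-wise z arrays over the same mask and `ibma_sig`/`cbma_sig` are boolean maps of corrected significance (all variable names are placeholders):

```python
import numpy as np


def dice(a, b):
    """Dice similarity between two boolean maps."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def tpr_fpr(ibma_sig, cbma_sig):
    """Treat IBMA significance as the reference when scoring CBMA."""
    tpr = np.logical_and(ibma_sig, cbma_sig).sum() / max(ibma_sig.sum(), 1)
    fpr = np.logical_and(~ibma_sig, cbma_sig).sum() / max((~ibma_sig).sum(), 1)
    return tpr, fpr


# Restrict to voxels with positive IBMA values, since CBMA maps are one-sided.
pos = ibma_z > 0
correlation = np.corrcoef(ibma_z[pos], cbma_z[pos])[0, 1]
dice_score = dice(ibma_sig & pos, cbma_sig & pos)
tpr, fpr = tpr_fpr(ibma_sig & pos, cbma_sig & pos)
```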