🌌 Multiverse analysis to prevent selective reporting + internal inconsistency #106
hyunjimoon started this conversation in people; relating
Based solely on my experience fitting hierarchical Bayes models with Vensim for four months, internal inconsistency and selection bias worry me. Orchestrating different versions of the data (updated daily), the models (up to V78), and the optimization algorithms (Vensim, March and September Vengine) has been challenging in both the COVID and job satisfaction projects.
Compared to the COVID model, the job model is even more challenging because its parameters are less tangible. For example, noticing "too many people are dying" in the output, or the prior knowledge that "most cost comes from death", prompted me to find and revise errors in the COVID model, but I am not sure whether comparable cues exist for the job model.
The other worry is selection bias: reporting only the subgroup indices or parameters that calibrated successfully. Randomizing which plot is presented is one possible fix, but that randomization is itself seed-controlled, as sketched below.
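A minimal sketch of the randomized-presentation idea (the subgroup labels and function are hypothetical, not from the actual project): picking which calibration plot to show at random still leaves the seed as an analyst-controlled degree of freedom.

```python
# Hypothetical sketch: randomly choose which calibrated subgroup to plot.
# The choice still depends on the seed, so it does not remove the
# researcher degree of freedom, it only relocates it.
import random

subgroups = ["age_0_19", "age_20_39", "age_40_59", "age_60_plus"]  # hypothetical labels

def pick_subgroup_to_plot(seed: int) -> str:
    """Return one subgroup chosen uniformly at random under the given seed."""
    rng = random.Random(seed)
    return rng.choice(subgroups)

# Different seeds report different subgroups.
print(pick_subgroup_to_plot(seed=1))
print(pick_subgroup_to_plot(seed=2))
```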
We could adopt the multiverse analysis detailed in Steegen16_TransparencyThruMultiverseAnal.pdf, whose core idea is active data construction: build a data multiverse and improve consistency by endogenizing the variability (e.g. Stanify's design choice of treating data as a data function). Table 1 of that paper illustrates the data-construction choices. (This reminds me of the 2^4-type covering of mankind in #97 (comment).)
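To make the idea concrete, here is a minimal sketch, assuming hypothetical data-construction choices and a placeholder fit_model() that would stand in for the Vensim/Stan calibration pipeline. It enumerates every combination of choices (the data multiverse) and reports all fits instead of a single selected one, which is what counters selective reporting.

```python
# Minimal multiverse-analysis sketch (choice names and fit_model are hypothetical).
from itertools import product

# Hypothetical choice sets, analogous in spirit to Table 1 of Steegen et al. (2016).
choices = {
    "outlier_rule": ["none", "drop_3sd"],
    "smoothing": ["raw", "7day_mean"],
    "start_date": ["2020-03-01", "2020-04-01"],
    "engine": ["march_vengine", "september_vengine"],
}

def fit_model(universe: dict) -> float:
    """Placeholder: construct the dataset under these choices and calibrate.
    In practice this would call the actual estimation pipeline and return a fit metric."""
    return 0.0  # stand-in value

# Enumerate the full 2^4 = 16 universes and report every result, not just the best one.
results = []
for combo in product(*choices.values()):
    universe = dict(zip(choices.keys(), combo))
    results.append((universe, fit_model(universe)))

for universe, metric in results:
    print(universe, metric)
```

Reporting the whole grid of results is the point: any sensitivity of the conclusions to a data-construction choice becomes visible rather than being hidden by picking one universe.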