Getting Ed's agreement on need for sparsity-inducing prior #88
hyunjimoon started this conversation in: people; relating
Replies: 1 comment
Relevant to #69
In the paper "Combining stock-and-flow, agent-based, and social network methods to model team performance," Ed suggested we could start from a large enough N-by-N matrix and aim for sparsity.
I think the horseshoe prior might be the first one to look into, although I recall Michael's writing presents evidence against that prior. Also, the lasso, i.e. L1 regularization, does not always filter out useful features.
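To make the comparison concrete, here is a minimal sketch (not from the thread) of how horseshoe draws differ from Laplace draws, the Bayesian analogue of the lasso's L1 penalty. The global scale `tau = 0.1` is an assumed illustrative value, not something Ed or Michael specified:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
tau = 0.1  # assumed global shrinkage scale, for illustration only

# Horseshoe prior: local scales lambda_j ~ Half-Cauchy(0, 1),
# coefficients beta_j ~ Normal(0, tau * lambda_j).
lam = np.abs(rng.standard_cauchy(n))
beta_hs = rng.normal(0.0, tau * lam)

# Laplace prior (double exponential), the prior whose MAP estimate
# corresponds to the lasso / L1 penalty.
beta_lap = rng.laplace(0.0, tau, n)

# The horseshoe's Cauchy tails leave large (useful) coefficients
# unshrunk far more often than the Laplace does.
tail_hs = np.mean(np.abs(beta_hs) > 1.0)
tail_lap = np.mean(np.abs(beta_lap) > 1.0)
print(f"P(|beta| > 1): horseshoe {tail_hs:.3f}, laplace {tail_lap:.5f}")
```

This is the usual argument for the horseshoe over the lasso: the Laplace prior shrinks everything by a roughly constant amount, so retaining strong signals and zeroing noise pull against each other, whereas the horseshoe's heavy tails let large coefficients escape shrinkage.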