Feature request: Other relevant tools for model selection / comparison #15

@rob-nicholls-stats

Description

In addition to information criteria, there are also:

LOO-CV - leave-one-out cross-validation. This gives an estimate of predictive performance: it's the gold standard, but computationally expensive because the model has to be refit once per held-out observation.
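For concreteness, here's a minimal sketch of exact LOO-CV on a toy model (a normal likelihood with known sigma and a flat prior on the mean, chosen so the leave-one-out posterior predictive has a closed form; the data and function name are purely illustrative):

```python
import math

def loo_elpd_normal(y, sigma=1.0):
    """Exact LOO-CV elpd: 'refit' on y[-i] (closed form for this toy
    model), then score the held-out y[i] under the leave-one-out
    posterior predictive, Normal(mean(y[-i]), sigma^2 * (1 + 1/(n-1)))."""
    n = len(y)
    elpd = 0.0
    for i in range(n):
        rest = y[:i] + y[i + 1:]                      # drop observation i
        mu = sum(rest) / (n - 1)                      # leave-one-out mean
        var = sigma**2 * (1.0 + 1.0 / (n - 1))        # predictive variance
        elpd += (-0.5 * math.log(2 * math.pi * var)
                 - (y[i] - mu)**2 / (2 * var))        # log predictive density
    return elpd

print(loo_elpd_normal([0.1, -0.4, 0.3, 0.9, -0.2]))
```

For a model without a closed-form posterior, the loop body becomes a full MCMC refit per observation, which is exactly where the cost comes from.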

PSIS-LOO-CV - Pareto-smoothed importance sampling LOO-CV. A pragmatic approximation to LOO-CV that reuses draws from a single posterior fit, making it far less computationally expensive.
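Sticking with a toy normal model (known sigma, flat prior on the mean), the raw importance-sampling estimator that PSIS stabilises looks roughly like this; the Pareto-smoothing step, which tames the heaviest importance weights, is deliberately omitted, and all names and numbers are illustrative:

```python
import math
import random

def is_loo_elpd(y, sigma=1.0, draws=4000, seed=0):
    """Raw importance-sampling LOO from full-posterior draws.
    PSIS additionally Pareto-smooths the largest weights; that
    stabilising step is omitted here for brevity."""
    rng = random.Random(seed)
    n = len(y)
    ybar = sum(y) / n
    # Posterior for mu under a flat prior: Normal(ybar, sigma^2 / n)
    mus = [rng.gauss(ybar, sigma / math.sqrt(n)) for _ in range(draws)]
    log_norm = 0.5 * math.log(2 * math.pi * sigma**2)
    elpd = 0.0
    for yi in y:
        # p(y_i | y_-i) ~= 1 / mean_s[ 1 / p(y_i | mu_s) ]
        inv_dens = [math.exp(log_norm + 0.5 * (yi - mu)**2 / sigma**2)
                    for mu in mus]
        elpd += -math.log(sum(inv_dens) / draws)
    return elpd

print(is_loo_elpd([0.1, -0.4, 0.3, 0.9, -0.2]))
```

The key point is that only one posterior fit is needed; in practice you'd reach for an existing implementation (e.g. ArviZ's `loo` in Python or the R `loo` package) rather than the unsmoothed weights above, since raw importance weights can have infinite variance.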

Bayes Factor. Used to directly compare one model against another, based on the ratio of their marginal likelihoods. Complexity is penalised naturally, but it can be sensitive to modelling assumptions (priors). It also doesn't require the models to be nested.
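As a toy illustration of both the mechanics and the prior sensitivity, here's a Bayes factor comparing two Beta priors for a Bernoulli success probability, using the closed-form Beta-Binomial marginal likelihood (the priors and data are arbitrary choices for illustration):

```python
import math

def log_marginal_beta_binomial(k, n, a, b):
    """Log marginal likelihood of k successes in n Bernoulli trials
    under a Beta(a, b) prior: log C(n, k) + log B(k+a, n-k+b) - log B(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# BF for a flat Beta(1, 1) prior vs a Beta(10, 10) concentrated near 0.5,
# given 9 successes in 10 trials: the data favour the flat prior.
k, n = 9, 10
bf = math.exp(log_marginal_beta_binomial(k, n, 1, 1)
              - log_marginal_beta_binomial(k, n, 10, 10))
print(bf)
```

Note that the Bayes factor here depends entirely on the two priors, with no likelihood difference between the "models" at all, which is the prior-sensitivity caveat in miniature.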

Bayesian Model Averaging. Conceptually different from the others: rather than comparing models, it aims to combine them. The idea is to account for model uncertainty by averaging predictions over multiple models. This reduces overconfidence in a single model, but it can be computationally intensive and is, of course, dependent on which models you include.
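A sketch of BMA for a Beta-Binomial toy case: posterior model probabilities (computed from marginal likelihoods, assuming equal prior model probabilities) weight each model's posterior predictive. All names, priors, and data here are illustrative:

```python
import math

def log_marginal(k, n, a, b):
    """Log marginal likelihood of k/n Bernoulli successes under Beta(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def bma_predict(k, n, priors):
    """Average P(next trial succeeds) over models, weighted by posterior
    model probability (equal prior model probabilities assumed)."""
    logmls = [log_marginal(k, n, a, b) for a, b in priors]
    m = max(logmls)
    ws = [math.exp(l - m) for l in logmls]          # unnormalised weights
    total = sum(ws)
    ws = [w / total for w in ws]                    # posterior model probs
    preds = [(k + a) / (n + a + b) for a, b in priors]  # Beta posterior mean
    return sum(w * p for w, p in zip(ws, preds))

# Average over a flat Beta(1, 1) model and a Beta(10, 10) model
print(bma_predict(9, 10, [(1, 1), (10, 10)]))
```

The averaged prediction lands between the two single-model predictions, pulled towards whichever model the data support; the dependence on the chosen model set is plain from the `priors` argument.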

Summary:

If interested in predictive performance/generalisability:

  • WAIC - quick to compute.
  • LOO-CV - gold standard.
  • PSIS-LOO-CV - pragmatic middle ground.

Want to directly compare two models:

  • Bayes Factor. This is essentially the standard approach.

Want to combine info from multiple models:

  • Bayesian Model Averaging.
