Add MarkovChain meta distribution for time series modeling.
My original motivation for this was to unify the two stochastic volatility models in the inference gym (https://github.com/tensorflow/probability/blob/master/spinoffs/inference_gym/inference_gym/targets/stochastic_volatility.py and
https://github.com/tensorflow/probability/blob/master/spinoffs/inference_gym/inference_gym/targets/vectorized_stochastic_volatility.py). Currently, we have a non-vectorized model that exposes the graphical-model structure needed for ASVI but is very slow, and a vectorized version that is fast but exposes no structure. Representing the volatility series as a MarkovChain would expose the graphical structure *and* support vectorized log-prob evaluation.
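For concreteness, here is a minimal sketch of how the latent log-volatility series might look as a MarkovChain, assuming a constructor that takes an initial-state prior, a `transition_fn(step, state) -> Distribution`, and a number of steps; the parameter names (`persistence`, `mean_log_vol`, `noise_scale`) are illustrative, not part of the existing gym models' API:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions


def log_volatility_prior(persistence, mean_log_vol, noise_scale, num_timesteps):
  """Sketch: AR(1) log-volatility series expressed as a MarkovChain."""
  return tfd.MarkovChain(
      # Stationary distribution of the AR(1) process as the initial prior.
      initial_state_prior=tfd.Normal(
          loc=mean_log_vol,
          scale=noise_scale / tf.sqrt(1. - persistence**2)),
      # Each step is Gaussian around a mean that reverts toward mean_log_vol.
      transition_fn=lambda _, x: tfd.Normal(
          loc=mean_log_vol + persistence * (x - mean_log_vol),
          scale=noise_scale),
      num_steps=num_timesteps)
```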
More generally, we could think of MarkovChain as a new flavor of joint distribution, since (like TransformedDistribution) it can represent multipart events. It proposes an answer to the question, "How can a JD efficiently define random variables in a loop?" Currently the only way to do this is to write an explicit Python loop in a JDCoroutine model, as the naive stochastic volatility model does, which is obviously suboptimal. MarkovChains might be useful both as joint distributions over time series in their own right and as time-series components of larger joint distribution models.
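A rough sketch of the intended usage (the exact constructor arguments are tentative): a MarkovChain stands alone as a distribution over a length-`num_steps` series, and can also appear as one component of a JointDistribution.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# A Gaussian random walk as a standalone distribution.
random_walk = tfd.MarkovChain(
    initial_state_prior=tfd.Normal(loc=0., scale=1.),
    transition_fn=lambda _, x: tfd.Normal(loc=x, scale=1.),
    num_steps=100)
x = random_walk.sample()        # shape [100]
lp = random_walk.log_prob(x)    # evaluated in one vectorized pass, no Python loop

# The same chain as a time-series component of a larger joint model.
model = tfd.JointDistributionNamed({
    'noise_scale': tfd.HalfNormal(1.),
    'walk': lambda noise_scale: tfd.MarkovChain(
        initial_state_prior=tfd.Normal(loc=0., scale=noise_scale),
        transition_fn=lambda _, x: tfd.Normal(loc=x, scale=noise_scale),
        num_steps=100)})
```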
Another potential use is as a bridge between the particle filter API and the Distributions API. Our particle filtering code expects a time series model to be specified in terms of an initial prior, a transition function, and an observation function. It's natural to want to represent the distribution on the latent sequences implied by the prior and transition model, e.g., for prior predictive checks; MarkovChain allows us to do this.
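A hedged sketch of that bridge: the same initial-state prior and transition function a user would pass to `tfp.experimental.mcmc.particle_filter` could be reused directly to build the prior over latent sequences. The helper name below is illustrative.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions


def latent_sequence_prior(initial_state_prior, transition_fn, num_timesteps):
  """Prior over latent state sequences implied by a particle-filter model.

  `initial_state_prior` and `transition_fn` follow the particle filter
  convention: transition_fn(step, state) returns a Distribution over the
  next state.
  """
  return tfd.MarkovChain(
      initial_state_prior=initial_state_prior,
      transition_fn=transition_fn,
      num_steps=num_timesteps)

# e.g., prior predictive draws of the latent series:
# latents = latent_sequence_prior(prior, transition_fn, num_timesteps=50).sample(100)
```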
I'm not totally sure if MarkovChain is the right name. Throwing out some other possibilities: Markov, AutoregressiveMarkov, TimeSeries, Loop, Scan, <your suggestion here>?
PiperOrigin-RevId: 377150791