Make basic INLA interface and simple marginalisation routine #533
Conversation
else:
    dependent_rvs_dim_connections = [
        (None,),
    ]
I believe this is redundant in INLA and is only necessary for Discrete marginal stuff (and thus it is fine to define this as None). Please correct me if I'm wrong.
Yes, I think in the refactor WIP PR the base MarginalRV class doesn't have this property.
In lines 613-619, marginalization_op is defined consistently regardless of the MarginalRV it's built from. For compactness, I think it's fine to set it to None through this else block rather than having a separate marginalize_constructor line.
marginalization_op = marginalize_constructor(
inputs=inner_inputs,
outputs=inner_outputs,
dims_connections=dependent_rvs_dim_connections,
dims=dims,
**marginalize_kwargs,
)
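To make the point above concrete, here is a minimal self-contained sketch (all names are hypothetical stand-ins, not the actual pymc-extras API) of folding the `[(None,)]` default into a single constructor call, so the INLA path and the discrete-marginalisation path share one code line:

```python
def build_marginalization_op(marginalize_constructor, inner_inputs,
                             inner_outputs, dims, dims_connections=None,
                             **marginalize_kwargs):
    """Hypothetical sketch: one constructor call for both paths."""
    if dims_connections is None:
        # INLA case: dim connections are unused, so a placeholder entry
        # keeps the constructor call identical to the discrete case.
        dims_connections = [(None,)]
    return marginalize_constructor(
        inputs=inner_inputs,
        outputs=inner_outputs,
        dims_connections=dims_connections,
        dims=dims,
        **marginalize_kwargs,
    )
```

With a dummy constructor, the INLA path receives the placeholder connections automatically, which is the compactness argument made above.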
d = 3  # 10000 # TODO pull this from x.shape (or similar) somehow
rng = np.random.default_rng(12345)
x0 = pytensor.graph.replace.graph_replace(x0, {marginalized_vv: rng.random(d)})
Not within the scope of this PR, but is there a nice (i.e. pytensor-native) way to specify an RNG vector of shape (d,) without np.random? I believe that rng.random(d) crashes when we set d = marginalized_vv.shape[0], because marginalized_vv is a pt tensor, so rng.random is unhappy with d being symbolic rather than an int. For now I've worked around it by hardcoding d.
If it's a one-liner fix, I'll include that in this PR of course.
pt.random.uniform(size=x0.shape)
Throws the following:
ValueError: Random variables detected in the logp graph: {uniform_rv{"(),()->()"}.out, uniform_rv{"(),()->()"}.out}.
This can happen when mixing variables from different models, or when CustomDist logp or Interval transform functions reference nonlocal variables.
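For context on why the hardcoded d is needed in the first place, here is a small sketch of the failure mode: NumPy's generator requires a concrete integer size, which a symbolic pytensor TensorVariable (mimicked below by a stand-in class, since this sketch avoids a pytensor dependency) cannot provide:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Works: d is a concrete Python int.
draw = rng.random(3)


class SymbolicShape:
    """Stand-in for a symbolic value like marginalized_vv.shape[0]."""


# Fails: NumPy cannot interpret a symbolic object as a size.
failed = False
try:
    rng.random(SymbolicShape())
except TypeError:
    failed = True
```

This is why the suggested pt.random.uniform(size=x0.shape) route was attractive: it keeps the shape symbolic. The error above shows it instead trips pymc's check for RandomVariables inside the logp graph, so some eager or rewritten alternative would be needed.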
Addresses #532 and #344.
Relies on pymc-devs/pytensor#1582 and pymc-devs/pymc#7895.
Currently uses a closed-form solution for a specific case (nested normals) while awaiting pymc-devs/pytensor#1550.
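As an illustration of the closed-form nested-normal case mentioned above (parameter values are made up and this is not the PR's code, just a sketch of the identity it relies on): marginalising the latent mean out of a normal-normal model yields another normal whose variance is the sum of the two variances.

```python
import numpy as np

# Nested-normal model:
#   mu ~ Normal(mu0, s_mu),   y | mu ~ Normal(mu, s_y)
# Marginalising mu gives:
#   y ~ Normal(mu0, sqrt(s_mu**2 + s_y**2))
mu0, s_mu, s_y = 1.0, 2.0, 0.5
marginal_sd = np.sqrt(s_mu**2 + s_y**2)

# Monte Carlo sanity check of the closed form.
rng = np.random.default_rng(0)
mu = rng.normal(mu0, s_mu, size=200_000)
y = rng.normal(mu, s_y)
```

The sampled mean and standard deviation of y should match mu0 and marginal_sd to within Monte Carlo error.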