Make basic INLA interface and simple marginalisation routine #533


Open · wants to merge 21 commits into main from implement-pmx.fit-option-for-INLA-+-marginalisation-routine
Conversation

@Michal-Novomestsky (Contributor) commented Jul 2, 2025

Addresses #532 and #344.

Relies on pymc-devs/pytensor#1582 and pymc-devs/pymc#7895.

Currently uses a closed-form solution for a specific case (nested normals) while awaiting pymc-devs/pytensor#1550.
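For context, the nested-normal case admits a closed-form marginal: if x ~ N(μ, τ²) and y | x ~ N(x, σ²), then integrating x out gives y ~ N(μ, σ² + τ²). A minimal sketch (my own illustration, not this PR's code path) checking that in PyMC:

import numpy as np
import pymc as pm

mu, tau, sigma = 0.0, 1.0, 0.5
with pm.Model():
    x = pm.Normal("x", mu=mu, sigma=tau)   # latent
    y = pm.Normal("y", mu=x, sigma=sigma)  # nested on x
    draws = pm.draw(y, draws=10_000, random_seed=1)

# Empirical std of y should approach sqrt(sigma**2 + tau**2) ≈ 1.118
print(draws.std(), np.sqrt(sigma**2 + tau**2))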

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks.

Comment on lines +589 to +592
else:
    dependent_rvs_dim_connections = [
        (None,),
    ]
@Michal-Novomestsky (Contributor, Author) commented Aug 12, 2025

I believe this is redundant in INLA and is only necessary for Discrete marginal stuff (and thus it is fine to define this as None). Please correct me if I'm wrong.

Member replied:

Yes, I think in the refactor WIP PR the base MarginalRV class doesn't have this property.

@Michal-Novomestsky (Contributor, Author) replied:

In lines 613-619, marginalization_op is defined consistently regardless of the MarginalRV it's built from. For compactness, I think it's fine to set it to None through this else block rather than having a separate marginalize_constructor line?

marginalization_op = marginalize_constructor(
    inputs=inner_inputs,
    outputs=inner_outputs,
    dims_connections=dependent_rvs_dim_connections,
    dims=dims,
    **marginalize_kwargs,
)

Comment on lines +428 to +430
d = 3 # 10000 # TODO pull this from x.shape (or similar) somehow
rng = np.random.default_rng(12345)
x0 = pytensor.graph.replace.graph_replace(x0, {marginalized_vv: rng.random(d)})
@Michal-Novomestsky (Contributor, Author) commented:

Not within the scope of this PR, but is there a nice (i.e. pytensor-native) way to specify an RNG vector of shape (d,) without np.random? I believe rng.random(d) crashes when we set d = marginalized_vv.shape[0] because marginalized_vv is a pt tensor, so NumPy is unhappy with d being something symbolic rather than an int. For now I've worked around it by hardcoding d.
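For reference, a standalone sketch of the mismatch (the vector x below is a hypothetical stand-in for marginalized_vv): NumPy's generator needs a concrete integer size, while a pytensor RandomVariable accepts a symbolic one.

import numpy as np
import pytensor.tensor as pt

x = pt.vector("x")  # stand-in for marginalized_vv
d = x.shape[0]      # symbolic length, not a Python int

rng = np.random.default_rng(12345)
# rng.random(d)     # fails: NumPy expects a concrete int, not a symbolic tensor
u = pt.random.uniform(size=(d,))  # stays symbolic; shape resolved when the graph runs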

@Michal-Novomestsky (Contributor, Author) added:

If it's a one-liner fix, I'll include that in this PR of course.

Member replied:

pt.random.uniform(size=x0.shape)

@Michal-Novomestsky (Contributor, Author) replied:

Throws the following:

ValueError: Random variables detected in the logp graph: {uniform_rv{"(),()->()"}.out, uniform_rv{"(),()->()"}.out}.
This can happen when mixing variables from different models, or when CustomDist logp or Interval transform functions reference nonlocal variables.
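That error is consistent with pt.random.uniform leaving a RandomVariable node in the graph, which PyMC rejects when compiling a logp. One conceivable workaround, assuming the shape can be made concrete (the pm.draw usage here is my sketch, not this PR's approach), is to draw eagerly so only a plain ndarray reaches the logp graph:

import pymc as pm
import pytensor.tensor as pt

# Eager draw: the RV is evaluated up front, so the logp graph sees a constant.
x0_init = pm.draw(pt.random.uniform(size=(3,)), random_seed=1)  # ndarray of shape (3,)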


@Michal-Novomestsky force-pushed the implement-pmx.fit-option-for-INLA-+-marginalisation-routine branch from 398979a to c6010f3 on August 12, 2025 at 11:05
@Michal-Novomestsky force-pushed the implement-pmx.fit-option-for-INLA-+-marginalisation-routine branch from dad163c to a473e87 on August 12, 2025 at 11:12
@Michal-Novomestsky (Contributor, Author) commented:
@maresb, @zaxtax suggested that I reach out to you about the Docs not being built. Do you have any ideas why it's failing? Thanks!
