Both #26 and #22 raise the question of how to handle automatic RVs whose value variable results from an intermediate broadcast of the base RV:
import aesara.tensor as at
from aeppl import joint_logprob

loc = at.vector("loc")
y_base_rv = at.random.normal(0, 1, name="y_base_rv", size=2)
y_rv = loc + y_base_rv

y_val = y_rv.clone()
logp = joint_logprob(y_rv, {y_rv: y_val})
logp.eval({y_val: [0, 0, 0, 0], loc: [0, 0, 0, 0]})
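To see concretely what goes wrong, here is a hedged NumPy/SciPy sketch (not the aeppl machinery itself) of the broadcast issue: y_base_rv contributes only two normal density terms, but a logprob derived by treating y_rv as Normal(loc, 1) produces one term per element of the value variable, so a size-4 loc silently doubles the logp.

```python
import numpy as np
from scipy import stats

# y_base_rv ~ Normal(0, 1, size=2) and y_rv = loc + y_base_rv.
# A size-4 loc would broadcast y_base_rv up to size 4, and a derived
# logprob for y_rv then sums 4 density terms instead of the 2 that
# actually belong to the base RV.
y_val = np.zeros(4)
loc = np.zeros(4)

# What a naive derived logprob computes: one term per value element.
wrong_logp = stats.norm.logpdf(y_val, loc=loc, scale=1).sum()

# What the base RV actually contributes: one term per base element.
right_logp = stats.norm.logpdf(np.zeros(2), loc=0, scale=1).sum()
```

With all-zero inputs the naive logp is exactly twice the correct one, because each of the two base-RV terms is counted twice by the broadcast.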
The computed logp is only correct when loc has size 2, matching y_base_rv.
This becomes even more complicated when the inputs of y_base_rv are themselves symbolic variables, meaning y_base_rv may not have a fixed size.
I see two ways we could deal with this:
- Add a shape assert that y_base_rv.shape == y_val.shape to the returned logprob
- Ignore the situation and put the onus on the users to specify correctly sized variables for derived logprobs
- Something else?
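For option 1, the check could be sketched as follows. This is a hypothetical plain-Python helper (normal_sum_logp is not part of any library), standing in for what would be an Assert node inserted into the Aesara graph:

```python
import numpy as np
from scipy import stats

def normal_sum_logp(y_val, loc, base_size=2):
    """Hypothetical helper: logp of y = loc + Normal(0, 1, size=base_size).

    Mirrors the shape-assert option: refuse to compute a logp whose
    value variable does not match the base RV's shape, instead of
    silently broadcasting.
    """
    y_val = np.asarray(y_val)
    if y_val.shape != (base_size,):
        raise AssertionError(
            f"value shape {y_val.shape} != base RV shape {(base_size,)}"
        )
    return stats.norm.logpdf(y_val, loc=loc, scale=1).sum()
```

In the graph version, the assert would only fire at runtime, once concrete shapes are known, which also covers the symbolic-size case above.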