where we use two utility functions `flatten` and `unflatten` to convert between the single vector of real numbers and the named tuple of parameters.
```julia
"""
    flatten(trace::NamedTuple)

Flatten all the values in the trace into a single vector. Variable names information is discarded.
"""
function flatten(trace::NamedTuple)
    return reduce(vcat, vec.(values(trace)))
end

"""
    unflatten(vec::AbstractVector, variable_names::Tuple, variable_sizes::Tuple)

Reverse operation of flatten. Reshape the vector into the original arrays using size information.
"""
function unflatten(vec::AbstractVector, variable_names::Tuple, variable_sizes::Tuple)
    # (reconstructed body) split `vec` into per-variable segments and reshape
    # each segment back to its original array size
    result = Dict{Symbol,Any}()
    start_idx = 1
    for (name, sz) in zip(variable_names, variable_sizes)
        end_idx = start_idx + prod(sz) - 1
        result[name] = reshape(vec[start_idx:end_idx], sz...)
        start_idx = end_idx + 1
    end
    return NamedTuple{variable_names}(Tuple([result[name] for name in variable_names]))
end
```

We use a `NamedTuple` to store the mapping between variables and samplers. The order of its entries determines the order of the Gibbs sweeps. A limitation is that exactly one sampler is required for each variable, which makes this less flexible than the Gibbs sampler in `Turing.jl`.

We use `AbstractPPL.condition` to divide the full model into smaller conditional probability problems, and each conditional probability problem corresponds to a sampler and its state.

The `Gibbs` sampler has the same interface as other samplers in `AbstractMCMC` (we don't implement the above state interface for `GibbsState` to keep it simple, but it could be implemented similarly).
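To make this mapping concrete, here is a minimal sketch of a sampler map, assuming the `Gibbs` constructor takes the `NamedTuple` directly; the variable names `mu` and `tau2` and the `RandomWalkSampler` type are purely illustrative placeholders for whatever component samplers you have defined.

```julia
# Hypothetical component samplers; the entry order of the NamedTuple is the
# order in which variables are visited during each Gibbs sweep.
sampler_map = (mu = RandomWalkSampler(1.0), tau2 = RandomWalkSampler(0.5))
gibbs = Gibbs(sampler_map)
```

During sampling, the `Gibbs` sampler then proceeds as follows: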
1. Initialization:
   - Set up initial states for each conditional probability problem.

2. Iterative Sampling:

   For each iteration, the sampler performs a sweep over all conditional probability problems:

   a. Condition on other variables:
      - Fix the values of all variables except the current one.

   b. Update current variable:
      - Recompute the log probability of the current state, as other variables may have changed:
        - Use `LogDensityProblems.logdensity(cond_logdensity_model, sub_state)` to get the new log probability.
        - Update the state with `sub_state = sub_state(logp)` to incorporate the new log probability.
      - Perform a sampling step for the current conditional probability problem:
        - Use `AbstractMCMC.step(rng, cond_logdensity_model, sub_sampler, sub_state; kwargs...)` to generate a new state.
      - Update the global trace:
        - Extract parameter values from the new state using `Base.vec(new_sub_state)`.
        - Incorporate these values into the overall Gibbs state trace.
This process allows the Gibbs sampler to iteratively update each variable while conditioning on the others, gradually exploring the joint distribution of all variables. The `state` interface in `AbstractMCMC` keeps the Gibbs sampler agnostic of the details of the individual sampler states while still allowing it to acquire the parameter values from them.
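To connect these steps to code, here is a condensed sketch of a single within-sweep update. It is not the actual implementation: the helper name `gibbs_substep`, the representation of the global trace as a `NamedTuple`, and the wrapping of the conditioned model in `AbstractMCMC.LogDensityModel` are assumptions made for illustration, while the `LogDensityProblems.logdensity`, `AbstractMCMC.step`, and `Base.vec` calls are the ones listed above.

```julia
using AbstractMCMC, AbstractPPL, LogDensityProblems

# Sketch of one within-sweep update for the variable group `group`.
function gibbs_substep(rng, model, group, sub_sampler, sub_state, trace; kwargs...)
    # a. Condition on the values of every variable except the current group.
    other_names = filter(!=(group), keys(trace))
    cond_model = AbstractPPL.condition(model, NamedTuple{other_names}(trace))
    cond_logdensity_model = AbstractMCMC.LogDensityModel(cond_model)

    # b. Recompute the log probability of the current state (the conditioned-on
    #    values may have changed) and store it back into the sub-sampler state.
    logp = LogDensityProblems.logdensity(cond_logdensity_model, sub_state)
    sub_state = sub_state(logp)

    # Take one sampling step of the component sampler on the conditional problem.
    _, new_sub_state = AbstractMCMC.step(
        rng, cond_logdensity_model, sub_sampler, sub_state; kwargs...
    )

    # Write the new values for this variable group back into the global trace.
    new_trace = merge(trace, NamedTuple{(group,)}((Base.vec(new_sub_state),)))
    return new_sub_state, new_trace
end
```

A full sweep simply applies this update to each entry of the sampler map in order, carrying the updated trace forward.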
Now we can use the Gibbs sampler to sample from the hierarchical Normal model.
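For example, a call along the following lines would draw samples. The model instance `hn`, the placeholder component samplers, and the exact form of `initial_params` are illustrative assumptions rather than the definitive API.

```julia
# Illustrative sketch: `hn` is an instance of the hierarchical Normal model and
# `RandomWalkSampler` stands in for the component samplers defined for it.
samples = sample(
    hn,
    Gibbs((mu = RandomWalkSampler(1.0), tau2 = RandomWalkSampler(0.5))),
    2_000;
    initial_params = (mu = [0.0], tau2 = [1.0]),
)
```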