2 changes: 2 additions & 0 deletions main.jl
@@ -76,6 +76,7 @@ end
# although it's hardly a big deal.
@include_model "Base Julia features" "control_flow"
@include_model "Base Julia features" "multithreaded"
@include_model "Base Julia features" "call_C"
@include_model "Core Turing syntax" "broadcast_macro"
@include_model "Core Turing syntax" "dot_assume"
@include_model "Core Turing syntax" "dot_observe"
@@ -114,6 +115,7 @@ end
@include_model "Effect of model size" "n500"
@include_model "PosteriorDB" "pdb_eight_schools_centered"
@include_model "PosteriorDB" "pdb_eight_schools_noncentered"
@include_model "Miscellaneous features" "metabayesian_MH"

# The entry point to this script itself begins here
if ARGS == ["--list-model-keys"]
10 changes: 10 additions & 0 deletions models/call_C.jl
@@ -0,0 +1,10 @@
@model function call_C(y = 0.0)
x ~ Normal(0, 1)

# Call the C standard library's fabs function
x_abs = @ccall fabs(x::Cdouble)::Cdouble

y ~ Normal(0, x_abs)
end

model = call_C()
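A minimal usage sketch, assuming the model above has been defined and that Turing (which re-exports Normal via Distributions) is loaded, as main.jl arranges for every included model. MH is used here only because it needs no gradients; whether the AD backends can differentiate through the @ccall is exactly what the test suite itself checks.

using Turing

# Draw a short chain; every evaluation of the model runs the @ccall to fabs
chain = sample(call_C(), MH(), 100)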
48 changes: 48 additions & 0 deletions models/metabayesian_MH.jl
@@ -0,0 +1,48 @@
#=
This is a "meta-Bayesian" model, where the generative model includes an inversion of a different generative model.
These types of models are common in cognitive modelling, where systems of interest (e.g. human subjects) are thought to use Bayesian inference to navigate their environment.
Here we use a Metropolis-Hastings sampler implemented with Turing to invert the inner "subjective" model.
=#

# Inner model function
@model function inner_model(observation, prior_μ = 0, prior_σ = 1)

# The inner model's prior
mean ~ Normal(prior_μ, prior_σ)

# The inner model's likelihood
observation ~ Normal(mean, 1)
end

# Outer model function
@model function metabayesian_MH(observation, action, inner_sampler = MH(), inner_n_samples = 20)

### Sample parameters for the inner inference and response ###

#The inner model's prior's sufficient statistics
subj_prior_μ ~ Normal(0, 1)
subj_prior_σ = 1.0

# #Inverse temperature for actions
β ~ 1
Member:

Suggested change:
- # #Inverse temperature for actions
- β ~ 1
+ # #Inverse temperature for actions
+ β = 1

This is probably why this model is failing: the right-hand side of ~ has to be a Distribution, and 1 isn't one. I'll also fix that on #37.

Contributor Author:

It actually did run locally for me with the beta being estimated (it just took ages), but I removed it for simplicity.
Let me know if you need anything :)

Member:

Would you prefer to keep it as beta ~ Exponential(1)? I could try changing it back to that.

Contributor Author:

No the simplicity is better I suppose - what matters is that it can differentiate through the Turing call :)
Noise parameters like that are common (ubiquitous) in these types of models, but not necessary for testing the differentiation :)

Member:

I changed it back to Exponential(1) anyway and it still works fine on FiniteDifferences, so happy to keep it that way.
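For reference, the prior variant settled on in this thread would read as follows inside the model body (a sketch of the single line under discussion, replacing the β ~ 1 line shown above; not necessarily the exact merged code):

# Inverse temperature for actions, now with a weakly informative prior
β ~ Exponential(1)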


### "Perceptual inference": running the inner model ###

#Condition the inner model
inner_m = inner_model(observation, subj_prior_μ, subj_prior_σ)

#Run the inner Bayesian inference
chns = sample(inner_m, inner_sampler, inner_n_samples, progress = false)
Member:

In #37 I also changed this to fix the random seed (otherwise, each time you calculate AD, the gradients will be different, and thus it's impossible to determine whether the gradients are correct). I think with this change we should at least see FiniteDifferences run correctly.
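A minimal sketch of what pinning the RNG could look like at this point in the model body (the Xoshiro generator and the seed value are assumptions for illustration; the actual change is made in #37 and is not shown in this diff):

using Random: Xoshiro  # this line would sit at the top of the file, not inside the model

# A fixed-seed RNG makes the inner chain, and hence the outer log density,
# deterministic across repeated gradient evaluations
chns = sample(Xoshiro(468), inner_m, inner_sampler, inner_n_samples, progress = false)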

Contributor Author:

That makes perfect sense - I guess that's necessary for any model function which includes a stochastic process that isn't a tilde sampling statement.
Maybe we should include an example of that in the Base Julia features category; there might be some reason that some of the AD backends break (compiled ReverseDiff, perhaps?)

Member:

There is already one, I think it's called control_flow :)


#Extract subjective point estimate
subj_mean_expectationₜ = mean(chns[:mean])


### "Response model": picking an action ###

#The action is a Gaussian-noise report of the subjective point estimate
action ~ Normal(subj_mean_expectationₜ, β)

end

model = metabayesian_MH(0.0, 1.0)