Added model which calls C #36
```julia
using Turing

# A model whose body calls into a C library
@model function call_C(y = 0.0)
    x ~ Normal(0, 1)

    # Call the C library's fabs function
    x_abs = @ccall fabs(x::Cdouble)::Cdouble

    y ~ Normal(0, x_abs)
end

model = call_C()
```
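For reference, a minimal sketch of how this model might be run; the sampler choice and draw count below are illustrative assumptions, not part of this PR. A gradient-free sampler such as MH sidesteps the question of differentiating through the `@ccall`:

```julia
using Statistics

# Gradient-free sampling, so the foreign call needs no AD support
chain = sample(model, MH(), 1_000)

# Posterior mean of the latent x
mean(chain[:x])
```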
```julia
#=
This is a "meta-Bayesian" model, where the generative model includes an inversion of a different generative model.
These types of models are common in cognitive modelling, where systems of interest (e.g. human subjects) are thought to use Bayesian inference to navigate their environment.
Here we use a Metropolis-Hastings sampler implemented with Turing as the inversion of the inner "subjective" model.
=#

using Turing, Statistics

# Inner model function
@model function inner_model(observation, prior_μ = 0, prior_σ = 1)

    # The inner model's prior
    mean ~ Normal(prior_μ, prior_σ)

    # The inner model's likelihood
    observation ~ Normal(mean, 1)
end

# Outer model function
@model function metabayesian_MH(observation, action, inner_sampler = MH(), inner_n_samples = 20)

    ### Sample parameters for the inner inference and response ###

    # The sufficient statistics of the inner model's prior
    subj_prior_μ ~ Normal(0, 1)
    subj_prior_σ = 1.0

    # Noise scale for actions, fixed for simplicity
    # (a β ~ Exponential(1) prior also works; see the discussion below)
    β = 1.0

    ### "Perceptual inference": running the inner model ###

    # Condition the inner model
    inner_m = inner_model(observation, subj_prior_μ, subj_prior_σ)

    # Run the inner Bayesian inference
    chns = sample(inner_m, inner_sampler, inner_n_samples, progress = false)

    # Extract the subjective point estimate
    subj_mean_expectationₜ = mean(chns[:mean])

    ### "Response model": picking an action ###

    # The action is a Gaussian-noise report of the subjective point estimate
    action ~ Normal(subj_mean_expectationₜ, β)
end

model = metabayesian_MH(0.0, 1.0)
```
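As above, a minimal sketch of running the outer inference; the sampler and draw count are illustrative assumptions. Note that every evaluation of the outer model reruns the inner MH chain, so even short runs can be slow:

```julia
using Statistics

# Gradient-free sampling of the outer model; each outer step
# triggers an inner `sample` call with 20 MH draws
outer_chain = sample(model, MH(), 200)

# Posterior mean of the subjective prior mean
mean(outer_chain[:subj_prior_μ])
```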
This is probably why this model is failing. I'll also fix that on #37.
It actually did run locally for me with the beta being estimated (it just took ages), but I removed it for simplicity.
Let me know if you need anything :)
Would you prefer to keep it as beta ~ Exponential(1)? I could try changing it back to that.
No, the simplicity is better I suppose - what matters is that it can differentiate through the Turing call :)
Noise parameters like that are common (ubiquitous) in these types of models, but not necessary for testing the differentiation :)
I changed it back to `Exponential(1)` anyway and it still works fine on FiniteDifferences, so happy to keep it that way.
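For context, this is roughly the kind of check being described. The repo's actual test harness isn't shown in this thread, so the wrapper below (DynamicPPL's `LogDensityFunction` together with `FiniteDifferences.grad`) is an assumed, minimal way to take finite-difference gradients of the model's log density:

```julia
using Turing, FiniteDifferences
using DynamicPPL: LogDensityFunction
using LogDensityProblems: logdensity

# Wrap the conditioned model as a log density over its parameter vector
# (a single parameter, subj_prior_μ, in the fixed-β version above)
ldf = LogDensityFunction(model)

# Central finite differences need no AD, so the inner `sample` call
# inside the model body is never differentiated through
fdm = central_fdm(5, 1)
g, = FiniteDifferences.grad(fdm, θ -> logdensity(ldf, θ), [0.1])
```

Note that the inner MH run makes each log-density evaluation stochastic, so finite-difference gradients of this model are noisy estimates rather than exact derivatives.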