Conversation

@kiante-fernandez
Contributor

This is the first of two examples I want to include for using simulation-based inference with models in SSM.jl.

Here, I am using the package NeuralEstimators.jl. Feedback is welcome at this point.

https://github.com/msainsburydale/NeuralEstimators

Next, I am working on building likelihood approximation networks in Flux based on methods from Fengler, and I will have it up and running to sample with Turing. That will be the next example.

For now, I tried to use the LCA, as similar examples have shown online (see https://bayesflow.org/stable-legacy/_examples/LCA_Model_Posterior_Estimation.html).

I was able to get this running locally, but I think there is an environment conflict we have to resolve to get NeuralEstimators working on the docs.
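For readers who want a feel for the simulation step behind this example, here is a minimal sketch of generating LCA training data with SequentialSamplingModels.jl. The parameter values are illustrative, and the keyword names should be checked against the package docs:

```julia
using SequentialSamplingModels
using Random

Random.seed!(2025)

# Illustrative LCA parameters (drift rates ν, threshold α, lateral
# inhibition β, leak λ); these values are placeholders, not the ones
# used in the example.
model = LCA(; ν = [2.5, 2.0], α = 1.5, β = 0.2, λ = 0.1)

# Simulate 100 trials; the result holds choices and reaction times,
# which can serve as training data for a neural estimator.
data = rand(model, 100)
```

A neural estimator would then be trained on many such simulated data sets, each paired with the parameters that generated it.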

@github-actions
Contributor

github-actions bot commented Apr 15, 2025

Benchmark Results

| Benchmark | master | 580b359... | master / 580b359... |
|---|---|---|---|
| logpdf/("SequentialSamplingModels.DDM", 10) | 1.66 ± 0.12 μs | 1.66 ± 0.12 μs | 0.999 |
| logpdf/("SequentialSamplingModels.DDM", 100) | 15.6 ± 0.83 μs | 15.7 ± 0.78 μs | 0.993 |
| logpdf/("SequentialSamplingModels.LBA", 10) | 2.95 ± 0.18 μs | 2.92 ± 0.19 μs | 1.01 |
| logpdf/("SequentialSamplingModels.LBA", 100) | 22.4 ± 0.71 μs | 22.4 ± 0.73 μs | 1 |
| logpdf/("SequentialSamplingModels.LNR", 10) | 1.67 ± 0.12 μs | 1.68 ± 0.12 μs | 0.997 |
| logpdf/("SequentialSamplingModels.LNR", 100) | 9.47 ± 0.35 μs | 9.53 ± 0.34 μs | 0.993 |
| logpdf/("SequentialSamplingModels.RDM", 10) | 2.65 ± 0.21 μs | 2.61 ± 0.21 μs | 1.01 |
| logpdf/("SequentialSamplingModels.RDM", 100) | 24.1 ± 0.99 μs | 23.8 ± 0.82 μs | 1.02 |
| logpdf/("SequentialSamplingModels.Wald", 10) | 0.453 ± 0.0081 μs | 0.451 ± 0.011 μs | 1 |
| logpdf/("SequentialSamplingModels.Wald", 100) | 2.65 ± 0.14 μs | 2.65 ± 0.13 μs | 0.999 |
| logpdf/("SequentialSamplingModels.WaldMixture", 10) | 1.3 ± 0.014 μs | 1.33 ± 0.013 μs | 0.976 |
| logpdf/("SequentialSamplingModels.WaldMixture", 100) | 11 ± 0.19 μs | 11.4 ± 0.18 μs | 0.969 |
| rand/("SequentialSamplingModels.DDM", 10) | 3.15 ± 0.4 μs | 3.14 ± 0.45 μs | 1 |
| rand/("SequentialSamplingModels.DDM", 100) | 30.2 ± 1.6 μs | 30.3 ± 1.7 μs | 0.995 |
| rand/("SequentialSamplingModels.LBA", 10) | 1.62 ± 0.048 μs | 1.65 ± 0.053 μs | 0.982 |
| rand/("SequentialSamplingModels.LBA", 100) | 11.2 ± 0.28 μs | 11.3 ± 0.34 μs | 0.988 |
| rand/("SequentialSamplingModels.LCA", 10) | 0.503 ± 0.12 ms | 0.52 ± 0.1 ms | 0.967 |
| rand/("SequentialSamplingModels.LCA", 100) | 5.29 ± 0.13 ms | 5.42 ± 0.16 ms | 0.977 |
| rand/("SequentialSamplingModels.LNR", 10) | 1.14 ± 0.037 μs | 1.22 ± 0.033 μs | 0.933 |
| rand/("SequentialSamplingModels.LNR", 100) | 6.19 ± 2.6 μs | 7.22 ± 2.9 μs | 0.856 |
| rand/("SequentialSamplingModels.RDM", 10) | 1.15 ± 0.039 μs | 1.22 ± 0.041 μs | 0.941 |
| rand/("SequentialSamplingModels.RDM", 100) | 9.27 ± 0.27 μs | 10.1 ± 0.93 μs | 0.919 |
| rand/("SequentialSamplingModels.Wald", 10) | 0.394 ± 0.024 μs | 0.392 ± 0.026 μs | 1 |
| rand/("SequentialSamplingModels.Wald", 100) | 2.12 ± 0.066 μs | 2.14 ± 0.072 μs | 0.993 |
| rand/("SequentialSamplingModels.WaldMixture", 10) | 1.35 ± 0.023 μs | 1.35 ± 0.022 μs | 1 |
| rand/("SequentialSamplingModels.WaldMixture", 100) | 11.9 ± 0.11 μs | 11.9 ± 0.11 μs | 1 |
| simulate/SequentialSamplingModels.DDM | 1.95 ± 0.51 μs | 1.88 ± 0.53 μs | 1.04 |
| simulate/SequentialSamplingModels.LBA | 4.94 ± 3.7 μs | 4.85 ± 3.7 μs | 1.02 |
| simulate/SequentialSamplingModels.LCA | 0.0679 ± 0.014 ms | 0.0684 ± 0.013 ms | 0.992 |
| simulate/SequentialSamplingModels.RDM | 0.089 ± 0.029 ms | 0.0886 ± 0.028 ms | 1 |
| simulate/SequentialSamplingModels.Wald | 4.33 ± 1.8 μs | 4.73 ± 2.4 μs | 0.915 |
| simulate/SequentialSamplingModels.WaldMixture | 2.04 ± 0.56 μs | 2.09 ± 0.55 μs | 0.976 |
| simulate/mdft | 0.159 ± 0.057 ms | 0.169 ± 0.066 ms | 0.937 |
| time_to_load | 0.88 ± 0.0061 s | 0.887 ± 0.0053 s | 0.993 |

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.
Go to "Actions" -> "Benchmark a pull request" -> [the most recent run] -> "Artifacts" (at the bottom).

@itsdfish
Owner

Thanks! I only gave it a cursory look, but this is nice from what I can tell.

> I was able to get this running locally, but I think there is an environment conflict we have to resolve to get NeuralEstimators working on the docs.

I think that is OK. I no longer auto-generate computationally intensive docs with the @example block to cut down on run time.

By the way, I have been corresponding with the developer of NeuralEstimators.jl to get help with a few technical details. One thing I have planned is to add an example of Bayesian parameter estimation of the LNR using NormalisingFlow and comparing it to Turing (for validation). I want to let you know so we do not duplicate any work. Thanks again!
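For context, the Turing side of such a validation might look roughly like the sketch below. The priors, and the exact LNR keyword names, are assumptions for illustration, not the planned example:

```julia
using Turing
using SequentialSamplingModels

# Hedged sketch of a Turing model for the lognormal race (LNR);
# priors are illustrative placeholders.
@model function lnr_model(data)
    ν ~ filldist(Normal(-1, 2), 2)          # log-space means, one per accumulator
    σ ~ truncated(Normal(0, 1); lower = 0)  # lognormal scale
    τ ~ Uniform(0.0, 0.3)                   # non-decision time
    data ~ LNR(; ν, σ, τ)
end

# Example usage (assuming `data` holds simulated choices and RTs):
# chain = sample(lnr_model(data), NUTS(), 1_000)
```

Posterior draws from this chain could then be compared against the normalising-flow estimates for validation.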

@kiante-fernandez
Contributor Author

> Thanks! I only gave it a cursory look, but this is nice from what I can tell.
>
> > I was able to get this running locally, but I think there is an environment conflict we have to resolve to get NeuralEstimators working on the docs.
>
> I think that is OK. I no longer auto-generate computationally intensive docs with the @example block to cut down on run time.
>
> By the way, I have been corresponding with the developer of NeuralEstimators.jl to get help with a few technical details. One thing I have planned is to add an example of Bayesian parameter estimation of the LNR using NormalisingFlow and comparing it to Turing (for validation). I want to let you know so we do not duplicate any work. Thanks again!

Okay, I will add @example. Also, nice! I think both cases are good examples, since in one we can map it to the "ground truth" by comparing with Turing.

I want to follow up with the team, as I think some additional text on considering simulation-based calibration could be instructive. Do you know if they are working on that? I was considering opening an issue.
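The core of simulation-based calibration is backend-agnostic and can be sketched in a few lines. The function names below are hypothetical placeholders, not an existing API:

```julia
# One SBC iteration: draw θ* from the prior, simulate data given θ*,
# draw L posterior samples, and record the rank of θ* among them.
# Over many repetitions the ranks should be uniform on 0:L if the
# posterior approximation is well calibrated.
function sbc_rank(prior_draw, simulate, posterior_draws; L = 100)
    θstar = prior_draw()
    y = simulate(θstar)
    θs = posterior_draws(y, L)
    return count(<(θstar), θs)  # rank of θ* in 0:L
end

# Toy usage with a conjugate normal model: prior N(0, 1),
# data y | θ ~ N(θ, 1), exact posterior N(y / 2, 1 / 2).
ranks = [sbc_rank(() -> randn(),
                  θ -> θ + randn(),
                  (y, L) -> y / 2 .+ sqrt(1 / 2) .* randn(L); L = 19)
         for _ in 1:500]
```

A histogram of `ranks` that deviates visibly from uniform would flag a miscalibrated approximation.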

@itsdfish
Owner

> Okay, I will add @example.

Just to clarify, please do not use @example for long-running documentation such as this. One thing that would be helpful is if you included a full, copy-and-paste version of the code in the following:

```@raw html
<details>
<summary><b>Show Details</b></summary>
```
```julia
# your code here
```
```@raw html
</details>
```

This will allow users to copy and paste a full version of the code rather than multiple pieces. It also hides/reveals the code so it does not create visual clutter. I also recommend this approach for testing the examples locally, since @example is too slow to run each time we test the docs.

> I want to follow up with the team, as I think some additional text on considering simulation-based calibration could be instructive. Do you know if they are working on that? I was considering opening an issue.

Not that I know of. It might be worth looking into.

… documentation for clarity and additional references
@kiante-fernandez
Contributor Author

> This will allow users to copy and paste a full version of the code rather than multiple pieces. It also hides/reveals the code so it does not create visual clutter. I also recommend this approach for testing the examples locally, since @example is too slow to run each time we test the docs.

I see, understood! I will modify.

Owner

@itsdfish itsdfish left a comment


Kiante, there are a few minor revisions requested below. Once we sort those out, I will merge.

@itsdfish itsdfish closed this Apr 22, 2025
@itsdfish itsdfish reopened this Apr 22, 2025
@itsdfish
Owner

@kiante-fernandez, thank you for the revisions. This looks nice. I will go ahead and merge.

The unit tests for Turing are not passing with 0.37, even though they passed before. I'm looking into a solution.

@itsdfish itsdfish merged commit e36aaa4 into itsdfish:master Apr 22, 2025
7 of 11 checks passed