# GSoC 2022 projects
ArviZ is a project dedicated to promoting and building tools for exploratory analysis of Bayesian models. It currently has a Python and a Julia interface. ArviZ aims to integrate seamlessly with established probabilistic programming languages like PyStan, PyMC (3 and 4), Turing, Soss, emcee, and Pyro, and to be easily integrated with novel or bespoke Bayesian analyses. Where the aim of the probabilistic programming languages is to make it easy to build and solve Bayesian models, the aim of the ArviZ libraries is to make it easy to process and analyze the results from those Bayesian models.
The timeline of the GSoC internships is available on the GSoC website.
Below is a list of possible topics for your GSoC project. We are also open to other topics; contact us on Gitter. Keep in mind that these are only ideas and that some of them can't be completely solved in a single GSoC project. When writing your proposal, choose some specific tasks and make sure your proposal is adequate for the GSoC time commitment. We expect all projects to be 350h projects; if you'd like to be considered for a 175h project you must reach out on Gitter. We will not accept 175h applications from people with whom we haven't discussed their time commitments before submitting the application.
- ArviZ Dashboards (Python)
- InferenceData R compatibility (R)
- New plots (Python)
- Native Julia plotting backends (Julia)
- Add Gen converter to ArviZ.jl (Julia)
- Speed-up and parallelize ArviZ (Python)
- Add refitting algorithms to ArviZ (Python)
- Explore design ideas for plotting refactoring (Python)
Each project also lists some specific requirements needed to successfully complete it; general requirements are listed below.
Note that these requirements can be learnt while writing the proposal and during the community bonding period. You should feel confident applying to any project whose requirements interest you and that you would like to learn about; they are not skills you are all expected to have before writing your proposal. We aim for GSoC to provide a win-win scenario where you benefit from an inclusive and thriving environment in which to learn and the library benefits from your contributions.
All projects require being comfortable using ArviZ and understanding the relations between its three main modules: plots, stats and data.
However, unless specified otherwise, no specific knowledge of inference libraries or about the internals of from_xyz converter functions is needed.
Students working on Python projects should be familiar with Python, numpy and scipy and have basic xarray/InferenceData knowledge.
They should also be able to write unit tests for the added functionality using pytest and be able to enforce development conventions and use black, pylint and pydocstyle for code style and linting.
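As a sketch of the expected testing workflow, here is a hypothetical statistics helper together with a pytest-style unit test. Plain `assert` statements in a `test_*` function are all pytest needs to discover and run the test; the function and its behaviour are illustrative, not ArviZ code:

```python
import numpy as np

def hdi_width(samples, prob=0.94):
    """Width of the narrowest interval containing `prob` of a 1-D sample.
    (Hypothetical helper for illustration, not the ArviZ implementation.)"""
    samples = np.sort(np.asarray(samples))
    n = len(samples)
    interval_len = int(np.floor(prob * n))
    # widths of all candidate intervals containing `interval_len` points
    widths = samples[interval_len:] - samples[: n - interval_len]
    return float(widths.min())

def test_hdi_width_shrinks_with_prob():
    # pytest discovers functions named test_*; bare asserts are enough
    rng = np.random.default_rng(0)
    samples = rng.normal(size=1000)
    assert hdi_width(samples, prob=0.5) < hdi_width(samples, prob=0.94)
```

Running `pytest` on a file containing this code would collect and execute the `test_*` function automatically.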
Students working on Julia projects should be familiar with Julia, PyCall to use Python objects from within Julia, DataFrames and StatsBase. They should also be able to write unit tests for the added functionality using Test.
The highest priority projects are (in no particular order): "Add Gen converter to ArviZ.jl", "New plots" and "InferenceData R compatibility". What does this mean? Organizations send a ranked list of their applicants to Google, who then selects the first x ranked students. We will use proposal quality as the main evaluation criterion, taking project priority into account only as a tie breaker between proposals of similar quality. We recommend applying for the projects that interest you, as you'll probably write a much better proposal for them and have a better chance of being accepted.
Students who work on ArviZ can expect their skillset to grow in:
- Bayesian Inference libraries
- Bayesian modeling workflow and model criticism
- Matplotlib and/or bokeh usage (depending on project)
- xarray usage (depending on project)
- Numba or Dask optimization (depending on project)
## ArviZ Dashboards (Python)
The main proposal is to build a dashboard with linked plots, so that inspecting multiple dimensions is easier. At first the focus should be the ability to call templates which only consume data. Possible templates include prior + prior predictive, sample diagnostics, posterior + posterior predictive, loo, and regression. The ability to dynamically add or remove plots, change plot types, and manually select and save information should also be considered. This dashboard could be built on top of Panel, although other alternatives might be explored.
The expected outcome is a prototype of a new library (or module within the ArviZ library) with building blocks for users to generate dashboards, and a couple of working example dashboards built using said prototype. We don't expect the prototype written during GSoC to support all ArviZ stats and plots as dashboard elements.
- Expected size: 350h
- Difficulty rating: medium
- Potential Mentors: Ari Hartikainen, Osvaldo Martin
People working on this project will need to be familiar with Panel (or an alternative dashboard framework) and with the ArviZ plotting and stats modules.
## InferenceData R compatibility (R)
Work together with the posterior developers to enable data sharing between ArviZ and posterior via netCDF and Zarr files.
- Potential Mentors: Oriol Abril
People working on this project should be familiar with R, the posterior package and one of the netcdf/zarr R packages. Basic Python knowledge will also come in handy.
## New plots (Python)
Add new plotting capabilities to ArviZ such as:
- half-eye plots
- calibration plots for classification (see e.g. https://avehtari.github.io/modelselection/diabetes.html; notice that our loo_pit works for binary data)
- discrete posterior predictive check plots (see #1103)
- Potential Mentors: Oriol Abril, Osvaldo Martin
People working on this project should be familiar to proficient with matplotlib and/or bokeh. They are also expected to understand the theory and data processing behind the chosen plots.
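As an illustration of the data processing behind one of the proposed plots, here is a minimal numpy sketch of the computation underlying a calibration (reliability) plot for binary classification. The function name and the equal-width binning scheme are assumptions for illustration, not ArviZ code:

```python
import numpy as np

def calibration_curve(y_true, p_pred, n_bins=10):
    """Bin predicted probabilities and compare each bin's mean prediction
    with the observed frequency of positives. Illustrative sketch only."""
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # bin index per prediction (clip so p == 1.0 falls in the last bin)
    idx = np.clip(np.digitize(p_pred, edges) - 1, 0, n_bins - 1)
    mean_pred, obs_freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_pred.append(p_pred[mask].mean())
            obs_freq.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(obs_freq)
```

Plotting `obs_freq` against `mean_pred` with the diagonal as reference gives the calibration plot; a well-calibrated classifier lies close to the diagonal.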
## Native Julia plotting backends (Julia)
Add native plotting functions in Julia for InferenceData objects, taking advantage of Julia's multiple plotting backends and its ability to express complex combinations of plots with little code.
Check out https://github.com/arviz-devs/ArviZPlots.jl for more details and to see the current experimentation state.
- Potential Mentors: Seth Axen
People working on this project should be familiar with Plots.jl and/or Makie.jl, with InferenceData, and with the targeted plots. It may involve reimplementing ArviZ algorithms such as KDE or PSIS in Julia.
## Add Gen converter to ArviZ.jl (Julia)
Add a converter function to ArviZ.jl to transform inference results obtained with Gen.jl to InferenceData.
- Potential Mentors: Seth Axen
People working on this project should be familiar with the inference library Gen and with the InferenceData schema.
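Independently of the inference library, the core job of any converter is to arrange raw samples into the `(chain, draw, *shape)` layout the InferenceData schema requires. Here is a minimal numpy sketch of that reshaping step (the raw chain structure is hypothetical):

```python
import numpy as np

# Hypothetical raw inference output: 4 chains, each holding 500 scalar
# draws of "mu" and 500 draws of a length-3 vector "theta".
rng = np.random.default_rng(0)
raw_chains = [
    {"mu": rng.normal(size=500), "theta": rng.normal(size=(500, 3))}
    for _ in range(4)
]

# A converter's core job: stack per-chain samples so that every variable
# gets leading (chain, draw) dimensions, as the InferenceData schema requires.
posterior = {
    name: np.stack([chain[name] for chain in raw_chains])
    for name in raw_chains[0]
}
assert posterior["mu"].shape == (4, 500)        # (chain, draw)
assert posterior["theta"].shape == (4, 500, 3)  # (chain, draw, *shape)
```

In practice a dict like this could then be handed to a from_dict-style constructor to build the actual InferenceData groups.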
## Speed-up and parallelize ArviZ (Python)
ArviZ optionally uses Numba to speed up expensive calculations, achieving noticeable speed-ups in some cases.
Many ArviZ use-cases also require optimizing memory usage and resource handling.
Dask combines dynamic task scheduling with “big data” collections, and it is compatible with both xarray and Numba.
Using Dask would allow ArviZ to work seamlessly with large datasets that do not fit into memory and to optimize the distribution of computational resources.
The aim of this project is to make ArviZ compatible with Dask, so that it can be used as an optional dependency, and to build on top of the Numba benchmarks to automatically calculate the speed-ups provided by Dask in some examples.
A possible extension would be to work on optimizing make_ufunc.
- Potential Mentors: Oriol Abril, Ravin Kumar
People working on this project should be familiar with the computational libraries used to enhance ArviZ (i.e. Numba or Dask).
Knowledge of the supported inference libraries and of the internals of the from_xyz converters may also be needed here, for example when using Dask to allow out-of-memory computation.
Some benchmarking knowledge using asv is a plus.
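To make the make_ufunc extension concrete, here is a heavily simplified numpy sketch of the idea behind it: wrap a function that consumes an array's trailing core dimensions so it maps over any leading batch dimensions. This is an illustration of the concept, not ArviZ's actual implementation:

```python
import numpy as np

def make_ufunc(func, n_dims=2):
    """Wrap `func`, which consumes an array's last `n_dims` dimensions,
    so it can be applied over any leading batch dimensions.
    Simplified sketch of the idea behind ArviZ's make_ufunc."""
    def wrapped(ary):
        ary = np.asarray(ary)
        batch_shape = ary.shape[: ary.ndim - n_dims]
        core_shape = ary.shape[ary.ndim - n_dims:]
        # flatten all batch dimensions into one, loop, then restore them
        flat = ary.reshape((-1,) + core_shape)
        out = np.array([func(a) for a in flat])
        return out.reshape(batch_shape + out.shape[1:])
    return wrapped

# e.g. reduce the trailing (chain, draw) dimensions to one value per slice
mean_over_draws = make_ufunc(np.mean, n_dims=2)
samples = np.arange(24.0).reshape(3, 2, 4)  # (param, chain, draw)
print(mean_over_draws(samples).shape)
```

A Dask-backed version could map the same wrapped function over chunks instead of looping eagerly, which is where the speed-up and out-of-memory support would come from.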
## Add refitting algorithms to ArviZ (Python)
Some of the probabilistic programming libraries that integrate with ArviZ make it easy to refit a model on a subset of the data (useful for cross-validation, for example). Using sampling wrappers to call the inference libraries from within ArviZ would allow algorithms that require refitting to be included in ArviZ. Some interesting examples are approximate Leave Future Out cross-validation, Importance Weighted Moment Matching, and Simulation Based Calibration. The aim of this project is to extend the sampling wrappers and to implement algorithms requiring refitting of the model. See the example usages of the current wrapper implementation and the corresponding API docs.
- Potential Mentors: Oriol Abril
People working on this project should be familiar with at least two different inference libraries (the more the better) as well as with the internals of the conversion process. Moreover, they should understand the target algorithms, both to design the sampling wrapper API with the final goal in mind and to be able to implement the algorithms using said wrappers.
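To make the refitting idea concrete, below is a toy, numpy-only sketch: a hypothetical wrapper object that knows how to "refit" on a data subset, driven by a naive leave-future-out loop that refits at every step (the approximate LFO algorithm avoids most of these refits via importance sampling). None of these names mirror ArviZ's actual sampling wrapper API:

```python
import numpy as np

class ToyWrapper:
    """Minimal stand-in for a sampling wrapper: it knows how to refit the
    model on a data subset. Hypothetical sketch for illustration only."""
    def __init__(self, y):
        self.y = np.asarray(y, dtype=float)

    def sample(self, y_subset):
        # "refit": draw from a normal centred on the subset mean; a real
        # wrapper would call the inference library (PyStan, PyMC, ...) here
        rng = np.random.default_rng(len(y_subset))
        scale = y_subset.std() / np.sqrt(len(y_subset)) + 1e-9
        return rng.normal(y_subset.mean(), scale, size=1000)

def leave_future_out(wrapper, min_train=5):
    """Naive exact leave-future-out CV: refit on expanding windows and
    score each one-step-ahead prediction by absolute error."""
    errors = []
    for t in range(min_train, len(wrapper.y)):
        draws = wrapper.sample(wrapper.y[:t])
        errors.append(abs(draws.mean() - wrapper.y[t]))
    return np.array(errors)
```

The design point the sketch tries to convey: the cross-validation loop only talks to the wrapper interface, so the same algorithm works with any inference library for which a wrapper exists.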
## Explore design ideas for plotting refactoring (Python)
We are brainstorming ideas for refactoring the plotting module to allow better composability and extensibility of ArviZ plotting. We have some ideas in other wiki pages:
- https://github.com/arviz-devs/arviz/wiki/ArviZ-1.0-ideas
- https://github.com/arviz-devs/arviz/wiki/Plot-hierarchy
- Potential Mentors: Oriol Abril, Ari Hartikainen, Osvaldo Martin
People working on this project should be familiar with plot faceting and composition with both matplotlib and bokeh.