diff --git a/demo/Assessing People Skills.ipynb b/demo/Assessing People Skills.ipynb new file mode 100644 index 000000000..475274d20 --- /dev/null +++ b/demo/Assessing People Skills.ipynb @@ -0,0 +1,218 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Assessing People’s Skills\n", + "\n", + "This demo shows how ReactiveMP.jl can perform inference in models composed of Bernoulli random variables.\n", + "\n", + "The demo is inspired by the example from Chapter 2 of Bishop's Model-Based Machine Learning book.\n", + "We are going to perform exact inference to assess the skills of a student given the results of a test." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us assume that our imaginary test is composed of three questions, and each question is associated with a test result $r$, where $r \in \mathbb{R}$ and $0 < r < 1$.\n", + "\n", + "The result of the first question depends solely on the student's attendance. For example, if the student attends the lectures, he will almost certainly answer the first question correctly.\n", + "The result of the second question depends on a specific skill $s_2$. However, a student who has attended the lectures still has a good chance of answering the second question, even without this skill.\n", + "We will model this relationship through disjunction, or logical $OR$.\n", + "The third question is more difficult to answer: the student needs to have a particular skill $s_3$ __and__ he must either have good attendance or have skill $s_2$.\n", + "Hence, to model the relationship between the skills and the third question, we will use conjunction, or logical $AND$.\n", + "\n", + "For the sake of the example, we will replace attendance with laziness. The convention is that if a person is not lazy, he attends lectures.\n", + "This way, the first question can be answered correctly if the student is not lazy. We will use the $NOT$ function to represent this relationship." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us define the generative model:\n", + "$$p(l, s_2, s_3, r_1, r_2, r_3)=p(l)p(s_2)p(s_3)p(r_1|f_1(l))p(r_2|f_2(l, s_2))p(r_3|f_3(l, s_2, s_3))$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The factors $p(l), p(s_2), p(s_3)$ represent Bernoulli prior distributions. \n", + "\n", + "$f_1(l) = NOT(l)$ where $NOT(X) \triangleq \overline{X}$, \n", + "\n", + "$f_2(l, s_2) = OR(NOT(l), s_2)$ where $OR(X, Y) \triangleq X \vee Y$, \n", + "\n", + "$f_3(l, s_2, s_3) = AND(OR(NOT(l), s_2), s_3)$ where $AND(X, Y) \triangleq X \land Y$.\n", + "\n", + "An attentive reader may notice that $f_2(l, s_2)$ can be rewritten as $IMPLY(l, s_2)$, i.e., $l\implies s_2$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Similar to the example from the Model-Based Machine Learning book, our observations are noisy. This means that the likelihood functions should map $\{0, 1\}$ to a real value $r \in (0, 1)$ denoting the result of the test. We can associate $r=0$ and $r=1$ with $0\%$ and $100\%$ correctness of the test, respectively."
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One way of specifying the likelihood is $$p(r_i|f_i) = \begin{cases} r_i & \text{if }f_i = 1 \\\n", + "1-r_i & \text{if }f_i=0 \end{cases}$$\n", + "or, equivalently, $$p(r_i|f_i)=r_if_i+(1-r_i)(1-f_i)$$\n", + "\n", + "It can be shown that, given the observation $r_i$, the backward message from the node $p(r_i|f_i)$ will be a Bernoulli distribution with parameter $r_i$, i.e., $\overleftarrow{\mu}({f_i})\propto\mathrm{Ber}(r_i)$. \n", + "If we observe $r_i=0.9$, it is more \"likely\" that $f_i=1$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Following Bishop, we will call this node function __AddNoise__." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "┌ Info: Precompiling GraphPPL [b3f8163a-e979-4e85-b43e-1f63d8c8b42c]\n", + "└ @ Base loading.jl:1423\n", + "┌ Info: Precompiling ReactiveMP [a194aa59-28ba-4574-a09c-4a745416d6e3]\n", + "└ @ Base loading.jl:1423\n" + ] + } + ], + "source": [ + "using Rocket, GraphPPL, ReactiveMP, Distributions, Random" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Create the AddNoise node\n", + "struct AddNoise end\n", + "\n", + "@node AddNoise Stochastic [out, in]" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "# Add an update rule for the AddNoise node\n", + "@rule AddNoise(:in, Marginalisation) (q_out::PointMass,) = begin \n", + " return Bernoulli(mean(q_out))\n", + "end" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "# GraphPPL.jl exports the `@model` macro for model specification\n", + "# It accepts a regular Julia function and builds an FFG under the hood\n", + "@model function skill_model()\n", + "\n", + " res = datavar(Float64, 3)\n", + "\n", + " laziness ~ Bernoulli(0.5)\n", + " skill2 ~ Bernoulli(0.5)\n", + " skill3 ~ Bernoulli(0.5)\n", + "\n", + " test2 ~ IMPLY(laziness, skill2)\n", + " test3 ~ AND(test2, skill3)\n", + " \n", + " res[1] ~ AddNoise(NOT(laziness))\n", + " res[2] ~ AddNoise(test2)\n", + " res[3] ~ AddNoise(test3)\n", + "\n", + "end" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us assume that a student scored $70\%$ and $95\%$ on the first and second tests, respectively, but got only $30\%$ on the third one. " + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Inference results:\n", + "-----------------------------------------\n", + "skill3 = Bernoulli{Float64}[Bernoulli{Float64}(p=0.3025672371638141)]\n", + "skill2 = Bernoulli{Float64}[Bernoulli{Float64}(p=0.5806845965770171)]\n", + "laziness = Bernoulli{Float64}[Bernoulli{Float64}(p=0.18704156479217607)]\n" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "test_results = [0.7, 0.95, 0.3]\n", + "\n", + "inference_result = inference(\n", + " model = Model(skill_model),\n", + " data = (res = test_results, )\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The results make sense. The student answered the first question correctly, which immediately gives us reason to believe that he is not lazy. 
He also answered the second question well, but this does not necessarily mean that he has skill $s_2$: attendance (i.e., lack of laziness) could explain the result just as well. To answer the third question, the student had to be able to answer the second one __and__ to have the additional skill $s_3$. Unfortunately, his answer to the third question was weak, which strongly lowers our belief in skill $s_3$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "@webio": { + "lastCommId": null, + "lastKernelId": null + }, + "kernelspec": { + "display_name": "Julia 1.7.2", + "language": "julia", + "name": "julia-1.7" + }, + "language_info": { + "file_extension": ".jl", + "mimetype": "application/julia", + "name": "julia", + "version": "1.7.3" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/docs/make.jl b/docs/make.jl index 1bd4889d0..994531414 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -39,6 +39,7 @@ makedocs( "Examples" => [ "Overview" => "examples/overview.md", "Linear Regression" => "examples/linear_regression.md", + "Assessing People's Skills" => "examples/assessing_peoples_skills.md", "Linear Gaussian Dynamical System" => "examples/linear_gaussian_state_space_model.md", "Hidden Markov Model" => "examples/hidden_markov_model.md", "Hierarchical Gaussian Filter" => "examples/hierarchical_gaussian_filter.md", diff --git a/docs/src/examples/assessing_peoples_skills.md b/docs/src/examples/assessing_peoples_skills.md new file mode 100644 index 000000000..51ee89f60 --- /dev/null +++ b/docs/src/examples/assessing_peoples_skills.md @@ -0,0 +1,106 @@ +## [Assessing People’s Skills](@id examples-assessing-peoples-skills) + +This demo shows how ReactiveMP.jl can perform inference in models composed of Bernoulli random variables. + +The demo is inspired by the example from Chapter 2 of Bishop's Model-Based Machine Learning book. +We are going to perform exact inference to assess the skills of a student given the results of a test. + +Let us assume that our imaginary test is composed of three questions, and each question is associated with a test result $r$, where $r \in \mathbb{R}$ and $0 < r < 1$. + +The result of the first question depends solely on the student's attendance. For example, if the student attends the lectures, he will almost certainly answer the first question correctly. +The result of the second question depends on a specific skill $s_2$. However, a student who has attended the lectures still has a good chance of answering the second question, even without this skill. +We will model this relationship through disjunction, or logical $OR$. +The third question is more difficult to answer: the student needs to have a particular skill $s_3$ __and__ he must either have good attendance or have skill $s_2$. +Hence, to model the relationship between the skills and the third question, we will use conjunction, or logical $AND$. + +For the sake of the example, we will replace attendance with laziness. The convention is that if a person is not lazy, he attends lectures. +This way, the first question can be answered correctly if the student is not lazy. We will use the $NOT$ function to represent this relationship. + +Let us define the generative model: +$$p(l, s_2, s_3, r_1, r_2, r_3)=p(l)p(s_2)p(s_3)p(r_1|f_1(l))p(r_2|f_2(l, s_2))p(r_3|f_3(l, s_2, s_3))$$ + +The factors $p(l), p(s_2), p(s_3)$ represent Bernoulli prior distributions. + +$f_1(l) = NOT(l)$ where $NOT(X) \triangleq \overline{X}$, + +$f_2(l, s_2) = OR(NOT(l), s_2)$ where $OR(X, Y) \triangleq X \vee Y$, + +$f_3(l, s_2, s_3) = AND(OR(NOT(l), s_2), s_3)$ where $AND(X, Y) \triangleq X \land Y$. + +An attentive reader may notice that $f_2(l, s_2)$ can be rewritten as $IMPLY(l, s_2)$, i.e., $l\implies s_2$.
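+
+To make these definitions concrete, here is a plain-Julia sketch of the three factor functions. These are ordinary helper functions for illustration only; they are not the ReactiveMP `NOT`/`OR`/`AND`/`IMPLY` node types used further below.
+
+```julia
+# Illustrative plain-Julia versions of the factor functions (not ReactiveMP nodes)
+f1(l)         = !l                # NOT(l)
+f2(l, s2)     = !l | s2           # OR(NOT(l), s2), equivalently IMPLY(l, s2)
+f3(l, s2, s3) = (!l | s2) & s3    # AND(OR(NOT(l), s2), s3)
+
+# Check that f2 coincides with material implication on all inputs
+for l in (false, true), s2 in (false, true)
+    @assert f2(l, s2) == (l ? s2 : true)
+end
+```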
+ +Similar to the example from the Model-Based Machine Learning book, our observations are noisy. This means that the likelihood functions should map $\{0, 1\}$ to a real value $r \in (0, 1)$ denoting the result of the test. We can associate $r=0$ and $r=1$ with $0\%$ and $100\%$ correctness of the test, respectively. + +One way of specifying the likelihood is $$p(r_i|f_i) = \begin{cases} r_i & \text{if }f_i = 1 \\ +1-r_i & \text{if }f_i=0 \end{cases}$$ +or, equivalently, $$p(r_i|f_i)=r_if_i+(1-r_i)(1-f_i)$$ + +It can be shown that, given the observation $r_i$, the backward message from the node $p(r_i|f_i)$ will be a Bernoulli distribution with parameter $r_i$, i.e., $\overleftarrow{\mu}({f_i})\propto\mathrm{Ber}(r_i)$. +If we observe $r_i=0.9$, it is more "likely" that $f_i=1$. + +Following Bishop, we will call this node function __AddNoise__. + +```@example skills +using Rocket, GraphPPL, ReactiveMP, Distributions, Random +``` + +```@example skills +# Create the AddNoise node +struct AddNoise end + +@node AddNoise Stochastic [out, in] +``` + +```@example skills +# Add an update rule for the AddNoise node +@rule AddNoise(:in, Marginalisation) (q_out::PointMass,) = begin + return Bernoulli(mean(q_out)) +end +``` + +```@example skills +# GraphPPL.jl exports the `@model` macro for model specification +# It accepts a regular Julia function and builds an FFG under the hood +@model function skill_model() + + res = datavar(Float64, 3) + + laziness ~ Bernoulli(0.5) + skill2 ~ Bernoulli(0.5) + skill3 ~ Bernoulli(0.5) + + test2 ~ IMPLY(laziness, skill2) + test3 ~ AND(test2, skill3) + + res[1] ~ AddNoise(NOT(laziness)) + res[2] ~ AddNoise(test2) + res[3] ~ AddNoise(test3) + +end +``` + +Let us assume that a student scored $70\%$ and $95\%$ on the first and second tests, respectively, but got only $30\%$ on the third one. + +```@example skills +test_results = [0.7, 0.95, 0.3] + +inference_result = inference( + model = Model(skill_model), + data = (res = test_results, ) +) +``` +The results make sense. The student answered the first question correctly, which immediately gives us reason to believe that he is not lazy. He also answered the second question well, but this does not necessarily mean that he has skill $s_2$: attendance (i.e., lack of laziness) could explain the result just as well. To answer the third question, the student had to be able to answer the second one __and__ to have the additional skill $s_3$. Unfortunately, his answer to the third question was weak, which strongly lowers our belief in skill $s_3$. \ No newline at end of file diff --git a/docs/src/examples/overview.md b/docs/src/examples/overview.md index 6ea23dba5..14f4a6cbb 100644 --- a/docs/src/examples/overview.md +++ b/docs/src/examples/overview.md @@ -6,6 +6,7 @@ This section contains a set of examples for Bayesian Inference with `ReactiveMP` More examples can be found in [`demo/`](https://github.com/biaslab/ReactiveMP.jl/tree/master/demo) folder at GitHub repository. - [Linear regression](@ref examples-linear-regression): An example of linear regression Bayesian inference. +- [Assessing People's Skills](@ref examples-assessing-peoples-skills): An example of exact inference in a model of Bernoulli random variables and logical factor nodes, inspired by Chapter 2 of Bishop's Model-Based Machine Learning book: we assess the skills of a student given the results of a test.
 - [Gaussian Linear Dynamical System](@ref examples-linear-gaussian-state-space-model): An example of inference procedure for Gaussian Linear Dynamical System with multivariate noisy observations using Belief Propagation (Sum Product) algorithm. Reference: [Simo Sarkka, Bayesian Filtering and Smoothing](https://users.aalto.fi/~ssarkka/pub/cup_book_online_20131111.pdf). - [Hidden Markov Model](@ref examples-hidden-markov-model): An example of structured variational Bayesian inference in Hidden Markov Model with unknown transition and observational matrices. - [Hierarchical Gaussian Filter](@ref examples-hgf): An example of online inference procedure for Hierarchical Gaussian Filter with univariate noisy observations using Variational Message Passing algorithm. Reference: [Ismail Senoz, Online Message Passing-based Inference in the Hierarchical Gaussian Filter](https://ieeexplore.ieee.org/document/9173980). diff --git a/src/ReactiveMP.jl b/src/ReactiveMP.jl index c03db1736..ba2ebb9a9 100644 --- a/src/ReactiveMP.jl +++ b/src/ReactiveMP.jl @@ -133,6 +133,10 @@ include("nodes/poisson.jl") include("nodes/addition.jl") include("nodes/subtraction.jl") include("nodes/multiplication.jl") +include("nodes/and.jl") +include("nodes/or.jl") +include("nodes/not.jl") +include("nodes/implication.jl") include("rules/prototypes.jl") diff --git a/src/distributions/contingency.jl b/src/distributions/contingency.jl index 5deb9ea8a..26eac7466 100644 --- a/src/distributions/contingency.jl +++ b/src/distributions/contingency.jl @@ -36,6 +36,9 @@ contingency_matrix(distribution::Contingency) = distribution.p vague(::Type{<:Contingency}, dims::Int) = Contingency(ones(dims, dims) ./ abs2(dims)) +convert_eltype(::Type{Contingency}, ::Type{T}, distribution::Contingency{R}) where {T <: Real, R <: Real} = + Contingency(convert(AbstractArray{T}, contingency_matrix(distribution))) + function entropy(distribution::Contingency) P = contingency_matrix(distribution) return -mapreduce((p) -> p * clamplog(p), +, P) diff --git a/src/nodes/and.jl b/src/nodes/and.jl new file mode 100644 index 000000000..84e4a6412 --- /dev/null +++ b/src/nodes/and.jl @@ -0,0 +1,13 @@ +export AND + +""" +The AND node implements the logical AND function (conjunction), described by the following truth table: +| in1 | in2 | out | +| 0 | 0 | 0 | +| 0 | 1 | 0 | +| 1 | 0 | 0 | +| 1 | 1 | 1 | +""" +struct AND end + +@node AND Deterministic [out, in1, in2] diff --git a/src/nodes/implication.jl b/src/nodes/implication.jl new file mode 100644 index 000000000..637e0327f --- /dev/null +++ b/src/nodes/implication.jl @@ -0,0 +1,13 @@ +export IMPLY + +""" +The IMPLY node implements the logical implication function, described by the following truth table: +| in1 | in2 | out | +| 0 | 0 | 1 | +| 0 | 1 | 1 | +| 1 | 0 | 0 | +| 1 | 1 | 1 | +""" +struct IMPLY end + +@node IMPLY Deterministic [out, in1, in2] diff --git a/src/nodes/not.jl b/src/nodes/not.jl new file mode 100644 index 000000000..a877384d1 --- /dev/null +++ b/src/nodes/not.jl @@ -0,0 +1,11 @@ +export NOT + +""" +The NOT node implements the logical negation function, described by the following truth table: +| in | out | +| 0 | 1 | +| 1 | 0 | +""" +struct NOT end + +@node NOT Deterministic [out, in] diff --git a/src/nodes/or.jl b/src/nodes/or.jl new file mode 100644 index 000000000..f802fc508 --- /dev/null +++ b/src/nodes/or.jl @@ -0,0 +1,13 @@ +export OR + +""" +The OR node implements the logical OR function (disjunction), described by the following truth 
table: +| in1 | in2 | out | +| 0 | 0 | 0 | +| 0 | 1 | 1 | +| 1 | 0 | 1 | +| 1 | 1 | 1 | +""" +struct OR end + +@node OR Deterministic [out, in1, in2] diff --git a/src/rules/and/in1.jl b/src/rules/and/in1.jl new file mode 100644 index 000000000..ee0bb32fa --- /dev/null +++ b/src/rules/and/in1.jl @@ -0,0 +1,8 @@ +@rule AND(:in1, Marginalisation) ( + m_out::Bernoulli, + m_in2::Bernoulli +) = begin + pout, pin2 = mean(m_out), mean(m_in2) + + return Bernoulli((1 - pout - pin2 + 2 * pout * pin2) / (2 - 2 * pout - pin2 + 2 * pout * pin2)) +end diff --git a/src/rules/and/in2.jl b/src/rules/and/in2.jl new file mode 100644 index 000000000..b675e3b89 --- /dev/null +++ b/src/rules/and/in2.jl @@ -0,0 +1,3 @@ +@rule AND(:in2, Marginalisation) (m_out::Bernoulli, m_in1::Bernoulli, meta::Any) = begin + return @call_rule AND(:in1, Marginalisation) (m_out = m_out, m_in2 = m_in1, meta = meta) +end diff --git a/src/rules/and/marginals.jl b/src/rules/and/marginals.jl new file mode 100644 index 000000000..52d061897 --- /dev/null +++ b/src/rules/and/marginals.jl @@ -0,0 +1,8 @@ +@marginalrule AND(:in1_in2) ( + m_out::Bernoulli, + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2, pout = mean(m_in1), mean(m_in2), mean(m_out) + return Contingency([(1-pin1)*(1-pin2)*(1-pout) (1-pin1)*pin2*(1-pout); pin1*(1-pin2)*(1-pout) pin1*pin2*pout]) +end diff --git a/src/rules/and/out.jl b/src/rules/and/out.jl new file mode 100644 index 000000000..9dd186b5f --- /dev/null +++ b/src/rules/and/out.jl @@ -0,0 +1,8 @@ +@rule AND(:out, Marginalisation) ( + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2 = mean(m_in1), mean(m_in2) + + return Bernoulli(pin1 * pin2) +end diff --git a/src/rules/implication/in1.jl b/src/rules/implication/in1.jl new file mode 100644 index 000000000..55161f94a --- /dev/null +++ b/src/rules/implication/in1.jl @@ -0,0 +1,8 @@ +@rule IMPLY(:in1, Marginalisation) ( + m_out::Bernoulli, + m_in2::Bernoulli +) = begin + pout, pin2 = mean(m_out), mean(m_in2) + + return Bernoulli((1 - pout - pin2 + 2 * pout * pin2) / (1 - pin2 + 2 * pout * pin2)) +end diff --git a/src/rules/implication/in2.jl b/src/rules/implication/in2.jl new file mode 100644 index 000000000..c1e5c113b --- /dev/null +++ b/src/rules/implication/in2.jl @@ -0,0 +1,8 @@ +@rule IMPLY(:in2, Marginalisation) ( + m_out::Bernoulli, + m_in1::Bernoulli +) = begin + pout, pin1 = mean(m_out), mean(m_in1) + + return Bernoulli((pout) / (2 * pout + pin1 - 2 * pout * pin1)) +end diff --git a/src/rules/implication/marginals.jl b/src/rules/implication/marginals.jl new file mode 100644 index 000000000..17f329a54 --- /dev/null +++ b/src/rules/implication/marginals.jl @@ -0,0 +1,8 @@ +@marginalrule IMPLY(:in1_in2) ( + m_out::Bernoulli, + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2, pout = mean(m_in1), mean(m_in2), mean(m_out) + return Contingency([(1-pin1)*pout*(1-pin2) (1-pin1)*pin2*pout; pin1*(1-pin2)*(1-pout) pin1*pin2*pout]) +end diff --git a/src/rules/implication/out.jl b/src/rules/implication/out.jl new file mode 100644 index 000000000..9c45e1570 --- /dev/null +++ b/src/rules/implication/out.jl @@ -0,0 +1,8 @@ +@rule IMPLY(:out, Marginalisation) ( + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2 = mean(m_in1), mean(m_in2) + + return Bernoulli(1 - pin1 + pin1 * pin2) +end diff --git a/src/rules/not/in.jl b/src/rules/not/in.jl new file mode 100644 index 000000000..8f4ef357d --- /dev/null +++ b/src/rules/not/in.jl @@ -0,0 +1 @@ +@rule NOT(:in, Marginalisation) (m_out::Bernoulli,) = 
Bernoulli(1 - mean(m_out)) diff --git a/src/rules/not/marginals.jl b/src/rules/not/marginals.jl new file mode 100644 index 000000000..968357dc6 --- /dev/null +++ b/src/rules/not/marginals.jl @@ -0,0 +1,7 @@ +@marginalrule NOT(:in) ( + m_out::Bernoulli, + m_in::Bernoulli +) = begin + pin, pout = mean(m_in), mean(m_out) + return Bernoulli(pin * (1 - pout) / (pin * (1 - pout) + pout * (1 - pin))) +end diff --git a/src/rules/not/out.jl b/src/rules/not/out.jl new file mode 100644 index 000000000..a780cc79d --- /dev/null +++ b/src/rules/not/out.jl @@ -0,0 +1 @@ +@rule NOT(:out, Marginalisation) (m_in::Bernoulli,) = Bernoulli(1 - mean(m_in)) diff --git a/src/rules/or/in1.jl b/src/rules/or/in1.jl new file mode 100644 index 000000000..a993a0f9e --- /dev/null +++ b/src/rules/or/in1.jl @@ -0,0 +1,7 @@ +@rule OR(:in1, Marginalisation) ( + m_out::Bernoulli, + m_in2::Bernoulli +) = begin + pin2, pout = mean(m_in2), mean(m_out) + return Bernoulli(pout / (1 - pin2 + 2 * pin2 * pout)) +end diff --git a/src/rules/or/in2.jl b/src/rules/or/in2.jl new file mode 100644 index 000000000..af6337983 --- /dev/null +++ b/src/rules/or/in2.jl @@ -0,0 +1,3 @@ +@rule OR(:in2, Marginalisation) (m_out::Bernoulli, m_in1::Bernoulli, meta::Any) = begin + return @call_rule OR(:in1, Marginalisation) (m_out = m_out, m_in2 = m_in1, meta = meta) +end diff --git a/src/rules/or/marginals.jl b/src/rules/or/marginals.jl new file mode 100644 index 000000000..a4cc56147 --- /dev/null +++ b/src/rules/or/marginals.jl @@ -0,0 +1,8 @@ +@marginalrule OR(:in1_in2) ( + m_out::Bernoulli, + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2, pout = mean(m_in1), mean(m_in2), mean(m_out) + return Contingency([(1-pin1)*(1-pin2)*(1-pout) (1-pin1)*pin2*pout; pin1*(1-pin2)*pout pin1*pin2*pout]) +end diff --git a/src/rules/or/out.jl b/src/rules/or/out.jl new file mode 100644 index 000000000..ef4ff9fd4 --- /dev/null +++ b/src/rules/or/out.jl @@ -0,0 +1,8 @@ +@rule OR(:out, Marginalisation) ( + m_in1::Bernoulli, + m_in2::Bernoulli +) = begin + pin1, pin2 = mean(m_in1), mean(m_in2) + + return Bernoulli(pin1 + pin2 - pin1 * pin2) +end diff --git a/src/rules/prototypes.jl b/src/rules/prototypes.jl index 8dd2a9828..4f5cf0d7c 100644 --- a/src/rules/prototypes.jl +++ b/src/rules/prototypes.jl @@ -127,3 +127,22 @@ include("bifm_helper/out.jl") include("poisson/l.jl") include("poisson/marginals.jl") include("poisson/out.jl") + +include("or/in1.jl") +include("or/in2.jl") +include("or/out.jl") +include("or/marginals.jl") + +include("not/in.jl") +include("not/out.jl") +include("not/marginals.jl") + +include("and/in1.jl") +include("and/in2.jl") +include("and/out.jl") +include("and/marginals.jl") + +include("implication/in1.jl") +include("implication/in2.jl") +include("implication/out.jl") +include("implication/marginals.jl") diff --git a/test/nodes/test_and.jl b/test/nodes/test_and.jl new file mode 100644 index 000000000..986c6ace5 --- /dev/null +++ b/test/nodes/test_and.jl @@ -0,0 +1,20 @@ +module AndNodeTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "AndNode" begin + @testset "Creation" begin + node = make_node(AND) + + @test functionalform(node) === AND + @test sdtype(node) === Deterministic() + @test name.(interfaces(node)) === (:out, :in1, :in2) + @test factorisation(node) === ((1, 2, 3),) + @test localmarginalnames(node) === (:out_in1_in2,) + @test metadata(node) === nothing + end +end +end diff --git a/test/nodes/test_implication.jl b/test/nodes/test_implication.jl new file mode 100644 
index 000000000..3541aa7a3 --- /dev/null +++ b/test/nodes/test_implication.jl @@ -0,0 +1,20 @@ +module ImplicationNodeTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "ImplicationNode" begin + @testset "Creation" begin + node = make_node(IMPLY) + + @test functionalform(node) === IMPLY + @test sdtype(node) === Deterministic() + @test name.(interfaces(node)) === (:out, :in1, :in2) + @test factorisation(node) === ((1, 2, 3),) + @test localmarginalnames(node) === (:out_in1_in2,) + @test metadata(node) === nothing + end +end +end diff --git a/test/nodes/test_not.jl b/test/nodes/test_not.jl new file mode 100644 index 000000000..c2d5fa071 --- /dev/null +++ b/test/nodes/test_not.jl @@ -0,0 +1,19 @@ +module NotNodeTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "NotNode" begin + @testset "Creation" begin + node = make_node(NOT) + + @test functionalform(node) === NOT + @test sdtype(node) === Deterministic() + @test name.(interfaces(node)) === (:out, :in) + @test factorisation(node) === ((1, 2),) + @test metadata(node) === nothing + end +end +end diff --git a/test/nodes/test_or.jl b/test/nodes/test_or.jl new file mode 100644 index 000000000..96b99cf3d --- /dev/null +++ b/test/nodes/test_or.jl @@ -0,0 +1,20 @@ +module OrNodeTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "OrNode" begin + @testset "Creation" begin + node = make_node(OR) + + @test functionalform(node) === OR + @test sdtype(node) === Deterministic() + @test name.(interfaces(node)) === (:out, :in1, :in2) + @test factorisation(node) === ((1, 2, 3),) + @test localmarginalnames(node) === (:out_in1_in2,) + @test metadata(node) === nothing + end +end +end diff --git a/test/rules/and/test_in1.jl b/test/rules/and/test_in1.jl new file mode 100644 index 000000000..7e5b4641d --- /dev/null +++ b/test/rules/and/test_in1.jl @@ -0,0 +1,22 @@ +module RulesANDIn1Test + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:AND:in1" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] AND(:in1, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.5 / 0.9) + ), + ( + input = (m_out = Bernoulli(0.3), m_in2 = Bernoulli(0.4)), + output = Bernoulli(0.54 / 1.24) + ) + ] + end +end +end diff --git a/test/rules/and/test_in2.jl b/test/rules/and/test_in2.jl new file mode 100644 index 000000000..55f926793 --- /dev/null +++ b/test/rules/and/test_in2.jl @@ -0,0 +1,22 @@ +module RulesANDIn2Test + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:AND:in2" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in1::Bernoulli)" begin + @test_rules [with_float_conversions = true] AND(:in2, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in1 = Bernoulli(0.5)), + output = Bernoulli(0.5 / 0.9) + ), + ( + input = (m_out = Bernoulli(0.3), m_in1 = Bernoulli(0.4)), + output = Bernoulli(0.54 / 1.24) + ) + ] + end +end +end diff --git a/test/rules/and/test_marginals.jl b/test/rules/and/test_marginals.jl new file mode 100644 index 000000000..1d810b918 --- /dev/null +++ b/test/rules/and/test_marginals.jl @@ -0,0 +1,32 @@ +module RulesANDMarginalsTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules, @test_marginalrules + +@testset "rules:AND:marginals" begin + @testset ":in1_in2 
(m_out::Bernoulli, m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_marginalrules [with_float_conversions = true] AND(:in1_in2) [ + ( + input = ( + m_out = Bernoulli(0.5), + m_in1 = Bernoulli(0.5), + m_in2 = Bernoulli(0.5) + ), + output = (Contingency([0.5^3 0.5^3; 0.5^3 0.5^3]) + ) + ), + ( + input = ( + m_out = Bernoulli(0.2), + m_in1 = Bernoulli(0.8), + m_in2 = Bernoulli(0.4) + ), + output = (Contingency([0.2*0.8*0.6 0.2*0.8*0.4; 0.8*0.8*0.6 0.2*0.8*0.4]) + ) + ) + ] + end +end +end diff --git a/test/rules/and/test_out.jl b/test/rules/and/test_out.jl new file mode 100644 index 000000000..6c3f12ad3 --- /dev/null +++ b/test/rules/and/test_out.jl @@ -0,0 +1,22 @@ +module RulesANDOutTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:AND:out" begin + @testset "Belief Propagation: (m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] AND(:out, Marginalisation) [ + ( + input = (m_in1 = Bernoulli(0.3), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.15) + ), + ( + input = (m_in1 = Bernoulli(0.4), m_in2 = Bernoulli(0.3)), + output = Bernoulli(0.12) + ) + ] + end +end +end diff --git a/test/rules/implication/test_in1.jl b/test/rules/implication/test_in1.jl new file mode 100644 index 000000000..085a133b2 --- /dev/null +++ b/test/rules/implication/test_in1.jl @@ -0,0 +1,22 @@ +module RulesImplicationIn1Test + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:IMPLY:in1" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] IMPLY(:in1, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.5 / 1.1) + ), + ( + input = (m_out = Bernoulli(0.2), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.5 / 0.7) + ) + ] + end +end +end diff --git a/test/rules/implication/test_in2.jl b/test/rules/implication/test_in2.jl new file mode 100644 index 000000000..ef5bd0a19 --- /dev/null +++ b/test/rules/implication/test_in2.jl @@ -0,0 +1,22 @@ +module RulesImplicationIn2Test + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:IMPLY:in2" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in1::Bernoulli)" begin + @test_rules [with_float_conversions = true] IMPLY(:in2, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in1 = Bernoulli(0.5)), + output = Bernoulli(0.6 / 1.1) + ), + ( + input = (m_out = Bernoulli(0.3), m_in1 = Bernoulli(0.4)), + output = Bernoulli(0.3 / 0.76) + ) + ] + end +end +end diff --git a/test/rules/implication/test_marginals.jl b/test/rules/implication/test_marginals.jl new file mode 100644 index 000000000..b67265fe0 --- /dev/null +++ b/test/rules/implication/test_marginals.jl @@ -0,0 +1,32 @@ +module RulesImplicationMarginalsTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules, @test_marginalrules + +@testset "rules:IMPLY:marginals" begin + @testset ":in1_in2 (m_out::Bernoulli, m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_marginalrules [with_float_conversions = true] IMPLY(:in1_in2) [ + ( + input = ( + m_out = Bernoulli(0.5), + m_in1 = Bernoulli(0.5), + m_in2 = Bernoulli(0.5) + ), + output = (Contingency([0.5^3 0.5^3; 0.5^3 0.5^3]) + ) + ), + ( + input = ( + m_out = Bernoulli(0.2), + m_in1 = Bernoulli(0.8), + m_in2 = Bernoulli(0.4) + ), + output = (Contingency([0.2*0.2*0.6 0.2*0.2*0.4; 0.8*0.8*0.6 0.2*0.8*0.4]) + ) + ) + ] + end +end +end diff --git 
a/test/rules/implication/test_out.jl b/test/rules/implication/test_out.jl new file mode 100644 index 000000000..dd4a68765 --- /dev/null +++ b/test/rules/implication/test_out.jl @@ -0,0 +1,22 @@ +module RulesImplicationOutTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:IMPLY:out" begin + @testset "Belief Propagation: (m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] IMPLY(:out, Marginalisation) [ + ( + input = (m_in1 = Bernoulli(0.3), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.85) + ), + ( + input = (m_in1 = Bernoulli(0.4), m_in2 = Bernoulli(0.7)), + output = Bernoulli(0.88) + ) + ] + end +end +end diff --git a/test/rules/not/test_in.jl b/test/rules/not/test_in.jl new file mode 100644 index 000000000..ba6d6f7df --- /dev/null +++ b/test/rules/not/test_in.jl @@ -0,0 +1,22 @@ +module RulesNOTInTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:NOT:in" begin + @testset "Belief Propagation: (m_in::Bernoulli)" begin + @test_rules [with_float_conversions = true] NOT(:in, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6),), + output = Bernoulli(0.4) + ), + ( + input = (m_out = Bernoulli(0.3),), + output = Bernoulli(0.7) + ) + ] + end +end +end diff --git a/test/rules/not/test_marginals.jl b/test/rules/not/test_marginals.jl new file mode 100644 index 000000000..d6681e781 --- /dev/null +++ b/test/rules/not/test_marginals.jl @@ -0,0 +1,28 @@ +module RulesNOTMarginalsTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules, @test_marginalrules + +@testset "rules:NOT:marginals" begin + @testset ":in (m_out::Bernoulli, m_in::Bernoulli)" begin + @test_marginalrules [with_float_conversions = true] NOT(:in) [ + ( + input = ( + m_out = Bernoulli(0.4), + m_in = Bernoulli(0.5) + ), + output = Bernoulli(0.6) + ), + ( + input = ( + m_out = Bernoulli(0.2), + m_in = Bernoulli(0.8) + ), + output = Bernoulli(0.64 / (0.68)) + ) + ] + end +end +end diff --git a/test/rules/not/test_out.jl b/test/rules/not/test_out.jl new file mode 100644 index 000000000..1f2b11e66 --- /dev/null +++ b/test/rules/not/test_out.jl @@ -0,0 +1,21 @@ +module RulesNOTOutTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:NOT:out" begin + @testset "Belief Propagation: (m_in::Bernoulli)" begin + @test_rules [with_float_conversions = true] NOT(:out, Marginalisation) [ + ( + input = (m_in = Bernoulli(0.5),), + output = Bernoulli(0.5) + ), ( + input = (m_in = Bernoulli(0.3),), + output = Bernoulli(0.7) + ) + ] + end +end +end diff --git a/test/rules/or/test_in1.jl b/test/rules/or/test_in1.jl new file mode 100644 index 000000000..b0a283b67 --- /dev/null +++ b/test/rules/or/test_in1.jl @@ -0,0 +1,22 @@ +module RulesORIn1Test + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:OR:in1" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] OR(:in1, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.6 / 1.1) + ), + ( + input = (m_out = Bernoulli(0.3), m_in2 = Bernoulli(0.4)), + output = Bernoulli(0.3 / 0.84) + ) + ] + end +end +end diff --git a/test/rules/or/test_in2.jl b/test/rules/or/test_in2.jl new file mode 100644 index 000000000..594789f69 --- /dev/null +++ b/test/rules/or/test_in2.jl @@ -0,0 +1,22 @@ +module RulesORIn2Test + +using Test +using 
ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:OR:in2" begin + @testset "Belief Propagation: (m_out::Bernoulli, m_in1::Bernoulli)" begin + @test_rules [with_float_conversions = true] OR(:in2, Marginalisation) [ + ( + input = (m_out = Bernoulli(0.6), m_in1 = Bernoulli(0.5)), + output = Bernoulli(0.6 / 1.1) + ), + ( + input = (m_out = Bernoulli(0.3), m_in1 = Bernoulli(0.4)), + output = Bernoulli(0.3 / 0.84) + ) + ] + end +end +end diff --git a/test/rules/or/test_marginals.jl b/test/rules/or/test_marginals.jl new file mode 100644 index 000000000..18314480c --- /dev/null +++ b/test/rules/or/test_marginals.jl @@ -0,0 +1,32 @@ +module RulesORMarginalsTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules, @test_marginalrules + +@testset "rules:OR:marginals" begin + @testset ":in1_in2 (m_out::Bernoulli, m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_marginalrules [with_float_conversions = true] OR(:in1_in2) [ + ( + input = ( + m_out = Bernoulli(0.5), + m_in1 = Bernoulli(0.5), + m_in2 = Bernoulli(0.5) + ), + output = (Contingency([0.5^3 0.5^3; 0.5^3 0.5^3]) + ) + ), + ( + input = ( + m_out = Bernoulli(0.2), + m_in1 = Bernoulli(0.8), + m_in2 = Bernoulli(0.4) + ), + output = (Contingency([0.8*0.2*0.6 0.2*0.4*0.2; 0.8*0.2*0.6 0.2*0.8*0.4]) + ) + ) + ] + end +end +end diff --git a/test/rules/or/test_out.jl b/test/rules/or/test_out.jl new file mode 100644 index 000000000..8ad48d71a --- /dev/null +++ b/test/rules/or/test_out.jl @@ -0,0 +1,21 @@ +module RulesOROutTest + +using Test +using ReactiveMP +using Random +import ReactiveMP: @test_rules + +@testset "rules:OR:out" begin + @testset "Belief Propagation: (m_in1::Bernoulli, m_in2::Bernoulli)" begin + @test_rules [with_float_conversions = true] OR(:out, Marginalisation) [ + ( + input = (m_in1 = Bernoulli(0.5), m_in2 = Bernoulli(0.5)), + output = Bernoulli(0.75) + ), ( + input = (m_in1 = Bernoulli(0.3), m_in2 = Bernoulli(0.4)), + output = Bernoulli(0.58) + ) + ] + end +end +end diff --git a/test/runtests.jl b/test/runtests.jl index 5cc238f0d..89cd1ff4a 100644 --- a/test/runtests.jl +++ b/test/runtests.jl @@ -150,6 +150,10 @@ end addtests("nodes/test_mv_normal_mean_covariance.jl") addtests("nodes/test_poisson.jl") addtests("nodes/test_wishart_inverse.jl") + addtests("nodes/test_or.jl") + addtests("nodes/test_not.jl") + addtests("nodes/test_and.jl") + addtests("nodes/test_implication.jl") addtests("rules/uniform/test_out.jl") @@ -219,6 +223,25 @@ end addtests("rules/poisson/test_marginals.jl") addtests("rules/poisson/test_out.jl") + addtests("rules/or/test_out.jl") + addtests("rules/or/test_in1.jl") + addtests("rules/or/test_in2.jl") + addtests("rules/or/test_marginals.jl") + + addtests("rules/not/test_out.jl") + addtests("rules/not/test_in.jl") + addtests("rules/not/test_marginals.jl") + + addtests("rules/and/test_out.jl") + addtests("rules/and/test_in1.jl") + addtests("rules/and/test_in2.jl") + addtests("rules/and/test_marginals.jl") + + addtests("rules/implication/test_out.jl") + addtests("rules/implication/test_in1.jl") + addtests("rules/implication/test_in2.jl") + addtests("rules/implication/test_marginals.jl") + addtests("models/test_lgssm.jl") addtests("models/test_hgf.jl") addtests("models/test_ar.jl")
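
The closed-form Bernoulli messages implemented above can be cross-checked by brute-force enumeration of the Boolean variables. Below is a standalone sketch in plain Julia for the OR node; the helper names `lik`, `or_out`, and `or_in1` are illustrative only, not ReactiveMP API. The same enumeration pattern applies to the AND, NOT, and IMPLY rules and reproduces the expected values used in the tests above.

```julia
# Brute-force verification of the closed-form sum-product messages for OR.

# Value of a Bernoulli message with parameter p at a Boolean point
lik(p, b) = b ? p : 1 - p

# Forward message to `out`: p(out = 1) = Σ_{x,y} p(x) p(y) 1[x ∨ y]
or_out(p1, p2) = sum(lik(p1, x) * lik(p2, y) * (x | y)
                     for x in (false, true), y in (false, true))

# Backward message to `in1`, marginalising over `in2` and `out`
function or_in1(pout, p2)
    μ(x) = sum(lik(p2, y) * lik(pout, x | y) for y in (false, true))
    return μ(true) / (μ(true) + μ(false))
end

# Values agree with test/rules/or/test_out.jl and test/rules/or/test_in1.jl
@assert or_out(0.3, 0.4) ≈ 0.3 + 0.4 - 0.3 * 0.4   # 0.58
@assert or_in1(0.6, 0.5) ≈ 0.6 / 1.1
```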