Support of message passing for Boolean functions #170
Merged
31 commits
8b4b755
Add OR Node
Sepideh-Adamiat 082065c
Add rules for in1 and in2
Sepideh-Adamiat 74ddf64
add And and Implication node
Chengfeng-Jia b7548ec
add Implication node
Chengfeng-Jia 37e4706
Update in2
Sepideh-Adamiat 6446033
Add Not node
Sepideh-Adamiat 7b01a64
Fix the test for the nodes
Sepideh-Adamiat 082a5a8
add test for AND_IMPL
Chengfeng-Jia eec109f
add test for AND_IMPL
Chengfeng-Jia a3ed124
Add marginal rules
albertpod ed56cba
Make format
albertpod fca65ef
add more test exampls
Chengfeng-Jia 547df59
Update rules
albertpod b14b302
Merge branches
albertpod 03d9233
Make format
albertpod 52025ae
Add demo
albertpod b07e976
Merge branch 'master' into dev_logic
albertpod 5d58184
Update
albertpod 6f68b7e
Fix tests
albertpod b142440
Rename IMPL to IMPLY
albertpod d91613d
test: fix contingency matrix eltype conversion
bvdmitri 19449ff
docs: add new notebook to the documentation examples
bvdmitri 08d406d
update ordering
bartvanerp 6d07229
update notebook output
bvdmitri 1ef5b94
Merge branch 'dev_logic' of github.com:biaslab/ReactiveMP.jl into dev…
bvdmitri 581dd8b
Update runtests.jl
bartvanerp afa9870
Add descriptions
albertpod 8a435ad
Make format
albertpod 8ec8000
Fix comments
albertpod 4bfdba3
Update docs
albertpod 9cda713
Update examples
albertpod
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,218 @@ | ||
| { | ||
| "cells": [ | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Assessing People’s Skills\n", | ||
| "\n", | ||
| "This demo shows the capabilities of ReactiveMP.jl to perform inference in models composed of Bernoulli random variables.\n", | ||
| "\n", | ||
| "The demo is inspired by the example from Chapter 2 of Bishop's Model-Based Machine Learning book.\n", | ||
| "We are going to perform exact inference to assess the skills of a student given the results of the test." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "Let us assume that our imaginary test is composed of three questions, and each of these questions is associated with test results $r$, where $\\{r \\in \\mathbb{R}, 0 < r < 1\\}$\n", | ||
| "\n", | ||
| "The result of the first question will solely depend on the student's attendance. For example, if the student attends the lectures, he will most certainly answer the first question.\n", | ||
| "The result of the second question will depend on a specific skill $s_2$. However, if the student has attended the lectures, he would still have a good chance of answering the second question.\n", | ||
| "We will model this relationship through disjunction or logical $OR$.\n", | ||
| "The third question is more difficult to answer, i.e., the student needs to have a particular skill $s_3$ __and__ must have either good attendance or skill $s_2$.\n", | ||
| "Hence, to model this relationship between skills and the third question, we will use conjunction or logical $AND$.\n", | ||
| "\n", | ||
| "For the sake of the example, we will replace attendance with laziness. The convention is that if a person is not lazy, he attends lectures.\n", | ||
| "This way, the first question can be answered if the student is not lazy. We will use the $NOT$ function to represent this relationship." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "Let us define the generative model:\n", | ||
| "$$p(l, s_2, s_3, r_1, r_2, r_3)=p(l)p(s_2)p(s_3)p(r_1|f_1(l))p(r_2|f_2(l, s_2))p(r_3|f_3(l, s_2, s_3))$$" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "The factors $p(l), p(s_2), p(s_3)$ represent Bernoulli prior distributions. \n", | ||
| "\n", | ||
| "$f_1(l) = NOT(l)$ where $NOT(X) \\triangleq \\overline{X}$, \n", | ||
| "\n", | ||
| "$f_2(l, s_2) = OR(NOT(l), s_2)$ where $OR(X, Y) \\triangleq X \\vee Y$, \n", | ||
| "\n", | ||
| "$f_3(l, s_2, s_3) = AND(OR(NOT(l), s_2), s_3)$ where $AND(X, Y) \\triangleq X \\land Y$\n", | ||
| "\n", | ||
| "An attentive reader may notice that $f_2(l, s_2)$ can be rewritten as $IMPLY(l, s_2)$, i.e., $l\\implies s_2$ " | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "Similar to the example from the Model-Based Machine Learning book, our observations are noisy. It means that the likelihood functions should map $\\{0, 1\\}$ to a real value $r \\in (0, 1)$, denoting the result of the test. We can associate $r=0$ and $r=1.0$ with $0\\%$ and $100\\%$ correctness of the test, respectively." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "One way of specifying the likelihood is $$p(r_i|f) = \\begin{cases} r_i & \\text{if }f_i = 1 \\\\\n", | ||
| "1-r_i & \\text{if }f_i=0 \\end{cases}$$\n", | ||
| "or $$p(r_i|f)=r_if_i+(1-r_i)(1-f_i)$$\n", | ||
| "\n", | ||
| "It can be shown that given the observation $r_i$, the backward message from the node $p(r_i|f_i)$ will be a Bernoulli distribution with parameter $r_i$, i.e. $\\overleftarrow{\\mu}({f_i})\\propto\\mathrm{Ber}(r_i)$. \n", | ||
| "If we observe $r_i=0.9$, it is more \"likely\" that the variable $f_i=1$." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "Following Bishop, we will call this node function __AddNoise__." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 1, | ||
| "metadata": {}, | ||
| "outputs": [ | ||
| { | ||
| "name": "stderr", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "┌ Info: Precompiling GraphPPL [b3f8163a-e979-4e85-b43e-1f63d8c8b42c]\n", | ||
| "└ @ Base loading.jl:1423\n", | ||
| "┌ Info: Precompiling ReactiveMP [a194aa59-28ba-4574-a09c-4a745416d6e3]\n", | ||
| "└ @ Base loading.jl:1423\n" | ||
| ] | ||
| } | ||
| ], | ||
| "source": [ | ||
| "using Rocket, GraphPPL, ReactiveMP, Distributions, Random" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 2, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "# Create AddNoise node\n", | ||
| "struct AddNoise end\n", | ||
| "\n", | ||
| "@node AddNoise Stochastic [out, in]" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 3, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "# Adding update rule for AddNoise node\n", | ||
| "@rule AddNoise(:in, Marginalisation) (q_out::PointMass,) = begin \n", | ||
| " return Bernoulli(mean(q_out))\n", | ||
| "end" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 4, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "# GraphPPL.jl exports the `@model` macro for model specification\n", | ||
| "# It accepts a regular Julia function and builds an FFG under the hood\n", | ||
| "@model function skill_model()\n", | ||
| "\n", | ||
| " res = datavar(Float64, 3)\n", | ||
| "\n", | ||
| " laziness ~ Bernoulli(0.5)\n", | ||
| " skill2 ~ Bernoulli(0.5)\n", | ||
| " skill3 ~ Bernoulli(0.5)\n", | ||
| "\n", | ||
| " test2 ~ IMPLY(laziness, skill2)\n", | ||
| " test3 ~ AND(test2, skill3)\n", | ||
| " \n", | ||
| " res[1] ~ AddNoise(NOT(laziness))\n", | ||
| " res[2] ~ AddNoise(test2)\n", | ||
| " res[3] ~ AddNoise(test3)\n", | ||
| "\n", | ||
| "end" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "Let us assume that a student scored $70\\%$ and $95\\%$ on the first and second tests, respectively, but got only $30\\%$ on the third one." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 12, | ||
| "metadata": {}, | ||
| "outputs": [ | ||
| { | ||
| "data": { | ||
| "text/plain": [ | ||
| "Inference results:\n", | ||
| "-----------------------------------------\n", | ||
| "skill3 = Bernoulli{Float64}[Bernoulli{Float64}(p=0.3025672371638141)]\n", | ||
| "skill2 = Bernoulli{Float64}[Bernoulli{Float64}(p=0.5806845965770171)]\n", | ||
| "laziness = Bernoulli{Float64}[Bernoulli{Float64}(p=0.18704156479217607)]\n" | ||
| ] | ||
| }, | ||
| "execution_count": 12, | ||
| "metadata": {}, | ||
| "output_type": "execute_result" | ||
| } | ||
| ], | ||
| "source": [ | ||
| "test_results = [0.7, 0.95, 0.3]\n", | ||
| "\n", | ||
| "inference_result = inference(\n", | ||
| " model = Model(skill_model),\n", | ||
| " data = (res = test_results, )\n", | ||
| ")" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "The results make sense. The student answered the first question correctly, which immediately gives us reason to believe that he is not lazy. He answered the second question pretty well, but this does not necessarily mean that he had skill #2 (attendance, i.e., lack of laziness, could also explain it). To answer the third question, he needed to answer the second and have an additional skill (#3). Unfortunately, his answer was weak, so our confidence in skill #3 dropped." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [] | ||
| } | ||
| ], | ||
| "metadata": { | ||
| "@webio": { | ||
| "lastCommId": null, | ||
| "lastKernelId": null | ||
| }, | ||
| "kernelspec": { | ||
| "display_name": "Julia 1.7.2", | ||
| "language": "julia", | ||
| "name": "julia-1.7" | ||
| }, | ||
| "language_info": { | ||
| "file_extension": ".jl", | ||
| "mimetype": "application/julia", | ||
| "name": "julia", | ||
| "version": "1.7.3" | ||
| } | ||
| }, | ||
| "nbformat": 4, | ||
| "nbformat_minor": 4 | ||
| } |
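Since the model has only three binary latent variables, the exact posteriors reported above can be cross-checked by brute-force enumeration, independently of ReactiveMP's message passing. The following Python sketch is illustrative only (it is not part of this PR, and all names in it are made up); it enumerates the $2^3$ configurations under the same priors and likelihood:

```python
from itertools import product

# Boolean building blocks used by the model
NOT = lambda x: 1 - x
IMPLY = lambda x, y: 1 if (x == 0 or y == 1) else 0
AND = lambda x, y: x & y

# Likelihood from the notebook: p(r | f) = r*f + (1-r)*(1-f)
def add_noise(r, f):
    return r * f + (1 - r) * (1 - f)

def posteriors(test_results):
    r1, r2, r3 = test_results
    weights = {}
    # Enumerate all 2^3 configurations of (laziness, skill2, skill3);
    # the uniform Bernoulli(0.5) priors cancel after normalisation.
    for l, s2, s3 in product((0, 1), repeat=3):
        f1 = NOT(l)
        f2 = IMPLY(l, s2)
        f3 = AND(f2, s3)
        weights[(l, s2, s3)] = (add_noise(r1, f1) *
                                add_noise(r2, f2) *
                                add_noise(r3, f3))
    Z = sum(weights.values())
    marginal = lambda i: sum(w for k, w in weights.items() if k[i] == 1) / Z
    return {"laziness": marginal(0), "skill2": marginal(1), "skill3": marginal(2)}

post = posteriors([0.7, 0.95, 0.3])
print(post)  # agrees with the message-passing result, e.g. laziness ≈ 0.187
```

Because inference in this model is exact, the enumerated marginals should match the `inference` output above to numerical precision.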
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,92 @@ | ||
| ## Assessing People’s Skills | ||
|
|
||
| This demo shows the capabilities of ReactiveMP.jl to perform inference in models composed of Bernoulli random variables. | ||
|
|
||
| The demo is inspired by the example from Chapter 2 of Bishop's Model-Based Machine Learning book. | ||
| We are going to perform exact inference to assess the skills of a student given the results of the test. | ||
|
|
||
| Let us assume that our imaginary test is composed of three questions, and each of these questions is associated with test results $r$, where $\{r \in \mathbb{R}, 0 < r < 1\}$ | ||
|
|
||
| The result of the first question will solely depend on the student's attendance. For example, if the student attends the lectures, he will most certainly answer the first question. | ||
| The result of the second question will depend on a specific skill $s_2$. However, if the student has attended the lectures, he would still have a good chance of answering the second question. | ||
| We will model this relationship through disjunction or logical $OR$. | ||
| The third question is more difficult to answer, i.e., the student needs to have a particular skill $s_3$ __and__ must have either good attendance or skill $s_2$. | ||
| Hence, to model this relationship between skills and the third question, we will use conjunction or logical $AND$. | ||
|
|
||
| For the sake of the example, we will replace attendance with laziness. The convention is that if a person is not lazy, he attends lectures. | ||
| This way, the first question can be answered if the student is not lazy. We will use the $NOT$ function to represent this relationship. | ||
|
|
||
| Let us define the generative model: | ||
| $$p(l, s_2, s_3, r_1, r_2, r_3)=p(l)p(s_2)p(s_3)p(r_1|f_1(l))p(r_2|f_2(l, s_2))p(r_3|f_3(l, s_2, s_3))$$ | ||
|
|
||
| The factors $p(l), p(s_2), p(s_3)$ represent Bernoulli prior distributions. | ||
|
|
||
| $f_1(l) = NOT(l)$ where $NOT(X) \triangleq \overline{X}$, | ||
|
|
||
| $f_2(l, s_2) = OR(NOT(l), s_2)$ where $OR(X, Y) \triangleq X \vee Y$, | ||
|
|
||
| $f_3(l, s_2, s_3) = AND(OR(NOT(l), s_2), s_3)$ where $AND(X, Y) \triangleq X \land Y$ | ||
|
|
||
| An attentive reader may notice that $f_2(l, s_2)$ can be rewritten as $IMPLY(l, s_2)$, i.e., $l\implies s_2$ | ||
|
|
||
| Similar to the example from the Model-Based Machine Learning book, our observations are noisy. It means that the likelihood functions should map $\{0, 1\}$ to a real value $r \in (0, 1)$, denoting the result of the test. We can associate $r=0$ and $r=1.0$ with $0\%$ and $100\%$ correctness of the test, respectively. | ||
|
|
||
| One way of specifying the likelihood is $$p(r_i|f) = \begin{cases} r_i & \text{if }f_i = 1 \\ | ||
| 1-r_i & \text{if }f_i=0 \end{cases}$$ | ||
| or $$p(r_i|f)=r_if_i+(1-r_i)(1-f_i)$$ | ||
|
|
||
| It can be shown that given the observation $r_i$, the backward message from the node $p(r_i|f_i)$ will be a Bernoulli distribution with parameter $r_i$, i.e. $\overleftarrow{\mu}({f_i})\propto\mathrm{Ber}(r_i)$. | ||
| If we observe $r_i=0.9$, it is more "likely" that the variable $f_i=1$. | ||
|
|
||
| Following Bishop, we will call this node function __AddNoise__. | ||
|
|
||
| ```@example skills | ||
| using Rocket, GraphPPL, ReactiveMP, Distributions, Random | ||
| ``` | ||
|
|
||
| ```@example skills | ||
| # Create AddNoise node | ||
| struct AddNoise end | ||
| @node AddNoise Stochastic [out, in] | ||
| ``` | ||
|
|
||
| ```@example skills | ||
| # Adding update rule for AddNoise node | ||
| @rule AddNoise(:in, Marginalisation) (q_out::PointMass,) = begin | ||
| return Bernoulli(mean(q_out)) | ||
| end | ||
| ``` | ||
|
|
||
| ```@example skills | ||
| # GraphPPL.jl exports the `@model` macro for model specification | ||
| # It accepts a regular Julia function and builds an FFG under the hood | ||
| @model function skill_model() | ||
| res = datavar(Float64, 3) | ||
| laziness ~ Bernoulli(0.5) | ||
| skill2 ~ Bernoulli(0.5) | ||
| skill3 ~ Bernoulli(0.5) | ||
| test2 ~ IMPLY(laziness, skill2) | ||
| test3 ~ AND(test2, skill3) | ||
| res[1] ~ AddNoise(NOT(laziness)) | ||
| res[2] ~ AddNoise(test2) | ||
| res[3] ~ AddNoise(test3) | ||
| end | ||
| ``` | ||
|
|
||
| Let us assume that a student scored $70\%$ and $95\%$ on the first and second tests, respectively, but got only $30\%$ on the third one. | ||
|
|
||
| ```@example skills | ||
| test_results = [0.7, 0.95, 0.3] | ||
| inference_result = inference( | ||
| model = Model(skill_model), | ||
| data = (res = test_results, ) | ||
| ) | ||
| ``` | ||
| The results make sense. The student answered the first question correctly, which immediately gives us reason to believe that he is not lazy. He answered the second question pretty well, but this does not necessarily mean that he had skill #2 (attendance, i.e., lack of laziness, could also explain it). To answer the third question, he needed to answer the second and have an additional skill (#3). Unfortunately, his answer was weak, so our confidence in skill #3 dropped. |
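The claim above — that the backward message from the AddNoise factor given an observation $r_i$ is $\mathrm{Ber}(r_i)$ — can be verified directly from the likelihood $p(r|f)=rf+(1-r)(1-f)$. A minimal Python check (illustrative sketch, not the library implementation):

```python
# Backward message from p(r | f) = r*f + (1-r)*(1-f): evaluate the
# likelihood at f = 0 and f = 1, then normalise to a Bernoulli parameter.
def addnoise_backward(r):
    lik = [r * f + (1 - r) * (1 - f) for f in (0, 1)]  # [1-r, r]
    return lik[1] / sum(lik)  # Bernoulli parameter of the message

# Observing r = 0.9 yields the message Ber(0.9): f = 1 is more "likely"
assert abs(addnoise_backward(0.9) - 0.9) < 1e-9
```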
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,13 @@ | ||
| export AND | ||
|
|
||
| """ | ||
| The AND node implements the logical AND function (conjunction), described by the following truth table: | ||
| | in1 in2 | out | | ||
| | 0 0 | 0 | | ||
| | 0 1 | 0 | | ||
| | 1 0 | 0 | | ||
| | 1 1 | 1 | | ||
| """ | ||
| struct AND end | ||
|
|
||
| @node AND Deterministic [out, in1, in2] | ||
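The sum-product update rules that accompany this node elsewhere in the PR follow mechanically from this truth table. As a hedged sketch (Python with illustrative names, not the ReactiveMP implementation), the messages for AND with Bernoulli inputs can be obtained by enumerating the table:

```python
def bernoulli(p, x):
    # Probability mass of a Bernoulli(p) message at x in {0, 1}
    return p if x == 1 else 1 - p

def and_forward(p1, p2):
    # mu(out=1): only the row in1 = in2 = 1 gives out = 1, so this is p1 * p2
    return sum(bernoulli(p1, x) * bernoulli(p2, y)
               for x in (0, 1) for y in (0, 1) if x & y)

def and_backward_in1(p_out, p_in2):
    # mu(in1 = x) ∝ sum_y mu_in2(y) * mu_out(AND(x, y)), then normalise
    score = [sum(bernoulli(p_in2, y) * bernoulli(p_out, x & y) for y in (0, 1))
             for x in (0, 1)]
    return score[1] / (score[0] + score[1])

print(and_forward(0.6, 0.7))       # equals p1 * p2
print(and_backward_in1(0.9, 0.5))  # ≈ 0.833
```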
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,13 @@ | ||
| export IMPLY | ||
|
|
||
| """ | ||
| The IMPLY node implements the logical implication function, described by the following truth table: | ||
| | in1 in2 | out | | ||
| | 0 0 | 1 | | ||
| | 0 1 | 1 | | ||
| | 1 0 | 0 | | ||
| | 1 1 | 1 | | ||
| """ | ||
| struct IMPLY end | ||
|
|
||
| @node IMPLY Deterministic [out, in1, in2] |
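As with AND, the message rules for IMPLY follow from its truth table. For instance, the forward message has the closed form $\mu(out=1) = 1 - p_1(1 - p_2)$, since the only falsifying row is in1 = 1, in2 = 0. A hedged Python check of that identity (illustrative, not the library code):

```python
def bernoulli(p, x):
    # Probability mass of a Bernoulli(p) message at x in {0, 1}
    return p if x == 1 else 1 - p

def imply_forward(p1, p2):
    # mu(out=1): sum over the truth-table rows where IMPLY(in1, in2) = 1,
    # i.e. every row except in1 = 1, in2 = 0
    return sum(bernoulli(p1, x) * bernoulli(p2, y)
               for x in (0, 1) for y in (0, 1)
               if x == 0 or y == 1)

# Enumeration agrees with the closed form 1 - p1 * (1 - p2)
p1, p2 = 0.3, 0.8
assert abs(imply_forward(p1, p2) - (1 - p1 * (1 - p2))) < 1e-9
```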
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,11 @@ | ||
| export NOT | ||
|
|
||
| """ | ||
| The NOT node implements the logical negation function, described by the following truth table: | ||
| | in | out | | ||
| | 0 | 1 | | ||
| | 1 | 0 | | ||
| """ | ||
| struct NOT end | ||
|
|
||
| @node NOT Deterministic [out, in] |
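For NOT, the messages are simplest: a Bernoulli message pushed through the gate just flips its parameter, and since NOT is its own inverse, the same formula serves in both directions. A one-line Python sketch (illustrative):

```python
# Passing Ber(p) through NOT yields Ber(1 - p); the rule is its own
# inverse, so the same formula gives forward and backward messages.
def not_message(p):
    return 1 - p

assert abs(not_message(0.8) - 0.2) < 1e-9
```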