Commit ca90618 ("docs")
1 parent aa129b6

3 files changed: +5 additions, -57 deletions

docs/source/available_structure_learning_algorithms.rst (1 addition, 0 deletions)

@@ -30,6 +30,7 @@ Algorithms
    structure_learning_algorithms/bnlearn_sihitonpc
    structure_learning_algorithms/bnlearn_tabu
    structure_learning_algorithms/causaldag_gsp
+   structure_learning_algorithms/causallearn_ges
    structure_learning_algorithms/causallearn_grasp
    structure_learning_algorithms/corr_thresh
    structure_learning_algorithms/dualpc

docs/source/examples.rst (2 additions, 2 deletions)

@@ -1,7 +1,7 @@
 .. _examples:
 
-Example studies
-########################
+Causal discovery examples
+#################################
 
 .. include:: example_conf.rst
 

docs/source/structure_learning_algorithms/huge_glasso.rst (2 additions, 55 deletions)

@@ -3,7 +3,7 @@
 
 .. meta::
    :title: Graphical lasso
-   :description: Abstract: We consider the problem of estimating the marginal independence structure of a Bayesian network from observational data in the form of an undirected graph called the unconditional dependence graph. We show that unconditional dependence graphs of Bayesian networks correspond to the graphs having equal independence and intersection numbers. Using this observation, a Gröbner basis for a toric ideal associated to unconditional dependence graphs of Bayesian networks is given and then extended by additional binomial relations to connect the space of all such graphs. An MCMC method, called GrUES (Gröbner-based Unconditional Equivalence Search), is implemented based on the resulting moves and applied to synthetic Gaussian data. GrUES recovers the true marginal independence structure via a penalized maximum likelihood or MAP estimate at a higher rate than simple independence tests while also yielding an estimate of the posterior, for which the 20% HPD credible sets include the true structure at a high rate for data-generating graphs with density at least 0.5.
+   :description: Abstract: We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm—the graphical lasso—that is remarkably fast: It solves a 1000-node problem (∼500000 parameters) in at most a minute and is 30–4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
 
 
 .. _huge_glasso:
@@ -37,60 +37,7 @@ huge_glasso
 
 .. rubric:: Description
 
-Abstract:
-We consider the problem of estimating the marginal independence structure of a Bayesian network from observational data in the form of an undirected graph called the unconditional dependence graph. We show that unconditional dependence graphs of Bayesian networks correspond to the graphs having equal independence and intersection numbers. Using this observation, a Gröbner basis for a toric ideal associated to unconditional dependence graphs of Bayesian networks is given and then extended by additional binomial relations to connect the space of all such graphs. An MCMC method, called GrUES (Gröbner-based Unconditional Equivalence Search), is implemented based on the resulting moves and applied to synthetic Gaussian data. GrUES recovers the true marginal independence structure via a penalized maximum likelihood or MAP estimate at a higher rate than simple independence tests while also yielding an estimate of the posterior, for which the 20% HPD credible sets include the true structure at a high rate for data-generating graphs with density at least 0.5.
-
-.. rubric:: Example
-
-Config file: `grues_vs_corr-thresh.json <https://github.com/felixleopoldo/benchpress/blob/master/workflow/rules/structure_learning_algorithms/grues/grues_vs_corr-thresh.json>`_
-
-Command:
-
-.. code:: bash
-
-   snakemake --cores all --use-singularity --configfile workflow/rules/structure_learning_algorithms/grues/grues_vs_corr-thresh.json
-
-:numref:`roc_grues_vs_thresh` shows the ROC and :numref:`shd_grues_vs_thresh` shows the SHD comparing GrUES to correlation thresholding for datsets from five different graphs corresponding to a 5-variable random Gaussian SEM whose nodes have average degree of 1 and whose edge weights were allowed to be close to 0. Each dataset contains 300 observations and each Markov chain has 10000 observations. Note that SHD between a learned UDG and true CPDAG is not the most reasonable comparison because an inflated FPR will be reported---see :footcite:t:`grues2023` for discussion and a more reasonable benchmark.
-
-:numref:`adj_grues` shows that GrUES estimates the correct `UDG <https://arxiv.org/pdf/2210.00822.pdf#subsection.2.2>`__ while correlation thresholding (:numref:`adj_thresh`) misses the edge `1---2`.
-
-
-.. _roc_grues_vs_thresh:
-
-.. figure:: ../../../workflow/rules/structure_learning_algorithms/grues/images/roc.png
-   :width: 320
-   :alt: ROC (FPR vs. TPR) GrUES vs corr_thresh example
-   :align: left
-
-   ROC of GrUES vs corr_thresh.
-
-.. _shd_grues_vs_thresh:
-
-.. figure:: ../../../workflow/rules/structure_learning_algorithms/grues/images/shd.png
-   :width: 320
-   :alt: SHD GrUES vs corr_thresh example
-   :align: right
-
-   SHD of GrUES vs corr_thresh.
-
-.. _adj_grues:
-
-.. figure:: ../../../workflow/rules/structure_learning_algorithms/grues/images/diffplot_30.png
-   :width: 320
-   :alt: adjacency matrix GrUES example
-   :align: left
-
-   Adj mat learned by GrUES.
-
-.. _adj_thresh:
-
-.. figure:: ../../../workflow/rules/structure_learning_algorithms/grues/images/diffplot_15.png
-   :width: 320
-   :alt: adjacency matrix corr_thresh example
-   :align: right
-
-   Adj mat learned by corr_thresh.
-
+Abstract: We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm—the graphical lasso—that is remarkably fast: It solves a 1000-node problem (∼500000 parameters) in at most a minute and is 30–4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
 
 .. rubric:: Some fields described
 * ``lambda`` A positive number to control the regularization. Typical usage is to leave the input lambda: null and have the program compute its own.
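The ``lambda`` field documented in the final context line above is supplied through the algorithm's JSON configuration. As a hedged sketch only (the ``id`` field and the entry's shape are assumptions modeled on the repository's other example configs, not taken from this commit), an entry that leaves ``lambda`` as ``null`` so huge_glasso computes its own regularization path might look like:

```json
{
  "id": "huge_glasso-default",
  "lambda": null
}
```

Per the field description, replacing ``null`` with a positive number would fix the regularization strength instead.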
