For more information on each of these steps, follow the links to their respective documentation.
## Causal Inference Terminology
Here are some explanations for the causal inference terminology used above.
- Causal inference (CI) is a family of statistical techniques designed to quantify and establish **causal** relationships in data. In contrast to purely statistical techniques that are driven by associations in data, CI incorporates knowledge about the data-generating mechanisms behind relationships in data to derive causal conclusions.
- One of the key advantages of CI is that it is possible to answer causal questions using **observational data**. That is, data which has been passively observed rather than collected from a controlled experiment and which may therefore contain all kinds of bias. In a testing context, we would like to leverage this advantage to test causal relationships in software without having to run costly experiments.
- There are many forms of CI techniques with slightly different aims, but in this framework we focus on graphical CI techniques that use directed acyclic graphs (DAGs) to obtain causal estimates. These approaches use a causal DAG to explain the causal relationships that exist in data and, based on the structure of this graph, design statistical experiments capable of estimating the causal effect of a particular intervention or action, such as taking a drug or changing the value of an input variable.

## Installation

To use the causal testing framework, clone the repository, `cd` into the root directory, and run `pip install -e .`. More detailed installation instructions can be found in the [online documentation](https://causal-testing-framework.readthedocs.io/en/latest/installation.html).
## Usage
There are currently two methods of using the Causal Testing Framework: through the [JSON Front End](https://causal-testing-framework.readthedocs.io/en/latest/json_front_end.html) or directly, as described below.
The causal testing framework is made up of three main components: Specification, Testing, and Data Collection. The first step is to specify the (part of the) system under test as a modelling `Scenario`. Modelling scenarios specify the observable variables and any constraints which exist between them. We currently support three types of variable:

- `Input` variables are input parameters to the system.
- `Output` variables are outputs from the system.
- `Meta` variables are not directly observable but are relevant to system testing, e.g. a model may take a `location` parameter and expand this out into `average_age` and `household_size` variables "under the hood". These parameters can be made explicit by instantiating them as metavariables.

To instantiate a scenario, simply provide a set of variables and an optional set of constraints, e.g.

```{python}
from causal_testing.specification.variable import Input, Output, Meta
from causal_testing.specification.scenario import Scenario

# The variable declarations are assumed here for illustration: each variable
# is declared with a name and a type (Meta variables are declared the same way)
x = Input("x", float)
y = Output("y", float)
z = Input("z", int)

modelling_scenario = Scenario({x, y, z}, {x > z, z < 3})  # Define a scenario with variables x, y, z and constraints x > z and z < 3
```

Note that scenario constraints are primarily intended to help specify the region of the input space under test in a manner consistent with the Category Partition Method. They are not intended to serve as a test oracle. Use constraints sparingly and with caution to avoid introducing data selection bias. We use Z3 to handle constraints. For help with this, check out [their documentation](https://ericpony.github.io/z3py-tutorial/guide-examples.htm).
Having fully specified the modelling scenario, we are now ready to test. Causal tests are essentially [metamorphic tests](https://en.wikipedia.org/wiki/Metamorphic_testing) which are executed using statistical causal inference. A causal test expresses the change in a given output that we expect to see when we change a particular input in a particular way, e.g.

```{python}
from causal_testing.testing.causal_test_case import CausalTestCase
from causal_testing.testing.causal_test_outcome import Positive

# A sketch: this test expresses "changing x from 0 to 1 should, on average,
# cause y to increase"; the argument names here are assumptions
causal_test_case = CausalTestCase(
    control_input_configuration={x: 0},
    treatment_input_configuration={x: 1},
    expected_causal_effect=Positive(),
    outcome_variables={y},
)
```

Before we can run our test case, we first need data. There are two ways to acquire this: (1) run the model with the specific input configurations we're interested in, or (2) use data from previous model runs. For a small number of specific tests where accuracy is critical, the first approach will yield the best results. To do this, you need to instantiate the `ExperimentalDataCollector` class.
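
As a rough sketch only: the module path below follows the repository layout, and the subclass, method name, and signature are assumptions, since the experimental collector must be specialised to actually run the system under test.

```{python}
from causal_testing.data_collection.data_collector import ExperimentalDataCollector

class MyModelDataCollector(ExperimentalDataCollector):
    def run_system_with_input_configuration(self, input_configuration):
        # Run the model with the given input configuration and return the
        # results as a pandas DataFrame (implementation is system-specific)
        ...
```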
Where there are many test cases, using pre-existing data is likely to be faster. If the program's behaviour can be estimated statistically, the results should still be reliable as long as there is enough data for the estimator to work as intended. This will vary depending on the program and the estimator. To use this method, simply instantiate the `ObservationalDataCollector` class with the modelling scenario and a path to the CSV file containing the runtime data, e.g.
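
The instantiation below is a minimal sketch: the module path follows the repository layout, and the CSV file name is hypothetical.

```{python}
from causal_testing.data_collection.data_collector import ObservationalDataCollector

data_csv_path = "observational_data.csv"  # Hypothetical path to past runtime data
data_collector = ObservationalDataCollector(modelling_scenario, data_csv_path)
```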
The actual running of the tests is done using the `CausalTestEngine` class. This is still a work in progress and may change in the future to improve ease of use, but it currently proceeds as follows.

```{python}
# causal_specification pairs the modelling scenario with a causal DAG
causal_test_engine = CausalTestEngine(causal_test_case, causal_specification, data_collector)  # Instantiate the causal test engine
minimal_adjustment_set = causal_test_engine.load_data(data_csv_path, index_col=0)  # Calculate the adjustment set
minimal_adjustment_set = minimal_adjustment_set - set([v.name for v in treatment_vars])  # Remove the treatment variables themselves (`treatment_vars` is an assumed name)
```

Whether using fresh or pre-existing data, a key aspect of causal inference is estimation. To actually execute a test, we need an estimator. We currently support two estimators: linear regression and causal forest. These can simply be instantiated as per the [documentation](https://causal-testing-framework.readthedocs.io/en/latest/autoapi/causal_testing/testing/estimators/index.html).

```{python}
from causal_testing.testing.estimators import LinearRegressionEstimator

# Instantiation sketched for illustration; the argument names are assumptions,
# so check the estimator documentation linked above for the exact signature
estimation_model = LinearRegressionEstimator(
    treatment=("x",),
    treatment_values=1,
    control_values=0,
    adjustment_set=minimal_adjustment_set,
    outcome=("y",),
)
```

We can now execute the test using the estimation model. This returns a causal test result, from which we can extract various pieces of information. Here, we simply assert that the observed result is (on average) what we expect to see.
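
As a closing sketch, execution might look as follows; `execute_test` and the use of `apply` to check the result are assumptions based on the prose rather than confirmed API.

```{python}
# Execute the test and obtain a causal test result (method name assumed)
causal_test_result = causal_test_engine.execute_test(estimation_model, causal_test_case)

# Check that the observed effect matches the expected (positive) effect
assert causal_test_case.expected_causal_effect.apply(causal_test_result)
```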