
Commit b462aa1
Removed all mention of data collection and json front end
1 parent 28c99b7

File tree: 8 files changed (+12, -26 lines)


causal_testing/__init__.py
Lines changed: 3 additions & 3 deletions

@@ -1,11 +1,11 @@
 """
 This is the CausalTestingFramework Module
 It contains 5 subpackages:
-data_collection
-generation
-json_front
+estimation
 specification
+surrogate
 testing
+utils
 """

 import logging

causal_testing/surrogate/causal_surrogate_assisted.py
Lines changed: 2 additions & 2 deletions

@@ -78,11 +78,11 @@ def execute(
 ):
 """For this specific test case, a search algorithm is used to find the most contradictory point in the input
 space which is, therefore, most likely to indicate incorrect behaviour. This candidate test case is run against
-the simulator, checked for faults and the result returned with collected data
+the simulator, checked for faults and the result returned.
 :param df: A dataframe which contains data relevant to the specified scenario
 :param max_executions: Maximum number of simulator executions before exiting the search
 :param custom_data_aggregator:
-:return: tuple containing SimulationResult or str, execution number and collected data"""
+:return: tuple containing SimulationResult or str, execution number and dataframe"""

 for i in range(max_executions):
 surrogate_models = self.generate_surrogates(self.specification, df)
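The docstring in this hunk describes a surrogate-assisted search loop. A minimal sketch of that shape is below; apart from `df` and `max_executions`, every name here is illustrative and not the framework's actual API:

```python
import pandas as pd

def execute_search(df, max_executions, fit_surrogates, find_candidate,
                   run_simulator, is_fault):
    """Repeatedly fit surrogate models, pick the most contradictory
    candidate point, execute it, and stop as soon as a fault appears.

    Returns (result, executions_used, df): the faulty result if one is
    found within the budget, otherwise the string "No fault found".
    """
    for i in range(max_executions):
        surrogates = fit_surrogates(df)          # model the data seen so far
        candidate = find_candidate(surrogates)   # most contradictory point
        result = run_simulator(candidate)        # one simulator execution
        # Fold the new execution back into the dataframe for the next pass.
        df = pd.concat([df, pd.DataFrame([result])], ignore_index=True)
        if is_fault(result):
            return result, i + 1, df
    return "No fault found", max_executions, df
```

The early return matches the docstring's tuple of SimulationResult-or-str, execution number, and dataframe; the callables stand in for the surrogate fitting and fault-checking steps.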

docs/source/index.rst
Lines changed: 0 additions & 8 deletions

@@ -119,14 +119,6 @@ system-under-test that is expected to cause a change to some output(s).

    /autoapi/index

-.. toctree::
-   :hidden:
-   :maxdepth: 1
-   :caption: Front Ends
-
-   frontends/json_front_end
-   frontends/test_suite
-
 .. toctree::
    :hidden:
    :maxdepth: 1

docs/source/modules/causal_tests.rst
Lines changed: 3 additions & 3 deletions

@@ -2,7 +2,7 @@
 Causal Testing
 ==============

-This package contains the main components of the causal testing framework, causal tests and causal oracles, which utilise both the specification and data collection packages.
+This package contains the main components of the causal testing framework, causal tests and causal oracles, which utilise the specification package.

 - A causal test case is a triple ``(X, \Delta, Y)`` where ``X`` is an input configuration, ``\Delta`` is an intervention, and ``Y`` is the expected causal effect of applying ``\Delta`` to ``X``. Put simply, a causal test case states the expected change in an outcome that applying an intervention to ``X`` should cause. In this context, an intervention is simply a function which manipulates the input configuration of the scenario-under-test in a way that is expected to cause a change to some outcome.

@@ -44,12 +44,12 @@ We then define a number of causal test cases to apply to the scenario-under-test

 - To run these test cases experimentally, we need to execute both ``X`` and ``\Delta(X)`` - that is, with and without the interventions. Since the only difference between these test cases is the intervention, we can conclude that the observed difference in ``n_infected_t5`` was caused by the interventions. While this is the simplest approach, it can be extremely inefficient at scale, particularly when dealing with complex software such as computational models.

-- To run these test cases observationally, we need to collect *valid* observational data for the scenario-under-test. This means we can only use executions with between 20 and 30 people, a square environment of size between 20x20 and 40x40, and where a single person was initially infected. In addition, this data must contain executions both with and without the intervention. Next, we need to identify any sources of bias in this data and determine a procedure to counteract them. This is achieved automatically using graphical causal inference techniques that identify a set of variables that can be adjusted to obtain a causal estimate. Finally, for any categorical biasing variables, we need to make sure we have executions corresponding to each category otherwise we have a positivity violation (i.e. missing data). In the worst case, this at least guides the user to an area of the system-under-test that should be executed.
+- To run these test cases observationally, we need *valid* observational data for the scenario-under-test. This means we can only use executions with between 20 and 30 people, a square environment of size between 20x20 and 40x40, and where a single person was initially infected. In addition, this data must contain executions both with and without the intervention. Next, we need to identify any sources of bias in this data and determine a procedure to counteract them. This is achieved automatically using graphical causal inference techniques that identify a set of variables that can be adjusted to obtain a causal estimate. Finally, for any categorical biasing variables, we need to make sure we have executions corresponding to each category otherwise we have a positivity violation (i.e. missing data). In the worst case, this at least guides the user to an area of the system-under-test that should be executed.

 Causal Inference
 ----------------

-- After collecting either observational or experimental data, we now need to apply causal inference. First, as described above, we use our causal graph to identify a set of adjustment variables which mitigate all bias in the data. Next, we use statistical models to adjust for these variables (implementing the statistical procedure necessary to isolate the causal effect) and obtain the desired causal estimate. Depending on the statistical model used, we can also generate 95% confidence intervals (or confidence intervals at any confidence level for that matter).
+- After obtaining suitable test data, we now need to apply causal inference. First, as described above, we use our causal graph to identify a set of adjustment variables which mitigate all bias in the data. Next, we use statistical models to adjust for these variables (implementing the statistical procedure necessary to isolate the causal effect) and obtain the desired causal estimate. Depending on the statistical model used, we can also generate 95% confidence intervals (or confidence intervals at any confidence level for that matter).

 - In our example, the causal DAG tells us it is necessary to adjust for ``environment`` in order to obtain the causal effect of ``precaution`` on ``n_infected_t5``. Supposing the relationship is linear, we can employ a linear regression model of the form ``n_infected_t5 ~ p0*precaution + p1*environment`` to carry out this adjustment. If we use experimental data, only a single environment is used by design and therefore the adjustment has no impact. However, if we use observational data, the environment may vary and therefore this adjustment will look at the causal effect within different environments and then provide a weighted average, which turns out to be the partial coefficient ``p0``.
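The regression adjustment this documentation describes can be sketched with statsmodels on synthetic data. The data-generating process below is invented purely for illustration (a true effect of -5 for ``precaution``, confounded by ``environment``); it shows that the adjusted partial coefficient recovers the causal effect while the unadjusted one does not:

```python
# Illustrative sketch only: synthetic data, not framework output.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
environment = rng.normal(size=n)                        # confounder
# environment influences whether a precaution is taken...
precaution = (environment + rng.normal(size=n) > 0).astype(int)
# ...and also drives infections; the true causal effect of precaution is -5.
n_infected_t5 = 30 - 5 * precaution + 4 * environment + rng.normal(size=n)
df = pd.DataFrame({"precaution": precaution,
                   "environment": environment,
                   "n_infected_t5": n_infected_t5})

# Unadjusted estimate is biased by the environment -> precaution path.
naive = smf.ols("n_infected_t5 ~ precaution", data=df).fit()
# Adjusting for environment isolates the causal effect (the partial
# coefficient p0 from the formula above).
adjusted = smf.ols("n_infected_t5 ~ precaution + environment", data=df).fit()
print(naive.params["precaution"])      # biased away from -5
print(adjusted.params["precaution"])   # close to -5
print(adjusted.conf_int().loc["precaution"])  # 95% confidence interval
```

The `conf_int` call corresponds to the 95% confidence intervals mentioned above; other confidence levels can be requested via its `alpha` parameter.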

docs/source/usage.rst
Lines changed: 1 addition & 5 deletions

@@ -2,11 +2,7 @@
 Usage
 -----

-There are currently 3 methods of using the Causal Testing Framework; 1) :doc:`JSON Front End </frontends/json_front_end>`\, 2)
-:doc:`Test Suites </frontends/test_suite>`\, or 3) directly as
-described below.
-
-The causal testing framework is made up of 3 main components: Specification, Testing, and Data Collection. The first
+The causal testing framework is made up of 2 main components: Specification and Testing. The first
 step is to specify the (part of the) system under test as a modelling ``Scenario``. Modelling scenarios specify the
 observable variables and any constraints which exist between them. We currently support 3 types of variable:

examples/covasim_/doubling_beta/README.md
Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # Covasim Case Study: Doubling Beta (Infectiousness)
-In this case study, we demonstrate how to use the causal testing framework with observational
-data collected Covasim to conduct Statistical Metamorphic Testing (SMT) a posteriori. Here, we focus on a set of simple
+In this case study, we demonstrate how to use the causal testing framework with observational data from
+Covasim to conduct Statistical Metamorphic Testing (SMT) a posteriori. Here, we focus on a set of simple
 modelling scenarios that investigate how the infectiousness of the virus (encoded as the parameter beta) affects the
 cumulative number of infections over a fixed duration. We also run several causal tests that focus on increasingly
 specific causal questions pertaining to more refined metamorphic properties and enabling us to learn more about the

examples/covasim_/vaccinating_elderly/README.md
Lines changed: 1 addition & 2 deletions

@@ -15,8 +15,7 @@ Further details are provided in Section 5.3 (Prioritising the elderly for vaccin

 >[!NOTE]
 >This version of the CTF uses observational data to separate the software execution and testing.
-Older versions of this framework simulate the data using a custom experimental data collector and the `covasim`
-package (version 3.0.7) as outlined below.
+Older versions of this framework directly run the `covasim` package (version 3.0.7) as outlined below.

 ## How to run
 To run this case study:

examples/poisson-line-process/README.md
Lines changed: 0 additions & 1 deletion

@@ -7,6 +7,5 @@ To run this case study:
 (instructions are provided in the project README).
 2. Change directory to `causal_testing/examples/poisson-line-process`.
 3. Run the command `python example_pure_python.py` to demonstrate causal testing using pure python.
-3. Run the command `python example_json_frontend.py` to demonstrate the same causal tests using JSON.

 This should print a series of causal test results and produce two CSV files. `intensity_num_shapes_results_random_1000.csv` corresponds to table 1, and `width_num_shapes_results_random_1000.csv` relates to our findings regarding the relationship of width and `P_u`.
