This codebase will be developed into a standalone pip package, with a planned release date of June 2025.
An example of the environment setup is shown below.

Current features:

- Fully automated STEP policy synthesis
- Baseline sequential method implementations of the [SAVI](https://www.sciencedirect.com/science/article/pii/S0167715223000597?via%3Dihub) and [Lai](https://projecteuclid.org/journals/annals-of-statistics/volume-16/issue-2/Nearly-Optimal-Sequential-Tests-of-Composite-Hypotheses/10.1214/aos/1176350840.full) procedures
- Validation tools
  - STEP policy visualization
  - Verification of Type-1 error control
- Unit tests
  - Method functionality
  - Recreating results from our [paper](https://www.arxiv.org/abs/2503.10966)

Features in development:

- Demonstration scripts for each stage of the STEP evaluation pipeline
- Approximately optimal risk budget estimation tool based on evaluator priors on $`(p_0, p_1)`$
- Fundamental limit estimator for guiding evaluation effort
  - Determines whether a particular effect size is plausibly discoverable given the evaluation budget
- Baseline implementation of a sequential Barnard procedure which controls Type-1 error

## Installation Instructions \[Standard\]
The basic environment setup is shown below. A virtual / conda environment may be constructed; however, the requirements are quite lightweight, so this is likely unnecessary.
We assume that any specified virtual / conda environment has been activated for all subsequent code snippets.
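For reference, a typical setup might look like the following. This is a hypothetical sketch: the repository URL is omitted, and the presence of a `requirements.txt` at the repository root is an assumption.

```shell
# Hypothetical setup sketch -- the actual commands may differ.
# After cloning the repository, run from the base directory:
python3 -m venv .venv                # optional: the requirements are lightweight
source .venv/bin/activate
pip install -r requirements.txt      # assumes a requirements.txt at the repo root
```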
# Quick Start Guides
We include key notes for understanding the core ideas of the STEP code. Quick-start resources are included in both shell script and notebook form.
## Quick Start Guide: Making a STEP Policy for Specific \{n_max, alpha\}
### (1A) Understanding the Accepted Shape Parameters
To synthesize a STEP policy for specific values of n_max and alpha, one additional parametric decision is required: the user must set the risk budget shape, which is specified by a choice of function family (p-norm vs. zeta-function) and a particular shape parameter. The shape parameter is real-valued; it is used directly for zeta functions and is exponentiated for p-norms.
In each case, the user may confirm that evaluating the accumulated risk budget at $`n = n_{max}`$ returns precisely $`\alpha`$.
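As a minimal sketch of this check, consider the default linear risk budget $`\epsilon_n = \alpha \cdot n / n_{max}`$ (the shape corresponding to \{shape_parameter\} $`= 0.0`$). The formula here is illustrative; the exact budget expressions used by the codebase may differ.

```python
import numpy as np

# Illustrative linear risk budget (not the codebase's implementation):
# epsilon_n = alpha * n / n_max, for n = 1, ..., n_max.
n_max, alpha = 200, 0.05
n = np.arange(1, n_max + 1)
budget = alpha * n / n_max

# The accumulated budget is strictly increasing and equals alpha at n = n_max.
assert np.all(np.diff(budget) > 0)
assert np.isclose(budget[-1], alpha)
```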
### (1B) Shape Parameters in Code
In the codebase, the value of $`\lambda`$ is set by the \{shape_parameter\} variable, which is real-valued.

The family shape is set by the \{use_p_norm\} variable, which is Boolean:

- If it is True, the p-norm family is used.
- If it is False, the zeta-function family is used.

### (1C) Arbitrary Risk Budgets
Generalizing the accepted risk budgets to arbitrary monotonic sequences $`\{0, \epsilon_1 > 0, \epsilon_2 > \epsilon_1, ..., \epsilon_{n_{max}} = \alpha\}`$ is in the development pipeline, but is **not handled at present in the code**.
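In the meantime, a candidate budget can at least be sanity-checked for the required properties. The helper below is an illustrative sketch, not part of the codebase:

```python
import numpy as np

def is_valid_risk_budget(eps, alpha, tol=1e-12):
    """Check that a candidate budget is positive, strictly increasing, and ends at alpha."""
    eps = np.asarray(eps, dtype=float)
    return bool(
        eps[0] > 0.0
        and np.all(np.diff(eps) > 0.0)
        and abs(eps[-1] - alpha) <= tol
    )

alpha = 0.05
good = np.linspace(0.001, alpha, 100)   # strictly increasing, ends at alpha
bad = np.full(100, alpha / 2)           # constant, so not strictly increasing

print(is_valid_risk_budget(good, alpha))  # True
print(is_valid_risk_budget(bad, alpha))   # False
```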
### (2A) Running STEP Policy Synthesis
Having decided on an appropriate form for the risk budget shape, policy synthesis is straightforward to run from the base directory via the `scripts/synthesize_general_step_policy.py` script.
### (2B) What If I Don't Know the Right Risk Budget?
We recommend using the default linear risk budget, which is the shape *used in the paper*. This corresponds to \{shape_parameter\} $`= 0.0`$ for each shape family, so the following command constructs the default policy:
```bash
$ python scripts/synthesize_general_step_policy.py -n {n_max} -a {alpha}
```

Note: For \{shape_parameter\} $`\neq 0`$, the shape families differ. Therefore, the choice of \{use_p_norm\} *will affect the STEP policy*.

### (2C) Disclaimers

- Running the policy synthesis will save a durable policy to the user's local machine. This policy can be reused in all future settings requiring the same \{n_max, alpha\} combination. For \{n_max\} $`< 500`$, the required memory is under 5 MB. The policy is saved under:

```bash
$ sequentialized_barnard_tests/policies/
```

- At present, we have not tested extensively beyond \{n_max\} $`= 500`$. Going beyond this limit may cause issues, and the likelihood of problems grows with \{n_max\}. The code will also require increasing amounts of RAM as \{n_max\} is increased.

## Quick Start Guide: Evaluation on Real Data
We now assume that a STEP policy has been constructed for the target problem. This can either be one of the default policies or a newly constructed one following the recipe in the preceding section.
### (1) Formatting the Real Data
The data should be formatted into a numpy array of shape $(N, 2)$. The user should create a new project directory and save the data within it.
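For instance, with synthetic data (the directory and file names below are hypothetical, and we assume the two columns hold the binary success outcomes of the two policies being compared; consult the repository conventions for the exact ordering):

```python
import os
import numpy as np

# Hypothetical example: 50 paired trials, one row per trial.
# Columns 0 and 1 hold the binary (0/1) outcomes of the two compared policies.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(50, 2))

project_dir = "data/my_project"  # hypothetical project directory
os.makedirs(project_dir, exist_ok=True)
np.save(os.path.join(project_dir, "eval_data.npy"), data)
```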
Scripts to generate and visualize STEP policies are included under:

```bash
$ scripts/
```

Any resulting visualizations are stored in:

```bash
$ media/
```

The evaluation environment for real data is included in:

```bash
$ evaluation/
```

and the associated evaluation data is stored in:

```bash
$ data/
```

# Citation

```
@inproceedings{snyder2025step,
  title = {Is Your Imitation Learning Policy Better Than Mine? Policy Comparison with Near-Optimal Stopping},
  author = {Snyder, David and Hancock, Asher James and Badithela, Apurva and Dixon, Emma and Miller, Patrick and Ambrus, Rares Andrei and Majumdar, Anirudha and Itkina, Masha and Nishimura, Haruki},
  booktitle = {Proceedings of the Robotics: Science and Systems Conference (RSS)},
  year = {2025},
}
```