3 | 3 | {
4 | 4 | "metadata": {},
5 | 5 | "cell_type": "markdown",
6 |   | - "source": "# Binary classification risk control - Theoretical tests prototype",
  | 6 | + "source": "# Binary classification risk control - Theoretical tests to validate implementation",
7 | 7 | "id": "ed592eb3f8989aa8"
8 | 8 | },
|  9 | + {
| 10 | + "metadata": {},
| 11 | + "cell_type": "markdown",
| 12 | + "source": [
| 13 | + "# Protocol description\n",
| 14 | + "We test the theoretical guarantees of risk control in binary classification, using a random classifier and synthetic data.\n",
| 15 | + "\n",
| 16 | + "Each test case covers one combination of parameters, for which we repeat the experiment `n_repeat` times. The model is the same for all experiments (essentially a random classifier), but the data is resampled each time.\n",
| 17 | + "\n",
| 18 | + "Each experiment consists of the following steps (a sketch of the protocol is given after the hunk below):\n",
| 19 | + " - We calibrate a BinaryClassificationController. It gives us the list of lambda values that control the risk according to LTT.\n",
| 20 | + " - Because we know that the model is random, we know the theoretical risk associated with each lambda value, so we can check whether the lambda values returned by LTT actually control the risk. If they do not, we count one \"error\". Note that *every* returned lambda value should control the risk, not just one of them.\n",
| 21 | + "\n",
| 22 | + "After `n_repeat` experiments, we compute the proportion of errors, which should be less than delta (1 - confidence_level).\n",
| 23 | + "\n",
| 24 | + "# Results\n",
| 25 | + "The risk is controlled in all the test cases. Overall, LTT seems very conservative: to observe a high proportion of errors, we have to lower the confidence level drastically (to 0.01) and use a single candidate threshold to avoid the Bonferroni correction effect. This is likely because the model is random and therefore has high variance; it would be interesting to see how this evolves with a better model."
| 26 | + ],
| 27 | + "id": "8c1746b673c148dd"
| 28 | + },
9 | 29 | {
10 | 30 | "metadata": {
11 | 31 | "ExecuteTime": {
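
The protocol described in the markdown cell above can be summarized with a self-contained sketch. This is illustrative only: the PR's BinaryClassificationController is replaced here by a minimal re-implementation of LTT using Hoeffding p-values with a Bonferroni correction, and all parameter values (`n_calib`, `target_level`, the candidate `lambdas`) are arbitrary choices, not the notebook's.

```python
import numpy as np

# Hypothetical stand-in for the notebook's protocol (not the PR's code).
rng = np.random.default_rng(0)
n_repeat, n_calib = 1000, 500
confidence_level, target_level = 0.9, 0.55    # illustrative values
delta = 1 - confidence_level
lambdas = np.linspace(0.1, 0.9, 9)            # candidate thresholds

def ltt_valid_thresholds(scores, y):
    """Thresholds certified by LTT to reach accuracy >= target_level."""
    alpha = 1 - target_level                  # controlled risk = 1 - accuracy
    valid = []
    for lam in lambdas:
        risk_hat = np.mean((scores >= lam) != y)
        # Hoeffding p-value for H0: true risk > alpha
        p_value = np.exp(-2 * n_calib * max(alpha - risk_hat, 0.0) ** 2)
        if p_value <= delta / len(lambdas):   # Bonferroni over candidates
            valid.append(lam)
    return valid

nb_errors = 0
for _ in range(n_repeat):
    y = rng.integers(0, 2, n_calib)           # fresh balanced labels each run
    scores = rng.uniform(0, 1, n_calib)       # random scores, independent of y
    # The true accuracy of a random classifier is 0.5, so any threshold
    # certified for a target above 0.5 is a failure of the guarantee.
    if target_level > 0.5 and len(ltt_valid_thresholds(scores, y)) >= 1:
        nb_errors += 1

# LTT guarantees this proportion stays below delta; it is typically far below.
print(nb_errors / n_repeat, "<=", delta)
```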
51 | 71 | },
52 | 72 | "cell_type": "code",
53 | 73 | "source": [
54 |    | - "# Using sklearn.dummy.DummyClassifier would be clearer\n",
   | 74 | + "# Using sklearn.dummy.DummyClassifier would be cleaner\n",
55 | 75 | "class RandomClassifier:\n",
56 | 76 | "    def __init__(self, seed=42, threshold=0.5):\n",
57 | 77 | "        self.seed = seed\n",
|
|
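
For context on the comment in this hunk: the sklearn alternative might look like the sketch below (a minimal illustration; the data is a placeholder). Note that DummyClassifier with strategy="uniform" has no tunable decision threshold, which may be why the notebook keeps its own RandomClassifier with a `threshold` parameter.

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Placeholder data: DummyClassifier ignores the features entirely.
X = np.zeros((100, 1))
y = np.random.default_rng(42).integers(0, 2, 100)

# strategy="uniform" predicts each observed class with equal probability,
# which matches the custom RandomClassifier at its default threshold of 0.5.
dummy = DummyClassifier(strategy="uniform", random_state=42)
dummy.fit(X, y)
predictions = dummy.predict(X)
```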
131 | 151 | "    valid_parameters = controller.valid_thresholds\n",
132 | 152 | "    total_nb_valid_params += len(valid_parameters)\n",
133 | 153 | "\n",
134 |     | - "    # The following works because the data is balanced\n",
    | 154 | + "    # In the following, we check that all the valid thresholds found by LTT actually control the risk.\n",
    | 155 | + "    # Instead of sampling a large test set, we use the fact that we know the theoretical risk of a random classifier.\n",
    | 156 | + "    # The calculations here are valid only for a balanced data generator.\n",
135 | 157 | "    if risk[\"risk\"] == precision or risk[\"risk\"] == accuracy:\n",
136 | 158 | "        if target_level > 0.5 and len(valid_parameters) >= 1:\n",
137 | 159 | "            nb_errors += 1\n",
|
|
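
As a sanity check on the theoretical risk used in this hunk, the short simulation below (a sketch, independent of the notebook's code) confirms that with balanced labels and scores independent of the label, precision and accuracy are both 0.5 at every threshold; hence any threshold certified for a target above 0.5 is necessarily an error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.integers(0, 2, n)            # balanced labels
scores = rng.uniform(0, 1, n)        # random scores, independent of y

for threshold in (0.2, 0.5, 0.8):
    pred = scores >= threshold
    precision = (pred & (y == 1)).sum() / pred.sum()
    accuracy = (pred == y).mean()
    print(f"{threshold=:.1f}  {precision=:.3f}  {accuracy=:.3f}")  # both ≈ 0.5
```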