```
$ pip install libauc
```
You can also download source code from [here](https://github.com/Optimization-AI/LibAUC/releases).

:notebook_with_decorative_cover: Usage
-------
### Official Tutorials:
- Creating Imbalanced Benchmark Datasets on **CIFAR10, CIFAR100, CATvsDOG, STL10** [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/01_Creating_Imbalanced_Benchmark_Datasets.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Training with PyTorch Learning Rate Scheduling [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/04_Training_with_Pytorch_Learning_Rate_Scheduling.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUROC</strong> loss with ResNet20 on Imbalanced CIFAR10 [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/02_Optimizing_AUROC_with_ResNet20_on_Imbalanced_CIFAR10.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUPRC</strong> loss with ResNet18 on Imbalanced CIFAR10 [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/03_Optimizing_AUPRC_with_ResNet18_on_Imbalanced_CIFAR10.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
:zap: Useful Tips
---
Checklist before running experiments:
- [ ] Your dataset should have **0,1** labels, e.g., **1** is the **minority class** and **0** is the **majority class**
- [ ] Compute `imratio=#pos/#total` based on the training set and pass it to `AUCMLoss(imratio=xxx)`
- [ ] Adopt a proper **initial learning rate**, e.g., `lr=[0.1, 0.05]` usually works better
- [ ] Choose `libauc.optimizers.PESG` to optimize `AUCMLoss(imratio=xxx)`
- [ ] Use `optimizer.update_regularizer(decay_factor=10)` to update the learning rate and regularizer stagewise
- [ ] Add an activation layer, e.g., `torch.sigmoid(logits)`, before passing model outputs to the loss function
- [ ] Reshape both variables `y_preds` and `y_targets` to `(N, 1)` before calling the loss function
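
Two of the checklist items above (computing `imratio` from the training labels and reshaping predictions/targets to `(N, 1)`) can be sketched as below. NumPy is used here purely for illustration, and the arrays are made-up example data; in an actual training script these would be PyTorch tensors:

```python
import numpy as np

# Binary labels where 1 is the minority (positive) class and 0 the majority class.
train_labels = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])

# imratio = #positives / #total examples, computed on the training set.
imratio = train_labels.sum() / len(train_labels)
print(imratio)  # 0.2

# Reshape predictions and targets to (N, 1) before calling the loss function.
y_preds = np.array([0.9, 0.1, 0.3, 0.8]).reshape(-1, 1)
y_targets = np.array([1, 0, 0, 1]).reshape(-1, 1)
print(y_preds.shape, y_targets.shape)  # (4, 1) (4, 1)
```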


:page_with_curl: Citation