**LibAUC** aims to provide efficient solutions for optimizing AUC scores (AUROC, AUPRC). We will continuously update our library by fixing bugs and adding new features. If you use or like our library, please **star**:star: our repo. Thank you!
:mag:Why LibAUC?
---
*Deep AUC Maximization (DAM)* is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. In practice, many real-world datasets are imbalanced, and the AUC score is a more suitable metric than accuracy for evaluating and comparing methods on such data. Directly maximizing the AUC score can potentially yield the largest improvement in a model's performance, since maximizing AUC aims to rank the prediction score of any positive example higher than that of any negative example. Our library can be used in many applications, such as medical image classification and drug discovery.
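Concretely, AUROC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The short sketch below (plain NumPy, not LibAUC code) illustrates this ranking view by counting correctly ranked positive/negative pairs:

```python
import numpy as np

def pairwise_auroc(scores, labels):
    """Estimate AUROC as the fraction of correctly ranked (positive, negative) pairs."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Score differences for every (positive, negative) pair; ties count as half.
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2])
labels = np.array([1, 0, 1, 0, 0])
print(pairwise_auroc(scores, labels))  # 5 of 6 pairs ranked correctly -> ~0.83
```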
:star:Key Features
---
- **[Easy Installation](https://github.com/Optimization-AI/LibAUC#key-features)** - Integrate *AUROC* and *AUPRC* training code into your existing pipeline in just a few steps
- **[Large-scale Learning](https://github.com/Optimization-AI/LibAUC#key-features)** - Handle large-scale optimization and make training smoother
- **[Distributed Training](https://github.com/Optimization-AI/LibAUC#key-features)** - Extend to distributed settings to accelerate training and enhance data privacy
- **[ML Benchmarks](https://github.com/Optimization-AI/LibAUC#key-features)** - Provide an easy-to-use input pipeline and benchmarks on various datasets
:gear:Installation
--------------
```
$ pip install libauc
```
You can also download the source code from [here](https://github.com/Optimization-AI/LibAUC/releases).

Tutorials
---
- Training with Pytorch Learning Rate Scheduling [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/04_Training_with_Pytorch_Learning_Rate_Scheduling.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUROC</strong> loss with ResNet20 on Imbalanced CIFAR10 [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/02_Optimizing_AUROC_with_ResNet20_on_Imbalanced_CIFAR10.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUPRC</strong> loss with ResNet18 on Imbalanced CIFAR10 [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/03_Optimizing_AUPRC_with_ResNet18_on_Imbalanced_CIFAR10.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUROC</strong> loss with DenseNet121 on <strong>CheXpert</strong> [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/05_Optimizing_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>AUROC</strong> loss with DenseNet121 on CIFAR100 for **Federated Learning** [[Preliminary Release](https://github.com/Optimization-AI/LibAUC/blob/main/examples/scripts/06_Optimizing_AUROC_loss_with_DenseNet121_on_CIFAR100_in_Federated_Setting_CODASCA.py)]
- Optimizing <strong>AUROC</strong> loss with DenseNet121 on <strong>Melanoma</strong> [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/08_Optimizing_AUROC_Loss_with_DenseNet121_on_Melanoma.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
- Optimizing <strong>Multi-Label AUROC</strong> loss with DenseNet121 on <strong>CheXpert</strong> [[Notebook](https://github.com/Optimization-AI/LibAUC/blob/main/examples/07_Optimizing_Multi_Label_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)][[Script](https://github.com/Optimization-AI/LibAUC/tree/main/examples/scripts)]
Usage
---
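A minimal sketch of a typical LibAUC training step, using the `AUCMLoss` loss with the `PESG` optimizer. The exact optimizer arguments vary across LibAUC releases (older versions pass the loss's internal variables `a`, `b`, `alpha` directly), so treat the constructor call below as an assumption and consult the documentation of your installed version:

```python
import torch
from libauc.losses import AUCMLoss
from libauc.optimizers import PESG

# Toy stand-ins for a real model and an imbalanced mini-batch.
model = torch.nn.Linear(10, 1)
data = torch.randn(32, 10)
targets = (torch.rand(32, 1) < 0.1).float()   # 1 = minority class, 0 = majority

loss_fn = AUCMLoss()
# Assumed constructor signature; see the note above.
optimizer = PESG(model, loss_fn=loss_fn, lr=0.1, margin=1.0)

preds = torch.sigmoid(model(data))            # scores in [0, 1], shape (N, 1)
loss = loss_fn(preds, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```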
:zap:Useful Tips
---
Checklist before Running Experiments:
- [ ] Your data should have binary labels **0,1**, where **1** is the **minority class** and **0** is the **majority class**
- [ ] Reshape both variables **preds** and **targets** to `(N, 1)` before calling the loss function (see the sketch below)
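For example, a minimal sketch (illustrative variable names, default loss arguments assumed) that prepares labels and shapes as required above:

```python
import torch
from libauc.losses import AUCMLoss

# Illustrative scores and binary labels: 1 is the minority class, 0 the majority.
preds = torch.sigmoid(torch.randn(64))
targets = (torch.rand(64) < 0.1).float()

# Reshape both variables to (N, 1) before calling the loss function.
preds = preds.reshape(-1, 1)
targets = targets.reshape(-1, 1)
loss = AUCMLoss()(preds, targets)  # AUCMLoss: LibAUC's AUC-margin loss
```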
:page_with_curl:Citation
---------
If you find LibAUC useful in your work, please cite the following paper for our library:
```
@inproceedings{yuan2021large,
  title={Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
  author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
  booktitle={International Conference on Computer Vision},
  year={2021}
}
```
:email:Contact
----------
If you have any questions, please contact [Zhuoning Yuan](https://homepage.divms.uiowa.edu/~zhuoning/) [[email protected]] and [Tianbao Yang](https://homepage.cs.uiowa.edu/~tyng/) [[email protected]], or open a new issue on GitHub.