Commit 9bb8f5e

Update README.md
1 parent 740676f commit 9bb8f5e

File tree

1 file changed: +39 −30 lines changed


README.md

Lines changed: 39 additions & 30 deletions
@@ -1,34 +1,42 @@
 <p align="center">
-<img src="https://github.com/yzhuoning/LibAUC/blob/main/imgs/libauc.png" width="50%" align="center"/>
+<img src="https://github.com/yzhuoning/LibAUC/blob/main/imgs/libauc.png" width="70%" align="center"/>
 </p>
 <p align="center">
 Logo by <a href="https://homepage.divms.uiowa.edu/~zhuoning/">Zhuoning Yuan</a>
 </p>

-
-<p align="center">
+**LibAUC**: A Machine Learning Library for AUC Optimization
+---
+<p align="left">
 <img alt="PyPI version" src="https://img.shields.io/pypi/v/libauc?color=blue&style=flat-square"/>
-<img alt="PyPI LICENSE" src="https://img.shields.io/pypi/pyversions/libauc?color=blue&style=flat-square" />
-<img alt="PyPI language" src="https://img.shields.io/github/license/yzhuoning/libauc?color=blue&logo=libauc&style=flat-square" />
+<img alt="Python Version" src="https://img.shields.io/pypi/pyversions/libauc?color=blue&style=flat-square" />
+<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-1.8-yellow?color=blue&style=flat-square" />
+<img alt="Tensorflow" src="https://img.shields.io/badge/Tensorflow-2.0-yellow?color=blue&style=flat-square" />
+<img alt="PyPI LICENSE" src="https://img.shields.io/github/license/yzhuoning/libauc?color=blue&logo=libauc&style=flat-square" />
 </p>

-LibAUC
-======
-An end-to-end machine learning library for AUC optimization (<strong>AUROC, AUPRC</strong>).
+[**Website**](https://libauc.org/)
+| [**Updates**](https://libauc.org/news/)
+| [**Installation**](https://libauc.org/get-started/)
+| [**Tutorial**](https://github.com/Optimization-AI/LibAUC/tree/main/examples)
+| [**Research**](https://libauc.org/publications/)
+| [**Github**](https://github.com/Optimization-AI/LibAUC/)
+
+**LibAUC** aims to provide efficient solutions for optimizing AUC scores (AUROC, AUPRC).


 Why LibAUC?
----------------
-Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. There are several benefits of maximizing the AUC score over minimizing standard losses, e.g., cross-entropy.
+---
+*Deep AUC Maximization (DAM)* is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. In practice, many real-world datasets are imbalanced, and the AUC score is a better metric for evaluating and comparing different methods on them. Directly maximizing the AUC score can potentially lead to the largest improvement in the model's performance, since maximizing AUC aims to rank the prediction score of any positive example higher than that of any negative one. Our library can be used in many applications, such as medical image classification and drug discovery.

-- In many domains, AUC score is the default metric for evaluating and comparing different methods. Directly maximizing the AUC score can potentially lead to the largest improvement in the model's performance.
-- Many real-world datasets are usually imbalanced. AUC is more suitable for handling imbalanced data distributions, since maximizing AUC aims to rank the prediction score of any positive example higher than that of any negative one.

-Links
---------------
-- Official Website: https://libauc.org
-- Release Notes: https://github.com/Optimization-AI/LibAUC/releases
-- Repository: https://github.com/Optimization-AI/LibAUC
+
+Key Features
+---
+- **Easy Installation** - Integrate *AUROC* and *AUPRC* training code with your existing pipeline in just a few steps
+- **Large-scale Learning** - Handle large-scale optimization and make training run more smoothly
+- **Distributed Training** - Extend to distributed settings to accelerate training and enhance data privacy
+- **ML Benchmarks** - Provide easy-to-use input pipelines and benchmarks on various datasets
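The "Why LibAUC?" text above frames AUC maximization as a ranking problem: AUROC is exactly the fraction of positive–negative pairs that the model orders correctly. As a point of reference, here is a minimal pure-Python sketch of that pairwise definition (illustrative only, not part of the LibAUC API):

```python
def pairwise_auroc(scores, labels):
    """AUROC as the fraction of (positive, negative) pairs ranked correctly.

    Ties count as half a correct pair, matching the standard definition.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0
                  for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

# A model that ranks every positive above every negative scores 1.0,
# no matter how imbalanced the dataset is.
print(pairwise_auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0
```

Because the metric depends only on pairwise orderings, a severe class imbalance does not dilute it the way it dilutes accuracy, which is the motivation for optimizing AUC directly.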


 Installation
@@ -57,24 +65,25 @@ Usage
 >>> from libauc.optimizers import PESG
 ...
 >>> #define loss
->>> Loss = AUCMLoss()
+>>> Loss = AUCMLoss(imratio=[YOUR NUMBER])
 >>> optimizer = PESG()
 ...
 >>> #training
 >>> model.train()
 >>> for data, targets in trainloader:
 >>>     data, targets = data.cuda(), targets.cuda()
-        preds = model(data)
+        logits = model(data)
+        preds = torch.sigmoid(logits)
         loss = Loss(preds, targets)
         optimizer.zero_grad()
         loss.backward()
         optimizer.step()
 ...
 >>> #restart stage
->>> optimizer.update_regularizer()
+>>> optimizer.update_regularizer()
+```


-```
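AUROC itself is non-differentiable, so losses such as `AUCMLoss` minimize a smooth surrogate of `1 - AUROC`. As rough intuition for what such a surrogate looks like, here is a generic pairwise squared-hinge version in pure Python (an illustrative stand-in, not the actual min-max AUCM formulation used by LibAUC):

```python
def pairwise_sq_hinge_auc_loss(scores, labels, margin=1.0):
    """Generic pairwise squared-hinge surrogate for 1 - AUROC.

    Penalizes every (positive, negative) pair whose score gap falls
    short of the margin; zero loss means a perfect, well-separated ranking.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    total = len(pos) * len(neg)
    return sum(max(0.0, margin - (p - n)) ** 2
               for p in pos for n in neg) / total

# Positives already exceed all negatives by more than the margin:
print(pairwise_sq_hinge_auc_loss([2.0, 2.0, -1.0], [1, 1, 0]))  # -> 0.0
```

Unlike cross-entropy, the loss is driven entirely by score gaps between positive and negative examples, which is what makes minimizing it equivalent to pushing AUROC up.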
 #### Optimizing AUPRC (Area Under the Precision-Recall Curve)
 ```python
 >>> #import library
@@ -89,23 +98,23 @@ Usage
 >>> model.train()
 >>> for index, data, targets in trainloader:
 >>>     data, targets = data.cuda(), targets.cuda()
-        preds = model(data)
+        logits = model(data)
+        preds = torch.sigmoid(logits)
         loss = Loss(preds, targets, index)
         optimizer.zero_grad()
         loss.backward()
         optimizer.step()

 ```
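For reference, AUPRC is typically estimated by average precision: the mean of the precision values taken at the rank of each positive example. A short pure-Python sketch of that standard estimator (illustrative only, not LibAUC code):

```python
def average_precision(scores, labels):
    """Average precision: mean precision at the rank of each positive.

    This is the standard finite-sample estimate of AUPRC.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            ap += hits / rank
    return ap / sum(labels)

# One positive at rank 1 (precision 1/1), one at rank 3 (precision 2/3):
print(average_precision([0.9, 0.8, 0.3], [1, 0, 1]))  # -> 0.8333...
```

Note that, unlike AUROC, this quantity does change with the class ratio, which is why AUPRC optimization needs the per-sample `index` seen in the loop above.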
-Please visit our [website](https://libauc.org/) or [github](https://github.com/Optimization-AI/LibAUC) for more examples.
-

-Tips
+Useful Tips
 ---
-Please check the following list before running experiments:
-- Compute the `imbalance_ratio` from your train set and pass it to `AUCMLoss(imratio=xxx)`
-- Choose a proper `initial learning rate` and use `optimizer.update_regularizer(decay_factor=10)` to decay the learning rate
-- Use an activation function, e.g., `torch.sigmoid()`, before passing model outputs to the loss function
-- Reshape both `preds` and `targets` to `(N, 1)` before constructing the loss
+Checklist before running experiments with LibAUC:
+- [x] Compute the **imbalance_ratio** from your train set and pass it to `AUCMLoss(imratio=xxx)`
+- [x] Choose a proper **initial learning rate** and use `optimizer.update_regularizer(decay_factor=10)` at each stage
+- [x] Use an activation function, e.g., `torch.sigmoid()`, before passing model outputs to the loss function
+- [x] Reshape both **preds** and **targets** to `(N, 1)` before calling the loss function
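The first and last checklist items can be sketched in a few lines. Here `imratio` is assumed to mean the fraction of positive samples in the train set (verify against the LibAUC docs), and the `(N, 1)` reshape is shown with plain lists for illustration:

```python
def imbalance_ratio(labels):
    """Fraction of positive samples in the training set
    (assumed meaning of imratio)."""
    return sum(labels) / len(labels)

train_labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # 10% positives
print(imbalance_ratio(train_labels))  # -> 0.1

# Reshaping flat predictions of shape (N,) to (N, 1) -- on a torch
# tensor this would be preds.reshape(-1, 1); with lists:
preds = [0.3, 0.7, 0.1]
preds_col = [[p] for p in preds]
print(preds_col)  # -> [[0.3], [0.7], [0.1]]
```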
+


 Citation
