Commit c08d4e7

Update README.md

1 parent 4254be7 commit c08d4e7

File tree

1 file changed (+28 −13 lines)

README.md

Lines changed: 28 additions & 13 deletions
@@ -13,7 +13,7 @@
 A fair benchmark for dataset distillation.
 </h3> -->
 <p align="center">
-| <a href="https://nus-hpc-ai-lab.github.io/DD-Ranking/"><b>Documentation</b></a> | <a href="https://huggingface.co/spaces/Soptq/DD-Ranking"><b>Leaderboard</b></a> | <b>Paper</b> (Coming Soon) | <a href=""><b>Twitter/X</b></a> | <a href=""><b>Developer Slack</b></a> |
+| <a href="https://nus-hpc-ai-lab.github.io/DD-Ranking/"><b>Documentation</b></a> | <a href="https://huggingface.co/spaces/Soptq/DD-Ranking"><b>Leaderboard</b></a> | <a href=""><b>Paper</b> (Coming Soon)</a> | <a href=""><b>Twitter/X</b> (Coming Soon)</a> | <a href=""><b>Developer Slack</b> (Coming Soon)</a> |
 </p>
@@ -35,7 +35,7 @@ A fair benchmark for dataset distillation.
 
 <details>
 <summary>Unfold to see more details.</summary>
-
+<br>
 Dataset Distillation (DD) aims to condense a large dataset into a much smaller one, which allows a model to achieve comparable performance after training on it. DD has gained extensive attention since it was proposed. With some foundational methods such as DC, DM, and MTT, various works have further pushed this area to a new standard with their novel designs.
 
 ![history](./static/history.png)
@@ -58,13 +58,16 @@ Motivated by this, we propose DD-Ranking, a new benchmark for DD evaluation. DD-
 
 ## About
 
+<details>
+<summary>Unfold to see more details.</summary>
+<br>
 DD-Ranking (DD, *i.e.*, Dataset Distillation) is an integrated and easy-to-use benchmark for dataset distillation. It aims to provide a fair evaluation scheme for DD methods that decouples the impacts of knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
 
 <!-- Hard label is tested -->
 <!-- Keep the same compression ratio, comparing with random selection -->
 **Performance benchmark**
 
-<span style="color: #ffff00;">[To Verify]:</span>Revisit the original goal of dataset distillation:
+Revisit the original goal of dataset distillation:
 > The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.
 >
@@ -85,21 +88,33 @@ $$\text{IOR}/\text{HLR} = \frac{(\text{Acc.}_{\text{syn-any}}-\text{Acc.}_{\text{r
 DD-Ranking is integrated with:
 <!-- Uniform Fair Labels: loss on soft label -->
 <!-- Data Aug. -->
-- <span style="color: #ffff00;">[To Verify]:</span>Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) of using soft labels;
-- <span style="color: #ffff00;">[To Verify]:</span>Data augmentation, reconsidered as [optional tricks](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) in DD;
-- <span style="color: #ffff00;">[To Verify]:</span>Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in DD.
-<span style="color: #ffff00;">[To Verify]:</span> A new ranking on representative DD methods.
+- Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) for using soft labels;
+- Data augmentation, reconsidered as [optional tricks](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) in DD;
+- Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in DD.
+A new ranking of representative DD methods.
 
 DD-Ranking is flexible and easy to use, supported by:
 <!-- Default configs: Customized configs -->
 <!-- Integrated classes: 1) Optimizer, etc.; 2) random selection tests (additionally, w/ or w/o hard labels) -->
-- <span style="color: #ffff00;">[To Verify]:</span>Extensive configs provided;
-- <span style="color: #ffff00;">[To Verify]:</span>Cutomized configs;
-- <span style="color: #ffff00;">[To Verify]:</span>Testing and training framework with integrated metrics.
+- Extensive default configs provided;
+- Customized configs;
+- A testing and training framework with integrated metrics.
+
+</details>
+
+## Overview
+
+| Dataset | Hard Label | Soft Label |
+|:-|:-|:-|
+| CIFAR10 | DC | DATM |
+| CIFAR100 | DSA | SRe2L |
+| TinyImageNet | DM | RDED |
+| | MTT | D4M |
 
 ## Coming Soon
-<span style="color: #ffff00;">[To Verify]:</span>Rank on different data augmentation methods.
+Rank on different data augmentation methods.
 
 ## Tutorial
 
 Install DD-Ranking with `pip` or from [source](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main):
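As a hedged sketch of the install step named above (the PyPI package name `ddranking` is an assumption not confirmed by this diff; the source URL is the repository it links):

```shell
# Sketch only: "ddranking" as the PyPI package name is an assumption.
pip install ddranking

# Or install from source for the latest development version:
git clone https://github.com/NUS-HPC-AI-Lab/DD-Ranking.git
cd DD-Ranking
pip install -e .
```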
@@ -208,7 +223,7 @@ The following results will be returned to you:
 - [Supported Models]() -->
 
 ## Contributing
-ß
+
 <!-- Only PR for the 1st version of DD-Ranking -->
 Feel free to submit scores to update the DD-Ranking list. We welcome and value any contributions and collaborations.
 Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.
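The truncated IOR/HLR formula in the hunk header above compares the accuracy of a model trained on synthetic (distilled) data against a baseline such as random selection at the same compression ratio. A minimal sketch of such a relative score (the exact normalization here is an assumption for illustration, not the paper's definition, and the accuracies are hypothetical, not benchmark numbers):

```python
def relative_improvement(acc_syn: float, acc_baseline: float) -> float:
    """Normalized accuracy gap between a model trained on distilled
    (synthetic) data and a baseline model. A positive value means the
    distilled data is more informative than the baseline.

    Assumption: normalization by the baseline accuracy is illustrative
    only; see the paper for the actual IOR/HLR definitions."""
    return (acc_syn - acc_baseline) / acc_baseline

# Hypothetical accuracies (not real benchmark numbers):
score = relative_improvement(acc_syn=0.55, acc_baseline=0.50)
print(f"IOR-style score: {score:.2f}")  # prints "IOR-style score: 0.10"
```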

0 commit comments