Commit a17fa74
Update doc
1 parent 494b1b2 commit a17fa74

3 files changed: +20 −19 lines

README.md

Lines changed: 18 additions & 17 deletions
@@ -56,7 +56,7 @@ Motivated by this, we propose DD-Ranking, a new benchmark for DD evaluation. DD-

 </details>

-## About
+## Introduction

 <details>
 <summary>Unfold to see more details.</summary>
@@ -88,32 +88,29 @@ $$\text{IOR}/\text{HLR} = \frac{(\text{Acc.}_{\text{syn-any}}-\text{Acc.}_{\text{r
 DD-Ranking is integrated with:
 <!-- Uniform Fair Labels: loss on soft label -->
 <!-- Data Aug. -->
-- Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) of using soft labels;
-- Data augmentation, reconsidered as [optional tricks](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) in DD;
-- Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in DD.
-A new ranking on representative DD methods.
-
-DD-Ranking is flexible and easy to use, supported by:
-<!-- Defualt configs: Customized configs -->
-<!-- Integrated classes: 1) Optimizer and etc.; 2) random selection tests (additionally, w/ or w/o hard labels)-->
-- Extensive configs provided;
-- Cutomized configs;
-- Testing and training framework with integrated metrics.
+- Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) of using soft labels in existing works;
+- Commonly used [data augmentation](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) methods in existing works;
+- Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in existing works.
+
+DD-Ranking has the following features:
+- **Fair Evaluation**: DD-Ranking provides a fair evaluation scheme for DD methods that can decouple the impacts from knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
+- **Easy-to-use**: DD-Ranking provides a unified interface for dataset distillation evaluation.
+- **Extensible**: DD-Ranking supports various datasets and models.
+- **Customizable**: DD-Ranking supports various data augmentations and soft label strategies.

 </details>

 ## Overview
-Included datasets and methods (hard/soft label).
-|Dataset|Hard Label|Soft Label|
+Included datasets and methods (categorized by hard/soft label).
+|Supported Dataset|Evaluated Hard Label Methods|Evaluated Soft Label Methods|
 |:-|:-|:-|
 |CIFAR-10|DC|DATM|
 |CIFAR-100|DSA|SRe2L|
 |TinyImageNet|DM|RDED|
 ||MTT|D4M|

-## Coming Soon
-Rank on different data augmentation methods.
+Evaluation results can be found in the [leaderboard](https://huggingface.co/spaces/Soptq/DD-Ranking).
+
 ## Tutorial

 Install DD-Ranking with `pip` or from [source](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main):
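The hunk header above truncates the IOR/HLR formula, but it compares accuracy on distilled (synthetic) data against a reference accuracy. As a hedged illustration of the decoupled-score idea the README describes — the function name, arguments, and the normalization by full-dataset accuracy are all assumptions here, not DD-Ranking's actual API — such a metric can be sketched as:

```python
def relative_gap(acc_syn: float, acc_rdm: float, acc_full: float) -> float:
    """Hypothetical sketch of an IOR/HLR-style score: the accuracy gap
    between models trained on distilled (syn) and randomly selected (rdm)
    data, normalized by full-dataset accuracy. The exact formula is cut
    off in the diff, so this normalization is an assumption."""
    return (acc_syn - acc_rdm) / acc_full

# Toy numbers (illustrative only): a distilled set that beats random
# selection by 10 accuracy points, normalized by 90% full-data accuracy.
print(round(relative_gap(0.55, 0.45, 0.90), 4))  # 0.1111
```

A higher value would indicate that the distilled data itself, rather than the evaluation tricks, carries the information — which is the point of decoupling knowledge distillation and augmentation from the score.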
@@ -221,6 +218,10 @@ The following results will be returned to you:
 - [Quickstart]()
 - [Supported Models]() -->

+## Coming Soon
+- [ ] DD-Ranking scores that decouple the impacts from data augmentation.
+- [ ] Evaluation results on ImageNet subsets.
+
 ## Contributing

 <!-- Only PR for the 1st version of DD-Ranking -->

book.toml

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ authors = ["DD-Ranking Team"]
 language = "en"
 multilingual = false
 src = "doc"
-title = "DD-Ranking Benchmark"
+title = "DD-Ranking API Documentation"

 [output.html]
 mathjax-support = true

doc/introduction.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Dataset Distillation (DD) aims to condense a large dataset into a much smaller o

 ![history](static/history.png)

-Notably, more and more methods are transitioning from "hard label" to "soft label" in dataset distillation, especially during evaluation. **Hard labels** are categorical, having the same format as the real dataset. **Soft labels** are distributions, typically generated by a pre-trained teacher model.
+Notably, more and more methods are transitioning from "hard label" to "soft label" in dataset distillation, especially during evaluation. **Hard labels** are categorical, having the same format as the real dataset. **Soft labels** are outputs of a pre-trained teacher model.
 Recently, Deng et al. pointed out that "a label is worth a thousand images". They showed analytically that soft labels are extremely useful for accuracy improvement.

 However, since the essence of soft labels is **knowledge distillation**, we find that when applying the same evaluation method to randomly selected data, the test accuracy also improves significantly (see the figure above).
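The changed line above contrasts hard and soft labels. A minimal sketch of the distinction, assuming a one-hot encoding for hard labels and a temperature-scaled softmax over teacher logits for soft labels (the logits and temperature value are illustrative, not DD-Ranking code):

```python
import math

def hard_label(class_idx, num_classes):
    """Hard label: a one-hot vector, the same categorical format as the real dataset."""
    return [1.0 if i == class_idx else 0.0 for i in range(num_classes)]

def soft_label(teacher_logits, temperature=2.0):
    """Soft label: a pre-trained teacher's outputs turned into a distribution
    via a temperature-scaled softmax (the temperature choice is illustrative)."""
    z = [l / temperature for l in teacher_logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

print(hard_label(1, 3))                     # [0.0, 1.0, 0.0]
probs = soft_label([2.0, 1.0, 0.1])
print(abs(sum(probs) - 1.0) < 1e-9)         # True: a proper distribution
```

The soft label spreads probability mass across wrong classes in proportion to the teacher's logits, which is exactly the extra signal knowledge distillation exploits — and why the paragraph argues the evaluation gain must be decoupled from the data itself.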
