
Commit 6f19cb9

Update readme
1 parent 6759ed5 commit 6f19cb9

File tree

1 file changed: +2 -2 lines changed

README.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-# <center>DD-Ranking</center>
+# <center>DD-Ranking: Rethinking the Evaluation of Dataset Distillation</center>

<p align="center">
<picture>
@@ -40,7 +40,7 @@ Dataset Distillation (DD) aims to condense a large dataset into a much smaller o

![history](./static/history.png)

-Notably, more and more methods are transitioning from "hard label" to "soft label" in dataset distillation, especially during evaluation. **Hard labels** are categorical, having the same format as the real dataset. **Soft labels** are distributions, typically generated by a pre-trained teacher model.
+Notably, more and more methods are transitioning from "hard label" to "soft label" in dataset distillation, especially during evaluation. **Hard labels** are categorical, having the same format as the real dataset. **Soft labels** are outputs of a pre-trained teacher model.

Recently, Deng et al. pointed out that "a label is worth a thousand images". They showed analytically that soft labels are extremely useful for accuracy improvement.

However, since the essence of soft labels is **knowledge distillation**, we find that when applying the same evaluation method to randomly selected data, the test accuracy also improves significantly (see the figure above).
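
For readers unfamiliar with the distinction the edited paragraph draws, below is a minimal PyTorch sketch of hard labels, teacher-generated soft labels, and the knowledge-distillation objective that soft-label evaluation implies. It is illustrative only, not DD-Ranking's API; `teacher`, `temperature`, and all function names are assumptions.

```python
# Minimal sketch (not DD-Ranking's API): hard vs. soft labels and the
# knowledge-distillation loss that soft-label evaluation implicitly uses.
# `teacher`, `temperature`, and all names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def hard_labels(targets: torch.Tensor) -> torch.Tensor:
    # Categorical class indices, same format as the real dataset, e.g. tensor([3, 7, 1]).
    return targets

@torch.no_grad()
def soft_labels(teacher: torch.nn.Module, images: torch.Tensor,
                temperature: float = 4.0) -> torch.Tensor:
    # Class distributions produced by a pre-trained teacher model.
    return F.softmax(teacher(images) / temperature, dim=1)

def kd_loss(student_logits: torch.Tensor, soft_targets: torch.Tensor,
            temperature: float = 4.0) -> torch.Tensor:
    # Knowledge distillation: push the student's predicted distribution
    # toward the teacher's soft targets via KL divergence.
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```

Because this objective transfers the teacher's knowledge regardless of which images it is applied to, randomly selected real images evaluated the same way also gain accuracy, which is the observation the paragraph above makes.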
