<details>
<summary>Unfold to see more details.</summary>
<br>
Dataset Distillation (DD) aims to condense a large dataset into a much smaller one, which allows a model to achieve comparable performance after training on it. DD has gained extensive attention since it was proposed. With some foundational methods such as DC, DM, and MTT, various works have further pushed this area to a new standard with their novel designs.
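As a toy illustration of this goal (not the DD-Ranking code; every name below is made up), a dataset can be "condensed" to one synthetic point per class, and a simple nearest-centroid learner trained on those points can still recover the labels of the full set:

```python
# Toy sketch of dataset distillation: condense a dataset to one synthetic
# point per class (the class mean) and check that a nearest-centroid
# classifier "trained" on the condensed set still labels the full set.
# All names are hypothetical; real DD methods optimize the synthetic points.

def condense(X, y):
    """Return one distilled point per class: the class mean."""
    classes = sorted(set(y))
    syn_X, syn_y = [], []
    for c in classes:
        pts = [x for x, lab in zip(X, y) if lab == c]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        syn_X.append(mean)
        syn_y.append(c)
    return syn_X, syn_y

def predict(syn_X, syn_y, x):
    """Nearest-centroid prediction against the distilled set."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, s)) for s in syn_X]
    return syn_y[dists.index(min(dists))]

# Two well-separated 2-D clusters: 8 real points -> 2 distilled points.
X = [(0, 0), (1, 0), (0, 1), (1, 1), (9, 9), (10, 9), (9, 10), (10, 10)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
syn_X, syn_y = condense(X, y)
acc = sum(predict(syn_X, syn_y, x) == lab for x, lab in zip(X, y)) / len(X)
print(len(syn_X), acc)  # 2 distilled points, full-set accuracy 1.0
```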

Motivated by this, we propose DD-Ranking, a new benchmark for DD evaluation.
## About

<details>
<summary>Unfold to see more details.</summary>
<br>
DD-Ranking (DD, *i.e.*, Dataset Distillation) is an integrated and easy-to-use benchmark for dataset distillation. It aims to provide a fair evaluation scheme for DD methods, decoupling the impact of knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
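As a hedged sketch of what such decoupling might look like (hypothetical names and toy numbers, not DD-Ranking's actual metric), one way to factor out shared training tricks is to credit each method only with its gain over random selection at the same compression ratio:

```python
# Hypothetical illustration of the decoupling idea, not DD-Ranking's formula:
# both the distilled-data model and the random-selection baseline are assumed
# to use identical labels, augmentation, and training recipe, so the gap
# reflects the informativeness of the distilled data itself.

def improvement_over_random(acc_distilled, acc_random):
    """Accuracy gain attributable to the distilled data."""
    return acc_distilled - acc_random

# Toy accuracies, purely illustrative: (distilled, random) per method.
results = {"MethodA": (0.75, 0.50), "MethodB": (0.8125, 0.50)}
ranking = sorted(results, key=lambda m: improvement_over_random(*results[m]),
                 reverse=True)
print(ranking)  # ['MethodB', 'MethodA']
```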
<!-- Hard label is tested -->
<!-- Keep the same compression ratio, comparing with random selection -->
**Performance benchmark**

Revisit the original goal of dataset distillation:

> The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data.

- Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) for using soft labels;
- Data augmentation, reconsidered as [optional tricks](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) in DD;
- Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in DD.

A new ranking on representative DD methods.

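For intuition on the soft-label strategies mentioned above, here is a minimal sketch of one classic option, temperature-scaled knowledge distillation (Hinton et al.); the code is illustrative and not taken from DD-Ranking:

```python
# Minimal sketch of temperature-scaled knowledge distillation, one common
# soft-label strategy: the student is trained toward the teacher's softened
# output distribution. Illustrative only; names are not from DD-Ranking.
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives a softer distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_cross_entropy(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher targets and student outputs."""
    p = softmax(teacher_logits, T)  # soft labels from the teacher
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [2.2, 0.4, 0.2]
matched = kd_cross_entropy(teacher, teacher)      # student matches teacher
off = kd_cross_entropy([0.1, 2.0, 0.3], teacher)  # student disagrees
print(matched < off)  # True: the loss is minimized when distributions match
```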
DD-Ranking is flexible and easy to use, supported by:
<!-- Default configs; customized configs -->
<!-- Integrated classes: 1) optimizers, etc.; 2) random selection tests (additionally, w/ or w/o hard labels) -->