
Commit 7b521a4

Update doc
1 parent 9e1f72e commit 7b521a4

2 files changed: +34, -35 lines

README.md

Lines changed: 32 additions & 33 deletions
@@ -77,11 +77,11 @@ Revisit the original goal of dataset distillation:
>
The evaluation method for DD-Ranking is grounded in the essence of dataset distillation, aiming to better reflect the informativeness of the synthesized data by assessing the following two aspects:
- 1. The degree to which the original dataset is recovered under hard labels (hard label recovery): $\text{HLR}=\text{Acc.}_{\text{full-hard}}-\text{Acc.}_{\text{syn-hard}}$.
+ 1. The degree to which the real dataset is recovered under hard labels (hard label recovery): $\text{HLR}=\text{Acc.}_{\text{real-hard}}-\text{Acc.}_{\text{syn-hard}}$.

2. The improvement over random selection when using personalized evaluation methods (improvement over random): $\text{IOR}=\text{Acc.}_{\text{syn-any}}-\text{Acc.}_{\text{rdm-any}}$.
$\text{Acc.}$ denotes the accuracy of models trained on the corresponding samples. The subscripts are defined as follows:
- - $\text{full-hard}$: Full dataset with hard labels;
+ - $\text{real-hard}$: Real dataset with hard labels;
- $\text{syn-hard}$: Synthetic dataset with hard labels;
- $\text{syn-any}$: Synthetic dataset with personalized evaluation methods (hard or soft labels);
- $\text{rdm-any}$: Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.
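Both metrics are plain accuracy differences, so they can be computed directly from evaluation results. A minimal sketch (function names and accuracy values are illustrative, not part of the DD-Ranking API; the weighted score assumes a simple linear combination with the weight $w$, which may differ from the exact leaderboard formula):

```python
def hard_label_recovery(acc_full_hard: float, acc_syn_hard: float) -> float:
    """HLR = Acc.(full-hard) - Acc.(syn-hard); smaller means more of the real data is recovered."""
    return acc_full_hard - acc_syn_hard

def improvement_over_random(acc_syn_any: float, acc_rdm_any: float) -> float:
    """IOR = Acc.(syn-any) - Acc.(rdm-any); larger means a bigger gain over random selection."""
    return acc_syn_any - acc_rdm_any

def weighted_score(hlr: float, ior: float, w: float = 0.5) -> float:
    """Assumed linear combination of the two metrics; the actual leaderboard formula may differ."""
    return w * ior - (1 - w) * hlr

# Illustrative accuracies (not real benchmark numbers).
hlr = hard_label_recovery(acc_full_hard=0.85, acc_syn_hard=0.62)   # ~0.23
ior = improvement_over_random(acc_syn_any=0.71, acc_rdm_any=0.55)  # ~0.16
```

Lower HLR and higher IOR indicate more informative synthetic data, which is why the combined score rewards IOR and penalizes HLR.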
@@ -99,8 +99,6 @@ By default, we set $w = 0.5$ on the leaderboard, meaning that both $\text{IOR}$
## Overview

DD-Ranking is integrated with:
- <!-- Uniform Fair Labels: loss on soft label -->
- <!-- Data Aug. -->
- Multiple [strategies](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/loss) of using soft labels in existing works;
- Commonly used [data augmentation](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/dd_ranking/aug) methods in existing works;
- Commonly used [model architectures](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/blob/main/dd_ranking/utils/networks.py) in existing works.
@@ -222,8 +220,6 @@ The following results will be returned to you:
- `HLR std`: The standard deviation of hard label recovery over `num_eval` runs.
- `IOR mean`: The mean of improvement over random over `num_eval` runs.
- `IOR std`: The standard deviation of improvement over random over `num_eval` runs.
- <!-- - `IOR/HLR mean`: The mean of IOR/HLR over `num_eval` runs.
- - `IOR/HLR std`: The standard deviation of IOR/HLR over `num_eval` runs. -->
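Each statistic above is an aggregate over `num_eval` independent runs. The aggregation can be sketched as follows (the per-run values are made up for illustration, and whether the library uses sample or population standard deviation is not stated in this excerpt):

```python
import statistics

# Hypothetical per-run HLR values from num_eval = 5 evaluation runs.
hlr_runs = [0.231, 0.225, 0.240, 0.228, 0.236]

hlr_mean = statistics.mean(hlr_runs)   # reported as `HLR mean`
hlr_std = statistics.stdev(hlr_runs)   # reported as `HLR std` (sample standard deviation assumed here)
print(f"HLR mean: {hlr_mean:.4f}, HLR std: {hlr_std:.4f}")
```

Reporting the standard deviation alongside the mean makes it clear how much of a method's HLR/IOR difference is within run-to-run noise.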

Check out our <span style="color: #ff0000;">[documentation](https://nus-hpc-ai-lab.github.io/DD-Ranking/)</span> to learn more.

@@ -240,7 +236,7 @@ Feel free to submit grades to update the DD-Ranking list. We welcome and value a
Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.

- <!-- ## Team
+ ## Team

### Developers:

@@ -260,32 +256,35 @@ Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.
### Advisors:
<div style="column-count: 2;">

- - [Dai Liu](https://scholar.google.com/citations?user=3aWKpkQAAAAJ&hl=en)
- - [Ziheng Qin](https://henryqin1997.github.io/ziheng_qin/)
- - [Kaipeng Zhang](https://kpzhang93.github.io/)
- - [Yuzhang Shang](https://42shawn.github.io/)
- - [Zheng Zhu](http://www.zhengzhu.net/)
- - [Kun Wang](https://www.kunwang.net/)
- - [Guang Li](https://www-lmd.ist.hokudai.ac.jp/member/guang-li/)
- - [Junhao Zhang](https://junhaozhang98.github.io/)
- - [Jiawei Liu](https://jia-wei-liu.github.io/)
- - [Lingjuan Lyu](https://sites.google.com/view/lingjuan-lyu)
- - [Yaochu Jin](https://en.westlake.edu.cn/faculty/yaochu-jin.html)
- - [Mike Shou](https://sites.google.com/view/showlab)
- - [Angela Yao](https://www.comp.nus.edu.sg/~ayao/)
- - [Xavier Bresson](https://graphdeeplearning.github.io/authors/xavier-bresson/)
- - [Tat-Seng Chua](https://www.chuatatseng.com/)
- - [Justin Cui](https://scholar.google.com/citations?user=zel3jUcAAAAJ&hl=en)
- - [Yan Yan](https://tomyan555.github.io/)
- - [Tianlong Chen](https://tianlong-chen.github.io/)
- - [Zhangyang Wang](https://vita-group.github.io/)
- - [Konstantinos N. Plataniotis](https://www.comm.utoronto.ca/~kostas/)
- - [Bo Zhao](https://www.bozhao.me/)
- - [Manolis Kellis](https://web.mit.edu/manoli/)
- - [Yang You](https://www.comp.nus.edu.sg/~youy/)
- - [Kai Wang](https://kaiwang960112.github.io/)
-
- </div> -->
+ - [Dai Liu](https://scholar.google.com/citations?user=3aWKpkQAAAAJ&hl=en) (Technical University of Munich)
+ - [Ziheng Qin](https://henryqin1997.github.io/ziheng_qin/) (National University of Singapore)
+ - [Kaipeng Zhang](https://kpzhang93.github.io/) (Shanghai AI Lab)
+ - [Yuzhang Shang](https://42shawn.github.io/) (University of Illinois at Chicago)
+ - [Tianyi Zhou](https://joeyzhouty.github.io/) (A*STAR)
+ - [Zheng Zhu](http://www.zhengzhu.net/) (GigaAI)
+ - [Kun Wang](https://www.kunwang.net/) (University of Science and Technology of China)
+ - [Guang Li](https://www-lmd.ist.hokudai.ac.jp/member/guang-li/) (Hokkaido University)
+ - [Junhao Zhang](https://junhaozhang98.github.io/) (National University of Singapore)
+ - [Jiawei Liu](https://jia-wei-liu.github.io/) (National University of Singapore)
+ - [Lingjuan Lyu](https://sites.google.com/view/lingjuan-lyu) (Sony)
+ - [Jiancheng Lv](https://scholar.google.com/citations?user=0TCaWKwAAAAJ&hl=en) (Sichuan University)
+ - [Yaochu Jin](https://en.westlake.edu.cn/faculty/yaochu-jin.html) (Westlake University)
+ - [Mike Shou](https://sites.google.com/view/showlab) (National University of Singapore)
+ - [Angela Yao](https://www.comp.nus.edu.sg/~ayao/) (National University of Singapore)
+ - [Xavier Bresson](https://graphdeeplearning.github.io/authors/xavier-bresson/) (National University of Singapore)
+ - [Tat-Seng Chua](https://www.chuatatseng.com/) (National University of Singapore)
+ - [Justin Cui](https://scholar.google.com/citations?user=zel3jUcAAAAJ&hl=en) (UC Los Angeles)
+ - [George Cazenavette](https://georgecazenavette.github.io/) (Massachusetts Institute of Technology)
+ - [Yan Yan](https://tomyan555.github.io/) (University of Illinois at Chicago)
+ - [Tianlong Chen](https://tianlong-chen.github.io/) (UNC Chapel Hill)
+ - [Zhangyang Wang](https://vita-group.github.io/) (UT Austin)
+ - [Konstantinos N. Plataniotis](https://www.comm.utoronto.ca/~kostas/) (University of Toronto)
+ - [Bo Zhao](https://www.bozhao.me/) (Shanghai Jiao Tong University)
+ - [Manolis Kellis](https://web.mit.edu/manoli/) (Massachusetts Institute of Technology)
+ - [Yang You](https://www.comp.nus.edu.sg/~youy/) (National University of Singapore)
+ - [Kai Wang](https://kaiwang960112.github.io/) (National University of Singapore)
+
+ </div>
## License

doc/introduction.md

Lines changed: 2 additions & 2 deletions
@@ -45,13 +45,13 @@ Revisit the original goal of dataset distillation:
>
The evaluation method for DD-Ranking is grounded in the essence of dataset distillation, aiming to better reflect the information content of the synthesized data by assessing the following two aspects:
- 1. The degree to which the original dataset is recovered under hard labels (hard label recovery): \\( \text{HLR} = \text{Acc.}_\text{full-hard} - \text{Acc.}_\text{syn-hard} \\)
+ 1. The degree to which the real dataset is recovered under hard labels (hard label recovery): \\( \text{HLR} = \text{Acc.}_\text{real-hard} - \text{Acc.}_\text{syn-hard} \\)

2. The improvement over random selection when using personalized evaluation methods (improvement over random): \\( \text{IOR} = \text{Acc.}_\text{syn-any} - \text{Acc.}_\text{rdm-any} \\)

\\(\text{Acc.}\\) denotes the accuracy of models trained on the corresponding samples. The subscripts are defined as follows:

- - \\(\text{full-hard}\\): Full dataset with hard labels;
+ - \\(\text{real-hard}\\): Real dataset with hard labels;
- \\(\text{syn-hard}\\): Synthetic dataset with hard labels;
- \\(\text{syn-any}\\): Synthetic dataset with personalized evaluation methods (hard or soft labels);
- \\(\text{rdm-any}\\): Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.
