
Commit ffe35de

Update code
1 parent a804caf commit ffe35de

File tree

17 files changed (+47, -45 lines)


README.md

Lines changed: 5 additions & 3 deletions
@@ -13,7 +13,7 @@
 Fair and benchmark for dataset distillation.
 </h3> -->
 <p align="center">
-| <a href=""><b>Documentation</b></a> | <a href=""><b>Leaderboard</b></a> | <b>Paper</b> (Coming Soon) | <a href=""><b>Twitter/X</b></a> | <a href=""><b>Developer Slack</b></a> |
+| <a href="https://nus-hpc-ai-lab.github.io/DD-Ranking/"><b>Documentation</b></a> | <a href="https://nus-hpc-ai-lab.github.io/DD-Ranking/"><b>Leaderboard</b></a> | <b>Paper</b> (Coming Soon) | <a href=""><b>Twitter/X</b></a> | <a href=""><b>Developer Slack</b></a> |
 </p>
 

@@ -43,9 +43,11 @@ Dataset Distillation (DD) aims to condense a large dataset into a much smaller o
 Notably, more and more methods are transitioning from "hard labels" to "soft labels" in dataset distillation, especially during evaluation. **Hard labels** are categorical, in the same format as the real dataset. **Soft labels** are distributions, typically generated by a pre-trained teacher model.
 Recently, Deng et al. pointed out that "a label is worth a thousand images". They showed analytically that soft labels are extremely useful for accuracy improvement.
 
-However, since the essence of soft labels is **knowledge distillation**, we want to ask a question: **Can the test accuracy of the model trained on distilled data reflect the real informativeness of the distilled data?**
+However, since the essence of soft labels is **knowledge distillation**, we find that when applying the same evaluation method to randomly selected data, the test accuracy also improves significantly (see the figure above).
 
-Specifically, we have discovered the unfairness of using only test accuracy to demonstrate performance, in the following three aspects:
+This makes us wonder: **Can the test accuracy of the model trained on distilled data reflect the real informativeness of the distilled data?**
+
+Additionally, we have discovered the unfairness of using test accuracy alone to demonstrate performance, in the following three aspects:
 1. Results of using hard and soft labels are not directly comparable, since soft labels introduce teacher knowledge.
 2. Strategies for using soft labels are diverse. For instance, different objective functions are used during evaluation, such as soft Cross-Entropy and Kullback–Leibler divergence. Also, one image may be mapped to one or multiple soft labels.
 3. Different data augmentations are used during evaluation.
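To make point 2 concrete, the two objectives differ only in whether the teacher's entropy is subtracted; they yield the same student gradients. A minimal pure-Python sketch (the function names below are illustrative, not the repo's SoftCrossEntropyLoss/KLDivergenceLoss classes):

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_cross_entropy(student_logits, teacher_probs):
    # H(p_teacher, p_student) = -sum_c p_teacher[c] * log p_student[c]
    p = softmax(student_logits)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, p))

def kl_divergence(student_logits, teacher_probs, eps=1e-12):
    # KL(p_teacher || p_student) = soft CE minus the teacher's entropy,
    # which is constant w.r.t. the student's parameters
    p = softmax(student_logits)
    return sum(t * math.log((t + eps) / (s + eps)) for t, s in zip(teacher_probs, p))
```

Because the two losses differ by a constant (the teacher's entropy), the choice matters for reported loss values but not for optimization; differences in reported accuracy under the two criteria come from other evaluation details.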

dd_ranking/aug/__init__.py

Lines changed: 4 additions & 4 deletions
@@ -1,4 +1,4 @@
-from .dsa import DSA_Augmentation
-from .mixup import Mixup_Augmentation
-from .cutmix import Cutmix_Augmentation
-from .zca import ZCA_Whitening_Augmentation
+from .dsa import DSAugmentation
+from .mixup import MixupAugmentation
+from .cutmix import CutmixAugmentation
+from .zca import ZCAWhiteningAugmentation

dd_ranking/aug/cutmix.py

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 import kornia
 
 
-class Cutmix_Augmentation:
+class CutmixAugmentation:
     def __init__(self, params: dict):
         self.cutmix_p = params["cutmix_p"]
 
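The renamed class wraps kornia's CutMix; for readers unfamiliar with the technique, here is a self-contained sketch of the core CutMix logic (hypothetical helper names, 2D lists standing in for image tensors), following the original paper's bounding-box sampling:

```python
import math
import random

def rand_bbox(height, width, lam, rng=random):
    # Sample a cut region whose area fraction is roughly (1 - lam)
    cut_ratio = math.sqrt(1.0 - lam)
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    cy, cx = rng.randrange(height), rng.randrange(width)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
    return y1, y2, x1, x2

def cutmix(img_a, img_b, lam, rng=random):
    # Paste a patch of img_b into a copy of img_a (both H x W lists)
    h, w = len(img_a), len(img_a[0])
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    out = [row[:] for row in img_a]
    for y in range(y1, y2):
        out[y][x1:x2] = img_b[y][x1:x2]
    # adjust lam to the exact pasted area, as the paper prescribes
    lam_adj = 1.0 - ((y2 - y1) * (x2 - x1)) / (h * w)
    return out, lam_adj
```

Labels are then mixed with the adjusted coefficient: `lam_adj * label_a + (1 - lam_adj) * label_b`.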

dd_ranking/aug/dsa.py

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 import torch.nn.functional as F
 
 
-class DSA_Augmentation:
+class DSAugmentation:
 
     def __init__(self, params: dict, seed: int=-1, aug_mode: str='S'):
         self.params = params
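The key idea behind DSA (Differentiable Siamese Augmentation), reflected in the `seed` and `aug_mode` arguments above, is that the same randomly sampled transformation parameters are applied to real and synthetic batches. A toy stand-in (hypothetical function, a seeded flip in place of DSA's differentiable transforms):

```python
import random

def augment(batch, seed):
    # Draw the transform decision from a seeded RNG, so two calls with
    # the same seed apply exactly the same augmentation ("siamese"
    # behaviour, as in DSA's shared-parameter mode)
    rng = random.Random(seed)
    flip = rng.random() < 0.5
    if not flip:
        return [row[:] for row in batch]
    return [row[::-1] for row in batch]
```

With a shared seed, the real and synthetic batches always receive the same transform, which is what makes gradients through the augmentation comparable.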

dd_ranking/aug/mixup.py

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 import kornia
 
 
-class Mixup_Augmentation:
+class MixupAugmentation:
     def __init__(self, params: dict):
         self.mixup_p = params["mixup_p"]
 
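As with CutMix, the repo delegates Mixup to kornia; the underlying operation is a single convex combination. A minimal sketch under the same simplifications (hypothetical `mixup` function, 2D lists for images):

```python
import random

def mixup(img_a, img_b, alpha=1.0, rng=random):
    # Sample lam ~ Beta(alpha, alpha), then linearly interpolate the two
    # images; labels are mixed with the same coefficient
    lam = rng.betavariate(alpha, alpha)
    mixed = [[lam * a + (1.0 - lam) * b for a, b in zip(row_a, row_b)]
             for row_a, row_b in zip(img_a, img_b)]
    return mixed, lam
```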

dd_ranking/aug/zca.py

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 import kornia
 
 
-class ZCA_Whitening_Augmentation:
+class ZCAWhiteningAugmentation:
     def __init__(self, params: dict):
         self.transform = kornia.enhance.ZCAWhitening()
 
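`kornia.enhance.ZCAWhitening` computes the transform shown here in NumPy form (hypothetical `zca_whiten` helper, not the repo's code): rotate into the PCA basis, rescale each direction by the inverse square root of its variance, and rotate back so the result stays close to the original pixel space.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    # X: (n_samples, n_features), e.g. flattened images
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA matrix: E diag(1/sqrt(lambda + eps)) E^T
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W
```

After whitening, the empirical covariance of the data is (approximately) the identity, which is the property the augmentation exploits.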

dd_ranking/metrics/__init__.py

Lines changed: 3 additions & 3 deletions
@@ -1,3 +1,3 @@
-from .general import Unified_Evaluator
-from .soft_label import Soft_Label_Evaluator
-from .hard_label import Hard_Label_Evaluator
+from .general import GeneralEvaluator
+from .soft_label import SoftLabelEvaluator
+from .hard_label import HardLabelEvaluator

dd_ranking/metrics/general.py

Lines changed: 2 additions & 2 deletions
@@ -14,11 +14,11 @@
 from dd_ranking.utils import set_seed, get_optimizer, get_lr_scheduler
 from dd_ranking.utils import train_one_epoch, validate
 from dd_ranking.loss import SoftCrossEntropyLoss, KLDivergenceLoss
-from dd_ranking.aug import DSA_Augmentation, Mixup_Augmentation, Cutmix_Augmentation, ZCA_Whitening_Augmentation
+from dd_ranking.aug import DSAugmentation, MixupAugmentation, CutmixAugmentation, ZCAWhiteningAugmentation
 from dd_ranking.config import Config
 
 
-class Unified_Evaluator:
+class GeneralEvaluator:
 
     def __init__(self,
                  config: Config=None,

dd_ranking/metrics/hard_label.py

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@
 from dd_ranking.config import Config
 
 
-class Hard_Label_Evaluator:
+class HardLabelEvaluator:
 
     def __init__(self, config: Config=None, dataset: str='CIFAR10', real_data_path: str='./dataset/', ipc: int=10,
                  model_name: str='ConvNet-3', data_aug_func: str='cutmix', aug_params: dict={'cutmix_p': 1.0}, optimizer: str='sgd',

dd_ranking/metrics/soft_label.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 from dd_ranking.config import Config
 
 
-class Soft_Label_Evaluator:
+class SoftLabelEvaluator:
 
     def __init__(self, config: Config=None, dataset: str='CIFAR10', real_data_path: str='./dataset/', ipc: int=10, model_name: str='ConvNet-3',
                  soft_label_criterion: str='kl', data_aug_func: str='cutmix', aug_params: dict={'cutmix_p': 1.0}, soft_label_mode: str='S',
