
Commit bf335b7

Update doc
1 parent beb17bb commit bf335b7

33 files changed: +905, -1034 lines

README.md

Lines changed: 7 additions & 7 deletions
@@ -152,17 +152,17 @@ python setup.py install
 ```
 ### Quickstart
 
-Below is a step-by-step guide on how to use our `ddranking`. This demo is based on LRS on soft labels (source code can be found in `demo_soft.py`). You can find LRS on hard labels in `demo_hard.py` and ARS in `demo_aug.py`.
+Below is a step-by-step guide on how to use our `ddranking`. This demo is based on LRS on soft labels (source code can be found in `demo_lrs_soft.py`). You can find LRS on hard labels in `demo_lrs_hard.py` and ARS in `demo_aug.py`.
 DD-Ranking supports multi-GPU distributed evaluation. You can simply use `torchrun` to launch the evaluation.
 
 **Step 1**: Initialize a soft-label metric evaluator object. Config files are recommended for users to specify hyper-parameters. Sample config files are provided [here](https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/configs).
 
 ```python
-from ddranking.metrics import SoftLabelEvaluator
+from ddranking.metrics import LabelRobustScoreSoft
 from ddranking.config import Config
 
-config = Config.from_file("./configs/Demo_Soft_Label.yaml")
-soft_label_metric_calc = SoftLabelEvaluator(config)
+config = Config.from_file("./configs/Demo_LRS_Soft_Label.yaml")
+lrs_soft_metric = LabelRobustScoreSoft(config)
 ```
 
 <details>
@@ -194,7 +194,7 @@ random_data_path = "./random_data" # Specify your random data path
 save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/dm_hard_scores.csv"
 
 """We only list arguments that usually need specifying"""
-soft_label_metric_calc = SoftLabelEvaluator(
+lrs_soft_metric = LabelRobustScoreSoft(
     dataset=dataset,
     real_data_path=real_data_dir,
     ipc=ipc,
@@ -233,9 +233,9 @@ syn_lr = torch.load('/your/path/to/syn/lr.pt')
 **Step 3:** Compute the metric.
 
 ```python
-metric = soft_label_metric_calc.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
+lrs_soft_metric.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
 # alternatively, you can specify the image folder path to compute the metric
-soft_label_metric_calc.compute_metrics(image_path='./your/path/to/syn/images', soft_labels=soft_labels, syn_lr=syn_lr)
+lrs_soft_metric.compute_metrics(image_path='./your/path/to/syn/images', soft_labels=soft_labels, syn_lr=syn_lr)
 ```
 
 The following results will be printed and saved to `save_path`:
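
Pieced together, the updated quickstart reads as a single script. A minimal end-to-end sketch, assuming the sample config ships in `configs/` and the tensor paths are placeholders for your own distilled data (for distributed evaluation, launch it with `torchrun`, e.g. `torchrun --nproc_per_node=4 demo_lrs_soft.py`):

```python
import torch

from ddranking.config import Config
from ddranking.metrics import LabelRobustScoreSoft

# Step 1: build the evaluator from a sample config file.
config = Config.from_file("./configs/Demo_LRS_Soft_Label.yaml")
lrs_soft_metric = LabelRobustScoreSoft(config)

# Step 2: load the distilled data saved by your method (paths are placeholders).
syn_images = torch.load("/your/path/to/syn/images.pt")    # synthetic image tensor
soft_labels = torch.load("/your/path/to/soft/labels.pt")  # soft labels matching the images
syn_lr = torch.load("/your/path/to/syn/lr.pt")            # learning rate distilled with the data

# Step 3: compute the metrics; results are printed and saved to `save_path`.
lrs_soft_metric.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
```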

book/config/overview.html

Lines changed: 20 additions & 7 deletions
@@ -155,7 +155,8 @@ <h1 class="menu-title">DD-Ranking API Documentation</h1>
 <div id="content" class="content">
 <main>
 <h1 id="config"><a class="header" href="#config">Config</a></h1>
-<p>To ease the usage of DD-Ranking, we allow users to specify the parameters of the evaluator in a config file. The config file is a YAML file that contains the parameters of the evaluator. We illustrate the config file with the following example.</p>
+<p>To ease the usage of DD-Ranking, we allow users to specify the parameters of the evaluator in a config file. The config file is a YAML file that contains the parameters of the evaluator. We illustrate the config file with the following examples.</p>
+<h2 id="lrs"><a class="header" href="#lrs">LRS</a></h2>
 <pre><code class="language-yaml">dataset: CIFAR100 # dataset name
 real_data_path: ./dataset/ # path to the real dataset
 ipc: 10 # image per class
@@ -166,12 +167,14 @@ <h1 id="config"><a class="header" href="#config">Config</a></h1>
 tea_use_torchvision: true # whether to use torchvision to load teacher model
 
 teacher_dir: ./teacher_models # path to the pretrained teacher model
+teacher_model_names: [ResNet-18-BN] # the list of teacher models being used for evaluation
 
 data_aug_func: mixup # data augmentation function
 aug_params:
   lambda: 0.8 # data augmentation parameter; please follow this format for other parameters
 
 use_zca: false # whether to use ZCA whitening
+use_aug_for_hard: false # whether to use data augmentation for hard label evaluation
 
 custom_train_trans: # custom torchvision-based transformations to process training data; please follow this format for your own transformations
   - name: RandomCrop
@@ -189,28 +192,38 @@ <h1 id="config"><a class="header" href="#config">Config</a></h1>
 
 custom_val_trans: null # custom torchvision-based transformations to process validation data; please follow the format above for your own transformations
 
-use_aug_for_hard: false # whether to use data augmentation for hard label evaluation
-
 soft_label_mode: M # soft label mode
 soft_label_criterion: kl # soft label criterion
-temperature: 30.0 # temperature for soft label
+loss_fn_kwargs:
+  temperature: 30.0 # temperature for soft label
+  scale_loss: false # whether to scale the loss
+
 optimizer: adamw # optimizer
 lr_scheduler: cosine # learning rate scheduler
 weight_decay: 0.01 # weight decay
+momentum: 0.9 # momentum
 num_eval: 5 # number of evaluations
+eval_full_data: false # whether to compute the test accuracy on the full dataset
 num_epochs: 400 # number of training epochs
-default_lr: 0.001 # default learning rate
 num_workers: 4 # number of workers
 device: cuda # device
+dist: true # whether to use distributed training
 syn_batch_size: 256 # batch size for synthetic data
 real_batch_size: 256 # batch size for real data
 save_path: ./results.csv # path to save the results
+
+random_data_format: tensor # format of the random data, tensor or image
+random_data_path: ./random_data # path to save the random data
+
 </code></pre>
 <p>To use a config file, you can follow the example below.</p>
-<pre><code class="language-python">from dd_ranking.metrics import SoftLabelEvaluator
+<pre><code class="language-python">from dd_ranking.metrics import LabelRobustScoreSoft
 
 config = Config(config_path='./config.yaml')
-evaluator = SoftLabelEvaluator(config)
+evaluator = LabelRobustScoreSoft(config)
+</code></pre>
+<h2 id="ars"><a class="header" href="#ars">ARS</a></h2>
+<pre><code class="language-yaml">
 </code></pre>
 
 </main>
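
The LRS YAML keys mirror the evaluator's keyword arguments (compare the keyword example in Quick Start), so the same run can be configured without a file. A minimal sketch under that assumption, with values copied from the YAML above; the one-to-one key-to-kwarg mapping is an assumption:

```python
from dd_ranking.metrics import LabelRobustScoreSoft

# Assumes each YAML key maps one-to-one onto a keyword argument, as the
# quick-start's keyword example suggests; values are copied from the YAML above.
evaluator = LabelRobustScoreSoft(
    dataset="CIFAR100",
    real_data_path="./dataset/",
    ipc=10,
    soft_label_mode="M",
    soft_label_criterion="kl",
    loss_fn_kwargs={"temperature": 30.0, "scale_loss": False},
    data_aug_func="mixup",
    aug_params={"lambda": 0.8},
    num_eval=5,
    device="cuda",
    save_path="./results.csv",
)
```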

book/datasets/overview.html

Lines changed: 10 additions & 0 deletions
@@ -166,6 +166,7 @@ <h3 id="parameters"><a class="header" href="#parameters">Parameters</a></h3>
 <li><strong>data_path</strong>(<span style="color:#FF6B00;">str</span>): Path to the dataset.</li>
 <li><strong>im_size</strong>(<span style="color:#FF6B00;">tuple</span>): Image size.</li>
 <li><strong>use_zca</strong>(<span style="color:#FF6B00;">bool</span>): Whether to use ZCA whitening. When set to True, the dataset will <strong>not be</strong> normalized using the mean and standard deviation of the training set.</li>
+<li><strong>custom_train_trans</strong>(<span style="color:#FF6B00;">Optional[Callable]</span>): Custom transformation on the training set.</li>
 <li><strong>custom_val_trans</strong>(<span style="color:#FF6B00;">Optional[Callable]</span>): Custom transformation on the validation set.</li>
 <li><strong>device</strong>(<span style="color:#FF6B00;">str</span>): Device for performing ZCA whitening.</li>
 </ul>
@@ -198,6 +199,15 @@ <h3 id="parameters"><a class="header" href="#parameters">Parameters</a></h3>
 <li><strong>std</strong>: <code>[0.229, 0.224, 0.225]</code></li>
 </ul>
 </li>
+<li><strong>ImageNet1K</strong>
+<ul>
+<li><strong>channels</strong>: <code>3</code></li>
+<li><strong>im_size</strong>: <code>(224, 224)</code></li>
+<li><strong>num_classes</strong>: <code>1000</code></li>
+<li><strong>mean</strong>: <code>[0.485, 0.456, 0.406]</code></li>
+<li><strong>std</strong>: <code>[0.229, 0.224, 0.225]</code></li>
+</ul>
+</li>
 </ul>
 
 </main>
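
The `custom_train_trans`/`custom_val_trans` parameters above accept torchvision-style callables. A hedged sketch, where the `RandomCrop` entry mirrors the YAML example on the config page and the remaining transforms are illustrative placeholders:

```python
import torchvision.transforms as T

# Illustrative training transform: RandomCrop mirrors the config page's YAML
# example; the other transforms and their values are placeholders.
custom_train_trans = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
custom_val_trans = None  # fall back to the dataset's default validation pipeline
```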

book/getting-started/quick-start.html

Lines changed: 28 additions & 16 deletions
@@ -155,24 +155,26 @@ <h1 class="menu-title">DD-Ranking API Documentation</h1>
 <div id="content" class="content">
 <main>
 <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
-<p>Below is a step-by-step guide on how to use our <code>dd_ranking</code>. This demo is based on soft labels (source code can be found in <code>demo_soft.py</code>). You can find hard label demo in <code>demo_hard.py</code>.</p>
+<p>Below is a step-by-step guide on how to use our <code>dd_ranking</code>. This demo is for the label-robust score (LRS) on soft labels (source code can be found in <code>demo_lrs_soft.py</code>). You can find the demo for LRS on hard labels in <code>demo_lrs_hard.py</code> and the demo for the augmentation-robust score (ARS) in <code>demo_ars.py</code>.
+DD-Ranking supports multi-GPU distributed evaluation. You can simply use <code>torchrun</code> to launch the evaluation.</p>
 <p><strong>Step 1</strong>: Initialize a soft-label metric evaluator object. Config files are recommended for users to specify hyper-parameters. Sample config files are provided <a href="https://github.com/NUS-HPC-AI-Lab/DD-Ranking/tree/main/configs">here</a>.</p>
-<pre><code class="language-python">from ddranking.metrics import SoftLabelEvaluator
+<pre><code class="language-python">from ddranking.metrics import LabelRobustScoreSoft
 from ddranking.config import Config
 
-&gt;&gt;&gt; config = Config.from_file("./configs/Demo_Soft_Label.yaml")
-&gt;&gt;&gt; soft_label_metric_calc = SoftLabelEvaluator(config)
+&gt;&gt;&gt; config = Config.from_file("./configs/Demo_LRS_Soft_Label.yaml")
+&gt;&gt;&gt; lrs_soft_metric = LabelRobustScoreSoft(config)
 </code></pre>
 <details>
 <summary>You can also pass keyword arguments.</summary>
 <pre><code class="language-python">device = "cuda"
 method_name = "DATM" # Specify your method name
 ipc = 10 # Specify your IPC
-dataset = "CIFAR10" # Specify your dataset name
-syn_data_dir = "./data/CIFAR10/IPC10/" # Specify your synthetic data path
+dataset = "CIFAR100" # Specify your dataset name
+syn_data_dir = "./data/CIFAR100/IPC10/" # Specify your synthetic data path
 real_data_dir = "./datasets" # Specify your dataset path
 model_name = "ConvNet-3" # Specify your model name
 teacher_dir = "./teacher_models" # Specify your path to teacher model checkpoints
+teacher_model_names = ["ConvNet-3"] # Specify your teacher model names
 im_size = (32, 32) # Specify your image size
 dsa_params = { # Specify your data augmentation parameters
     "prob_flip": 0.5,
@@ -184,23 +186,31 @@ <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
     "ratio_crop_pad": 0.125,
     "ratio_cutout": 0.5
 }
-save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/datm_ranking_scores.csv"
+random_data_format = "tensor" # Specify your random data format (tensor or image)
+random_data_path = "./random_data" # Specify your random data path
+save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/dm_hard_scores.csv"
 
 """We only list arguments that usually need specifying"""
-soft_label_metric_calc = SoftLabelEvaluator(
+lrs_soft_metric = LabelRobustScoreSoft(
     dataset=dataset,
     real_data_path=real_data_dir,
     ipc=ipc,
     model_name=model_name,
     soft_label_criterion='sce', # Use Soft Cross Entropy Loss
     soft_label_mode='S', # Use one-to-one image to soft label mapping
+    loss_fn_kwargs={'temperature': 1.0, 'scale_loss': False},
     data_aug_func='dsa', # Use DSA data augmentation
     aug_params=dsa_params, # Specify dsa parameters
     im_size=im_size,
+    random_data_format=random_data_format,
+    random_data_path=random_data_path,
     stu_use_torchvision=False,
     tea_use_torchvision=False,
-    teacher_dir='./teacher_models',
+    teacher_dir=teacher_dir,
+    teacher_model_names=teacher_model_names,
+    num_eval=5,
     device=device,
+    dist=True,
     save_path=save_path
 )
 </code></pre>
@@ -213,16 +223,18 @@ <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
 &gt;&gt;&gt; syn_lr = torch.load('/your/path/to/syn/lr.pt')
 </code></pre>
 <p><strong>Step 3:</strong> Compute the metric.</p>
-<pre><code class="language-python">&gt;&gt;&gt; metric = soft_label_metric_calc.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
+<pre><code class="language-python">&gt;&gt;&gt; lrs_soft_metric.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
 # alternatively, you can specify the image folder path to compute the metric
-&gt;&gt;&gt; metric = soft_label_metric_calc.compute_metrics(image_path='./your/path/to/syn/images', soft_labels=soft_labels, syn_lr=syn_lr)
+&gt;&gt;&gt; lrs_soft_metric.compute_metrics(image_path='./your/path/to/syn/images', soft_labels=soft_labels, syn_lr=syn_lr)
 </code></pre>
-<p>The following results will be returned to you:</p>
+<p>The following results will be printed and saved to <code>save_path</code>:</p>
 <ul>
-<li><code>hard_label_recovery mean</code>: The mean of hard label recovery scores.</li>
-<li><code>hard_label_recovery std</code>: The standard deviation of hard label recovery scores.</li>
-<li><code>improvement_over_random mean</code>: The mean of improvement over random scores.</li>
-<li><code>improvement_over_random std</code>: The standard deviation of improvement over random scores.</li>
+<li><code>HLR mean</code>: The mean of hard label recovery over <code>num_eval</code> runs.</li>
+<li><code>HLR std</code>: The standard deviation of hard label recovery over <code>num_eval</code> runs.</li>
+<li><code>IOR mean</code>: The mean of improvement over random over <code>num_eval</code> runs.</li>
+<li><code>IOR std</code>: The standard deviation of improvement over random over <code>num_eval</code> runs.</li>
+<li><code>LRS mean</code>: The mean of the Label-Robust Score over <code>num_eval</code> runs.</li>
+<li><code>LRS std</code>: The standard deviation of the Label-Robust Score over <code>num_eval</code> runs.</li>
 </ul>
 <!-- - `dd_ranking_score mean`: The mean of dd ranking scores.
 - `dd_ranking_score std`: The standard deviation of dd ranking scores. -->
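
Because `save_path` points at a CSV file, the saved scores can be inspected after the run. A hedged post-processing sketch: pandas is an assumption (any CSV reader works), and the column names are presumed to follow the printed metric names above:

```python
import pandas as pd

# Load the scores written by compute_metrics; the exact column layout is an
# assumption based on the printed metrics (HLR/IOR/LRS, mean and std).
scores = pd.read_csv("./results/CIFAR100/ConvNet-3/IPC10/dm_hard_scores.csv")
print(scores)
```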
