
Commit e5847f0

Update DDRS
1 parent 714c963 commit e5847f0

File tree

8 files changed: +34, -20 lines


README.md

Lines changed: 7 additions & 1 deletion
@@ -86,8 +86,14 @@ $\text{Acc.}$ is the accuracy of models trained on different samples. Samples' m
 
 DD-Ranking uses a weight sum of $\text{IOR}$ and $-\text{HLR}$ to rank different methods:
 $$
-\text{Rank\_Score} = \frac{e^{w \text{IOR} - (1-w) \text{HLR}} - e^{-1}}{e - e^{-1}}, \quad w \in [0, 1]
+\alpha = w\text{IOR}-(1-w)\text{HLR}, \quad w \in [0, 1]
 $$
+
+Formally, the **DD-Ranking Score (DDRS)** is defined as:
+$$
+\text{DDRS} = \frac{e^{\alpha}-e^{-1}}{e - e^{-1}}
+$$
+
 By default, we set $w = 0.5$ on the leaderboard, meaning that both $\text{IOR}$ and $\text{HLR}$ are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.
 
 </details>
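Since $\text{IOR}$ and $\text{HLR}$ are differences of accuracies, $\alpha$ stays within $[-1, 1]$ for any $w \in [0, 1]$, and the exponential normalization above maps it into $[0, 1]$. As a minimal sketch of the new formula (not part of this commit, and independent of the dd_ranking package):

import math

def dd_ranking_score(ior: float, hlr: float, w: float = 0.5) -> float:
    """DDRS as defined above: alpha = w*IOR - (1-w)*HLR, then
    DDRS = (e^alpha - e^-1) / (e - e^-1), mapping alpha in [-1, 1] to [0, 1]."""
    assert 0.0 <= w <= 1.0
    alpha = w * ior - (1 - w) * hlr
    return (math.exp(alpha) - math.exp(-1)) / (math.e - math.exp(-1))

# Illustrative (made-up) values: IOR = 0.20, HLR = 0.30, default w = 0.5
# alpha = 0.5*0.20 - 0.5*0.30 = -0.05, so DDRS comes out to about 0.248
print(dd_ranking_score(0.20, 0.30))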

book/getting-started/quick-start.html

Lines changed: 3 additions & 3 deletions
@@ -162,7 +162,7 @@ <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
 from dd_ranking.config import Config
 
 &gt;&gt;&gt; config = Config.from_file("./configs/Demo_Soft_Label.yaml")
-&gt;&gt;&gt; soft_obj = SoftLabelEvaluator(config)
+&gt;&gt;&gt; soft_label_metric_calc = SoftLabelEvaluator(config)
 </code></pre>
 <details>
 <summary>You can also pass keyword arguments.</summary>
@@ -185,7 +185,7 @@ <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
 "ratio_crop_pad": 0.125,
 "ratio_cutout": 0.5
 }
-save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/dm_hard_scores.csv"
+save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/datm_ranking_scores.csv"
 
 """ We only list arguments that usually need specifying"""
 soft_label_metric_calc = SoftLabelEvaluator(
@@ -213,7 +213,7 @@ <h2 id="quick-start"><a class="header" href="#quick-start">Quick Start</a></h2>
 &gt;&gt;&gt; soft_labels = torch.load('/your/path/to/syn/labels.pt')
 &gt;&gt;&gt; syn_lr = torch.load('/your/path/to/syn/lr.pt')
 </code></pre>
-<p><strong>Step 3:</strong> Compute the xxx metric.</p>
+<p><strong>Step 3:</strong> Compute the metric.</p>
 <pre><code class="language-python">&gt;&gt;&gt; metric = soft_label_metric_calc.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
 </code></pre>
 <p>The following results will be returned to you:</p>
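For readers following along, the renamed quick-start steps in this page compose roughly as in the sketch below. This is a hedged reconstruction, not part of the commit: the import path of SoftLabelEvaluator and the images.pt path are assumptions, since the diff shows only the class name and the labels/lr paths.

import torch
from dd_ranking.config import Config
# Assumed import location; the diff only shows the class name SoftLabelEvaluator.
from dd_ranking.metrics import SoftLabelEvaluator

# Step 1: build the evaluator from a config file (renamed from soft_obj).
config = Config.from_file("./configs/Demo_Soft_Label.yaml")
soft_label_metric_calc = SoftLabelEvaluator(config)

# Step 2: load the distilled data (placeholder paths; images.pt path assumed).
syn_images = torch.load("/your/path/to/syn/images.pt")
soft_labels = torch.load("/your/path/to/syn/labels.pt")
syn_lr = torch.load("/your/path/to/syn/lr.pt")

# Step 3: compute the metric.
metric = soft_label_metric_calc.compute_metrics(
    image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr
)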

book/index.html

Lines changed: 5 additions & 3 deletions
@@ -206,9 +206,11 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 <li>\(\text{syn-any}\): Synthetic dataset with personalized evaluation methods (hard or soft labels);</li>
 <li>\(\text{rdm-any}\): Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.</li>
 </ul>
-<!-- To rank different methods, we combine the above two metrics as DD-Ranking Score:
-
-\\[\text{DD-Ranking Score} = \frac{\text{IOR}}{\text{HLR}} = \frac{(\text{Acc.} \text{syn-any}-\text{Acc.} \text{rdm-any})}{(\text{Acc.} \text{full-hard}-\text{Acc.} \text{syn-hard})}\\] -->
+<p>DD-Ranking uses a weight sum of \(\text{IOR}\) and \(-\text{HLR}\) to rank different methods:
+\[\alpha = w \text{IOR} - (1-w) \text{HLR}, \quad w \in [0, 1]\]
+Formally, the <strong>DD-Ranking Score (DDRS)</strong> is defined as:
+\[\text{DDRS} = \frac{e^{\alpha} - e^{-1}}{e - e^{-1}} \]</p>
 <p>By default, we set \(w = 0.5\) on the leaderboard, meaning that both \(\text{IOR}\) and \(\text{HLR}\) are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.</p>
 
 </main>

book/introduction.html

Lines changed: 5 additions & 3 deletions
@@ -206,9 +206,11 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 <li>\(\text{syn-any}\): Synthetic dataset with personalized evaluation methods (hard or soft labels);</li>
 <li>\(\text{rdm-any}\): Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.</li>
 </ul>
-<!-- To rank different methods, we combine the above two metrics as DD-Ranking Score:
-
-\\[\text{DD-Ranking Score} = \frac{\text{IOR}}{\text{HLR}} = \frac{(\text{Acc.} \text{syn-any}-\text{Acc.} \text{rdm-any})}{(\text{Acc.} \text{full-hard}-\text{Acc.} \text{syn-hard})}\\] -->
+<p>DD-Ranking uses a weight sum of \(\text{IOR}\) and \(-\text{HLR}\) to rank different methods:
+\[\alpha = w \text{IOR} - (1-w) \text{HLR}, \quad w \in [0, 1]\]
+Formally, the <strong>DD-Ranking Score (DDRS)</strong> is defined as:
+\[\text{DDRS} = \frac{e^{\alpha} - e^{-1}}{e - e^{-1}} \]</p>
 <p>By default, we set \(w = 0.5\) on the leaderboard, meaning that both \(\text{IOR}\) and \(\text{HLR}\) are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.</p>
 
 </main>

book/print.html

Lines changed: 8 additions & 6 deletions
@@ -207,9 +207,11 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 <li>\(\text{syn-any}\): Synthetic dataset with personalized evaluation methods (hard or soft labels);</li>
 <li>\(\text{rdm-any}\): Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.</li>
 </ul>
-<!-- To rank different methods, we combine the above two metrics as DD-Ranking Score:
-
-\\[\text{DD-Ranking Score} = \frac{\text{IOR}}{\text{HLR}} = \frac{(\text{Acc.} \text{syn-any}-\text{Acc.} \text{rdm-any})}{(\text{Acc.} \text{full-hard}-\text{Acc.} \text{syn-hard})}\\] -->
+<p>DD-Ranking uses a weight sum of \(\text{IOR}\) and \(-\text{HLR}\) to rank different methods:
+\[\alpha = w \text{IOR} - (1-w) \text{HLR}, \quad w \in [0, 1]\]
+Formally, the <strong>DD-Ranking Score (DDRS)</strong> is defined as:
+\[\text{DDRS} = \frac{e^{\alpha} - e^{-1}}{e - e^{-1}} \]</p>
 <p>By default, we set \(w = 0.5\) on the leaderboard, meaning that both \(\text{IOR}\) and \(\text{HLR}\) are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.</p>
 <div style="break-before: page; page-break-before: always;"></div><h1 id="contributing"><a class="header" href="#contributing">Contributing</a></h1>
 <p>Welcome! We are glad that you by willing to contribute to the field of dataset distillation.</p>
 <ul>
@@ -240,7 +242,7 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 from dd_ranking.config import Config
 
 &gt;&gt;&gt; config = Config.from_file("./configs/Demo_Soft_Label.yaml")
-&gt;&gt;&gt; soft_obj = SoftLabelEvaluator(config)
+&gt;&gt;&gt; soft_label_metric_calc = SoftLabelEvaluator(config)
 </code></pre>
 <details>
 <summary>You can also pass keyword arguments.</summary>
@@ -263,7 +265,7 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 "ratio_crop_pad": 0.125,
 "ratio_cutout": 0.5
 }
-save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/dm_hard_scores.csv"
+save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/datm_ranking_scores.csv"
 
 """ We only list arguments that usually need specifying"""
 soft_label_metric_calc = SoftLabelEvaluator(
@@ -291,7 +293,7 @@ <h2 id="dd-ranking-score"><a class="header" href="#dd-ranking-score">DD-Ranking
 &gt;&gt;&gt; soft_labels = torch.load('/your/path/to/syn/labels.pt')
 &gt;&gt;&gt; syn_lr = torch.load('/your/path/to/syn/lr.pt')
 </code></pre>
-<p><strong>Step 3:</strong> Compute the xxx metric.</p>
+<p><strong>Step 3:</strong> Compute the metric.</p>
 <pre><code class="language-python">&gt;&gt;&gt; metric = soft_label_metric_calc.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)
 </code></pre>
 <p>The following results will be returned to you:</p>

book/searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

book/searchindex.json

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.

doc/introduction.md

Lines changed: 4 additions & 2 deletions
@@ -56,8 +56,10 @@ The evaluation method for DD-Ranking is grounded in the essence of dataset disti
 - \\(\text{syn-any}\\): Synthetic dataset with personalized evaluation methods (hard or soft labels);
 - \\(\text{rdm-any}\\): Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.
 
-DD-Ranking uses a weight sum of \\\text{IOR}\\) and \\(-\text{HLR}\\) to rank different methods:
-\\[\text{Rank\_Score} = \frac{e^{w \text{IOR}) - (1-w) \text{HLR}} - e^{-1}}{e - e^{-1}}, \quad w \in [0, 1]\\]
+DD-Ranking uses a weight sum of \\(\text{IOR}\\) and \\(-\text{HLR}\\) to rank different methods:
+\\[\alpha = w \text{IOR} - (1-w) \text{HLR}, \quad w \in [0, 1]\\]
+Formally, the **DD-Ranking Score (DDRS)** is defined as:
+\\[\text{DDRS} = \frac{e^{\alpha} - e^{-1}}{e - e^{-1}} \\]
 
 By default, we set \\(w = 0.5\\) on the leaderboard, meaning that both \\(\text{IOR}\\) and \\(\text{HLR}\\) are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.
 
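As a quick sanity check of the new definition, take assumed values $\text{IOR} = 0.20$, $\text{HLR} = 0.30$, and the default $w = 0.5$:

$$
\alpha = 0.5 \times 0.20 - 0.5 \times 0.30 = -0.05, \qquad \text{DDRS} = \frac{e^{-0.05} - e^{-1}}{e - e^{-1}} \approx \frac{0.9512 - 0.3679}{2.7183 - 0.3679} \approx 0.248
$$

Higher $\text{IOR}$ or lower $\text{HLR}$ raises $\alpha$ and therefore the score; $\alpha = 1$ gives $\text{DDRS} = 1$ and $\alpha = -1$ gives $\text{DDRS} = 0$.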
