Commit b2cafce

Fix math
1 parent de27b4e commit b2cafce

1 file changed: +4 −7 lines changed

README.md

Lines changed: 4 additions & 7 deletions
@@ -85,13 +85,10 @@ $\text{Acc.}$ is the accuracy of models trained on different samples. Samples' m
 - $\text{rdm-any}$: Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.
 
 DD-Ranking uses a weight sum of $\text{IOR}$ and $-\text{HLR}$ to rank different methods:
-```math
-\alpha = w\text{IOR}-(1-w)\text{HLR}, \quad w \in [0, 1]
-```
-Formally, the **DD-Ranking Score (DDRS)** is defined as:
-```math
-\text{DDRS} = \frac{e^{\alpha}-e^{-1}}{e - e^{-1}}
-```
+$$\alpha = w\text{IOR}-(1-w)\text{HLR}, \quad w \in [0, 1]$$
+
+Formally, the **DD-Ranking Score (DDRS)** is defined as:\
+$$\text{DDRS} = \frac{e^{\alpha}-e^{-1}}{e - e^{-1}}$$
 By default, we set $w = 0.5$ on the leaderboard, meaning that both $\text{IOR}$ and $\text{HLR}$ are equally important. Users can adjust the weights to emphasize one aspect on the leaderboard.
 
 </details>
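
As a quick reference for the two formulas touched by this change, here is a minimal Python sketch of how $\text{IOR}$ and $\text{HLR}$ combine into a score. It is not part of the repository and not the DD-Ranking API; the function name and the example values are illustrative only.

```python
import math

def dd_ranking_score(ior: float, hlr: float, w: float = 0.5) -> float:
    """Illustrative helper (not the DD-Ranking API): combine IOR and HLR into a DDRS value."""
    assert 0.0 <= w <= 1.0, "w must lie in [0, 1]"
    # alpha = w * IOR - (1 - w) * HLR, the weighted sum from the first formula
    alpha = w * ior - (1 - w) * hlr
    # DDRS = (e^alpha - e^-1) / (e - e^-1), the second formula
    return (math.exp(alpha) - math.exp(-1)) / (math.e - math.exp(-1))

# Example with the leaderboard default w = 0.5 and made-up IOR/HLR values
print(dd_ranking_score(ior=0.20, hlr=0.10))  # ~0.29
```

The exponential mapping sends $\alpha = -1$ to 0 and $\alpha = 1$ to 1, so if both metrics lie in $[-1, 1]$ the resulting score is normalized to $[0, 1]$.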
