
Commit ac2fe10

richardreeve authored and KristofferC committed
Tests for correct Pre/Semi/Metric subtyping (#68)
* test_dists.jl: Fixes errors that could arise in the unlikely event that all values in p are < 0.3.
* Introduce tests to make sure that claims of PreMetric, SemiMetric and Metric-ness for the distance measures are justified (a sketch of the idea being tested follows this message).
* Improved and more widespread fix to problem of fixed 0.3 threshold for setting p to 0.
* Tidying test code to standard julia style used in rest of package of using .0 at end of floating point and not just .
* Force exact computation of tests, which now results in test failures.
* Individual columns of P matrix can still be zero, so fix to remove elements per column.
* Removing duplicated tests for equality with zero now we have PreMetric tests.
* Improve documentation by describing argument types of distance measures.
* Removing floating point rounding errors (sum(p) != 1) in RenyiDivergence causing v small negative divergences. RenyiDivergence now internally normalises arguments.
* Add in some extra checks for RenyiDivergence, and remove one which no longer applies post-normalisation.
* CosineDist and CorrDist can both have small rounding errors - corrected for now by increasing error tolerance.
* Add length match check for Bhattacharyya and Mahalanobis distances, especially since Bhattacharyya guarantees this using @inbounds.
* Add in metric-ness tests for Jaccard, SpanNormDist and RogersTanimoto.
* Update benchmarks to include Renyi divergences, and pass them probability distributions.
* Update benchmark processor information.
* Add in new DimensionMismatch checks to confirm implementation.
* Fix indentation problems.
* typo
* More indentation fixes - mostly to make @testsets indent.
* Remove a final equal-to-zero test.
* Fixed indent.
* There are more rounding errors resulting in dxz ≤ dxy + dyz failing because dxz ≈ dxy + dyz but on the wrong side of the equality. For the time being, we just need to accept this.
* Fix README.md argument explanation.
* Indent.
* Add in Renyi divergence docs.
* Fix benchmarks.
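The kind of property the new subtyping tests assert can be sketched as follows. This is a minimal illustration against the public `evaluate` API, not the package's actual test code; the helper name `check_metric_properties`, the tolerance, and the sampled vectors are assumptions for the example, and `Base.Test` is the Julia 0.5/0.6-era test module.

```julia
# Minimal sketch (not the package's test suite): spot-check that a claimed
# Metric behaves like one on random data.  A small tolerance is used because,
# as the commit message notes, rounding can put dxz on the wrong side of
# dxy + dyz.
using Distances
using Base.Test   # Julia 0.5/0.6-era test module

function check_metric_properties(dist, x, y, z; tol = 1e-10)
    dxy, dyx = evaluate(dist, x, y), evaluate(dist, y, x)
    dxz, dyz = evaluate(dist, x, z), evaluate(dist, y, z)
    @test dxy >= 0                    # PreMetric: non-negativity
    @test evaluate(dist, x, x) < tol  # PreMetric: d(x, x) ≈ 0
    @test abs(dxy - dyx) < tol        # SemiMetric: symmetry
    @test dxz <= dxy + dyz + tol      # Metric: triangle inequality
end

x, y, z = rand(10), rand(10), rand(10)
check_metric_properties(Euclidean(), x, y, z)
```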
1 parent 80b6c02 commit ac2fe10

File tree

7 files changed: +498 −329 lines

README.md

Lines changed: 57 additions & 49 deletions
@@ -131,15 +131,15 @@ Each distance corresponds to a distance type. The type name and the correspondin
 | Cityblock | `cityblock(x, y)` | `sum(abs(x - y))` |
 | Chebyshev | `chebyshev(x, y)` | `max(abs(x - y))` |
 | Minkowski | `minkowski(x, y, p)` | `sum(abs(x - y).^p) ^ (1/p)` |
-| Hamming | `hamming(x, y)` | `sum(x .!= y)` |
-| Rogers-Tanimoto | `rogerstanimoto(x, y)` | `2(sum(x&!y) + sum(!x&y)) / (2(sum(x&!y) + sum(!x&y)) + sum(x&y) + sum(!x&!y))` |
+| Hamming | `hamming(k, l)` | `sum(k .!= l)` |
+| Rogers-Tanimoto | `rogerstanimoto(a, b)` | `2(sum(a&!b) + sum(!a&b)) / (2(sum(a&!b) + sum(!a&b)) + sum(a&b) + sum(!a&!b))` |
 | Jaccard | `jaccard(x, y)` | `1 - sum(min(x, y)) / sum(max(x, y))` |
 | CosineDist | `cosine_dist(x, y)` | `1 - dot(x, y) / (norm(x) * norm(y))` |
 | CorrDist | `corr_dist(x, y)` | `cosine_dist(x - mean(x), y - mean(y))` |
 | ChiSqDist | `chisq_dist(x, y)` | `sum((x - y).^2 / (x + y))` |
-| KLDivergence | `kl_divergence(x, y)` | `sum(p .* log(p ./ q))` |
-| RenyiDivergence | `renyi_divergence(x, y, k)`| `log(sum( x .* (x ./ y) .^ (k - 1))) / (k - 1)` |
-| JSDivergence | `js_divergence(x, y)` | `KL(x, m) / 2 + KL(y, m) / 2 with m = (x + y) / 2` |
+| KLDivergence | `kl_divergence(p, q)` | `sum(p .* log(p ./ q))` |
+| RenyiDivergence | `renyi_divergence(p, q, k)`| `log(sum( p .* (p ./ q) .^ (k - 1))) / (k - 1)` |
+| JSDivergence | `js_divergence(p, q)` | `KL(p, m) / 2 + KL(p, m) / 2 with m = (p + q) / 2` |
 | SpanNormDist | `spannorm_dist(x, y)` | `max(x - y) - min(x - y )` |
 | BhattacharyyaDist | `bhattacharyya(x, y)` | `-log(sum(sqrt(x .* y) / sqrt(sum(x) * sum(y)))` |
 | HellingerDist | `hellinger(x, y) ` | `sqrt(1 - sum(sqrt(x .* y) / sqrt(sum(x) * sum(y))))` |
@@ -151,7 +151,7 @@ Each distance corresponds to a distance type. The type name and the correspondin
 | WeightedMinkowski | `wminkowski(x, y, w, p)` | `sum(abs(x - y).^p .* w) ^ (1/p)` |
 | WeightedHamming | `whamming(x, y, w)` | `sum((x .!= y) .* w)` |

-**Note:** The formulas above are using *Julia*'s functions. These formulas are mainly for conveying the math concepts in a concise way. The actual implementation may use a faster way.
+**Note:** The formulas above are using *Julia*'s functions. These formulas are mainly for conveying the math concepts in a concise way. The actual implementation may use a faster way. The arguments `x` and `y` are arrays of real numbers; `k` and `l` are arrays of distinct elements of any kind; a and b are arrays of Bools; and finally, `p` and `q` are arrays forming a discrete probability distribution and are therefore both expected to sum to one.

 ### Precision for Euclidean and SqEuclidean

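To illustrate the argument conventions described in the note added above, here is a small hedged usage sketch; the functions are the package's exported helpers, while the sample values are arbitrary and only the argument types matter.

```julia
using Distances

x, y = rand(5), rand(5)               # real-valued arrays
a, b = rand(Bool, 5), rand(Bool, 5)   # Bool arrays
p = rand(5); p /= sum(p)              # probability vectors: each sums to one
q = rand(5); q /= sum(q)

euclidean(x, y)        # real arguments
rogerstanimoto(a, b)   # Bool arguments
kl_divergence(p, q)    # probability-distribution arguments
```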
@@ -185,62 +185,70 @@ julia> pairwise(Euclidean(1e-12), x, x)

 The implementation has been carefully optimized based on benchmarks. The Julia scripts ``test/bench_colwise.jl`` and ``test/bench_pairwise.jl`` run the benchmarks on a variety of distances, respectively under column-wise and pairwise settings.

-Here are benchmarks obtained on Linux with Intel Core i7-4770K 3.5 GHz.
+Here are benchmarks obtained running Julia 0.5.1 on a late-2016 MacBook Pro running MacOS 10.12.3 with an quad-core Intel Core i7 processor @ 2.9 GHz.

 #### Column-wise benchmark

 The table below compares the performance (measured in terms of average elapsed time of each iteration) of a straightforward loop implementation and an optimized implementation provided in *Distances.jl*. The task in each iteration is to compute a specific distance between corresponding columns in two ``200-by-10000`` matrices.

 | distance | loop | colwise | gain |
 |----------- | -------| ----------| -------|
-| SqEuclidean | 0.012308s | 0.003860s | 3.1884 |
-| Euclidean | 0.012484s | 0.003995s | 3.1246 |
-| Cityblock | 0.012463s | 0.003927s | 3.1735 |
-| Chebyshev | 0.014897s | 0.005898s | 2.5258 |
-| Minkowski | 0.028154s | 0.017812s | 1.5806 |
-| Hamming | 0.012200s | 0.003896s | 3.1317 |
-| CosineDist | 0.013816s | 0.004670s | 2.9583 |
-| CorrDist | 0.023349s | 0.016626s | 1.4044 |
-| ChiSqDist | 0.015375s | 0.004788s | 3.2109 |
-| KLDivergence | 0.044360s | 0.036123s | 1.2280 |
-| JSDivergence | 0.098587s | 0.085595s | 1.1518 |
-| BhattacharyyaDist | 0.023103s | 0.013002s | 1.7769 |
-| HellingerDist | 0.023329s | 0.012555s | 1.8581 |
-| WeightedSqEuclidean | 0.012136s | 0.003758s | 3.2296 |
-| WeightedEuclidean | 0.012307s | 0.003789s | 3.2482 |
-| WeightedCityblock | 0.012287s | 0.003923s | 3.1321 |
-| WeightedMinkowski | 0.029895s | 0.018471s | 1.6185 |
-| WeightedHamming | 0.013427s | 0.004082s | 3.2896 |
-| SqMahalanobis | 0.121636s | 0.019370s | 6.2796 |
-| Mahalanobis | 0.117871s | 0.019939s | 5.9117 |
-
-We can see that using ``colwise`` instead of a simple loop yields considerable gain (2x - 6x), especially when the internal computation of each distance is simple. Nonetheless, when the computaton of a single distance is heavy enough (e.g. *Minkowski* and *JSDivergence*), the gain is not as significant.
+| SqEuclidean | 0.007267s | 0.002000s | 3.6334 |
+| Euclidean | 0.007471s | 0.002042s | 3.6584 |
+| Cityblock | 0.007239s | 0.001980s | 3.6556 |
+| Chebyshev | 0.011396s | 0.005274s | 2.1606 |
+| Minkowski | 0.022127s | 0.017161s | 1.2894 |
+| Hamming | 0.006777s | 0.001841s | 3.6804 |
+| CosineDist | 0.008709s | 0.003046s | 2.8592 |
+| CorrDist | 0.012766s | 0.014199s | 0.8991 |
+| ChiSqDist | 0.007321s | 0.002042s | 3.5856 |
+| KLDivergence | 0.037239s | 0.033535s | 1.1105 |
+| RenyiDivergence(0) | 0.014607s | 0.009587s | 1.5237 |
+| RenyiDivergence(1) | 0.044142s | 0.040953s | 1.0779 |
+| RenyiDivergence(2) | 0.019056s | 0.012029s | 1.5842 |
+| RenyiDivergence(∞) | 0.014469s | 0.010906s | 1.3267 |
+| JSDivergence | 0.077435s | 0.081599s | 0.9490 |
+| BhattacharyyaDist | 0.009805s | 0.004355s | 2.2514 |
+| HellingerDist | 0.010007s | 0.004030s | 2.4832 |
+| WeightedSqEuclidean | 0.007435s | 0.002051s | 3.6254 |
+| WeightedEuclidean | 0.008217s | 0.002075s | 3.9591 |
+| WeightedCityblock | 0.007486s | 0.002058s | 3.6378 |
+| WeightedMinkowski | 0.024653s | 0.019632s | 1.2557 |
+| WeightedHamming | 0.008467s | 0.002962s | 2.8587 |
+| SqMahalanobis | 0.101976s | 0.031780s | 3.2088 |
+| Mahalanobis | 0.105060s | 0.031806s | 3.3032 |
+
+We can see that using ``colwise`` instead of a simple loop yields considerable gain (2x - 4x), especially when the internal computation of each distance is simple. Nonetheless, when the computation of a single distance is heavy enough (e.g. *KLDivergence*, *RenyiDivergence*), the gain is not as significant.

 #### Pairwise benchmark

 The table below compares the performance (measured in terms of average elapsed time of each iteration) of a straightforward loop implementation and an optimized implementation provided in *Distances.jl*. The task in each iteration is to compute a specific distance in a pairwise manner between columns in a ``100-by-200`` and ``100-by-250`` matrices, which will result in a ``200-by-250`` distance matrix.

 | distance | loop | pairwise | gain |
 |----------- | -------| ----------| -------|
-| SqEuclidean | 0.032179s | 0.000170s | **189.7468** |
-| Euclidean | 0.031646s | 0.000326s | **97.1773** |
-| Cityblock | 0.031594s | 0.002771s | 11.4032 |
-| Chebyshev | 0.036732s | 0.011575s | 3.1735 |
-| Minkowski | 0.073685s | 0.047725s | 1.5440 |
-| Hamming | 0.030016s | 0.002539s | 11.8236 |
-| CosineDist | 0.035426s | 0.000235s | **150.8504** |
-| CorrDist | 0.061430s | 0.000341s | **180.1693** |
-| ChiSqDist | 0.037702s | 0.011709s | 3.2199 |
-| KLDivergence | 0.119043s | 0.086861s | 1.3705 |
-| JSDivergence | 0.255449s | 0.227079s | 1.1249 |
-| BhattacharyyaDist | 0.059165s | 0.033330s | 1.7751 |
-| HellingerDist | 0.056953s | 0.031163s | 1.8276 |
-| WeightedSqEuclidean | 0.031781s | 0.000218s | **145.9820** |
-| WeightedEuclidean | 0.031365s | 0.000410s | **76.4517** |
-| WeightedCityblock | 0.031239s | 0.003242s | 9.6360 |
-| WeightedMinkowski | 0.077039s | 0.049319s | 1.5621 |
-| WeightedHamming | 0.032584s | 0.005673s | 5.7442 |
-| SqMahalanobis | 0.280485s | 0.000297s | **943.6018** |
-| Mahalanobis | 0.295715s | 0.000498s | **593.6096** |
+| SqEuclidean | 0.022982s | 0.000145s | **158.9554** |
+| Euclidean | 0.022155s | 0.000843s | **26.2716** |
+| Cityblock | 0.022382s | 0.003899s | 5.7407 |
+| Chebyshev | 0.034491s | 0.014600s | 2.3624 |
+| Minkowski | 0.065968s | 0.046761s | 1.4107 |
+| Hamming | 0.021016s | 0.003139s | 6.6946 |
+| CosineDist | 0.024394s | 0.000828s | **29.4478** |
+| CorrDist | 0.039089s | 0.000852s | **45.8839** |
+| ChiSqDist | 0.022152s | 0.004361s | 5.0793 |
+| KLDivergence | 0.096694s | 0.086728s | 1.1149 |
+| RenyiDivergence(0) | 0.042658s | 0.023323s | 1.8290 |
+| RenyiDivergence(1) | 0.122015s | 0.104527s | 1.1673 |
+| RenyiDivergence(2) | 0.052896s | 0.033865s | 1.5620 |
+| RenyiDivergence(∞) | 0.039993s | 0.027331s | 1.4632 |
+| JSDivergence | 0.211276s | 0.204046s | 1.0354 |
+| BhattacharyyaDist | 0.030378s | 0.011189s | 2.7151 |
+| HellingerDist | 0.029592s | 0.010109s | 2.9273 |
+| WeightedSqEuclidean | 0.025619s | 0.000217s | **117.8128** |
+| WeightedEuclidean | 0.023366s | 0.000264s | **88.3711** |
+| WeightedCityblock | 0.026213s | 0.004610s | 5.6855 |
+| WeightedMinkowski | 0.068588s | 0.050033s | 1.3708 |
+| WeightedHamming | 0.025936s | 0.007225s | 3.5895 |
+| SqMahalanobis | 0.520046s | 0.000939s | **553.6694** |
+| Mahalanobis | 0.480556s | 0.000954s | **503.6009** |

 For distances of which a major part of the computation is a quadratic form (e.g. *Euclidean*, *CosineDist*, *Mahalanobis*), the performance can be drastically improved by restructuring the computation and delegating the core part to ``GEMM`` in *BLAS*. The use of this strategy can easily lead to 100x performance gain over simple loops (see the highlighted part of the table above).
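For context on what the two benchmarked settings measure, here is a minimal usage sketch of `colwise` and `pairwise`; the matrix sizes are illustrative, not the benchmark sizes.

```julia
using Distances

X = rand(200, 100)
Y = rand(200, 100)

# Column-wise: distance between corresponding columns -> a length-100 vector.
d = colwise(Euclidean(), X, Y)

# Pairwise: distance between every pair of columns -> a 100×100 matrix.
# For quadratic-form distances this path delegates the core work to BLAS GEMM.
D = pairwise(Euclidean(), X, Y)
```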

src/bhattacharyya.jl

Lines changed: 4 additions & 0 deletions
@@ -10,6 +10,10 @@ type HellingerDist <: Metric end
 # Bhattacharyya coefficient

 function bhattacharyya_coeff{T<:Number}(a::AbstractVector{T}, b::AbstractVector{T})
+    if length(a) != length(b)
+        throw(DimensionMismatch("first array has length $(length(a)) which does not match the length of the second, $(length(b))."))
+    end
+
     n = length(a)
     sqab = zero(T)
     # We must normalize since we cannot assume that the vectors are normalized to probability vectors.
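A hedged usage sketch of the new length check; the vector values below are arbitrary.

```julia
using Distances

bhattacharyya([0.2, 0.3, 0.5], [0.1, 0.4, 0.5])   # equal lengths: evaluates normally

# Mismatched lengths now raise DimensionMismatch rather than silently
# iterating over the shorter vector under @inbounds.
bhattacharyya([0.2, 0.8], [0.1, 0.4, 0.5])        # throws DimensionMismatch
```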

src/mahalanobis.jl

Lines changed: 4 additions & 0 deletions
@@ -14,6 +14,10 @@ result_type{T}(::SqMahalanobis{T}, ::AbstractArray, ::AbstractArray) = T
 # SqMahalanobis

 function evaluate{T<:AbstractFloat}(dist::SqMahalanobis{T}, a::AbstractVector, b::AbstractVector)
+    if length(a) != length(b)
+        throw(DimensionMismatch("first array has length $(length(a)) which does not match the length of the second, $(length(b))."))
+    end
+
     Q = dist.qmat
     z = a - b
     return dot(z, Q * z)
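And the analogous check for the Mahalanobis distances. `Q` below is just an arbitrary positive-definite matrix built for the example, using Julia 0.5/0.6-era `eye`.

```julia
using Distances

A = rand(3, 3)
Q = A' * A + eye(3)                  # arbitrary positive-definite matrix
sqmahalanobis(rand(3), rand(3), Q)   # equal lengths: evaluates normally
sqmahalanobis(rand(3), rand(2), Q)   # throws DimensionMismatch
```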

src/metrics.jl

Lines changed: 57 additions & 20 deletions
@@ -29,6 +29,40 @@ type CorrDist <: SemiMetric end
 type ChiSqDist <: SemiMetric end
 type KLDivergence <: PreMetric end

+"""
+    RenyiDivergence(α::Real)
+    renyi_divergence(P, Q, α::Real)
+
+Create a Rényi premetric of order α.
+
+Rényi defined a spectrum of divergence measures generalising the
+Kullback–Leibler divergence (see `KLDivergence`). The divergence is
+not a semimetric as it is not symmetric. It is parameterised by a
+parameter α, and is equal to Kullback–Leibler divergence at α = 1:
+
+At α = 0, ``R_0(P | Q) = -log(sum_{i: p_i > 0}(q_i))``
+
+At α = 1, ``R_1(P | Q) = sum_{i: p_i > 0}(p_i log(p_i / q_i))``
+
+At α = ∞, ``R_∞(P | Q) = log(sup_{i: p_i > 0}(p_i / q_i))``
+
+Otherwise ``R_α(P | Q) = log(sum_{i: p_i > 0}((p_i ^ α) / (q_i ^ (α - 1))) / (α - 1)``
+
+# Example:
+```jldoctest
+julia> x = reshape([0.1, 0.3, 0.4, 0.2], 2, 2);
+
+julia> pairwise(RenyiDivergence(0), x, x)
+2×2 Array{Float64,2}:
+ 0.0  0.0
+ 0.0  0.0
+
+julia> pairwise(Euclidean(2), x, x)
+2×2 Array{Float64,2}:
+ 0.0       0.577315
+ 0.655407  0.0
+```
+"""
 immutable RenyiDivergence{T <: Real} <: PreMetric
     p::T # order of power mean (order of divergence - 1)
     is_normal::Bool
@@ -208,49 +242,52 @@ kl_divergence(a::AbstractArray, b::AbstractArray) = evaluate(KLDivergence(), a,

 # RenyiDivergence
 function eval_start{T<:AbstractFloat}(::RenyiDivergence, a::AbstractArray{T}, b::AbstractArray{T})
-    zero(T), zero(T)
+    zero(T), zero(T), sum(a), sum(b)
 end

 @inline function eval_op{T<:AbstractFloat}(dist::RenyiDivergence, ai::T, bi::T)
     if ai == zero(T)
-        return zero(T), zero(T)
+        return zero(T), zero(T), zero(T), zero(T)
     elseif dist.is_normal
-        return ai, ai .* ((ai ./ bi) .^ dist.p)
+        return ai, ai * ((ai / bi) ^ dist.p), zero(T), zero(T)
     elseif dist.is_zero
-        return ai, bi
+        return ai, bi, zero(T), zero(T)
     elseif dist.is_one
-        return ai, ai * log(ai / bi)
+        return ai, ai * log(ai / bi), zero(T), zero(T)
     else # otherwise q = ∞
-        return ai, ai / bi
+        return ai, ai / bi, zero(T), zero(T)
     end
 end

 @inline function eval_reduce{T<:AbstractFloat}(dist::RenyiDivergence,
-                                               s1::Tuple{T, T},
-                                               s2::Tuple{T, T})
+                                               s1::Tuple{T, T, T, T},
+                                               s2::Tuple{T, T, T, T})
     if dist.is_inf
         if s1[1] == zero(T)
-            return s2
+            return (s2[1], s2[2], s1[3], s1[4])
         elseif s2[1] == zero(T)
             return s1
         else
-            return s1[2] > s2[2] ? s1 : s2
+            return s1[2] > s2[2] ? s1 : (s2[1], s2[2], s1[3], s1[4])
         end
     else
-        return s1[1] + s2[1], s1[2] + s2[2]
+        return s1[1] + s2[1], s1[2] + s2[2], s1[3], s1[4]
     end
 end

-function eval_end(dist::RenyiDivergence, s)
+function eval_end{T<:AbstractFloat}(dist::RenyiDivergence, s::Tuple{T, T, T, T})
     if dist.is_zero || dist.is_normal
-        log(s[2] / s[1]) / dist.p
+        log(s[2] / s[1]) / dist.p + log(s[4] / s[3])
     elseif dist.is_one
-        return s[2] / s[1]
+        return s[2] / s[1] + log(s[4] / s[3])
     else # q = ∞
-        log(s[2])
+        log(s[2]) + log(s[4] / s[3])
     end
 end

+# Combine docs with RenyiDivergence
+@doc (@doc RenyiDivergence) renyi_divergence
+
 renyi_divergence(a::AbstractArray, b::AbstractArray, q::Real) = evaluate(RenyiDivergence(q), a, b)

 # JSDivergence
@@ -310,11 +347,11 @@ jaccard(a::AbstractArray, b::AbstractArray) = evaluate(Jaccard(), a, b)

 @inline eval_start(::RogersTanimoto, a::AbstractArray, b::AbstractArray) = 0, 0, 0, 0
 @inline function eval_op(::RogersTanimoto, s1, s2)
-  tt = s1 && s2
-  tf = s1 && !s2
-  ft = !s1 && s2
-  ff = !s1 && !s2
-  tt, tf, ft, ff
+    tt = s1 && s2
+    tf = s1 && !s2
+    ft = !s1 && s2
+    ff = !s1 && !s2
+    tt, tf, ft, ff
 end
 @inline function eval_reduce(::RogersTanimoto, s1, s2)
     @inbounds begin
test/bench_colwise.jl

Lines changed: 14 additions & 2 deletions
@@ -28,6 +28,14 @@ n = 10000

 x = rand(m, n)
 y = rand(m, n)
+
+p = x
+q = y
+for i = 1:n
+    p[:,i] /= sum(x[:,i])
+    q[:,i] /= sum(y[:,i])
+end
+
 w = rand(m)

 Q = rand(m, m)
@@ -46,8 +54,12 @@ bench_colwise_distance(Hamming(), x, y)
 bench_colwise_distance(CosineDist(), x, y)
 bench_colwise_distance(CorrDist(), x, y)
 bench_colwise_distance(ChiSqDist(), x, y)
-bench_colwise_distance(KLDivergence(), x, y)
-bench_colwise_distance(JSDivergence(), x, y)
+bench_colwise_distance(KLDivergence(), p, q)
+bench_colwise_distance(RenyiDivergence(0), p, q)
+bench_colwise_distance(RenyiDivergence(1), p, q)
+bench_colwise_distance(RenyiDivergence(2), p, q)
+bench_colwise_distance(RenyiDivergence(Inf), p, q)
+bench_colwise_distance(JSDivergence(), p, q)

 bench_colwise_distance(BhattacharyyaDist(), x, y)
 bench_colwise_distance(HellingerDist(), x, y)

test/bench_pairwise.jl

Lines changed: 16 additions & 2 deletions
@@ -33,6 +33,16 @@ ny = 250
 x = rand(m, nx)
 y = rand(m, ny)

+p = x
+for i = 1:nx
+    p[:,i] /= sum(x[:,i])
+end
+
+q = y
+for i = 1:ny
+    q[:,i] /= sum(y[:,i])
+end
+
 w = rand(m)
 Q = rand(m, m)
 Q = Q' * Q
@@ -50,8 +60,12 @@ bench_pairwise_distance(Hamming(), x, y)
 bench_pairwise_distance(CosineDist(), x, y)
 bench_pairwise_distance(CorrDist(), x, y)
 bench_pairwise_distance(ChiSqDist(), x, y)
-bench_pairwise_distance(KLDivergence(), x, y)
-bench_pairwise_distance(JSDivergence(), x, y)
+bench_pairwise_distance(KLDivergence(), p, q)
+bench_pairwise_distance(RenyiDivergence(0), p, q)
+bench_pairwise_distance(RenyiDivergence(1), p, q)
+bench_pairwise_distance(RenyiDivergence(2), p, q)
+bench_pairwise_distance(RenyiDivergence(Inf), p, q)
+bench_pairwise_distance(JSDivergence(), p, q)

 bench_pairwise_distance(BhattacharyyaDist(), x, y)
 bench_pairwise_distance(HellingerDist(), x, y)
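The reason the divergence benchmarks now receive `p` and `q` rather than `x` and `y` is that these measures expect each column to be a probability distribution. A hedged sketch of the same normalisation outside the benchmark scripts (sizes are arbitrary; `sum(X, 1)` is Julia 0.5/0.6-era reduction syntax):

```julia
using Distances

X = rand(10, 100)
Y = rand(10, 100)

P = X ./ sum(X, 1)    # normalise every column to sum to one
Q = Y ./ sum(Y, 1)

colwise(RenyiDivergence(2), P, Q)   # column-wise Rényi divergences of order 2
pairwise(KLDivergence(), P, Q)      # 100×100 matrix of KL divergences
```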
