
Commit 387cfe5

update for 0.6 (#64)
* update for 0.6
* drop 0.4 support
* modernize and update benchmarks
* fixes
* update benchmark text
1 parent 9d09e91 commit 387cfe5

13 files changed (+195, −215 lines)

.travis.yml

Lines changed: 0 additions & 1 deletion
@@ -3,7 +3,6 @@ os:
   - linux
   - osx
 julia:
-  - 0.4
   - 0.5
   - nightly
 notifications:

README.md

Lines changed: 73 additions & 70 deletions
@@ -3,7 +3,6 @@
 [![Build Status](https://travis-ci.org/JuliaStats/Distances.jl.svg?branch=master)](https://travis-ci.org/JuliaStats/Distances.jl)
 [![Coverage Status](https://coveralls.io/repos/JuliaStats/Distances.jl/badge.svg?branch=master&service=github)](https://coveralls.io/github/JuliaStats/Distances.jl?branch=master)
 
-[![Distances](http://pkg.julialang.org/badges/Distances_0.4.svg)](http://pkg.julialang.org/?pkg=Distances&ver=0.4)
 [![Distances](http://pkg.julialang.org/badges/Distances_0.5.svg)](http://pkg.julialang.org/?pkg=Distances)
 
 A Julia package for evaluating distances (metrics) between vectors.
@@ -125,32 +124,32 @@ This type system has practical significance. For example, when computing pairwis
 
 Each distance corresponds to a distance type. The type name and the corresponding mathematical definitions of the distances are listed in the following table.
 
-| type name            | convenient syntax         | math definition |
-| -------------------- | ------------------------- | --------------------|
-| Euclidean            | euclidean(x, y)           | sqrt(sum((x - y) .^ 2)) |
-| SqEuclidean          | sqeuclidean(x, y)         | sum((x - y).^2) |
-| Cityblock            | cityblock(x, y)           | sum(abs(x - y)) |
-| Chebyshev            | chebyshev(x, y)           | max(abs(x - y)) |
-| Minkowski            | minkowski(x, y, p)        | sum(abs(x - y).^p) ^ (1/p) |
-| Hamming              | hamming(x, y)             | sum(x .!= y) |
-| Rogers-Tanimoto      | rogerstanimoto(x, y)      | 2(sum(x&!y) + sum(!x&y)) / (2(sum(x&!y) + sum(!x&y)) + sum(x&y) + sum(!x&!y)) |
-| Jaccard              | jaccard(x, y)             | 1 - sum(min(x, y)) / sum(max(x, y)) |
-| CosineDist           | cosine_dist(x, y)         | 1 - dot(x, y) / (norm(x) * norm(y)) |
-| CorrDist             | corr_dist(x, y)           | cosine_dist(x - mean(x), y - mean(y)) |
-| ChiSqDist            | chisq_dist(x, y)          | sum((x - y).^2 / (x + y)) |
-| KLDivergence         | kl_divergence(x, y)       | sum(p .* log(p ./ q)) |
-| RenyiDivergence      | renyi_divergence(x, y, k) | log(sum( x .* (x ./ y) .^ (k - 1))) / (k - 1) |
-| JSDivergence         | js_divergence(x, y)       | KL(x, m) / 2 + KL(y, m) / 2 with m = (x + y) / 2 |
-| SpanNormDist         | spannorm_dist(x, y)       | max(x - y) - min(x - y) |
-| BhattacharyyaDist    | bhattacharyya(x, y)       | -log(sum(sqrt(x .* y) / sqrt(sum(x) * sum(y))) |
-| HellingerDist        | hellinger(x, y)           | sqrt(1 - sum(sqrt(x .* y) / sqrt(sum(x) * sum(y)))) |
-| Mahalanobis          | mahalanobis(x, y, Q)      | sqrt((x - y)' * Q * (x - y)) |
-| SqMahalanobis        | sqmahalanobis(x, y, Q)    | (x - y)' * Q * (x - y) |
-| WeightedEuclidean    | weuclidean(x, y, w)       | sqrt(sum((x - y).^2 .* w)) |
-| WeightedSqEuclidean  | wsqeuclidean(x, y, w)     | sum((x - y).^2 .* w) |
-| WeightedCityblock    | wcityblock(x, y, w)       | sum(abs(x - y) .* w) |
-| WeightedMinkowski    | wminkowski(x, y, w, p)    | sum(abs(x - y).^p .* w) ^ (1/p) |
-| WeightedHamming      | whamming(x, y, w)         | sum((x .!= y) .* w) |
+| type name            | convenient syntax           | math definition |
+| -------------------- | --------------------------- | --------------------|
+| Euclidean            | `euclidean(x, y)`           | `sqrt(sum((x - y) .^ 2))` |
+| SqEuclidean          | `sqeuclidean(x, y)`         | `sum((x - y).^2)` |
+| Cityblock            | `cityblock(x, y)`           | `sum(abs(x - y))` |
+| Chebyshev            | `chebyshev(x, y)`           | `max(abs(x - y))` |
+| Minkowski            | `minkowski(x, y, p)`        | `sum(abs(x - y).^p) ^ (1/p)` |
+| Hamming              | `hamming(x, y)`             | `sum(x .!= y)` |
+| Rogers-Tanimoto      | `rogerstanimoto(x, y)`      | `2(sum(x&!y) + sum(!x&y)) / (2(sum(x&!y) + sum(!x&y)) + sum(x&y) + sum(!x&!y))` |
+| Jaccard              | `jaccard(x, y)`             | `1 - sum(min(x, y)) / sum(max(x, y))` |
+| CosineDist           | `cosine_dist(x, y)`         | `1 - dot(x, y) / (norm(x) * norm(y))` |
+| CorrDist             | `corr_dist(x, y)`           | `cosine_dist(x - mean(x), y - mean(y))` |
+| ChiSqDist            | `chisq_dist(x, y)`          | `sum((x - y).^2 / (x + y))` |
+| KLDivergence         | `kl_divergence(x, y)`       | `sum(p .* log(p ./ q))` |
+| RenyiDivergence      | `renyi_divergence(x, y, k)` | `log(sum( x .* (x ./ y) .^ (k - 1))) / (k - 1)` |
+| JSDivergence         | `js_divergence(x, y)`       | `KL(x, m) / 2 + KL(y, m) / 2 with m = (x + y) / 2` |
+| SpanNormDist         | `spannorm_dist(x, y)`       | `max(x - y) - min(x - y)` |
+| BhattacharyyaDist    | `bhattacharyya(x, y)`       | `-log(sum(sqrt(x .* y) / sqrt(sum(x) * sum(y))))` |
+| HellingerDist        | `hellinger(x, y)`           | `sqrt(1 - sum(sqrt(x .* y) / sqrt(sum(x) * sum(y))))` |
+| Mahalanobis          | `mahalanobis(x, y, Q)`      | `sqrt((x - y)' * Q * (x - y))` |
+| SqMahalanobis        | `sqmahalanobis(x, y, Q)`    | `(x - y)' * Q * (x - y)` |
+| WeightedEuclidean    | `weuclidean(x, y, w)`       | `sqrt(sum((x - y).^2 .* w))` |
+| WeightedSqEuclidean  | `wsqeuclidean(x, y, w)`     | `sum((x - y).^2 .* w)` |
+| WeightedCityblock    | `wcityblock(x, y, w)`       | `sum(abs(x - y) .* w)` |
+| WeightedMinkowski    | `wminkowski(x, y, w, p)`    | `sum(abs(x - y).^p .* w) ^ (1/p)` |
+| WeightedHamming      | `whamming(x, y, w)`         | `sum((x .!= y) .* w)` |
 
 **Note:** The formulas above are using *Julia*'s functions. These formulas are mainly for conveying the math concepts in a concise way. The actual implementation may use a faster way.
 
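As a quick sanity check of the type/syntax correspondence in the new table — a minimal sketch using only the `evaluate`, `Euclidean`, and `euclidean` names documented in this README (the input vectors are made-up stand-ins):

```julia
using Distances

x = rand(10)
y = rand(10)

# The distance type and its convenience wrapper compute the same value,
# and both match the math definition from the table.
d1 = evaluate(Euclidean(), x, y)
d2 = euclidean(x, y)
@assert d1 ≈ d2 ≈ sqrt(sum((x - y) .^ 2))
```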
@@ -186,58 +185,62 @@ julia> pairwise(Euclidean(1e-12), x, x)
 
 The implementation has been carefully optimized based on benchmarks. The Julia scripts ``test/bench_colwise.jl`` and ``test/bench_pairwise.jl`` run the benchmarks on a variety of distances, respectively under column-wise and pairwise settings.
 
-Here are the benchmarks that I obtained on Mac OS X 10.8 with Intel Core i7 2.6 GHz.
+Here are benchmarks obtained on Linux with an Intel Core i7-4770K 3.5 GHz.
 
 #### Column-wise benchmark
 
 The table below compares the performance (measured in terms of average elapsed time of each iteration) of a straightforward loop implementation and an optimized implementation provided in *Distances.jl*. The task in each iteration is to compute a specific distance between corresponding columns in two ``200-by-10000`` matrices.
 
-| distance  | loop     | colwise  | gain    |
-|------------ | --------| ------------| -----------|
-| SqEuclidean | 0.046962 | 0.002782 | 16.8782 |
-| Euclidean | 0.046667 | 0.0029 | 16.0937 |
-| Cityblock | 0.046619 | 0.0031 | 15.039 |
-| Chebyshev | 0.053578 | 0.010856 | 4.9356 |
-| Minkowski | 0.061804 | 0.02357 | 2.6221 |
-| Hamming | 0.044047 | 0.00219 | 20.1131 |
-| CosineDist | 0.04496 | 0.002855 | 15.7457 |
-| CorrDist | 0.080828 | 0.029708 | 2.7207 |
-| ChiSqDist | 0.051009 | 0.008088 | 6.307 |
-| KLDivergence | 0.079598 | 0.035353 | 2.2515 |
-| JSDivergence | 0.545789 | 0.493362 | 1.1063 |
-| WeightedSqEuclidean | 0.046182 | 0.003219 | 14.3477 |
-| WeightedEuclidean | 0.046831 | 0.004122 | 11.3603 |
-| WeightedCityblock | 0.046457 | 0.003636 | 12.7781 |
-| WeightedMinkowski | 0.062532 | 0.020486 | 3.0524 |
-| WeightedHamming | 0.046217 | 0.002269 | 20.3667 |
-| SqMahalanobis | 0.150364 | 0.042335 | 3.5518 |
-| Mahalanobis | 0.159638 | 0.041071 | 3.8869 |
-
-We can see that using ``colwise`` instead of a simple loop yields considerable gain (2x - 9x), especially when the internal computation of each distance is simple. Nonetheless, when the computaton of a single distance is heavy enough (e.g. *Minkowski* and *JSDivergence*), the gain is not as significant.
+| distance  | loop      | colwise   | gain   |
+|----------- | -------| ----------| -------|
+| SqEuclidean | 0.012308s | 0.003860s | 3.1884 |
+| Euclidean | 0.012484s | 0.003995s | 3.1246 |
+| Cityblock | 0.012463s | 0.003927s | 3.1735 |
+| Chebyshev | 0.014897s | 0.005898s | 2.5258 |
+| Minkowski | 0.028154s | 0.017812s | 1.5806 |
+| Hamming | 0.012200s | 0.003896s | 3.1317 |
+| CosineDist | 0.013816s | 0.004670s | 2.9583 |
+| CorrDist | 0.023349s | 0.016626s | 1.4044 |
+| ChiSqDist | 0.015375s | 0.004788s | 3.2109 |
+| KLDivergence | 0.044360s | 0.036123s | 1.2280 |
+| JSDivergence | 0.098587s | 0.085595s | 1.1518 |
+| BhattacharyyaDist | 0.023103s | 0.013002s | 1.7769 |
+| HellingerDist | 0.023329s | 0.012555s | 1.8581 |
+| WeightedSqEuclidean | 0.012136s | 0.003758s | 3.2296 |
+| WeightedEuclidean | 0.012307s | 0.003789s | 3.2482 |
+| WeightedCityblock | 0.012287s | 0.003923s | 3.1321 |
+| WeightedMinkowski | 0.029895s | 0.018471s | 1.6185 |
+| WeightedHamming | 0.013427s | 0.004082s | 3.2896 |
+| SqMahalanobis | 0.121636s | 0.019370s | 6.2796 |
+| Mahalanobis | 0.117871s | 0.019939s | 5.9117 |
+
+We can see that using ``colwise`` instead of a simple loop yields considerable gain (2x - 6x), especially when the internal computation of each distance is simple. Nonetheless, when the computation of a single distance is heavy enough (e.g. *Minkowski* and *JSDivergence*), the gain is not as significant.
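For context on what the ``colwise`` column times — a minimal sketch with the same ``200-by-10000`` shapes the benchmark uses (the random matrices are stand-ins):

```julia
using Distances

x = rand(200, 10000)
y = rand(200, 10000)

# One distance per pair of corresponding columns; returns a length-10000 vector.
r = colwise(SqEuclidean(), x, y)
@assert length(r) == size(x, 2)
```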
 
 #### Pairwise benchmark
 
 The table below compares the performance (measured in terms of average elapsed time of each iteration) of a straightforward loop implementation and an optimized implementation provided in *Distances.jl*. The task in each iteration is to compute a specific distance in a pairwise manner between columns in a ``100-by-200`` and a ``100-by-250`` matrix, which results in a ``200-by-250`` distance matrix.
 
-| distance  | loop     | pairwise | gain    |
-|------------ | --------| ------------| -----------|
-| SqEuclidean | 0.119961 | 0.00037 | **324.6457** |
-| Euclidean | 0.122645 | 0.000678 | **180.9180** |
-| Cityblock | 0.116956 | 0.007997 | 14.6251 |
-| Chebyshev | 0.137985 | 0.028489 | 4.8434 |
-| Minkowski | 0.170101 | 0.059991 | 2.8354 |
-| Hamming | 0.110742 | 0.004781 | 23.1627 |
-| CosineDist | 0.110913 | 0.000514 | **215.8028** |
-| CorrDist | 0.1992 | 0.000808 | 246.4574 |
-| ChiSqDist | 0.124782 | 0.020781 | 6.0046 |
-| KLDivergence | 0.1994 | 0.088366 | 2.2565 |
-| JSDivergence | 1.35502 | 1.215785 | 1.1145 |
-| WeightedSqEuclidean | 0.119797 | 0.000444 | **269.531** |
-| WeightedEuclidean | 0.126304 | 0.000712 | **177.5122** |
-| WeightedCityblock | 0.117185 | 0.011475 | 10.2122 |
-| WeightedMinkowski | 0.172614 | 0.061693 | 2.7979 |
-| WeightedHamming | 0.112525 | 0.005072 | 22.1871 |
-| SqMahalanobis | 0.377342 | 0.000577 | **653.9759** |
-| Mahalanobis | 0.373796 | 0.002359 | **158.4337** |
+| distance  | loop      | pairwise  | gain   |
+|----------- | -------| ----------| -------|
+| SqEuclidean | 0.032179s | 0.000170s | **189.7468** |
+| Euclidean | 0.031646s | 0.000326s | **97.1773** |
+| Cityblock | 0.031594s | 0.002771s | 11.4032 |
+| Chebyshev | 0.036732s | 0.011575s | 3.1735 |
+| Minkowski | 0.073685s | 0.047725s | 1.5440 |
+| Hamming | 0.030016s | 0.002539s | 11.8236 |
+| CosineDist | 0.035426s | 0.000235s | **150.8504** |
+| CorrDist | 0.061430s | 0.000341s | **180.1693** |
+| ChiSqDist | 0.037702s | 0.011709s | 3.2199 |
+| KLDivergence | 0.119043s | 0.086861s | 1.3705 |
+| JSDivergence | 0.255449s | 0.227079s | 1.1249 |
+| BhattacharyyaDist | 0.059165s | 0.033330s | 1.7751 |
+| HellingerDist | 0.056953s | 0.031163s | 1.8276 |
+| WeightedSqEuclidean | 0.031781s | 0.000218s | **145.9820** |
+| WeightedEuclidean | 0.031365s | 0.000410s | **76.4517** |
+| WeightedCityblock | 0.031239s | 0.003242s | 9.6360 |
+| WeightedMinkowski | 0.077039s | 0.049319s | 1.5621 |
+| WeightedHamming | 0.032584s | 0.005673s | 5.7442 |
+| SqMahalanobis | 0.280485s | 0.000297s | **943.6018** |
+| Mahalanobis | 0.295715s | 0.000498s | **593.6096** |
 
 For distances of which a major part of the computation is a quadratic form (e.g. *Euclidean*, *CosineDist*, *Mahalanobis*), the performance can be drastically improved by restructuring the computation and delegating the core part to ``GEMM`` in *BLAS*. The use of this strategy can easily lead to 100x performance gain over simple loops (see the highlighted part of the table above).
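To make the GEMM remark concrete, here is a minimal sketch of the restructuring for *SqEuclidean*, using the identity ``|a - b|^2 = |a|^2 + |b|^2 - 2a'b``. This is 0.6-era syntax matching the commit; the package's actual implementation (see ``src/metrics.jl`` below) additionally handles thresholds and writes in place:

```julia
using Distances

a = rand(100, 200)
b = rand(100, 250)

# Optimized pairwise call, as benchmarked above.
D1 = pairwise(SqEuclidean(), a, b)    # 200-by-250

# The restructuring behind it: the cross term a'b is one matrix multiply
# (dispatched to BLAS GEMM); the column norms are cheap corrections.
sa2 = sum(abs2, a, 1)                 # 1-by-200 squared column norms
sb2 = sum(abs2, b, 1)                 # 1-by-250 squared column norms
D2 = sa2' .+ sb2 .- 2 .* (a' * b)
@assert D1 ≈ D2
```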

REQUIRE

Lines changed: 1 addition & 2 deletions
@@ -1,2 +1 @@
-julia 0.4
-Compat 0.8.4
+julia 0.5

src/Distances.jl

Lines changed: 0 additions & 2 deletions
@@ -2,8 +2,6 @@ __precompile__()
 
 module Distances
 
-import Compat.view
-
 export
     # generic types/functions
     PreMetric,
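Dropping the import is safe because ``view`` is exported from Base since Julia 0.5, so the Compat shim is only needed for 0.4, whose support this commit removes. A minimal check:

```julia
a = rand(3, 4)
aj = view(a, :, 2)     # Base.view; no `import Compat.view` required on 0.5+
@assert aj == a[:, 2]
```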

src/common.jl

Lines changed: 3 additions & 3 deletions
@@ -101,7 +101,7 @@ end
 function sumsq_percol{T}(a::AbstractMatrix{T})
     m = size(a, 1)
     n = size(a, 2)
-    r = Array(T, n)
+    r = Vector{T}(n)
     for j = 1:n
         aj = view(a, :, j)
         r[j] = dot(aj, aj)
@@ -113,7 +113,7 @@ function wsumsq_percol{T1, T2}(w::AbstractArray{T1}, a::AbstractMatrix{T2})
     m = size(a, 1)
     n = size(a, 2)
     T = typeof(one(T1)*one(T2))
-    r = Array(T, n)
+    r = Vector{T}(n)
     for j = 1:n
         aj = view(a, :, j)
         s = zero(T)
@@ -138,4 +138,4 @@ function dot_percol!(r::AbstractArray, a::AbstractMatrix, b::AbstractMatrix)
     return r
 end
 
-dot_percol(a::AbstractMatrix, b::AbstractMatrix) = dot_percol!(Array(Float64, size(a,2)), a, b)
+dot_percol(a::AbstractMatrix, b::AbstractMatrix) = dot_percol!(Vector{Float64}(size(a,2)), a, b)
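These hunks track the Julia 0.6 deprecation of the ``Array(T, dims...)`` constructor in favor of the parametric form. A minimal before/after sketch in the 0.5/0.6 syntax this commit targets:

```julia
# Deprecated in Julia 0.6:
#   r = Array(Float64, 5)
#   A = Array(Float64, (3, 4))

# Replacement used throughout this commit (uninitialized storage):
r = Vector{Float64}(5)       # length-5 vector
A = Matrix{Float64}(3, 4)    # 3-by-4 matrix
```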

src/generic.jl

Lines changed: 5 additions & 11 deletions
@@ -62,19 +62,19 @@ end
 
 function colwise(metric::PreMetric, a::AbstractMatrix, b::AbstractMatrix)
     n = get_common_ncols(a, b)
-    r = Array(result_type(metric, a, b), n)
+    r = Vector{result_type(metric, a, b)}(n)
     colwise!(r, metric, a, b)
 end
 
 function colwise(metric::PreMetric, a::AbstractVector, b::AbstractMatrix)
     n = size(b, 2)
-    r = Array(result_type(metric, a, b), n)
+    r = Vector{result_type(metric, a, b)}(n)
     colwise!(r, metric, a, b)
 end
 
 function colwise(metric::PreMetric, a::AbstractMatrix, b::AbstractVector)
     n = size(a, 2)
-    r = Array(result_type(metric, a, b), n)
+    r = Vector{result_type(metric, a, b)}(n)
     colwise!(r, metric, a, b)
 end
 
@@ -117,18 +117,12 @@ end
 function pairwise(metric::PreMetric, a::AbstractMatrix, b::AbstractMatrix)
     m = size(a, 2)
     n = size(b, 2)
-    r = Array(result_type(metric, a, b), (m, n))
+    r = Matrix{result_type(metric, a, b)}(m, n)
     pairwise!(r, metric, a, b)
 end
 
 function pairwise(metric::PreMetric, a::AbstractMatrix)
     n = size(a, 2)
-    r = Array(result_type(metric, a, a), (n, n))
-    pairwise!(r, metric, a)
-end
-
-function pairwise(metric::SemiMetric, a::AbstractMatrix)
-    n = size(a, 2)
-    r = Array(result_type(metric, a, a), (n, n))
+    r = Matrix{result_type(metric, a, a)}(n, n)
     pairwise!(r, metric, a)
 end
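Besides the constructor change, the second hunk folds the separate ``SemiMetric`` method into the ``PreMetric`` one, since their bodies became identical. The single-matrix form is unchanged from a caller's perspective — a minimal sketch:

```julia
using Distances

x = rand(3, 5)

# One method now serves the single-matrix case for every metric type.
D = pairwise(SqEuclidean(), x)    # 5-by-5 matrix of column-to-column distances
@assert size(D) == (5, 5)
```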

src/metrics.jl

Lines changed: 2 additions & 2 deletions
@@ -341,8 +341,8 @@ rogerstanimoto{T <: Bool}(a::AbstractArray{T}, b::AbstractArray{T}) = evaluate(R
 # SqEuclidean
 function pairwise!(r::AbstractMatrix, dist::SqEuclidean, a::AbstractMatrix, b::AbstractMatrix)
     At_mul_B!(r, a, b)
-    sa2 = sumabs2(a, 1)
-    sb2 = sumabs2(b, 1)
+    sa2 = sum(abs2, a, 1)
+    sb2 = sum(abs2, b, 1)
     threshT = convert(eltype(r), dist.thresh)
     if threshT <= 0
         # If there's no chance of triggering the threshold, we can use @simd
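``sumabs2`` is deprecated in Julia 0.6 in favor of the three-argument ``sum`` used here. A minimal sketch of the equivalence (0.6-era syntax, matching the commit):

```julia
a = rand(3, 4)

# Old: sa2 = sumabs2(a, 1)    (deprecated in 0.6)
sa2 = sum(abs2, a, 1)         # 1-by-4 row of column-wise squared sums
@assert sa2 ≈ sum(a .^ 2, 1)
```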

test/REQUIRE

Lines changed: 0 additions & 1 deletion
This file was deleted.

test/bench_colwise.jl

Lines changed: 41 additions & 46 deletions
@@ -2,34 +2,26 @@
 # Benchmark on column-wise distance evaluation
 
 using Distances
+using BenchmarkTools
 
-macro bench_colwise_dist(repeat, dist, x, y)
-    quote
-        println("bench ", typeof($dist))
+BenchmarkTools.DEFAULT_PARAMETERS.seconds = 1.0
 
-        # warming up
-        r1 = evaluate($dist, ($x)[:,1], ($y)[:,1])
-        colwise($dist, $x, $y)
+function bench_colwise_distance(dist, x, y)
+    r1 = evaluate(dist, x[:,1], y[:,1])
 
-        # timing
-
-        t0 = @elapsed for k = 1 : $repeat
-            n = size($x, 2)
-            r = Array(typeof(r1), n)
-            for j = 1 : n
-                r[j] = evaluate($dist, ($x)[:,j], ($y)[:,j])
-            end
-        end
-        @printf "    loop:    t = %9.6fs\n" (t0 / $repeat)
-
-        t1 = @elapsed for k = 1 : $repeat
-            r = colwise($dist, $x, $y)
+    # timing
+    t0 = @belapsed begin
+        n = size(x, 2)
+        r = Vector{typeof($r1)}(n)
+        for j = 1:n
+            r[j] = evaluate($dist, $(x)[:, j], $(y)[:, j])
         end
-        @printf "    colwise: t = %9.6fs | gain = %7.4fx\n" (t1 / $repeat) (t0 / t1)
-        println()
     end
-end
 
+    t1 = @belapsed colwise($dist, $x, $y)
+    print("| ", typeof(dist).name.name, " |")
+    @printf("%9.6fs | %9.6fs | %7.4f |\n", t0, t1, (t0 / t1))
+end
 
 m = 200
 n = 10000
@@ -41,27 +33,30 @@ w = rand(m)
 Q = rand(m, m)
 Q = Q' * Q
 
-@bench_colwise_dist 20 SqEuclidean() x y
-@bench_colwise_dist 20 Euclidean() x y
-@bench_colwise_dist 20 Cityblock() x y
-@bench_colwise_dist 20 Chebyshev() x y
-@bench_colwise_dist 5 Minkowski(3.0) x y
-@bench_colwise_dist 20 Hamming() x y
-
-@bench_colwise_dist 20 CosineDist() x y
-@bench_colwise_dist 10 CorrDist() x y
-@bench_colwise_dist 20 ChiSqDist() x y
-@bench_colwise_dist 10 KLDivergence() x y
-@bench_colwise_dist 5 JSDivergence() x y
-
-@bench_colwise_dist 10 BhattacharyyaDist() x y
-@bench_colwise_dist 10 HellingerDist() x y
-
-@bench_colwise_dist 20 WeightedSqEuclidean(w) x y
-@bench_colwise_dist 20 WeightedEuclidean(w) x y
-@bench_colwise_dist 20 WeightedCityblock(w) x y
-@bench_colwise_dist 5 WeightedMinkowski(w, 3.0) x y
-@bench_colwise_dist 20 WeightedHamming(w) x y
-
-@bench_colwise_dist 10 SqMahalanobis(Q) x y
-@bench_colwise_dist 10 Mahalanobis(Q) x y
+println("| distance | loop | colwise | gain |")
+println("|----------- | -------| ----------| -------|")
+
+bench_colwise_distance(SqEuclidean(), x, y)
+bench_colwise_distance(Euclidean(), x, y)
+bench_colwise_distance(Cityblock(), x, y)
+bench_colwise_distance(Chebyshev(), x, y)
+bench_colwise_distance(Minkowski(3.0), x, y)
+bench_colwise_distance(Hamming(), x, y)
+
+bench_colwise_distance(CosineDist(), x, y)
+bench_colwise_distance(CorrDist(), x, y)
+bench_colwise_distance(ChiSqDist(), x, y)
+bench_colwise_distance(KLDivergence(), x, y)
+bench_colwise_distance(JSDivergence(), x, y)
+
+bench_colwise_distance(BhattacharyyaDist(), x, y)
+bench_colwise_distance(HellingerDist(), x, y)
+
+bench_colwise_distance(WeightedSqEuclidean(w), x, y)
+bench_colwise_distance(WeightedEuclidean(w), x, y)
+bench_colwise_distance(WeightedCityblock(w), x, y)
+bench_colwise_distance(WeightedMinkowski(w, 3.0), x, y)
+bench_colwise_distance(WeightedHamming(w), x, y)
+
+bench_colwise_distance(SqMahalanobis(Q), x, y)
+bench_colwise_distance(Mahalanobis(Q), x, y)
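The rewrite replaces the hand-rolled ``@elapsed`` loops (and the per-distance repeat counts) with BenchmarkTools, which chooses its own sample counts. A minimal standalone sketch of the pattern, assuming BenchmarkTools is installed:

```julia
using Distances
using BenchmarkTools

# Cap each measurement at roughly one second of sampling.
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 1.0

x = rand(200, 10000)
y = rand(200, 10000)

# @belapsed returns the minimum elapsed time in seconds; interpolating with $
# avoids benchmarking access to untyped globals.
t = @belapsed colwise($(SqEuclidean()), $x, $y)
println("colwise SqEuclidean: ", t, "s")
```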
