It is easy to generate data which makes the first operand of _vmum zero. In this case, whatever the second operand is, the generated hash is the same. So an adversary can generate a lot of data with the same hash.
This is a pretty common mistake in fast hash functions; I found the same vulnerability in at least wyhash and rapidhash.
After the code change, the safe variants of the VMUM and MUM hashes are switched on by default. If you want the previous variants, please use the macros VMUM_V1 and MUM_V3 respectively. I believe there are still cases where they can be used, e.g. for hash tables in compilers.
The fix consists of checking the _vmum operands for zero and using a nonzero value instead.
All the checks are implemented to avoid generating branch instructions, to keep the hash calculation pipeline going.
Still, the checks increase the length of the critical paths of the calculation.
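A branch-free zero check of this kind can be sketched as follows. This is a hypothetical illustration, not the exact code from vmum.h, and the replacement constant is an arbitrary example:

```c
#include <stdint.h>

/* Replace a zero multiplication operand with a nonzero constant without
   a branch: -(uint64_t)(x == 0) is all-ones exactly when x is zero, so
   the OR leaves a nonzero x unchanged and maps zero to the constant.
   The constant 0x9e3779b97f4a7c15 is an arbitrary example, not the one
   used in the actual code. */
static inline uint64_t
nonzero (uint64_t x)
{
  return x | (-(uint64_t) (x == 0) & 0x9e3779b97f4a7c15ull);
}
```

Compilers typically lower the comparison to a setcc/negate sequence or a conditional move, so no branch lands on the critical path.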
In most cases, the new versions of VMUM and MUM generate the same hashes as the previous versions.
The fix slows down hashing by about 10% according to my benchmarks.
I updated all benchmark data below for the new versions of VMUM and MUM.
MUM Hash
MUM hash is a fast non-cryptographic hash function
suitable for different hash table implementations
MUM means MUltiply and Mix
It is the name of the base transformation on which the hashing is implemented.
Modern processors have fast logic for long number multiplication, which makes it very attractive to use for fast hashing.
For example, 64x64-bit multiplication can do the same work as 32
shifts and additions
I'd like to call it Multiply and Reduce. Unfortunately, MUR (MUltiply and Rotate) is already taken for the famous hashing technique designed by Austin Appleby.
I also chose the name because the first release happened on Mother's Day.
For comparison, only 4 out of 15 non-cryptographic hash functions in SMHasher pass the tests; e.g. the well-known FNV, Murmur2, Lookup, and Superfast hashes fail the tests.
MUM V3 hash does not pass the following tests of a more rigorous version of SMHasher: it fails the Perlin noise and bad seeds tests. This means it is still good enough for most applications.
To make MUM V3 pass the Rurban SMHasher, the macro MUM_QUALITY has been added. Compiling with this macro defined makes MUM V3 pass all tests of Rurban SMHasher. The slowdown is about 5% on average, or 10% at most on keys of length 8. It also results in generating a target-independent hash.
For historical reasons, mum.h contains code for the older versions V1 and V2. You can switch them on by defining the macros MUM_V1 and MUM_V2.
The MUM algorithm is simpler than the VMUM one.
MUM is specifically designed to be fast on 64-bit CPUs.
Still, MUM works on 32-bit CPUs and is sometimes faster than Spooky and City.
MUM has a fast startup. It is particularly good for hashing small keys, which are prevalent in hash table applications.
MUM implementation details
The input 64-bit data is randomized by a 64x64->128-bit multiplication, mixing the high and low parts of the multiplication result by addition. The result is mixed with the current internal state by XOR.
Instead of addition, XOR could be used for mixing the high and low parts.
Using addition instead of XOR improves performance by about 10% on Haswell and Power7.
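The base transformation can be sketched as follows. This is a simplified illustration assuming a compiler with the 128-bit integer extension; the actual primitive in mum.h may differ in details:

```c
#include <stdint.h>

/* One MUM step: multiply the 64-bit input by a 64-bit factor to get a
   128-bit product, then mix the high and low halves by addition.  The
   result is then XORed into the internal state. */
static inline uint64_t
mum_step (uint64_t v, uint64_t factor)
{
  __uint128_t r = (__uint128_t) v * factor;
  return (uint64_t) (r >> 64) + (uint64_t) r;
}

/* Usage in the hashing loop: state ^= mum_step (data_word, factor); */
```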
Factor numbers, randomly generated with an equal probability of their
bit values, are used for the multiplication
When all factors are used once, the internal state is randomized, and the same
factors are used again for subsequent data randomization
The main loop is formed to be unrolled by the compiler to benefit from compiler instruction scheduling optimization and OOO (out-of-order) instruction execution in modern CPUs.
MUM code no longer contains assembly (asm) code. This makes MUM less machine-dependent. For an efficient MUM implementation, the compiler should support the 128-bit integer extension (true for GCC and Clang on many targets).
VMUM Hash
VMUM is a vector variant of MUM hashing (see above).
It uses target SIMD instructions (insns).
In comparison with MUM V3, VMUM considerably improves (up to 3 times) the speed of hashing mid-range (32 to 256 bytes) to long-range (more than 256 bytes) keys.
As with previous mum hashing, to use vmum you just need one header
file (vmum.h)
The vmum source code is considerably smaller than that of the extremely fast xxHash3 and t1ha2 and competes with them on hashing speed.
There is also a scalar emulation of the vector insns for other targets.
This can be useful for understanding the vector operations used.
You can add usage of vector insns for other targets. For this you
just need to add small functions _vmum_update_block,
_vmum_zero_block, and _vmum_fold_block
For beneficial usage of vector insns, the target should have an unsigned 32x32-bit -> 64-bit vector multiplication.
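The required primitive can be illustrated by its scalar emulation. This is a hypothetical sketch; the actual emulation functions in vmum.h may differ:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar emulation of an unsigned 32x32-bit -> 64-bit vector multiply:
   each lane multiplies two 32-bit inputs into a full 64-bit product.
   A SIMD target does all lanes with one instruction (e.g.
   _mm256_mul_epu32 on AVX2). */
static void
mul_u32x32_to_u64 (uint64_t *res, const uint32_t *a, const uint32_t *b,
                   size_t n)
{
  for (size_t i = 0; i < n; i++)
    res[i] = (uint64_t) a[i] * b[i];
}
```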
To run vector insns in parallel on OOO CPUs, two vmum code loops are formed
to be unrolled by the compiler into one basic block
I experimented a lot with other vector insns and found that the usage of
carry-less (sometimes called polynomial) vector multiplication insns does not work
well enough for hashing
VMUM and MUM benchmarking vs other famous hash functions
Here are the results of benchmarking VMUM and MUM with the fastest
non-cryptographic hash functions I know:
Google City64 (sources are taken from SMHasher)
Bob Jenkins Spooky (sources are taken from SMHasher)
I also added J. Aumasson and D. Bernstein's
SipHash24 for the comparison as it
is a popular choice for hash table implementation these days
Metro hash was added as people asked for it and as metro hash is claimed to be the fastest hash function.
Metro hash is not as portable as the other functions, as it does not deal with the unaligned access problem on some targets.
Metro hash will also produce different hashes on LE/BE targets.
Measurements were done on machines with 4 different architectures:
AMD Ryzen 9900X
Intel i5-1300K
IBM Power10
Apple M4 10 cores (mac mini)
Hashing 10,000 of 16MB keys (bulk)
Hashing 1,280M keys for all other length keys
Each test was run 3 times and the minimal time was taken
GCC-14.2.1 was used on the AMD and M4 machines, GCC-12.3.1 on the Intel machine, and GCC-11.5.0 on Power10.
-O3 was used for all compilations
The keys were generated by rand calls
The keys were aligned to better measure hashing speed and to permit runs for Metro.
Some people complained that my comparison is unfair, as most hash functions are not inlined.
I believe the interface is part of the implementation. So when the interface does not provide an easy way for inlining, it is an implementation pitfall.
Still, to address the complaints, I added -flto for benchmarking all hash functions except MUM and VMUM. This option enables cross-file inlining.
Here are graphs summarizing the measurements:
Exact numbers are given in the last section
SMhasher Speed Measurements
SMhasher also measures hash speeds. It uses the CPU cycle counter (__rdtsc).
__rdtsc-based measurements might be inaccurate for a small number of executed insns, as the process can migrate, not all insns retire, and the CPU frequency can vary. That is why I prefer long-running benchmarks.
Here are the results on AMD Ryzen 9900X for the fastest quality hashes (chosen according to the SMhasher bulk speed results from https://github.com/rurban/smhasher).
More GB/sec is better; fewer cycles/hash is better.
Some hashes are based on x86_64 AES insns and are less portable. They are marked by "Yes" in the AES column.
The SLOC column gives the number of source code lines implementing the hash.
| Hash | AES | Bulk Speed (256KB): GB/s | Av. Speed on keys (1-32 bytes): cycles/hash | SLOC |
|---|---|---|---|---|
| VMUM-V2 | - | 103.7 | 16.4 | 459 |
| VMUM-V1 | - | 143.5 | 16.8 | 459 |
| MUM-V4 | - | 28.6 | 15.8 | 291 |
| MUM-V3 | - | 40.4 | 16.3 | 291 |
| xxh3 | - | 66.6 | 17.6 | 965 |
| umash64 | - | 63.1 | 25.4 | 1097 |
| FarmHash32 | - | 39.8 | 32.6 | 1423 |
| wyhash | - | 39.3 | 18.3 | 194 |
| clhash | - | 38.4 | 51.7 | 366 |
| t1ha2_atonce | - | 34.7 | 25.5 | 2262 |
| t1ha0_aes_avx2 | Yes | 128.9 | 25.0 | 2262 |
| gxhash64 | Yes | 197.1 | 27.9 | 274 |
| aesni | Yes | 38.7 | 28.5 | 132 |
Using cryptographic vs. non-cryptographic hash function
People worrying about denial-of-service attacks based on generating hash collisions have started to use cryptographic hash functions in hash tables.
Cryptographic functions are very slow: sha1 is about 20-30 times slower than MUM and City on the bulk speed tests. Even SipHash, the fastest modern cryptographic-level hash function, is up to 10 times slower.
MUM and VMUM are also resistant to preimage attacks (finding a key with a given hash).
To make it hard to recover previous state values, we mostly use the 1-to-1 one-way function lo(x*C) + hi(x*C), where C is a constant. A brute-force solution of the equation f(x) = a probably requires 2^63 tries. Another function used, the equation x ^ y = a, has 2^64 solutions, which further complicates finding the overall solution.
If somebody is not convinced, you can use randomly chosen
multiplication constants (see functions mum_hash_randomize and
vmum_hash_randomize).
Finding a key with a given hash, even if you know a key with that hash, will probably be close to finding two or more solutions of Diophantine equations.
If somebody is still not convinced, you can implement hash tables to recognize the attack and rebuild the table using the MUM function with new multiplication constants.
An analogous approach can be used if you use a weak hash function such as MurMur or City. Instead of using cryptographic hash functions all the time, hash tables can be implemented to recognize the attack, rebuild the table, and start using a cryptographic hash function.
This approach solves the speed problem and permits switching easily to a new cryptographic hash function if a flaw is found in the old one, e.g. switching from SipHash to SHA2.
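The rebuild strategy can be sketched as follows. This is a toy illustration: a multiplicative stand-in hash replaces the real mum_hash, MAX_CHAIN is an arbitrary detection threshold, and a real table would rehash all stored keys after re-randomizing:

```c
#include <stdint.h>
#include <stdlib.h>

#define NBUCKETS 1024
#define MAX_CHAIN 16  /* arbitrary attack-detection threshold */

/* Stand-in for a MUM-style hash with a re-randomizable constant; real
   code would call mum_hash after mum_hash_randomize (new_seed). */
static uint64_t mul_const = 0x9e3779b97f4a7c15ull;

static size_t
bucket_of (uint64_t key)
{
  __uint128_t r = (__uint128_t) key * mul_const;
  return (size_t) ((uint64_t) (r >> 64) + (uint64_t) r) % NBUCKETS;
}

/* Called when an insert walks a chain of the given length: a chain far
   longer than expected signals a collision attack.  Re-randomize the
   constant and report that the caller must rebuild the table. */
static int
check_chain (size_t chain_len)
{
  if (chain_len <= MAX_CHAIN)
    return 0;
  mul_const = (((uint64_t) rand () << 32) ^ (uint64_t) rand ()) | 1;
  return 1;  /* rehash every stored key with the new constant */
}
```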
How to use [V]MUM
Please just include file [v]mum.h into your C/C++ program and use the following functions:
optional [v]mum_hash_randomize for choosing multiplication constants randomly
[v]mum_hash_init, [v]mum_hash_step, and [v]mum_hash_finish for hashing complex data structures
[v]mum_hash64 for hashing 64-bit data
[v]mum_hash for hashing any continuous block of data
Compile vmum.h with other code using options switching on vector
insns if necessary (e.g. -mavx2 for x86_64)
To compare MUM and VMUM speed with other hash functions on your machine, go to the directory benchmarks and run the script ./bench.sh.
The script compiles the source files and runs the tests, printing the results as a markdown table.
Crypto-hash function MUM512
[V]MUM is not designed to be a crypto-hash. The key (seed) and state are only 64 bits, which is not crypto-level.
The result can differ between targets (BE/LE machines, 32- and 64-bit machines), as with other hash functions, e.g. City (the hash can differ on SSE4.2 and non-SSE4.2 targets) or Spooky (BE/LE machines).
If you need the MUM hash to be independent of the target, please define the macro [V]MUM_TARGET_INDEPENDENT_HASH. Defining the macro affects performance only on big-endian targets or targets without int128 support.
There is a variant of MUM called MUM512 which can be a candidate
for a crypto-hash function and keyed crypto-hash function and
might be interesting for researchers
The key is 256-bit
The state and the output are 512-bit
The block size is 512-bit
It uses a 128x128->256-bit multiplication, which is analogous to about 64 shifts and additions for a 128-bit block word, instead of the 80 rounds of shifts, additions, and logical operations for a 512-bit block in sha2-512.
It is only a candidate for a crypto hash function.
I did not do any differential cryptanalysis or investigate the probabilities of different attacks on the hash function (sorry, it is too big a job).
I might do this in the future, as I am interested in the differential characteristics of the MUM512 base transformation step (a 128x128-bit multiplication with addition of the high and low 128-bit parts).
I am also interested in the right choice of the multiplication constants.
Maybe somebody will do the analysis. I will be glad to hear anything. Who knows, maybe it can be broken as easily as the Nimbus cipher.
The current code might also be vulnerable to timing attacks on systems with varying multiplication instruction latency. There is no code for now to prevent this.
To compare the MUM512 speed with the speed of SHA-2 (SHA512) and
SHA-3 (SHA3-512) go to the directory benchmarks and run a script ./bench-crypto.sh
The Blake2 crypto-hash from github.com/BLAKE2/BLAKE2 was added for comparison. I use the SSE version of 64-bit Blake2 (blake2b).
Here is the speed of the crypto hash functions on AMD 9900X:
| | MUM512 | SHA2 | SHA3 | Blake2B |
|---|---|---|---|---|
| 10 bytes (20 M texts) | 0.27s | 0.27s | 0.44s | 0.81s |
| 100 bytes (20 M texts) | 0.36s | 0.25s | 0.84s | 0.84s |
| 1000 bytes (20 M texts) | 1.21s | 2.08s | 5.63s | 3.70s |
| 10000 bytes (5 M texts) | 5.60s | 5.05s | 14.07s | 7.99s |
Pseudo-random generators
Files mum-prng.h and mum512-prng.h provide pseudo-random
functions based on MUM and MUM512 hash functions
All PRNGs passed NIST Statistical Test Suite for Random and
Pseudorandom Number Generators for Cryptographic Applications
(version 2.2.1) with 1000 bitstreams each containing 1M bits
Although the MUM PRNG passed the test, it is not a cryptographically secure PRNG, just as the hash function it is based on is not.
To compare the PRNG speeds go to
the directory benchmarks and run a script ./bench-prng.sh
For the comparison, I wrote a crypto-secure Blum Blum Shub PRNG (file bbs-prng.h) and PRNGs based on the fast crypto-level hash functions in the ChaCha stream cipher (file chacha-prng.h) and SipHash24 (file sip24-prng.h).
The additional PRNGs also pass the Statistical Test Suite
As recommended, the first numbers generated by splitmix64 were used as seeds.
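For reference, splitmix64 is the small generator commonly recommended for seeding other PRNGs; below is the standard public-domain implementation by Sebastiano Vigna, shown for illustration:

```c
#include <stdint.h>

/* splitmix64: advance the state by a fixed odd constant and scramble
   the result with two multiply-xorshift rounds. */
static uint64_t
splitmix64 (uint64_t *state)
{
  uint64_t z = (*state += 0x9e3779b97f4a7c15ull);
  z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ull;
  z = (z ^ (z >> 27)) * 0x94d049bb133111ebull;
  return z ^ (z >> 31);
}
```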
At first I had no intention to tune the MUM-based PRNG, but after adding xoroshiro128+ and finding how fast it is, I decided to speed up the MUM PRNG.
I added code to calculate a few PRNs at once so that they are computed in parallel.
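The idea can be sketched like this (a hypothetical illustration with arbitrary constants, not the code in mum-prng.h): updating several independent accumulators in one loop body gives the out-of-order CPU multiplications with no data dependencies between them.

```c
#include <stdint.h>

/* Mix a value by a 64x64->128-bit multiplication, adding the high and
   low halves of the product (the MUM primitive). */
static inline uint64_t
mix (uint64_t x, uint64_t c)
{
  __uint128_t r = (__uint128_t) x * c;
  return (uint64_t) (r >> 64) + (uint64_t) r;
}

/* Generate four PRNs at once: the four chains are independent, so the
   compiler can unroll the loop and the CPU can overlap the four long
   multiplications.  The constants are arbitrary examples. */
static void
gen4 (uint64_t state[4], uint64_t out[4])
{
  static const uint64_t cs[4] = {
    0x9e3779b97f4a7c15ull, 0xbf58476d1ce4e5b9ull,
    0x94d049bb133111ebull, 0xff51afd7ed558ccdull
  };
  for (int i = 0; i < 4; i++)
    out[i] = state[i] = mix (state[i] + cs[i], cs[i]);
}
```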
I added AVX2 versions of the functions to use the faster MULX instruction.
The new version also passed NIST Statistical Test Suite. It was
tested even on bigger data (10K bitstreams each containing 10M
bits). The test took several days on i7-4790K
The new version is almost 2 times faster than the old one, and the MUM PRN speed became almost the same as the xoroshiro/xoshiro ones.
All xoroshiro/xoshiro and MUM PRNG functions are inlined in the benchmark program.
Without inlining, both would be visibly slower and the speed difference would be negligible, as one PRN calculation takes only about 3-4 machine cycles for the xoroshiro/xoshiro and MUM PRNGs.
Update Nov. 2, 2019: I found that the MUM PRNG fails practrand on 512GB, so I modified it. Instead of basically 16 independent PRNGs with 64-bit state each, I made it one PRNG with a 1024-bit state. I also managed to speed up the MUM PRNG by 15%.
All PRNGs were tested by practrand with a 4TB generated stream (it took a few days).
GLIBC RAND, xoroshiro128+, xoshiro256+, and xoshiro512+ failed in the first stages of practrand; the rest of the PRNGs passed.
The BBS PRNG was tested with only a 64GB stream because it is too slow.
Here is the speed of the PRNGs in millions of generated PRNs per second:

| M prns/sec | AMD 9900X | Intel i5-1360K | Apple M4 | Power10 |
|---|---|---|---|---|
| BBS | 0.0886 | 0.0827 | 0.122 | 0.021 |
| ChaCha | 357.68 | 184.80 | 262.81 | 83.20 |
| SipHash24 | 702.10 | 567.43 | 760.13 | 231.48 |
| MUM512 | 91.54 | 179.62 | 268.04 | 44.28 |
| MUM | 1947.27 | 1620.65 | 2263.68 | 694.42 |
| XOSHIRO128** | 1797.02 | 1386.87 | 1095.37 | 477.67 |
| XOSHIRO256** | 1866.35 | 1364.85 | 1466.15 | 607.65 |
| XOSHIRO512** | 1663.86 | 1235.15 | 1423.90 | 631.90 |
| GLIBC RAND | 115.57 | 101.48 | 228.99 | 33.66 |
| XOROSHIRO128+ | 1786.62 | 1299.59 | 1296.48 | 549.85 |
| XOSHIRO256+ | 2321.99 | 1720.67 | 1690.96 | 711.41 |
| XOSHIRO512+ | 1808.81 | 1525.18 | 1659.76 | 717.12 |
Table results for hash speed measurements
Here are table variants of my measurements for people wanting the exact numbers. The tables also contain the time spent on hashing.