
Commit 377fb96

Fix Bloom documented formulas (#962)

* Fix Bloom documented formulas
* Update bloom-filter.md
* Update bloom-filter.md
* Update bloom-filter.md
* Update bloom-filter.md
* Apply suggestions from code review

Co-authored-by: David Dougherty <[email protected]>

1 parent c4f1ef1 commit 377fb96

File tree

2 files changed: +23 -26 lines changed


content/commands/bf.reserve/index.md

Lines changed: 7 additions & 6 deletions
```diff
@@ -46,12 +46,13 @@ Though the filter can scale up by creating sub-filters, it is recommended to res
 sub-filters requires additional memory (each sub-filter uses an extra bits and hash function) and consume further CPU time than an equivalent filter that had
 the right capacity at creation time.
 
-The number of hash functions is `-log(error)/ln(2)^2`.
-The number of bits per item is `-log(error)/ln(2)` ≈ 1.44.
+The optimal number of hash functions is `ceil(-ln(error_rate) / ln(2))`.
 
-* **1%** error rate requires 7 hash functions and 10.08 bits per item.
-* **0.1%** error rate requires 10 hash functions and 14.4 bits per item.
-* **0.01%** error rate requires 14 hash functions and 20.16 bits per item.
+The required number of bits per item, given the desired `error_rate` and the optimal number of hash functions, is `-ln(error_rate) / ln(2)^2`. Hence, the required number of bits in the filter is `capacity * -ln(error_rate) / ln(2)^2`.
+
+* **1%** error rate requires 7 hash functions and 9.585 bits per item.
+* **0.1%** error rate requires 10 hash functions and 14.378 bits per item.
+* **0.01%** error rate requires 14 hash functions and 19.170 bits per item.
 
 ## Required arguments
 
```
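The corrected formulas can be sanity-checked with a few lines of Python. This is only a sketch of the arithmetic (the `bloom_params` helper is a hypothetical name, not part of RedisBloom), but its output reproduces the three documented data points:

```python
import math

def bloom_params(error_rate: float):
    """Optimal hash-function count and bits per item for a Bloom filter
    with the given false-positive rate (hypothetical helper, sketch only)."""
    k = math.ceil(-math.log(error_rate) / math.log(2))        # optimal hash functions
    bits_per_item = -math.log(error_rate) / math.log(2) ** 2  # bits per item
    return k, bits_per_item

for rate in (0.01, 0.001, 0.0001):
    k, bits = bloom_params(rate)
    print(f"{rate}: {k} hash functions, {bits:.3f} bits per item")
```

Running this prints 7 hash functions and 9.585 bits per item for a 1% error rate, 10 and 14.378 for 0.1%, and 14 and 19.170 for 0.01%, matching the corrected bullet list.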
```diff
@@ -86,7 +87,7 @@ Non-scaling filters requires slightly less memory than their scaling counterpart
 When `capacity` is reached, an additional sub-filter is created.
 The size of the new sub-filter is the size of the last sub-filter multiplied by `expansion`, specified as a positive integer.
 
-If the number of elements to be stored in the filter is unknown, you use an `expansion` of `2` or more to reduce the number of sub-filters.
+If the number of items to be stored in the filter is unknown, you use an `expansion` of `2` or more to reduce the number of sub-filters.
 Otherwise, you use an `expansion` of `1` to reduce memory consumption. The default value is `2`.
 </details>
 
```
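The effect of `expansion` on sub-filter stacking can be illustrated with a short sketch. The `subfilters_needed` helper is hypothetical; it only assumes, as the text above states, that each new sub-filter's capacity is the previous one's multiplied by `expansion`:

```python
def subfilters_needed(total_items: int, initial_capacity: int, expansion: int) -> int:
    """Count the sub-filters stacked up when total_items items are added to a
    scaling filter reserved with initial_capacity (hypothetical sketch)."""
    count, size, remaining = 0, initial_capacity, total_items
    while remaining > 0:
        remaining -= size     # fill the current sub-filter
        size *= expansion     # the next sub-filter is `expansion` times larger
        count += 1
    return count

# Storing one million items in a filter reserved for only 10,000:
print(subfilters_needed(1_000_000, 10_000, 1))  # 100 sub-filters
print(subfilters_needed(1_000_000, 10_000, 2))  # 7 sub-filters
```

This is why an expansion of 2 or more is suggested when the final set size is unknown: the number of stacked sub-filters, and hence check latency, grows logarithmically instead of linearly.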
content/develop/data-types/probabilistic/bloom-filter.md

Lines changed: 16 additions & 20 deletions
```diff
@@ -10,18 +10,18 @@ categories:
 - kubernetes
 - clients
 description: Bloom filters are a probabilistic data structure that checks for presence
-  of an element in a set
+  of an item in a set
 linkTitle: Bloom filter
 stack: true
 title: Bloom filter
 weight: 10
 ---
 
-A Bloom filter is a probabilistic data structure in Redis Stack that enables you to check if an element is present in a set using a very small memory space of a fixed size.
+A Bloom filter is a probabilistic data structure in Redis Stack that enables you to check if an item is present in a set using a very small memory space of a fixed size.
 
-Instead of storing all of the elements in the set, Bloom Filters store only the elements' hashed representation, thus sacrificing some precision. The trade-off is that Bloom Filters are very space-efficient and fast.
+Instead of storing all the items in a set, a Bloom Filter stores only the items' hashed representations, thus sacrificing some precision. The trade-off is that Bloom Filters are very space-efficient and fast.
 
-A Bloom filter can guarantee the absence of an element from a set, but it can only give an estimation about its presence. So when it responds that an element is not present in a set (a negative answer), you can be sure that indeed is the case. But one out of every N positive answers will be wrong. Even though it looks unusual at a first glance, this kind of uncertainty still has its place in computer science. There are many cases out there where a negative answer will prevent more costly operations, for example checking if a username has been taken, if a credit card has been reported as stolen, if a user has already seen an ad and much more.
+A Bloom filter can guarantee the absence of an item from a set, but it can only give an estimation about its presence. So when it responds that an item is not present in a set (a negative answer), you can be sure that indeed is the case. But one out of every N positive answers will be wrong. Even though it looks unusual at first glance, this kind of uncertainty still has its place in computer science. There are many cases out there where a negative answer will prevent more costly operations, for example checking if a username has been taken, if a credit card has been reported as stolen, if a user has already seen an ad and much more.
 
 ## Use cases
 
```
```diff
@@ -111,11 +111,11 @@ BF.RESERVE {key} {error_rate} {capacity} [EXPANSION expansion] [NONSCALING]
 The rate is a decimal value between 0 and 1. For example, for a desired false positive rate of 0.1% (1 in 1000), error_rate should be set to 0.001.
 
 #### 2. Expected capacity (`capacity`)
-This is the number of elements you expect having in your filter in total and is trivial when you have a static set but it becomes more challenging when your set grows over time. It's important to get the number right because if you **oversize** - you'll end up wasting memory. If you **undersize**, the filter will fill up and a new one will have to be stacked on top of it (sub-filter stacking). In the cases when a filter consists of multiple sub-filters stacked on top of each other latency for adds stays the same, but the latency for presence checks increases. The reason for this is the way the checks work: a regular check would first be performed on the top (latest) filter and if a negative answer is returned the next one is checked and so on. That's where the added latency comes from.
+This is the number of items you expect having in your filter in total and is trivial when you have a static set but it becomes more challenging when your set grows over time. It's important to get the number right because if you **oversize** - you'll end up wasting memory. If you **undersize**, the filter will fill up and a new one will have to be stacked on top of it (sub-filter stacking). In the cases when a filter consists of multiple sub-filters stacked on top of each other latency for adds stays the same, but the latency for presence checks increases. The reason for this is the way the checks work: a regular check would first be performed on the top (latest) filter and if a negative answer is returned the next one is checked and so on. That's where the added latency comes from.
 
 #### 3. Scaling (`EXPANSION`)
-Adding an element to a Bloom filter never fails due to the data structure "filling up". Instead the error rate starts to grow. To keep the error close to the one set on filter initialisation - the Bloom filter will auto-scale, meaning when capacity is reached an additional sub-filter will be created.
-The size of the new sub-filter is the size of the last sub-filter multiplied by `EXPANSION`. If the number of elements to be stored in the filter is unknown, we recommend that you use an expansion of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an expansion of 1 to reduce memory consumption. The default expansion value is 2.
+Adding an item to a Bloom filter never fails due to the data structure "filling up". Instead, the error rate starts to grow. To keep the error close to the one set on filter initialization, the Bloom filter will auto-scale, meaning, when capacity is reached, an additional sub-filter will be created.
+The size of the new sub-filter is the size of the last sub-filter multiplied by `EXPANSION`. If the number of items to be stored in the filter is unknown, we recommend that you use an expansion of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an expansion of 1 to reduce memory consumption. The default expansion value is 2.
 
 The filter will keep adding more hash functions for every new sub-filter in order to keep your desired error rate.
 
```
````diff
@@ -127,26 +127,22 @@ If you know you're not going to scale use the `NONSCALING` flag because that way
 
 ### Total size of a Bloom filter
 The actual memory used by a Bloom filter is a function of the chosen error rate:
-```
-bits_per_item = -log(error)/ln(2)
-memory = capacity * bits_per_item
-
-memory = capacity * (-log(error)/ln(2))
-```
 
-- 1% error rate requires 10.08 bits per item
-- 0.1% error rate requires 14.4 bits per item
-- 0.01% error rate requires 20.16 bits per item
+The optimal number of hash functions is `ceil(-ln(error_rate) / ln(2))`.
+
+The required number of bits per item, given the desired `error_rate` and the optimal number of hash functions, is `-ln(error_rate) / ln(2)^2`. Hence, the required number of bits in the filter is `capacity * -ln(error_rate) / ln(2)^2`.
+
+* **1%** error rate requires 7 hash functions and 9.585 bits per item.
+* **0.1%** error rate requires 10 hash functions and 14.378 bits per item.
+* **0.01%** error rate requires 14 hash functions and 19.170 bits per item.
 
 Just as a comparison, when using a Redis set for membership testing the memory needed is:
 
 ```
 memory_with_sets = capacity*(192b + value)
 ```
 
-For a set of IP addresses, for example, we would have around 40 bytes (320 bits) per element, which is considerably higher than the 20 bits per element we need for a Bloom filter with 0.01% precision.
-
-
+For a set of IP addresses, for example, we would have around 40 bytes (320 bits) per item - considerably higher than the 19.170 bits we need for a Bloom filter with a 0.01% false positives rate.
 
 
 ## Bloom vs. Cuckoo filters
````
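The memory comparison from the rewritten section can be worked through numerically. The sketch below assumes the figures given in the text (192 bits of set overhead plus a 40-byte value per item); `bloom_filter_bits` is a hypothetical helper, not a RedisBloom API:

```python
import math

def bloom_filter_bits(capacity: int, error_rate: float) -> float:
    """Total filter size in bits: capacity * -ln(error_rate) / ln(2)^2."""
    return capacity * -math.log(error_rate) / math.log(2) ** 2

capacity = 1_000_000                              # one million IP addresses
bloom_bits = bloom_filter_bits(capacity, 0.0001)  # ~19.17 bits per item
set_bits = capacity * (192 + 40 * 8)              # 192 bits overhead + 40-byte value
print(f"Bloom filter: {bloom_bits / 8 / 1024**2:.1f} MiB")
print(f"Redis set:    {set_bits / 8 / 1024**2:.1f} MiB")
```

For a million IP addresses at a 0.01% false positive rate this comes to roughly 2.3 MiB for the Bloom filter against roughly 61 MiB for the set, in line with the per-item comparison above.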
```diff
@@ -159,7 +155,7 @@ Cuckoo filters are quicker on check operations and also allow deletions.
 
 Insertion in a Bloom filter is O(K), where `k` is the number of hash functions.
 
-Checking for an element is O(K) or O(K*n) for stacked filters, where n is the number of stacked filters.
+Checking for an item is O(K) or O(K*n) for stacked filters, where n is the number of stacked filters.
 
 
 ## Academic sources
```

0 commit comments
