Efficient management of high-dimensional vector data is crucial for scalable search and retrieval. Advanced vector compression and quantization methods, such as LVQ (Locally-Adaptive Vector Quantization) and LeanVec, can dramatically reduce memory usage and improve search speed with minimal loss of accuracy. This page describes practical approaches to compressing and quantizing vectors for scalable search.

Intel's SVS (Scalable Vector Search) introduces two advanced vector compression techniques—LVQ and LeanVec—designed to optimize memory usage and search performance. These methods compress high-dimensional vectors while preserving the geometric relationships essential for accurate similarity search.
{{< warning >}}
Intel's proprietary LVQ and LeanVec optimizations are not available in Redis Open Source. On non-Intel platforms and on Redis Open Source, `SVS-VAMANA` with `COMPRESSION` will fall back to Intel's basic 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and each dimension is then quantized independently into 256 levels using 8-bit precision (a simplified sketch of this fallback follows this warning).

{{< /warning >}}
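
As a rough illustration of the fallback described in the warning above, the sketch below implements global-min/max 8-bit scalar quantization in NumPy. This is a simplified, hypothetical illustration of the idea, not the actual SVS implementation, and the function names are made up for the example.

```python
import numpy as np

def scalar_quantize_8bit(vectors: np.ndarray):
    """Sketch of global-min/max 8-bit scalar quantization (illustrative only)."""
    lo, hi = float(vectors.min()), float(vectors.max())  # global minimum and maximum
    scale = (hi - lo) / 255.0 or 1.0                      # 256 levels per dimension
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize_8bit(codes: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Approximate reconstruction used when computing distances."""
    return codes.astype(np.float32) * scale + lo

vectors = np.random.rand(1_000, 128).astype(np.float32)
codes, lo, scale = scalar_quantize_8bit(vectors)  # 1 byte per dimension instead of 4
approx = dequantize_8bit(codes, lo, scale)
```

The key limitation of this fallback is that a single global scale is shared by every vector; LVQ, as its name suggests, instead adapts the quantization to each vector locally.
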
## Compression and quantization techniques
| LeanVec8x8 | Improved recall when LeanVec4x8 is insufficient | LeanVec dimensionality reduction might reduce recall |
| LVQ4x8 | Improved recall when LVQ4x4 is insufficient | Slightly worse memory savings |
## Two-level compression
Both LVQ and LeanVec support two-level compression schemes. LVQ's two-level compression works by first quantizing each vector individually to capture its main structure, then encoding the residual error—the difference between the original and quantized vector—using a second quantization step. This allows fast search using only the first level, with the second level used for re-ranking to boost accuracy when needed.
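
The mechanics can be sketched as follows. This is a simplified, hypothetical illustration of two-level quantization with residual encoding (uniform per-vector scalar quantization at each level), not Intel's actual LVQ algorithm; with `bits1=4` and `bits2=8` it corresponds to the LVQ4x8 naming described later on this page.

```python
import numpy as np

def quantize(v: np.ndarray, bits: int):
    """Uniform scalar quantization of one vector (illustrative only)."""
    levels = 2 ** bits - 1
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / levels or 1.0
    return np.round((v - lo) / scale), lo, scale  # in practice packed into `bits`-bit ints

def encode_two_level(v: np.ndarray, bits1: int = 4, bits2: int = 8):
    # Level 1: quantize the vector itself to capture its main structure.
    c1, lo1, s1 = quantize(v, bits1)
    # Level 2: quantize the residual error left over after level 1.
    residual = v - (c1 * s1 + lo1)
    c2, lo2, s2 = quantize(residual, bits2)
    return (c1, lo1, s1), (c2, lo2, s2)

def reconstruct(level1, level2=None):
    c1, lo1, s1 = level1
    v = c1 * s1 + lo1                  # fast, level-1-only approximation
    if level2 is not None:             # add the residual when re-ranking
        c2, lo2, s2 = level2
        v = v + c2 * s2 + lo2
    return v
```

Only the integer codes and a few per-vector constants need to be stored, which is where the memory savings come from.
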
Similarly, LeanVec uses a two-level approach: the first level reduces dimensionality and applies LVQ to speed up candidate retrieval, while the second level applies LVQ to the original high-dimensional vectors for accurate re-ranking.
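
Under the same simplifications, the LeanVec side can be sketched as a learned linear projection followed by quantization. The PCA-style projection below is only a stand-in for whatever transformation LeanVec actually learns, and the sample size and dimensions are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.standard_normal((5_000, 768)).astype(np.float32)  # representative training sample
reduced_dim = 192                                              # e.g. reduce 768 -> 192 dimensions

# Learn a linear projection from the sample (PCA-style, via SVD of the centered data).
mean = sample.mean(axis=0)
_, _, vt = np.linalg.svd(sample - mean, full_matrices=False)
projection = vt[:reduced_dim].T                                # shape (768, 192)

def reduce_dims(v: np.ndarray) -> np.ndarray:
    """Level 1: project into the lower-dimensional space before quantizing."""
    return (v - mean) @ projection

# Level 1 quantizes reduce_dims(v) for fast candidate retrieval;
# level 2 quantizes the original 768-dimensional v for accurate re-ranking.
```
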
Note that the original full-precision embeddings are never used during search; both LVQ and LeanVec operate entirely on their compressed representations.

This two-level approach allows for:
* Fast candidate retrieval using the first-level compressed vectors.
* High-accuracy re-ranking using the second-level residuals (see the sketch after this list).
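
For example, a two-phase search over such representations might look like the following sketch. It assumes the level-1 approximations and the level-2 residual corrections have already been decoded into plain arrays, and the function and parameter names are hypothetical.

```python
import numpy as np

def two_phase_search(query, approx_db, residual_db, k=10, rerank=100):
    """approx_db: level-1 reconstructions; residual_db: level-2 residual corrections."""
    # Phase 1: fast candidate retrieval using only the first-level approximations.
    coarse = np.linalg.norm(approx_db - query, axis=1)
    candidates = np.argsort(coarse)[:rerank]
    # Phase 2: re-rank the shortlist with residual-corrected reconstructions.
    refined = np.linalg.norm(approx_db[candidates] + residual_db[candidates] - query, axis=1)
    return candidates[np.argsort(refined)[:k]]
```

In SVS-VAMANA the first phase is a Vamana graph traversal rather than the brute-force scan shown here, but the division of labor between the two levels is the same.
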
The naming convention used for the configurations reflects the number of bits allocated per dimension at each level of compression.
### Naming convention: LVQ<B₁>x<B₂>
* **B₁:** Number of bits per dimension used in the first-level quantization.
* **B₂:** Number of bits per dimension used in the second-level quantization (residual encoding).
#### Examples
* **LVQ4x8:**
  * First level: 4 bits per dimension.
  * Second level: 8 bits per dimension.
  * Total: 12 bits per dimension (used across two stages).
* **LVQ8:**
  * Single-level compression only.
  * 8 bits per dimension.
  * No second-level residuals.

The same notation is used for LeanVec.
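
To make the bit counts concrete, here is a small back-of-the-envelope calculation of the raw per-vector payload they imply. The 768-dimension figure is just an assumed example, and per-vector scaling constants, graph links, and other index overhead are ignored.

```python
DIM = 768  # assumed embedding size for this example

def payload_bytes(bits_per_dim: int) -> float:
    """Raw code size per vector, ignoring per-vector constants and graph overhead."""
    return DIM * bits_per_dim / 8

print(payload_bytes(32))     # float32 baseline:             3072.0 bytes per vector
print(payload_bytes(8))      # LVQ8 (single level):           768.0 bytes per vector
print(payload_bytes(4 + 8))  # LVQ4x8 (both levels stored):  1152.0 bytes per vector
```
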
## Learning compression parameters from vector data
The strong performance of LVQ and LeanVec stems from their ability to adapt to the structure of the input vectors. By learning compression parameters directly from the data, they achieve more accurate representations with fewer bits.
### What does this mean in practice?
* **Initial training requirement:** A minimum number of representative vectors is required during index initialization to train the compression parameters (see the [TRAINING_THRESHOLD]({{< relref "/develop/ai/search-and-query/vectors/#svs-vamana-index" >}}) parameter and the example after this list). A random sample from the dataset typically works well.
* **Handling data drift:** If the characteristics of incoming vectors change significantly over time (that is, a data distribution shift), compression quality may degrade. This is a general limitation of all data-dependent compression methods, not just LVQ and LeanVec. When the data no longer resembles the original training sample, the learned representation becomes less effective.
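
To illustrate where the training threshold fits, here is a hypothetical index definition using redis-py's generic command interface. The index name, field name, dimension, and threshold value are invented for the example, and the attribute list is not exhaustive; see the SVS-VAMANA index reference linked above for the authoritative syntax and defaults.

```python
import redis

r = redis.Redis()

# Hypothetical example: 768-dimensional float32 vectors compressed with LVQ8,
# with compression parameters trained once 10240 vectors have been indexed.
r.execute_command(
    "FT.CREATE", "docs_idx",
    "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "SVS-VAMANA", "10",
    "TYPE", "FLOAT32",
    "DIM", "768",
    "DISTANCE_METRIC", "L2",
    "COMPRESSION", "LVQ8",
    "TRAINING_THRESHOLD", "10240",
)
```

Once the threshold number of vectors has been indexed, the compression parameters are trained from them, as described above.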