
Commit b084906

Editorial updates (style, etc.)
1 parent 7a61295 commit b084906

2 files changed: 15 additions & 13 deletions

content/develop/ai/search-and-query/vectors/_index.md

Lines changed: 1 addition & 1 deletion
@@ -161,7 +161,7 @@ Choose the `SVS-VAMANA` index type when all of the following requirements apply:
 | `LEANVEC_DIM` | The dimension used when using `LeanVec4x8` or `LeanVec8x8` compression for dimensionality reduction. If a value is provided, it should be less than `DIM`. Lowering it can speed up search and reduce memory use. | `DIM / 2` |

 {{< warning >}}
-Intel's proprietary LVQ and LeanVec optimizations are not available on Redis Open Source. On non-Intel platforms and Redis Open Source platforms, `SVS-VAMANA` with `COMPRESSION` will fall back to Intel’s basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
+Some advanced vector compression features may depend on hardware support or Intel's proprietary optimizations. Intel's LVQ and LeanVec optimizations are not available in Redis Open Source. On non-Intel platforms and in Redis Open Source, `SVS-VAMANA` with `COMPRESSION` falls back to a basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
 {{< /warning >}}

 **Example**
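
The fallback scheme described in the warning above is simple enough to sketch. The following is an editorial illustration, not the engine's actual code; it assumes "global minimum and maximum" means one min/max pair shared by all indexed vectors (as opposed to LVQ's per-vector parameters), and the function names are invented for the example.

```python
import numpy as np

def quantize_global_8bit(vectors: np.ndarray):
    # One min/max pair shared by the whole dataset ("global"), then every
    # dimension is rounded independently to one of 256 levels (8 bits).
    lo, hi = float(vectors.min()), float(vectors.max())
    scale = (hi - lo) / 255.0 or 1.0                    # guard against constant data
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes: np.ndarray, lo: float, scale: float):
    # Approximate reconstruction used when computing distances.
    return codes.astype(np.float32) * scale + lo

vectors = np.random.randn(1000, 128).astype(np.float32)
codes, lo, scale = quantize_global_8bit(vectors)
print(codes.nbytes / vectors.nbytes)                    # 0.25: 8-bit codes vs. 32-bit floats
print(np.abs(dequantize(codes, lo, scale) - vectors).max())  # worst-case rounding error
```

The roughly 4x storage saving follows directly from keeping 8 bits per dimension instead of 32.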

content/develop/ai/search-and-query/vectors/svs-compression.md

Lines changed: 14 additions & 12 deletions
@@ -1,4 +1,6 @@
 ---
+aliases:
+
 categories:
 - docs
 - develop
@@ -9,23 +11,23 @@ categories:
 - oss
 - kubernetes
 - clients
-description: Vector compression and quantization for efficient memory usage and search performance
-linkTitle: Vector Compression & Quantization
-title: Vector Compression and Quantization
+description: Vector quantization and compression for efficient memory usage and search performance
+linkTitle: Quantization and compression
+title: Vector quantization and compression
 weight: 2
 ---

-Efficient management of high-dimensional vector data is crucial for scalable search and retrieval. Advanced methods for vector compression and quantization—such as LVQ (Locally-Adaptive Vector Quantization) and LeanVec—can dramatically optimize memory usage and improve search speed, without sacrificing too much accuracy. This page describes practical approaches to compressing and quantizing vectors for scalable search.
+Efficient management of high-dimensional vector data is crucial for scalable search and retrieval. Advanced methods for vector quantization and compression, such as LVQ (Locally-adaptive Vector Quantization) and LeanVec, can dramatically reduce memory usage and improve search speed without sacrificing much accuracy. This page describes practical approaches to quantizing and compressing vectors for scalable search.

 {{< warning >}}
-Some advanced vector compression features may depend on hardware or Intel's proprietary optimizations. Intel's proprietary LVQ and LeanVec optimizations are not available on Redis Open Source. On non-Intel platforms and Redis Open Source platforms, `SVS-VAMANA` with `COMPRESSION` will fall back to basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
+Some advanced vector compression features may depend on hardware support or Intel's proprietary optimizations. Intel's LVQ and LeanVec optimizations are not available in Redis Open Source. On non-Intel platforms and in Redis Open Source, `SVS-VAMANA` with `COMPRESSION` falls back to a basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
 {{< /warning >}}

-## Compression and Quantization Techniques
+## Quantization and compression techniques

-### LVQ (Locally-Adaptive Vector Quantization)
+### LVQ (Locally-adaptive Vector Quantization)

-* **Method:** Applies per-vector normalization and scalar quantization, learning parameters directly from the data.
+* **Method:** Applies per-vector normalization and scalar quantization; learns parameters directly from the data.
 * **Advantages:**
     * Enables fast, on-the-fly distance computations.
     * SIMD-optimized layout for efficient search.
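
The LVQ description in the hunk above contrasts with the global fallback: the normalization and quantization parameters are learned from each vector itself. Below is a conceptual sketch of that per-vector idea only; it is not Intel's LVQ encoding, and the function name is invented for illustration.

```python
import numpy as np

def quantize_per_vector_8bit(vec: np.ndarray):
    # LVQ-style idea only: the offset and scale are learned from this
    # vector alone, rather than one global pair for the whole dataset.
    lo, hi = float(vec.min()), float(vec.max())
    scale = (hi - lo) / 255.0 or 1.0
    codes = np.round((vec - lo) / scale).astype(np.uint8)
    # Stored per vector: the 8-bit codes plus two small float parameters.
    return codes, np.float32(lo), np.float32(scale)

vec = np.random.randn(768).astype(np.float32)
codes, lo, scale = quantize_per_vector_8bit(vec)
approx = codes.astype(np.float32) * scale + lo
print(np.abs(approx - vec).max())   # error bounded by this vector's own value range
```

Because each vector uses its own range, one vector's outliers do not degrade the precision of the others.
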
@@ -35,7 +37,7 @@ Some advanced vector compression features may depend on hardware or Intel's prop
     * **LVQ8:** Faster ingestion, slower search.
     * **LVQ4x8:** Two-level quantization for improved recall.

-### LeanVec
+### LeanVec (LVQ with dimensionality reduction)

 * **Method:** Combines dimensionality reduction with LVQ, applying quantization after reducing vector dimensions.
 * **Advantages:**
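
The LeanVec hunk above orders the two steps explicitly: reduce the dimension first, then quantize. A conceptual sketch follows, using a random projection as a stand-in for the reduction transform (the real implementation learns it from the data) and plain 8-bit rounding in place of LVQ; names and shapes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def leanvec_style(vectors: np.ndarray, reduced_dim: int) -> np.ndarray:
    # Step 1: dimensionality reduction. A random projection stands in for
    # the transform that LeanVec learns from the data.
    dim = vectors.shape[1]
    projection = rng.standard_normal((dim, reduced_dim)).astype(np.float32)
    reduced = vectors @ (projection / np.sqrt(reduced_dim))
    # Step 2: scalar quantization of the reduced vectors (LVQ-style in the
    # real implementation; plain global 8-bit rounding here).
    lo, hi = float(reduced.min()), float(reduced.max())
    scale = (hi - lo) / 255.0 or 1.0
    return np.round((reduced - lo) / scale).astype(np.uint8)

vectors = rng.standard_normal((1000, 768)).astype(np.float32)
codes = leanvec_style(vectors, reduced_dim=768 // 2)   # default-style: input dimension / 2
print(vectors.nbytes // codes.nbytes)                   # 8: half the dims at a quarter of the bits
```

Halving the dimension and dropping from 32-bit floats to 8-bit codes is where the combined memory saving comes from, and searching over shorter codes is also what speeds up distance computations.
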
@@ -44,11 +46,11 @@ Some advanced vector compression features may depend on hardware or Intel's prop
 * **Variants:**
     * **LeanVec4x8:** Recommended for high-dimensional datasets, fastest search and ingestion.
     * **LeanVec8x8:** Improved recall when more granularity is needed.
-* **LeanVec Dimension:** For faster search and lower memory usage, reduce the dimension further by using the optional `REDUCE` argument. The default is typically `input dimension / 2`, but more aggressive reduction (such as `dimension / 4`) is possible for greater efficiency.
+* **LeanVec Dimension:** For faster search and lower memory usage, reduce the dimension further by using the optional `REDUCE` argument. The default is typically `input dimension / 2`, but more aggressive reduction (such as `input dimension / 4`) is possible for greater efficiency.

-## Choosing a Compression Type
+## Choosing a compression type

-| Compression type | Best for | Observations |
+| Compression type | Best for | Observations |
 |----------------------|--------------------------------------------------|---------------------------------------------------------|
 | LVQ4x4 | Fast search and low memory use | Consider LeanVec for even faster search |
 | LeanVec4x8 | Fastest search and ingestion | LeanVec dimensionality reduction might reduce recall |
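
To tie the table above back to index creation, here is a minimal redis-py sketch of building an `SVS-VAMANA` index with one of the compression types listed on this page. It assumes a Redis build that supports `SVS-VAMANA`; the index name, key prefix, and field names are hypothetical, `COMPRESSION` is the attribute named on these pages, and `TYPE`/`DIM`/`DISTANCE_METRIC` follow the standard vector-field syntax. Check the SVS-VAMANA reference page for the full attribute list, including the optional dimensionality-reduction setting discussed above.

```python
import numpy as np
import redis

r = redis.Redis()

# Hypothetical index: a LeanVec4x8-compressed SVS-VAMANA field over HASH keys.
# The dimensionality reduction defaults to DIM / 2 unless overridden.
r.execute_command(
    "FT.CREATE", "idx:docs", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "SVS-VAMANA", "8",
    "TYPE", "FLOAT32",
    "DIM", "768",
    "DISTANCE_METRIC", "COSINE",
    "COMPRESSION", "LeanVec4x8",
)

# Vector bytes must match TYPE (FLOAT32) and DIM (768).
vec = np.random.rand(768).astype(np.float32)
r.hset("doc:1", mapping={"embedding": vec.tobytes()})
```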
