
Commit 7a61295

fix miswordings svs-compression.md
1 parent 5a92938 commit 7a61295

File tree

1 file changed: +5 -5 lines changed


content/develop/ai/search-and-query/vectors/svs-compression.md

Lines changed: 5 additions & 5 deletions
@@ -9,16 +9,16 @@ categories:
 - oss
 - kubernetes
 - clients
-description: Intel scalable vector search (SVS) LVQ and LeanVec compression
-linkTitle: Intel SVS compression
-title: Intel scalable vector search (SVS) compression
+description: Vector compression and quantization for efficient memory usage and search performance
+linkTitle: Vector Compression & Quantization
+title: Vector Compression and Quantization
 weight: 2
 ---
 
-Intel's SVS (Scalable Vector Search) introduces two advanced vector compression techniques—LVQ and LeanVec—designed to optimize memory usage and search performance. These methods compress high-dimensional vectors while preserving the geometric relationships essential for accurate similarity search.
+Efficient management of high-dimensional vector data is crucial for scalable search and retrieval. Advanced methods for vector compression and quantization—such as LVQ (Locally-Adaptive Vector Quantization) and LeanVec—can dramatically optimize memory usage and improve search speed, without sacrificing too much accuracy. This page describes practical approaches to compressing and quantizing vectors for scalable search.
 
 {{< warning >}}
-Intel's proprietary LVQ and LeanVec optimizations are not available on Redis Open Source. On non-Intel platforms and Redis Open Source platforms, `SVS-VAMANA` with `COMPRESSION` will fall back to Intel’s basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
+Some advanced vector compression features may depend on hardware or Intel's proprietary optimizations. Intel's proprietary LVQ and LeanVec optimizations are not available on Redis Open Source. On non-Intel platforms and Redis Open Source platforms, `SVS-VAMANA` with `COMPRESSION` will fall back to a basic, 8-bit scalar quantization implementation: all values in a vector are scaled using the global minimum and maximum, and then each dimension is quantized independently into 256 levels using 8-bit precision.
 {{< /warning >}}
 
 ## Compression and Quantization Techniques
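
To make the fallback described in the rewritten warning concrete, here is a minimal NumPy sketch of 8-bit scalar quantization along those lines: every value in a vector is scaled using the vector's global minimum and maximum, and each dimension is then mapped independently onto one of 256 levels. This is an illustration of the idea only, not the implementation used by `SVS-VAMANA`; the function names and the 768-dimensional example vector are arbitrary.

```python
import numpy as np

def scalar_quantize_8bit(vec):
    """Map a float vector onto 256 levels using its global min and max."""
    v_min, v_max = float(vec.min()), float(vec.max())
    scale = (v_max - v_min) / 255.0 or 1.0  # guard against a constant vector
    codes = np.round((vec - v_min) / scale).astype(np.uint8)
    return codes, v_min, scale

def dequantize_8bit(codes, v_min, scale):
    """Approximately reconstruct the original float32 vector."""
    return codes.astype(np.float32) * scale + v_min

# Hypothetical 768-dimensional embedding, just for demonstration.
rng = np.random.default_rng(0)
x = rng.standard_normal(768).astype(np.float32)
codes, v_min, scale = scalar_quantize_8bit(x)
x_hat = dequantize_8bit(codes, v_min, scale)
print(codes.nbytes, x.nbytes)                    # 768 bytes vs. 3072 bytes
print(float(np.abs(x - x_hat).max()) <= scale / 2 + 1e-6)  # rounding error bound
```

At 8 bits per dimension, the quantized vector uses a quarter of the memory of float32 storage, at the cost of a bounded per-dimension rounding error of at most half a quantization step.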
