
Commit 7ba679e

Update articles/search/vector-search-how-to-configure-compression-storage.md
Co-authored-by: Robert Lee <[email protected]>
1 parent 7c8d5c7 commit 7ba679e


1 file changed: +1 −1 lines changed

articles/search/vector-search-how-to-configure-compression-storage.md

Lines changed: 1 addition & 1 deletion
@@ -190,7 +190,7 @@ It's particularly effective for embeddings with dimensions greater than 1024. Fo

 ### Use MRL compression and truncated dimensions (preview)

-MRL multilevel compression saves on vector storage and increases query response times for vector queries based on text embeddings. In Azure AI Search, MRL support is an extension of quantization. Using binary quantization with MRL provides the maximum vector index size reduction. To achieve maximum storage reduction, use binary quantization with MRL, and `stored` set to false.
+MRL multilevel compression saves on vector storage and improves query response times for vector queries based on text embeddings. In Azure AI Search, MRL support is only offered together with another method of quantization. Using binary quantization with MRL provides the maximum vector index size reduction. To achieve maximum storage reduction, use binary quantization with MRL, and `stored` set to false.

 This feature is in preview. It's available in `2024-09-01-preview` and in beta SDK packages targeting that preview API version.
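For context, the combination the updated sentence describes (binary quantization together with MRL truncated dimensions, plus `stored` set to false on the vector field) is configured in the index definition's `vectorSearch` section. The sketch below is not part of this commit; it's a minimal illustration assuming the `2024-09-01-preview` REST shape, and the service name, index name, field names, compression name, and dimension values are placeholders to verify against the article itself.

```python
# Illustrative sketch only (not from this commit): create an index that pairs
# binary quantization with MRL truncation and sets stored=false on the vector
# field, per the updated guidance. All names and values below are placeholders.
import requests

SERVICE = "my-search-service"        # placeholder service name
API_KEY = "<admin-api-key>"          # placeholder admin key
API_VERSION = "2024-09-01-preview"   # preview version named in the article

index_definition = {
    "name": "demo-mrl-bq",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {
            "name": "contentVector",
            "type": "Collection(Edm.Single)",
            "dimensions": 3072,                 # placeholder embedding size
            "vectorSearchProfile": "myProfile",
            "searchable": True,
            "stored": False                     # maximum storage reduction
        }
    ],
    "vectorSearch": {
        "algorithms": [{"name": "myHnsw", "kind": "hnsw"}],
        "compressions": [
            {
                "name": "myBq",
                "kind": "binaryQuantization",   # MRL rides on quantization
                "truncationDimension": 1024     # MRL truncated dimensions
            }
        ],
        "profiles": [
            {"name": "myProfile", "algorithm": "myHnsw", "compression": "myBq"}
        ]
    }
}

# PUT the index definition to the preview REST API.
response = requests.put(
    f"https://{SERVICE}.search.windows.net/indexes/demo-mrl-bq",
    params={"api-version": API_VERSION},
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    json=index_definition,
)
response.raise_for_status()
```

In this sketch, `truncationDimension` trims the MRL embedding kept in the vector index, and `stored: false` omits the retrievable copy of the source vectors, which together correspond to the "maximum storage reduction" case the changed sentence describes.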
