Commit 02259e6

Added link to cost/compression blog
1 parent 288bba4 commit 02259e6

File tree

1 file changed: +3 −0 lines changed


articles/search/vector-search-how-to-quantization.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -28,6 +28,9 @@ To use built-in quantization, follow these steps:
 > - Load the index with float32 or float16 data that's quantized during indexing with the configuration you defined
 > - Optionally, [query quantized data](#query-a-quantized-vector-field-using-oversampling) using the oversampling parameter. If the vector field doesn't specify oversampling in its definition, you can add it at query time.
 
+> [!TIP]
+> [Azure AI Search: Cut Vector Costs Up To 92.5% with New Compression Techniques](https://aka.ms/AISearch-cut-cost) compares compression strategies and explains savings in storage and costs. It also includes metrics for measuring relevance based on normalized discounted cumulative gain (NDCG), demonstrating that you can compress your data without sacrificing search quality.
+
 ## Prerequisites
 
 - [Vector fields in a search index](vector-search-how-to-create-index.md), with a `vectorSearch` configuration specifying either the Hierarchical Navigable Small Worlds (HNSW) or exhaustive K-nearest neighbor (eKNN) algorithm, and a new vector profile.
```
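The steps the changed doc describes — a `vectorSearch` configuration tying an HNSW algorithm to a quantization compression via a profile, plus oversampling at query time — can be sketched as REST-style JSON payloads. This is a minimal illustration, not part of the commit: the names (`my-hnsw`, `my-profile`, `contentVector`, the embedding values) are hypothetical, and the exact payload shape should be checked against the current Azure AI Search REST API reference before use.

```python
# Sketch of an index fragment: a vectorSearch section pairing an HNSW
# algorithm with scalar quantization through a vector profile.
# All names here are placeholders for illustration only.
vector_search = {
    "algorithms": [
        {"name": "my-hnsw", "kind": "hnsw"}
    ],
    "compressions": [
        {
            "name": "my-scalar-quantization",
            "kind": "scalarQuantization",
            # These are the knobs the oversampling step refers to:
            # rescore compressed matches against the original vectors,
            # retrieving extra candidates first.
            "rerankWithOriginalVectors": True,
            "defaultOversampling": 10.0,
        }
    ],
    "profiles": [
        {
            "name": "my-profile",
            "algorithm": "my-hnsw",
            "compression": "my-scalar-quantization",
        }
    ],
}

# Sketch of a query body: per the doc, oversampling can also be set
# (or overridden) at query time on the vector query itself.
query_body = {
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": [0.1, 0.2, 0.3],  # stand-in embedding
            "fields": "contentVector",
            "k": 5,
            "oversampling": 20.0,
        }
    ],
}
```

The design intent of oversampling is that the quantized index returns more candidates than `k`, and the extra candidates are rescored with full-precision vectors, recovering the relevance the NDCG measurements in the linked article track.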
