Commit 44bc008

Update _posts/2025-03-27-Boost-OpenSearch-VectorSearch-Performance-With-Intel-AVX512.md
Signed-off-by: Nathan Bower <[email protected]>
1 parent 7f1e5f1 commit 44bc008

File tree

1 file changed: +1 −1 lines changed


_posts/2025-03-27-Boost-OpenSearch-VectorSearch-Performance-With-Intel-AVX512.md

Lines changed: 1 addition & 1 deletion
@@ -96,7 +96,7 @@ The next section describes the results of benchmarks run with AVX2 and AVX-512 v
 
 The results show that the time spent on hot functions of the distance calculation is significantly reduced when using AVX-512, and the OpenSearch cluster shows higher throughput for search and indexing.
 
-SQfp16 encoding provided by the Faiss library further helps with faster computation and efficient storage by compressing the 32-bit floating-point vectors into 16-bit floating-point format. The smaller memory footprint allows for more vectors to be stored in the same amount of memory. Additionally, the operations on the 16-bit floats are typically faster than those on 32-bit floats, leading to faster similarity searches.
+SQfp16 encoding provided by the Faiss library further helps with faster computation and efficient storage by compressing the 32-bit floating-point vectors into 16-bit floating-point format. The smaller memory footprint allows for more vectors to be stored in the same amount of memory. Additionally, the operations on the 16-bit floats are typically faster than those on the 32-bit floats, leading to faster similarity searches.
 
 A greater performance improvement is observed between AVX-512 and AVX2 on FP16 because of code optimizations and the use of AVX-512 intrinsics in Faiss, which are not present in AVX2.
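The memory-footprint claim in the changed paragraph can be sketched with a minimal NumPy example. This is an illustration of the fp32-to-fp16 storage effect only, not Faiss's actual SQfp16 code path; the vector count and dimension below are arbitrary assumptions.

```python
import numpy as np

# Hypothetical corpus: 1,000 vectors of dimension 768 (illustrative sizes,
# not from the post). Casting fp32 to fp16 halves the bytes stored,
# which is the storage benefit SQfp16 encoding relies on.
vectors_fp32 = np.random.rand(1000, 768).astype(np.float32)
vectors_fp16 = vectors_fp32.astype(np.float16)

print(vectors_fp32.nbytes)  # 3072000 bytes
print(vectors_fp16.nbytes)  # 1536000 bytes, half the footprint
```

The cast is lossy (fp16 has ~3 decimal digits of precision), which is why scalar quantization trades a small amount of recall for the doubled vector capacity and faster arithmetic described above.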
