Sq8 to Sq8 dist functions - ip and cosine [MOD-13170] #873
Merged
Changes from 35 commits
Commits
52 commits
746bf31
Add SQ8-to-SQ8 distance functions and optimizations
dor-forer 8697a3e
Add SQ8-to-SQ8 benchmark tests and update related scripts
dor-forer e0ce268
Format
dor-forer ab6b077
Organizing
dor-forer 931e339
Add full sq8 benchmarks
dor-forer a56474d
Optimize the sq8 sq8
dor-forer a25f45c
Optimize SQ8 distance functions for NEON by reducing operations and i…
dor-forer 0ad941e
format
dor-forer 68cd068
Add NEON DOTPROD-optimized distance functions for SQ8-to-SQ8 calculat…
dor-forer 0b4b568
PR
dor-forer d0fd2e4
Remove NEON DOTPROD-optimized distance functions for INT8, UINT8, and…
dor-forer 9de6163
Fix vector layout documentation by removing inv_norm from comments in…
dor-forer 63a46a1
Remove 'constexpr' from ones vector declaration in NEON inner product…
dor-forer 525f8da
Refactor distance functions to remove inv_norm parameter and update d…
dor-forer 13a477b
Update SQ8 Cosine test to normalize both input vectors and adjust dis…
dor-forer c18000e
Rename 'compressed' to 'quantized' in SQ8 functions for clarity and c…
dor-forer bbf810e
Implement SQ8-to-SQ8 distance functions with precomputed sum and norm…
dor-forer dbbb7d9
Add edge case tests for SQ8-to-SQ8 precomputed cosine distance functions
dor-forer 36ab068
Refactor SQ8 test cases to use CreateSQ8QuantizedVector for vector po…
dor-forer 00617d7
Implement SQ8-to-SQ8 precomputed distance functions using ARM NEON, S…
dor-forer 4331d91
Implement SQ8-to-SQ8 precomputed inner product and cosine functions; …
dor-forer 2e7b30d
Refactor SQ8 distance functions and remove precomputed variants
dor-forer a111e36
Refactor SQ8 distance functions and tests for improved clarity and co…
dor-forer d510b8a
Refactor SQ8 benchmarks by removing precomputed variants and updating…
dor-forer ee26740
format
dor-forer afe1a4f
Remove serialization benchmark script for HNSW disk serialization
dor-forer a31f95c
Refactor SQ8 distance functions and tests to remove precomputed norm …
dor-forer f12ecf4
format
dor-forer 0e36030
Merge branch 'main' of https://github.com/RedisAI/VectorSimilarity in…
dor-forer fdc16c6
Refactor SQ8 distance tests to use compressed vectors and improve nor…
dor-forer e5f519c
Update vector layout documentation to reflect removal of sum of squar…
dor-forer db1e671
Refactor SQ8 distance functions to remove norm computation
dor-forer d5b8587
Update SQ8-to-SQ8 distance function comment to remove norm reference
dor-forer 91f48df
Refactor cosine similarity functions to remove unnecessary subtractio…
dor-forer b660111
Refactor cosine similarity functions to use specific SIMD implementat…
dor-forer 9166cac
Refactor benchmark setup to allocate additional space for sum and sum…
dor-forer f28f4e7
Add CPU feature checks to disable optimizations for AArch64 in SQ8 di…
dor-forer e50dc45
Add CPU feature checks to disable optimizations for AArch64 in SQ8 di…
dor-forer 6bbbc38
Fix formatting issues in SQ8 inner product function and clean up cond…
dor-forer 66a5f88
Enhance SQ8 Inner Product Implementations with Optimized Dot Product …
dor-forer d7972e9
Fix header guard duplication and update test assertion for floating-p…
dor-forer a8075bf
Add missing pragma once directive in NEON header files
dor-forer cddc497
Refactor SQ8 distance functions for improved performance and clarity
dor-forer 4f0fec7
Update SQ8 vector population functions to include metadata and adjust…
dor-forer 8ab4192
Refactor SQ8 inner product functions for improved clarity and perform…
dor-forer 8c59cb2
Rename inner product implementation functions for AVX2 and AVX512 for…
dor-forer a4ff5d0
Refactor SQ8 cosine function to utilize inner product function for im…
dor-forer c22158f
Remove redundant inner product edge case tests for SQ8 distance funct…
dor-forer 4c19d9e
Add SVE2 support to SQ8-to-SQ8 Inner Product distance function
dor-forer 5c22af8
Remove SVE2 and other optimizations from SQ8 cosine function test for…
dor-forer 9e50d7c
Update NEON benchmarks to use a vector size of 64 for SQ8-to-SQ8 func…
dor-forer 2e57cf2
Increase allocated space for cosine calculations in SQ8 benchmark setup
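Several commits above mention SQ8-to-SQ8 distance functions "with precomputed sum" metadata. The idea can be checked with a scalar sketch: when both vectors are uint8-quantized (each element dequantizes as `val * delta + min`), the inner product expands algebraically so the per-element loop needs only the quantized dot product, with the per-vector sums folded in once at the end. The function and parameter names below are illustrative, not the library's API.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Scalar sketch of an SQ8-to-SQ8 inner product, where BOTH vectors are
// uint8-quantized (val * delta + min). Expanding the product:
//
//   IP = Σ (a[i]·δa + ma)(b[i]·δb + mb)
//      = δa·δb·Σ(a[i]·b[i]) + δa·mb·Σ a[i] + δb·ma·Σ b[i] + d·ma·mb
//
// so only the quantized dot product must be computed per element; the sums
// Σ a[i] and Σ b[i] can be precomputed and stored as vector metadata.
// Names here are hypothetical, not the library's actual API.
float sq8ToSq8IP(const std::vector<uint8_t> &a, const std::vector<uint8_t> &b,
                 float da, float ma, float db, float mb) {
    float dot = 0.0f, sa = 0.0f, sb = 0.0f;
    for (size_t i = 0; i < a.size(); i++) {
        dot += static_cast<float>(a[i]) * b[i]; // quantized dot product
        sa += a[i];                             // would be precomputed in practice
        sb += b[i];                             // would be precomputed in practice
    }
    return da * db * dot + da * mb * sa + db * ma * sb +
           static_cast<float>(a.size()) * ma * mb;
}
```

Equality with the naive "dequantize then multiply" loop holds exactly in real arithmetic, and to within rounding in float.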
dor-forer File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
Some comments aren't visible on the classic Files Changed page.
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
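The new file in the diff documents the SQ8 storage layout: `[uint8_t values (dim)] [min_val (float)] [delta (float)] [sum (float)]`. As a minimal sketch of how such a vector could be produced (the `quantizeSQ8` name and the choice of 255 quantization levels are assumptions for illustration, not the library's API):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>
#include <vector>

// Quantize a float vector to the SQ8 layout described in the diff:
// [uint8_t values (dim)] [min_val (float)] [delta (float)] [sum (float)].
// Illustrative sketch only; the actual library helper may differ.
std::vector<uint8_t> quantizeSQ8(const std::vector<float> &v) {
    float min_val = v[0], max_val = v[0];
    for (float x : v) {
        if (x < min_val) min_val = x;
        if (x > max_val) max_val = x;
    }
    float delta = (max_val - min_val) / 255.0f;
    if (delta == 0.0f) delta = 1.0f; // avoid division by zero for constant vectors

    std::vector<uint8_t> out(v.size() + 3 * sizeof(float));
    float sum = 0.0f;
    for (size_t i = 0; i < v.size(); i++) {
        // Each code reconstructs as out[i] * delta + min_val.
        out[i] = static_cast<uint8_t>(std::round((v[i] - min_val) / delta));
        sum += v[i];
    }
    // Append metadata after the quantized values.
    std::memcpy(out.data() + v.size(), &min_val, sizeof(float));
    std::memcpy(out.data() + v.size() + sizeof(float), &delta, sizeof(float));
    std::memcpy(out.data() + v.size() + 2 * sizeof(float), &sum, sizeof(float));
    return out;
}
```

The distance kernels read `min_val` and `delta` back from the tail of the stored vector, which is why they take only a pointer and a dimension.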
The diff adds a new 145-line AVX512 header (shown cleaned up below). Note the fix in `SQ8_InnerProductSIMD16_AVX512F_BW_VL_VNNI`: the original diff had a stray double semicolon and missing space in the `return` statement.

```cpp
/*
 * Copyright (c) 2006-Present, Redis Ltd.
 * All rights reserved.
 *
 * Licensed under your choice of (a) the Redis Source Available License 2.0
 * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the
 * GNU Affero General Public License v3 (AGPLv3).
 */
#pragma once
#include "VecSim/spaces/space_includes.h"
#include <immintrin.h>

/**
 * SQ8 distance functions (float32 query vs uint8 stored) using AVX512.
 *
 * Uses algebraic optimization to reduce operations per element:
 *
 *   IP = Σ query[i] * (val[i] * δ + min)
 *      = δ * Σ(query[i] * val[i]) + min * Σ(query[i])
 *
 * This saves one FMA per 16 elements by separating:
 *   - dot_sum:   accumulates query[i] * val[i]
 *   - query_sum: accumulates query[i]
 * Then combines at the end: result = δ * dot_sum + min * query_sum
 *
 * Also uses multiple accumulators for better instruction-level parallelism.
 *
 * Vector layout: [uint8_t values (dim)] [min_val (float)] [delta (float)] [sum (float)]
 */

// Process 16 elements with algebraic optimization
static inline void SQ8_InnerProductStep(const float *pVec1, const uint8_t *pVec2, __m512 &dot_sum,
                                        __m512 &query_sum) {
    // Load 16 float elements from query
    __m512 v1 = _mm512_loadu_ps(pVec1);

    // Load 16 uint8 elements and convert to float
    __m128i v2_128 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(pVec2));
    __m512i v2_512 = _mm512_cvtepu8_epi32(v2_128);
    __m512 v2_f = _mm512_cvtepi32_ps(v2_512);

    // Accumulate query * val (without dequantization)
    dot_sum = _mm512_fmadd_ps(v1, v2_f, dot_sum);

    // Accumulate query sum
    query_sum = _mm512_add_ps(query_sum, v1);
}

// Common implementation for both inner product and cosine similarity
template <unsigned char residual> // 0..15
float SQ8_InnerProductImp_AVX512(const void *pVec1v, const void *pVec2v, size_t dimension) {
    const float *pVec1 = static_cast<const float *>(pVec1v);
    const uint8_t *pVec2 = static_cast<const uint8_t *>(pVec2v);

    // Get dequantization parameters from the end of pVec2
    const float min_val = *reinterpret_cast<const float *>(pVec2 + dimension);
    const float delta = *reinterpret_cast<const float *>(pVec2 + dimension + sizeof(float));

    // Multiple accumulators for instruction-level parallelism
    __m512 dot_sum0 = _mm512_setzero_ps();
    __m512 dot_sum1 = _mm512_setzero_ps();
    __m512 dot_sum2 = _mm512_setzero_ps();
    __m512 dot_sum3 = _mm512_setzero_ps();
    __m512 query_sum0 = _mm512_setzero_ps();
    __m512 query_sum1 = _mm512_setzero_ps();
    __m512 query_sum2 = _mm512_setzero_ps();
    __m512 query_sum3 = _mm512_setzero_ps();

    size_t offset = 0;

    // Deal with the remainder first
    if constexpr (residual > 0) {
        // Handle fewer than 16 elements
        __mmask16 mask = (1U << residual) - 1;

        // Load masked float elements from query
        __m512 v1 = _mm512_maskz_loadu_ps(mask, pVec1);

        // Load uint8 elements and convert to float
        __m128i v2_128 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(pVec2));
        __m512i v2_512 = _mm512_cvtepu8_epi32(v2_128);
        __m512 v2_f = _mm512_cvtepi32_ps(v2_512);

        // Masked accumulation (the mask already zeroed unused elements in v1)
        dot_sum0 = _mm512_mul_ps(v1, v2_f);
        query_sum0 = v1;

        offset = residual;
    }

    // Calculate the number of full 64-element chunks (4 x 16)
    size_t num_chunks = (dimension - residual) / 64;

    // Process 4 chunks at a time for maximum ILP
    for (size_t i = 0; i < num_chunks; i++) {
        SQ8_InnerProductStep(pVec1 + offset, pVec2 + offset, dot_sum0, query_sum0);
        SQ8_InnerProductStep(pVec1 + offset + 16, pVec2 + offset + 16, dot_sum1, query_sum1);
        SQ8_InnerProductStep(pVec1 + offset + 32, pVec2 + offset + 32, dot_sum2, query_sum2);
        SQ8_InnerProductStep(pVec1 + offset + 48, pVec2 + offset + 48, dot_sum3, query_sum3);
        offset += 64;
    }

    // Handle remaining 16-element chunks (0-3 remaining)
    size_t remaining = (dimension - residual) % 64;
    if (remaining >= 16) {
        SQ8_InnerProductStep(pVec1 + offset, pVec2 + offset, dot_sum0, query_sum0);
        offset += 16;
        remaining -= 16;
    }
    if (remaining >= 16) {
        SQ8_InnerProductStep(pVec1 + offset, pVec2 + offset, dot_sum1, query_sum1);
        offset += 16;
        remaining -= 16;
    }
    if (remaining >= 16) {
        SQ8_InnerProductStep(pVec1 + offset, pVec2 + offset, dot_sum2, query_sum2);
    }

    // Combine accumulators
    __m512 dot_total =
        _mm512_add_ps(_mm512_add_ps(dot_sum0, dot_sum1), _mm512_add_ps(dot_sum2, dot_sum3));
    __m512 query_total =
        _mm512_add_ps(_mm512_add_ps(query_sum0, query_sum1), _mm512_add_ps(query_sum2, query_sum3));

    // Reduce to scalar
    float dot_product = _mm512_reduce_add_ps(dot_total);
    float query_sum = _mm512_reduce_add_ps(query_total);

    // Apply the algebraic formula: IP = δ * Σ(query*val) + min * Σ(query)
    return delta * dot_product + min_val * query_sum;
}

template <unsigned char residual> // 0..15
float SQ8_InnerProductSIMD16_AVX512F_BW_VL_VNNI(const void *pVec1v, const void *pVec2v,
                                                size_t dimension) {
    // The inner product distance is 1 - IP
    return 1.0f - SQ8_InnerProductImp_AVX512<residual>(pVec1v, pVec2v, dimension);
}

template <unsigned char residual> // 0..15
float SQ8_CosineSIMD16_AVX512F_BW_VL_VNNI(const void *pVec1v, const void *pVec2v,
                                          size_t dimension) {
    // Assumes vectors are normalized.
    return SQ8_InnerProductSIMD16_AVX512F_BW_VL_VNNI<residual>(pVec1v, pVec2v, dimension);
}
```