RaBitQ index benchmark results on 2.6 do not match the official website; 2.6 query performance drops sharply compared with 2.4 #47473
Answered by foxspy
lengdanlexin asked this question in Q&A and General discussion
Replies: 1 comment, 6 replies
Hi @lengdanlexin Thank you for the update. Could you please provide details regarding the test environment's architecture? Specifically, I am looking for the instruction set (x64 or ARM) and the CPU specifications. Thank you.
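For context on why the instruction set matters here: RaBitQ-style distance kernels in Milvus benefit heavily from SIMD support (e.g. AVX-512 on x64), which is presumably what this question is probing. A quick way to check on Linux is to inspect the CPU flags; this is a standalone helper of my own, not part of Milvus:

```python
# Minimal sketch: check /proc/cpuinfo for SIMD flags relevant to vector
# distance kernels. Linux-only; flag names are standard cpuinfo tokens.

def has_cpu_flag(flags_line: str, flag: str) -> bool:
    """Return True if `flag` appears as a whole token in a cpuinfo flags line."""
    return flag in flags_line.split()

def read_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Return the first 'flags' line from cpuinfo, or '' if unavailable."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return line.partition(":")[2]
    except OSError:
        pass
    return ""

if __name__ == "__main__":
    flags = read_cpu_flags()
    for f in ("avx2", "avx512f"):
        print(f, has_cpu_flag(flags, f))
```

On an x64 box, `avx512f` present vs absent is a good first thing to report alongside the CPU model.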
Milvus version: 2.6.9
Deployment method: Helm, https://github.com/zilliztech/milvus-helm/tree/milvus-5.0.11
Deployment parameters:
vectordbbench's multi-process design makes it awkward to measure performance, so I had an AI write a Go program instead.
QPS test program
Index QPS test:
Tested on a 16-core, 64 GB machine; the load generator ran on a separate 12-core machine (its CPU utilization stayed below 10%).
Each concurrency level ran for 5 minutes, with a 1-minute gap between rounds and a 30 s warm-up. I watched CPU and memory in Grafana during the test: the querynode's CPU hit its limit once concurrency exceeded 40, while no other service reached a bottleneck.
HNSW (M=8, efConstruction=64, ef=100, nq=1), conclusion: peak is roughly 1979 QPS; above 200 concurrent clients, P99 latency spikes to 795 ms.
IVF_RABITQ (nlist=1024, nprobe=64, rbq_bits_query=0, refine=True, refine_type=SQ8, refine_k=4, nq=1), conclusion: peak is roughly 360 QPS; above 200 concurrent clients, QPS drops to 350, P99 latency spikes to 4.6 s, and a 0.39% error rate plus timeouts appear.
If these results are not detailed enough, I can post a more detailed write-up.
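For anyone reproducing this, the parameters above map onto pymilvus-style index and search specs roughly as follows. The parameter names are taken directly from the post; the collection/field usage shown in the comment is a placeholder, so double-check against your pymilvus and Milvus versions:

```python
# Build-time parameters for the IVF_RABITQ index as described in the post
# (pymilvus dict layout; verify names against your client version).
index_params = {
    "index_type": "IVF_RABITQ",
    "metric_type": "L2",
    "params": {
        "nlist": 1024,        # number of IVF clusters
        "refine": True,       # keep higher-precision codes for re-ranking
        "refine_type": "SQ8", # refine with 8-bit scalar quantization
    },
}

# Search-time knobs from the post (nq=1, k=1 in the benchmark).
search_params = {
    "params": {
        "nprobe": 64,         # clusters scanned per query
        "rbq_bits_query": 0,  # query-side quantization bits (0 = full precision)
        "refine_k": 4,        # re-rank top refine_k * k candidates
    },
}

# Usage would look like (placeholder names, requires a running Milvus):
# collection.create_index("vector", index_params)
# collection.search(data, "vector", search_params, limit=1)
```

With refine enabled, `refine_k` and `nprobe` are the main recall/latency trade-offs at query time.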
The test program simply counts the requests that complete successfully within a fixed window and computes QPS from that.
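The counting approach described above can be sketched like this. This is a generic harness in Python, not the author's Go program; `query_once` is a stub standing in for a real client call:

```python
import threading
import time

def run_qps_test(query_once, concurrency: int, duration_s: float):
    """Fire `query_once` from `concurrency` threads for `duration_s` seconds.

    Returns (qps, error_rate): successful calls per second, and the share of
    calls that raised an exception.
    """
    ok = 0
    err = 0
    lock = threading.Lock()
    deadline = time.monotonic() + duration_s

    def worker():
        nonlocal ok, err
        while time.monotonic() < deadline:
            try:
                query_once()
                with lock:
                    ok += 1
            except Exception:
                with lock:
                    err += 1

    threads = [threading.Thread(target=worker) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = ok + err
    return ok / duration_s, (err / total if total else 0.0)

def _always_fail():
    raise RuntimeError("simulated query failure")

# Example runs with a no-op "query" and an always-failing one:
qps, err_rate = run_qps_test(lambda: None, concurrency=4, duration_s=0.2)
qps_f, err_f = run_qps_test(_always_fail, concurrency=2, duration_s=0.1)
```

In a real run, `query_once` would issue one search against the collection, and `concurrency` would be swept (e.g. 10, 40, 100, 200) per round, as in the benchmark above.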
According to the conclusions in the official blog post https://milvus.io/blog/bring-vector-compression-to-the-extreme-how-milvus-serves-3%C3%97-more-queries-with-rabitq.md, IVF_RABITQ should have the higher QPS, but in my tests IVF_FLAT was actually higher. Could you help me check whether I misconfigured something?
QPS comparison: 2.4 vs 2.6
The other issue: I benchmarked QPS separately on 2.4.17 and 2.6.9 (12 cores, unlimited memory, single instance, M=8, efConstruction=64, ef=100, k=1):
HNSW
2.6 :
2.4: