The memory usage of greptimedb-datanode is too high! #6536
Replies: 1 comment 1 reply
-
Hello, you have filed a few issues already. At this stage, simply pointing Prometheus at GreptimeDB without pre-creating tables and partitions is unlikely to give ideal results. We are improving Prometheus metrics ingestion, especially for distributed clusters; you can expect that in a release within the next 2-3 months. v0.15 itself does not make significant improvements in this area, only some improvements to PromQL query performance and compatibility. Thanks for your interest.
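For context, a rough sketch of what "pre-creating tables and partitions" could look like, assuming GreptimeDB's MySQL-compatible endpoint (default port 4002) and the `PARTITION ON COLUMNS` DDL syntax available in recent versions. The metric name, tag columns, partition bounds, and connection details are all hypothetical and would need to match your actual label set and cluster layout; this is not an official recommendation from the maintainers.

```python
# Hypothetical sketch: pre-create a partitioned table for one Prometheus metric
# before enabling remote write, instead of relying on auto-created tables.
# Assumes GreptimeDB's MySQL-compatible endpoint and PARTITION ON COLUMNS
# syntax; column names follow the convention used for remote-write tables
# (greptime_timestamp / greptime_value), but verify against your version.
import pymysql

DDL = """
CREATE TABLE IF NOT EXISTS node_cpu_seconds_total (
    greptime_timestamp TIMESTAMP TIME INDEX,
    host STRING,
    mode STRING,
    greptime_value DOUBLE,
    PRIMARY KEY (host, mode)
)
PARTITION ON COLUMNS (host) (
    host < 'host-100',
    host >= 'host-100' AND host < 'host-200',
    host >= 'host-200'
)
"""

# Connection parameters are placeholders; GreptimeDB accepts MySQL-protocol
# clients on port 4002 by default, typically without authentication.
conn = pymysql.connect(host="greptimedb-frontend", port=4002, user="root", password="")
try:
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
finally:
    conn.close()
```

The partition bounds here split rows by the `host` tag so that regions can be spread across datanodes; in practice you would choose a tag with stable, roughly even cardinality.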
-
What type of enhancement is this?
Performance
What does the enhancement do?
GreptimeDB is deployed inside a Kubernetes cluster and ingests data from Prometheus, retaining 12 days of data. The datanode's memory usage is extremely high, peaking at more than 100 GB, and the process keeps getting OOM-killed. The GreptimeDB version is the latest, v0.15.0.
Implementation challenges
Performance is very poor.