Description
Hello team,
We are observing abnormal resource utilization behavior on our Trickster pods. The following details summarize the issue:
Version
- Trickster v2.0.0-beta3
- Commit: 2025/12/08 - 46c4ca5
Issue Description
We have a dashboard configured to auto-refresh every second. Each refresh goes through the Trickster ALB, which fans the query out to our Mimir backends (~50) and returns approximately 108 series and ~70 kB of response data. We are using Redis as the caching backend.
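For reference, the sketch below approximates this request pattern against Trickster (a standard Prometheus `/api/v1/query_range` call once per second). The URL, port, query, and time window are placeholders, not our actual dashboard values:

```go
// repro_load.go: minimal sketch of the dashboard's 1 s auto-refresh pattern.
// TRICKSTER_URL, the port, and the query below are placeholders; adjust them
// to your own deployment and dashboard query.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"time"
)

func main() {
	base := os.Getenv("TRICKSTER_URL") // e.g. the Trickster ALB endpoint
	if base == "" {
		base = "http://localhost:8480" // assumed local port; adjust as needed
	}

	for {
		now := time.Now()
		q := url.Values{}
		q.Set("query", `up`) // placeholder query; the real dashboard returns ~108 series
		q.Set("start", fmt.Sprintf("%d", now.Add(-15*time.Minute).Unix()))
		q.Set("end", fmt.Sprintf("%d", now.Unix()))
		q.Set("step", "15")

		resp, err := http.Get(base + "/api/v1/query_range?" + q.Encode())
		if err != nil {
			fmt.Println("request error:", err)
		} else {
			// Drain and close the body so the connection can be reused.
			n, _ := io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			fmt.Printf("%s status=%d bytes=%d\n", now.Format(time.RFC3339), resp.StatusCode, n)
		}

		time.Sleep(1 * time.Second) // matches the dashboard's 1 s auto-refresh
	}
}
```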
However, under this workload, we observe a significant and continuous increase in CPU and memory consumption within the Trickster pod. Specifically, the following metrics show sharp upward trends over time:
- go_memstats_heap_sys_bytes
- go_memstats_alloc_bytes
- go_goroutines
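A small scraper sketch that can be used to watch these three gauges directly from the pod's Prometheus exposition endpoint; the metrics URL/port below is an assumption, so point it at whatever metrics port your Trickster deployment exposes:

```go
// track_leak.go: poll the Trickster /metrics endpoint and print the three
// runtime gauges listed above so their trend over time is visible.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

var watched = []string{
	"go_goroutines",
	"go_memstats_alloc_bytes",
	"go_memstats_heap_sys_bytes",
}

// scrape fetches the exposition text and returns the raw values of the
// watched metrics (these gauges carry no labels, so each line is "name value").
func scrape(url string) (map[string]string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	out := make(map[string]string)
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip HELP/TYPE comment lines
		}
		for _, m := range watched {
			if strings.HasPrefix(line, m+" ") {
				out[m] = strings.TrimPrefix(line, m+" ")
			}
		}
	}
	return out, sc.Err()
}

func main() {
	url := os.Getenv("METRICS_URL")
	if url == "" {
		url = "http://localhost:8481/metrics" // assumed metrics port; adjust as needed
	}
	for {
		vals, err := scrape(url)
		if err != nil {
			fmt.Println("scrape error:", err)
		} else {
			fmt.Printf("%s goroutines=%s alloc_bytes=%s heap_sys_bytes=%s\n",
				time.Now().Format(time.RFC3339),
				vals["go_goroutines"],
				vals["go_memstats_alloc_bytes"],
				vals["go_memstats_heap_sys_bytes"])
		}
		time.Sleep(30 * time.Second)
	}
}
```

Under a steady 1 req/s load, all three values should plateau; in our case they climb continuously instead.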
Resource usage continues to climb until the underlying Kubernetes node approaches CPU and memory saturation, ultimately causing the Trickster pod to crash or be evicted.
Impact
- Severe CPU and memory spikes on Trickster pods
- Progressive resource escalation leading to node-level saturation
- Pod crash / OOM kill


