
Commit 374dfa6

Mimir query engine: fix panic in min and max over series containing only histograms (#9191)
* Mimir query engine: fix panic in `min` / `max` over series containing only histograms

  # Conflicts:
  #   pkg/streamingpromql/testdata/ours/aggregators.test

* Add changelog entry
1 parent: ef8d30a

3 files changed: +13 -3 lines changed


CHANGELOG.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@
 * [CHANGE] Ruler: Removed `-ruler.drain-notification-queue-on-shutdown` option, which is now enabled by default. #9115
 * [CHANGE] Querier: allow wrapping errors with context errors only when the former actually correspond to `context.Canceled` and `context.DeadlineExceeded`. #9175
 * [FEATURE] Alertmanager: Added `-alertmanager.log-parsing-label-matchers` to control logging when parsing label matchers. This flag is intended to be used with `-alertmanager.utf8-strict-mode-enabled` to validate UTF-8 strict mode is working as intended. The default value is `false`. #9173
-* [FEATURE] Querier: add experimental streaming PromQL engine, enabled with `-querier.query-engine=mimir`. #8422 #8430 #8454 #8455 #8360 #8490 #8508 #8577 #8660 #8671 #8677 #8747 #8850 #8872 #8838 #8911 #8909 #8923 #8924 #8925 #8932 #8933 #8934 #8962 #8986 #8993 #8995 #9008 #9017 #9018 #9019 #9120 #9121 #9136 #9139 #9145
+* [FEATURE] Querier: add experimental streaming PromQL engine, enabled with `-querier.query-engine=mimir`. #8422 #8430 #8454 #8455 #8360 #8490 #8508 #8577 #8660 #8671 #8677 #8747 #8850 #8872 #8838 #8911 #8909 #8923 #8924 #8925 #8932 #8933 #8934 #8962 #8986 #8993 #8995 #9008 #9017 #9018 #9019 #9120 #9121 #9136 #9139 #9145 #9191
 * [FEATURE] Experimental Kafka-based ingest storage. #6888 #6894 #6929 #6940 #6951 #6974 #6982 #7029 #7030 #7091 #7142 #7147 #7148 #7153 #7160 #7193 #7349 #7376 #7388 #7391 #7393 #7394 #7402 #7404 #7423 #7424 #7437 #7486 #7503 #7508 #7540 #7621 #7682 #7685 #7694 #7695 #7696 #7697 #7701 #7733 #7734 #7741 #7752 #7838 #7851 #7871 #7877 #7880 #7882 #7887 #7891 #7925 #7955 #7967 #8031 #8063 #8077 #8088 #8135 #8176 #8184 #8194 #8216 #8217 #8222 #8233 #8503 #8542 #8579 #8657 #8686 #8688 #8703 #8706 #8708 #8738 #8750 #8778 #8808 #8809 #8841 #8842 #8845 #8853 #8886 #8988
   * What it is:
     * When the new ingest storage architecture is enabled, distributors write incoming write requests to a Kafka-compatible backend, and the ingesters asynchronously replay ingested data from Kafka. In this architecture, the write and read path are de-coupled through a Kafka-compatible backend. The write path and Kafka load is a function of the incoming write traffic, the read path load is a function of received queries. Whatever the load on the read path, it doesn't affect the write path.
```

pkg/streamingpromql/aggregations/min_max.go

Lines changed: 5 additions & 2 deletions
```diff
@@ -48,7 +48,10 @@ func (g *MinMaxAggregationGroup) minAccumulatePoint(idx int64, f float64) {
 }
 
 func (g *MinMaxAggregationGroup) AccumulateSeries(data types.InstantVectorSeriesData, steps int, start int64, interval int64, memoryConsumptionTracker *limiting.MemoryConsumptionTracker, _ functions.EmitAnnotationFunc) error {
-	if len(data.Floats) > 0 && g.floatValues == nil {
+	if (len(data.Floats) > 0 || len(data.Histograms) > 0) && g.floatValues == nil {
+		// Even if we have histograms, we have to populate the float slices, as we'll treat histograms as if they have value 0.
+		// This is consistent with Prometheus but may not be the desired value: https://github.com/prometheus/prometheus/issues/14711
+
 		var err error
 		// First series with float values for this group, populate it.
 		g.floatValues, err = types.Float64SlicePool.Get(steps, memoryConsumptionTracker)
@@ -73,7 +76,7 @@ func (g *MinMaxAggregationGroup) AccumulateSeries(data types.InstantVectorSeries
 	}
 
 	// If a histogram exists max treats it as 0. We have to detect this here so that we return a 0 value instead of nothing.
-	// This is consistent with prometheus but may not be desired value: https://github.com/prometheus/prometheus/issues/14711
+	// This is consistent with Prometheus but may not be the desired value: https://github.com/prometheus/prometheus/issues/14711
 	for _, p := range data.Histograms {
 		idx := (p.T - start) / interval
 		g.accumulatePoint(idx, 0)
```
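For readers following the change: the panic came from the lazy allocation guard above. Below is a minimal, self-contained Go sketch of the before/after behaviour. The `seriesData` and `maxGroup` types are simplified stand-ins, not Mimir's actual `types.InstantVectorSeriesData` or `MinMaxAggregationGroup`, and the sketch assumes one sample per step instead of deriving the index from timestamps.

```go
// Minimal sketch of the fix, using simplified stand-ins for Mimir's types.
package main

import "fmt"

type seriesData struct {
	Floats     []float64  // float sample values, one per step
	Histograms []struct{} // placeholders for native histogram samples
}

type maxGroup struct {
	values  []float64
	present []bool
}

func (g *maxGroup) accumulateSeries(data seriesData, steps int) {
	// Before the fix this guard only checked len(data.Floats) > 0, so a
	// histogram-only series left both slices nil and the loops below panicked.
	if (len(data.Floats) > 0 || len(data.Histograms) > 0) && g.values == nil {
		g.values = make([]float64, steps)
		g.present = make([]bool, steps)
	}

	for i, f := range data.Floats {
		if !g.present[i] || f > g.values[i] { // max semantics
			g.values[i], g.present[i] = f, true
		}
	}

	// Histogram samples are treated as if they have value 0, consistent with
	// Prometheus (see prometheus/prometheus#14711).
	for i := range data.Histograms {
		if !g.present[i] || 0 > g.values[i] {
			g.values[i], g.present[i] = 0, true
		}
	}
}

func main() {
	g := &maxGroup{}
	g.accumulateSeries(seriesData{Histograms: make([]struct{}, 3)}, 3)
	fmt.Println(g.values) // [0 0 0]
}
```

Running this prints `[0 0 0]`; with the old guard (`len(data.Floats) > 0` only), the same histogram-only input would hit a nil slice and panic with an index-out-of-range error.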

pkg/streamingpromql/testdata/ours/aggregators.test

Lines changed: 7 additions & 0 deletions
```diff
@@ -175,13 +175,20 @@ load 1m
   series{label="value"} -10 -10 -10 10 -10 {{schema:1 sum:10 count:5 buckets:[1 3 1]}}
   series{label="value2"} 0 -9 {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}}
   series{label="value3"} -20 -9 -9 {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}}
+  histogram_only_series {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}} {{schema:1 sum:5 count:5 buckets:[1 3 1]}}
 
 eval range from 0 to 5m step 1m max(series)
   {} 0 -9 0 10 0 0
 
 eval range from 0 to 5m step 1m min(series)
   {} -20 -10 -10 0 -10 0
 
+eval range from 0 to 2m step 1m max(histogram_only_series)
+  {} 0 0 0
+
+eval range from 0 to 2m step 1m min(histogram_only_series)
+  {} 0 0 0
+
 clear
 
 # Test native histogram compaction
```
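The expected values in the new test cases follow directly from the index mapping shown in the Go diff above (`idx := (p.T - start) / interval`): each of the three histogram samples loaded at a 1-minute interval lands on its own step and contributes the value 0, so both `max` and `min` return `{} 0 0 0`. A tiny illustrative sketch of that mapping (the millisecond timestamps, start, and step below are assumptions matching the `load 1m` / `step 1m` setup, not values pulled from the test harness):

```go
// Illustrative only: map the three histogram samples of histogram_only_series
// onto range-query steps and treat each as value 0, which is why both
// max(histogram_only_series) and min(histogram_only_series) evaluate to {} 0 0 0.
package main

import "fmt"

func main() {
	const (
		start    = int64(0)      // range start (t = 0), in milliseconds
		interval = int64(60_000) // 1m step, in milliseconds
		steps    = 3
	)

	// Timestamps of the three histogram samples loaded at 0m, 1m and 2m.
	sampleTimestamps := []int64{0, 60_000, 120_000}

	values := make([]float64, steps)
	for _, t := range sampleTimestamps {
		idx := (t - start) / interval
		values[idx] = 0 // a histogram sample contributes 0 to min and max
	}

	fmt.Println(values) // [0 0 0]
}
```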
