
Commit ed84553

ashwinkumar12345 authored and committed
added aggregation to query
1 parent 5a04a3a commit ed84553

File tree

1 file changed, +8 -5 lines changed


docs/ism/index-rollups.md

Lines changed: 8 additions & 5 deletions
@@ -62,7 +62,7 @@ Review your configuration and select **Create**.
 
 You can use the standard `_search` API to search the target index. Make sure that the query matches the constraints of the target index. For example, if you don’t set up terms aggregations on a field, you don’t receive results for terms aggregations. If you don’t set up maximum aggregations, you don’t receive results for maximum aggregations.
 
-You can’t access the internal structure of the data in the target index because the plugin automatically rewrites the query in the background to suit the target index. This is to make sure you can use the same query for the source and target index.
+You can’t access the internal structure of the data in the target index because the plugin automatically rewrites the query in the background to suit the target index. This is to make sure you can use the same query for the source and target index.
 
 To query the target index, set `size` to 0:
 
@@ -71,13 +71,16 @@ GET target_index/_search
 {
   "size": 0,
   "query": {
-    "term": {
-      "timezone": "America/Los_Angeles"
+    "match_all": {}
+  },
+  "aggs": {
+    "avg_cpu": {
+      "avg": {
+        "field": "cpu_usage"
+      }
     }
   }
 }
 ```
 
-You can also search both your source and target indices in the same query.
-
 Consider a scenario where you collect rolled up data from 1 PM to 9 PM in hourly intervals and live data from 7 PM to 11 PM in minutely intervals. If you execute an aggregation over these in the same query, for 7 PM to 9 PM, you see an overlap of both rolled up data and live data because they get counted twice in the aggregations.
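The sentence removed above noted that you can also search both your source and target indices in the same query. A minimal sketch of such a combined request, assuming the live data lives in `source_index`, the rollup target is `target_index`, and `cpu_usage` is a rolled-up metric (all of these names are illustrative, and combined live/rollup search must be supported by your plugin version):

```
GET source_index,target_index/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "avg_cpu": {
      "avg": {
        "field": "cpu_usage"
      }
    }
  }
}
```

In the 7 PM to 9 PM window from the scenario above, this average would be computed over both the rolled-up documents and the live documents, which is the double counting the paragraph describes.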
