Commit 3dc0af2

DOC-5424 added query examples

1 parent 96e9ef4 commit 3dc0af2

1 file changed: content/develop/data-types/timeseries/_index.md

Lines changed: 278 additions & 52 deletions

@@ -105,7 +105,6 @@ for queries and aggregations.
.
```

## Adding data points

You can add individual data points with [`TS.ADD`]({{< relref "commands/ts.add/" >}}),

@@ -123,6 +122,233 @@ Unix time, as reported by the server's clock.

3) (integer) 2
```
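
The examples in this guide use the `redis-cli` prompt, but the same commands are available from client libraries. The snippets below use Python with redis-py as a rough sketch; they assume redis-py 5.x with the RedisTimeSeries commands exposed through `Redis.ts()` and a local Redis Stack server, so adapt them to your own client and connection details. Adding data points might look like this:

```python
import redis

# Assumption: a local Redis Stack (or RedisTimeSeries) server on the default port.
r = redis.Redis(decode_responses=True)
ts = r.ts()

# TS.ADD: add a single data point (illustrative key, timestamp, and value).
ts.add("thermometer:2", 1, 9.7)

# TS.MADD: add several data points, possibly across different series.
ts.madd([("thermometer:1", 2, 10.1), ("thermometer:2", 2, 10.3)])
```
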
## Querying data points

Use [`TS.GET`]({{< relref "commands/ts.get/" >}}) to retrieve the last data point
added to a time series. This returns both the timestamp and the value.

```bash
# The last recorded temperature for thermometer:2
# was 10.3 on day 2.
> TS.GET thermometer:2
1) (integer) 2
2) 10.3
```

Use [`TS.RANGE`]({{< relref "commands/ts.range/" >}}) to retrieve data points
from a time series that fall within a given timestamp range. The range is inclusive,
meaning that samples whose timestamp equals the start or end of the range are included.
You can use `-` and `+` as the start and end of the range, respectively, to
indicate the minimum and maximum timestamps in the series. The response is
an array of timestamp-value pairs returned in ascending order by timestamp.
If you want the results in descending order, use [`TS.REVRANGE`]({{< relref "commands/ts.revrange/" >}}) with the same parameters.

```bash
# Add 5 data points to a rain gauge time series.
> TS.CREATE rg:1
OK
> TS.MADD rg:1 0 18 rg:1 1 14 rg:1 2 22 rg:1 3 18 rg:1 4 24
1) (integer) 0
2) (integer) 1
3) (integer) 2
4) (integer) 3
5) (integer) 4

# Retrieve all the data points in ascending order.
> TS.RANGE rg:1 - +
1) 1) (integer) 0
   2) 18
2) 1) (integer) 1
   2) 14
3) 1) (integer) 2
   2) 22
4) 1) (integer) 3
   2) 18
5) 1) (integer) 4
   2) 24

# Retrieve data points up to day 1 (inclusive).
> TS.RANGE rg:1 - 1
1) 1) (integer) 0
   2) 18
2) 1) (integer) 1
   2) 14

# Retrieve data points from day 3 onwards.
> TS.RANGE rg:1 3 +
1) 1) (integer) 3
   2) 18
2) 1) (integer) 4
   2) 24

# Retrieve all the data points in descending order.
> TS.REVRANGE rg:1 - +
1) 1) (integer) 4
   2) 24
2) 1) (integer) 3
   2) 18
3) 1) (integer) 2
   2) 22
4) 1) (integer) 1
   2) 14
5) 1) (integer) 0
   2) 18

# Retrieve data points up to day 1 (inclusive), but
# return them in descending order.
> TS.REVRANGE rg:1 - 1
1) 1) (integer) 1
   2) 14
2) 1) (integer) 0
   2) 18
```
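
As a minimal sketch of the same queries from Python (same assumptions as the earlier redis-py snippet; the return shapes shown in the comments are indicative):

```python
import redis

ts = redis.Redis(decode_responses=True).ts()  # assumes a local Redis Stack server

# TS.GET: the last data point as a (timestamp, value) pair, e.g. (4, 24.0).
last = ts.get("rg:1")

# TS.RANGE / TS.REVRANGE: "-" and "+" stand for the series' minimum and
# maximum timestamps, exactly as on the CLI.
ascending = ts.range("rg:1", "-", "+")      # [(0, 18.0), (1, 14.0), ...]
descending = ts.revrange("rg:1", "-", 1)    # up to timestamp 1, newest first

print(last, ascending, descending)
```
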
Both `TS.RANGE` and `TS.REVRANGE` also let you filter the returned results. Specify
a list of timestamps to include only samples with those exact timestamps
(you must still specify the timestamp range parameters if you
use this option). Specify a minimum and maximum value to include only
samples within that range. The value range is inclusive, and you can
use the same value for the minimum and maximum to filter for a single value.

```bash
> TS.RANGE rg:1 - + FILTER_BY_TS 0 2 4
1) 1) (integer) 0
   2) 18
2) 1) (integer) 2
   2) 22
3) 1) (integer) 4
   2) 24
> TS.REVRANGE rg:1 - + FILTER_BY_TS 0 2 4 FILTER_BY_VALUE 20 25
1) 1) (integer) 4
   2) 24
2) 1) (integer) 2
   2) 22
> TS.REVRANGE rg:1 - + FILTER_BY_TS 0 2 4 FILTER_BY_VALUE 22 22
1) 1) (integer) 2
   2) 22
```
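
In redis-py, these filters appear as optional keyword arguments on `range()` and `revrange()`. The keyword names below (`filter_by_ts`, `filter_by_min_value`, `filter_by_max_value`) are an assumption about the client's API, so check your client's reference if they don't match:

```python
import redis

ts = redis.Redis(decode_responses=True).ts()  # assumes a local Redis Stack server

# FILTER_BY_TS: keep only the samples at timestamps 0, 2, and 4.
by_ts = ts.range("rg:1", "-", "+", filter_by_ts=[0, 2, 4])

# FILTER_BY_TS + FILTER_BY_VALUE: same timestamps, values between 20 and 25.
by_value = ts.revrange(
    "rg:1", "-", "+",
    filter_by_ts=[0, 2, 4],
    filter_by_min_value=20,
    filter_by_max_value=25,
)

print(by_ts, by_value)
```
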
### Querying multiple time series

The `TS.GET`, `TS.RANGE`, and `TS.REVRANGE` commands also have corresponding
[`TS.MGET`]({{< relref "commands/ts.mget/" >}}),
[`TS.MRANGE`]({{< relref "commands/ts.mrange/" >}}), and
[`TS.MREVRANGE`]({{< relref "commands/ts.mrevrange/" >}}) versions that
operate on multiple time series. `TS.MGET` returns the last data point added
to each time series, while `TS.MRANGE` and `TS.MREVRANGE`
return data points from a range of timestamps in each time series.

The parameters are mostly the same, except that the multiple time series
commands don't take a key name as the first parameter. Instead, you
specify a filter expression to include only time series with
specific labels. (See [Adding data points](#adding-data-points)
above to learn how to add labels to a time series.) The filter expressions
use a simple syntax that lets you include or exclude time series based on
the presence or value of a label. See the description in the
[`TS.MGET`]({{< relref "commands/ts.mget#required-arguments" >}}) command reference
for details of the filter syntax. You can also request that
data points be returned with all their labels or with a selected subset of them.

```bash
# Create three new rain gauge time series, two in the US
# and one in the UK, with different units, and add some
# data points.
> TS.CREATE rg:2 LABELS location us unit cm
OK
> TS.CREATE rg:3 LABELS location us unit in
OK
> TS.CREATE rg:4 LABELS location uk unit mm
OK
> TS.MADD rg:2 0 1.8 rg:3 0 0.9 rg:4 0 25
1) (integer) 0
2) (integer) 0
3) (integer) 0
> TS.MADD rg:2 1 2.1 rg:3 1 0.77 rg:4 1 18
1) (integer) 1
2) (integer) 1
3) (integer) 1
> TS.MADD rg:2 2 2.3 rg:3 2 1.1 rg:4 2 21
1) (integer) 2
2) (integer) 2
3) (integer) 2
> TS.MADD rg:2 3 1.9 rg:3 3 0.81 rg:4 3 19
1) (integer) 3
2) (integer) 3
3) (integer) 3
> TS.MADD rg:2 4 1.78 rg:3 4 0.74 rg:4 4 23
1) (integer) 4
2) (integer) 4
3) (integer) 4

# Retrieve the last data point from each US rain gauge. If
# you don't specify any labels, an empty array is returned
# for the labels.
> TS.MGET FILTER location=us
1) 1) "rg:2"
   2) (empty array)
   3) 1) (integer) 4
      2) 1.78
2) 1) "rg:3"
   2) (empty array)
   3) 1) (integer) 4
      2) 7.4E-1

# Retrieve the same data points, but include the `unit`
# label in the results.
> TS.MGET SELECTED_LABELS unit FILTER location=us
1) 1) "rg:2"
   2) 1) 1) "unit"
         2) "cm"
   3) 1) (integer) 4
      2) 1.78
2) 1) "rg:3"
   2) 1) 1) "unit"
         2) "in"
   3) 1) (integer) 4
      2) 7.4E-1

# Retrieve data points up to day 2 (inclusive) from all
# rain gauges that report in millimeters. Include all
# labels in the results.
> TS.MRANGE - 2 WITHLABELS FILTER unit=mm
1) 1) "rg:4"
   2) 1) 1) "location"
         2) "uk"
      2) 1) "unit"
         2) "mm"
   3) 1) 1) (integer) 0
         2) 25
      2) 1) (integer) 1
         2) 18
      3) 1) (integer) 2
         2) 21

# Retrieve data points from day 1 to day 3 (inclusive) from
# all rain gauges that report in centimeters or millimeters,
# but only return the `location` label. Return the results
# in descending order of timestamp.
> TS.MREVRANGE 1 3 SELECTED_LABELS location FILTER unit=(cm,mm)
1) 1) "rg:2"
   2) 1) 1) "location"
         2) "us"
   3) 1) 1) (integer) 3
         2) 1.9
      2) 1) (integer) 2
         2) 2.3
      3) 1) (integer) 1
         2) 2.1
2) 1) "rg:4"
   2) 1) 1) "location"
         2) "uk"
   3) 1) 1) (integer) 3
         2) 19
      2) 1) (integer) 2
         2) 21
      3) 1) (integer) 1
         2) 18
```
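
A rough redis-py equivalent of these multi-series queries is sketched below. The filter expressions are passed as a list of strings; `with_labels` and `select_labels` are assumptions about the keyword names, so verify them against your client version:

```python
import redis

ts = redis.Redis(decode_responses=True).ts()  # assumes a local Redis Stack server

# TS.MGET FILTER location=us
last_us = ts.mget(["location=us"])

# TS.MRANGE - 2 WITHLABELS FILTER unit=mm
mm_ranges = ts.mrange("-", 2, ["unit=mm"], with_labels=True)

# TS.MREVRANGE 1 3 SELECTED_LABELS location FILTER unit=(cm,mm)
cm_mm_desc = ts.mrevrange(1, 3, ["unit=(cm,mm)"], select_labels=["location"])

print(last_us, mm_ranges, cm_mm_desc)
```
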
## Deleting data points

Use [`TS.DEL`]({{< relref "commands/ts.del/" >}}) to delete data points

@@ -177,57 +403,6 @@ If you want to delete a single timestamp, use it as both the start and end of th
.
```
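
As a hedged redis-py sketch of the same deletion (assuming the client exposes `TS.DEL` as a `delete()` method; the key name is illustrative):

```python
import redis

ts = redis.Redis(decode_responses=True).ts()  # assumes a local Redis Stack server

# TS.DEL: delete samples with timestamps 10..20 (inclusive) and return
# the number of samples removed.
removed = ts.delete("thermometer:1", 10, 20)

# Delete a single sample by using its timestamp as both start and end.
ts.delete("thermometer:1", 15, 15)

print(removed)
```
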
**Note:** When a sample is deleted, the affected bucket in every downsampled (compacted) time series is recalculated. However, if part of that bucket has already been removed because it falls outside the retention period, the full bucket can't be recalculated, so the delete operation is refused.

## Compaction

A time series can become large if samples are added very frequently. Instead
of dealing with individual samples, it is sometimes useful to split the full
time range of the series into equal-sized "buckets" and represent each
bucket by an aggregate value, such as the average or maximum value.
Reducing the number of data points in this way is known as *compaction*.

For example, if you expect to collect more than one billion data points in a day, you could aggregate the data using buckets of one minute. Since each bucket is represented by a single value, this compacts the dataset to 1,440 data points (24 hours x 60 minutes = 1,440 one-minute buckets).

Use [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}}) to create a new
compacted time series from an existing one, leaving the original series unchanged.
Specify a duration for each bucket and an aggregation function to apply to each bucket.
The available aggregation functions are:

- `avg`: Arithmetic mean of all values
- `sum`: Sum of all values
- `min`: Minimum value
- `max`: Maximum value
- `range`: Difference between the highest and the lowest value
- `count`: Number of values
- `first`: Value with the lowest timestamp in the bucket
- `last`: Value with the highest timestamp in the bucket
- `std.p`: Population standard deviation of the values
- `std.s`: Sample standard deviation of the values
- `var.p`: Population variance of the values
- `var.s`: Sample variance of the values
- `twa`: Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8)

Note that the original time series is never rewritten; the compaction is written to a new series, while the original stays the same. To prevent the original time series from growing indefinitely, you can use the retention option, which trims it down to a fixed period of time.

**Note:** You must create the destination (compacted) time series before creating the rule.

```
TS.CREATERULE sourceKey destKey AGGREGATION aggregationType bucketDuration
```

Example:

```
TS.CREATE sensor1_compacted # Create the destination timeseries first
TS.CREATERULE sensor1 sensor1_compacted AGGREGATION avg 60000 # Create the rule
```

With this rule in place, data points added to the `sensor1` time series are grouped into buckets of 60 seconds (60000 ms), averaged, and saved to the `sensor1_compacted` time series.

## Filtering

You can filter your time series by value, timestamp and labels:

@@ -333,6 +508,57 @@ To find minimum temperature per region, for example, we can run:
TS.MRANGE - + FILTER region=(east,west) GROUPBY region REDUCE min
```

**Note:** When a sample is deleted, the affected bucket in every downsampled (compacted) time series is recalculated. However, if part of that bucket has already been removed because it falls outside the retention period, the full bucket can't be recalculated, so the delete operation is refused.

## Compaction

A time series can become large if samples are added very frequently. Instead
of dealing with individual samples, it is sometimes useful to split the full
time range of the series into equal-sized "buckets" and represent each
bucket by an aggregate value, such as the average or maximum value.
Reducing the number of data points in this way is known as *compaction*.

For example, if you expect to collect more than one billion data points in a day, you could aggregate the data using buckets of one minute. Since each bucket is represented by a single value, this compacts the dataset to 1,440 data points (24 hours x 60 minutes = 1,440 one-minute buckets).

Use [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}}) to create a new
compacted time series from an existing one, leaving the original series unchanged.
Specify a duration for each bucket and an aggregation function to apply to each bucket.
The available aggregation functions are:

- `avg`: Arithmetic mean of all values
- `sum`: Sum of all values
- `min`: Minimum value
- `max`: Maximum value
- `range`: Difference between the highest and the lowest value
- `count`: Number of values
- `first`: Value with the lowest timestamp in the bucket
- `last`: Value with the highest timestamp in the bucket
- `std.p`: Population standard deviation of the values
- `std.s`: Sample standard deviation of the values
- `var.p`: Population variance of the values
- `var.s`: Sample variance of the values
- `twa`: Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8)

Note that the original time series is never rewritten; the compaction is written to a new series, while the original stays the same. To prevent the original time series from growing indefinitely, you can use the retention option, which trims it down to a fixed period of time.

**Note:** You must create the destination (compacted) time series before creating the rule.

```
TS.CREATERULE sourceKey destKey AGGREGATION aggregationType bucketDuration
```

Example:

```
TS.CREATE sensor1_compacted # Create the destination timeseries first
TS.CREATERULE sensor1 sensor1_compacted AGGREGATION avg 60000 # Create the rule
```

With this rule in place, data points added to the `sensor1` time series are grouped into buckets of 60 seconds (60000 ms), averaged, and saved to the `sensor1_compacted` time series.

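
The rule can also be created from redis-py, sketched below under the same assumptions as the earlier snippets (the aggregation is passed as a string and the bucket duration in milliseconds; the expected output is indicative):

```python
import redis

ts = redis.Redis(decode_responses=True).ts()  # assumes a local Redis Stack server

# Create the source and the destination (compacted) series, then the rule:
# average the samples of sensor1 into 60-second buckets.
ts.create("sensor1")
ts.create("sensor1_compacted")
ts.createrule("sensor1", "sensor1_compacted", "avg", 60000)

# Samples added to sensor1 now feed the compaction. A bucket is written to
# sensor1_compacted once it closes, that is, when a sample lands in a later bucket.
ts.add("sensor1", 1000, 22.5)
ts.add("sensor1", 20000, 23.5)
ts.add("sensor1", 61000, 24.0)   # closes the first 60-second bucket

print(ts.range("sensor1_compacted", "-", "+"))   # e.g. [(0, 23.0)]
```
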
## Using with other metrics tools

In the [RedisTimeSeries](https://github.com/RedisTimeSeries) GitHub organization you can
