Use [`TS.DEL`]({{< relref "commands/ts.del/" >}}) to delete data points.

If you want to delete a single timestamp, use it as both the start and end of the range.
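
For example, to delete the sample at one specific timestamp (the key name and timestamp below are illustrative), pass the same timestamp as both ends of the range:

```
TS.DEL sensor1 1657000000000 1657000000000   # Start and end are equal, so only this sample is deleted
```
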
## Filtering
You can filter your time series by value, timestamp and labels:
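
As an illustrative sketch (the key name and thresholds here are hypothetical), a range query can be restricted by value or by timestamp like this:

```
TS.RANGE sensor1 - + FILTER_BY_VALUE 20 25        # Only samples whose value lies between 20 and 25
TS.RANGE sensor1 - + FILTER_BY_TS 1657000000000   # Only the sample with this exact timestamp
```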

To find minimum temperature per region, for example, we can run:

```
TS.MRANGE - + FILTER region=(east,west) GROUPBY region REDUCE min
```
**Note:** When a sample is deleted, the affected bucket in every downsampled time series is recalculated. However, if part of that bucket has already been removed because it falls outside the retention period, the full bucket can no longer be recalculated, so the delete operation is refused in such cases.
## Compaction

A time series can become large if samples are added very frequently. Instead
of dealing with individual samples, it is sometimes useful to split the full
time range of the series into equal-sized "buckets" and represent each
bucket by an aggregate value, such as the average or maximum value.
Reducing the number of data points in this way is known as *compaction*.

For example, if you expect to collect more than one billion data points in a day, you could aggregate the data using buckets of one minute. Since each bucket is represented by a single value, this compacts the dataset size to 1,440 data points (24 hours × 60 minutes = 1,440 minutes).

Use [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}}) to create a new
compacted time series from an existing one, leaving the original series unchanged.
Specify a duration for each bucket and an aggregation function to apply to each bucket.
The available aggregation functions are:

- `avg`: Arithmetic mean of all values
- `sum`: Sum of all values
- `min`: Minimum value
- `max`: Maximum value
- `range`: Difference between the highest and the lowest value
- `count`: Number of values
- `first`: Value with lowest timestamp in the bucket
- `last`: Value with highest timestamp in the bucket
- `std.p`: Population standard deviation of the values
- `std.s`: Sample standard deviation of the values
- `var.p`: Population variance of the values
- `var.s`: Sample variance of the values
- `twa`: Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8)

Note that the original time series is never rewritten: the compaction happens in a new series, while the original stays unchanged. To prevent the original series from growing indefinitely, you can use the retention option, which trims it down to a given period of time.
**NOTE:** You need to create the destination (the compacted) time series before creating the rule.

```
TS.CREATE sensor1_compacted # Create the destination timeseries first
TS.CREATERULE sensor1 sensor1_compacted AGGREGATION avg 60000 # Create the rule
```
With this compaction rule, data points added to the `sensor1` time series are grouped into buckets of 60 seconds (60000 ms), averaged, and saved in the `sensor1_compacted` time series.
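
As a rough sketch of how the rule behaves (the timestamps and values below are made up), note that a bucket is written to the compacted series only once a sample arrives for a later bucket:

```
TS.ADD sensor1 1000 25.0         # First sample in the bucket covering 0-59999 ms
TS.ADD sensor1 31000 27.0        # Second sample in the same bucket
TS.ADD sensor1 61000 30.0        # Falls in the next bucket, which closes the first one
TS.RANGE sensor1_compacted - +   # Returns the closed bucket: timestamp 0, average 26
```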
## Using with other metrics tools
In the [RedisTimeSeries](https://github.com/RedisTimeSeries) GitHub organization you can