content/develop/data-types/timeseries/_index.md
15 additions & 9 deletions
@@ -45,7 +45,10 @@ for full installation instructions.
## Creating a time series

-You can create a new empty time series with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}}) command, specifying a key name. If you use [`TS.ADD`]({{< relref "commands/ts.add/" >}}) to add data to a time series key that does not exist, it is automatically created.
+You can create a new empty time series with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}})
+command, specifying a key name. Alternatively, if you use [`TS.ADD`]({{< relref "commands/ts.add/" >}})
+to add data to a time series key that does not exist, it is automatically created (see
+[Adding data points](#adding-data-points) below for more information about `TS.ADD`).
```bash
> TS.CREATE thermometer:1
@@ -64,7 +67,7 @@ to support Unix timestamps, measured in milliseconds since the
[Unix epoch](https://en.wikipedia.org/wiki/Unix_time). However, you can interpret
the timestamps in any way you like (for example, as the number of days since a given start date).
When you create a time series, you can specify a maximum retention period for the
-data, relative to the last reported timestamp. A retention period of `0` means
+data, relative to the last reported timestamp. A retention period of zero means
the data does not expire.
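
As a minimal sketch of how a retention period is set (the key name `rainfall:1` is illustrative, not from the original text), you can pass the `RETENTION` option, in milliseconds, when creating a series:

```bash
# Samples expire 24 hours (86400000 ms) after the last
# reported timestamp.
> TS.CREATE rainfall:1 RETENTION 86400000
OK
```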
```bash
@@ -242,7 +245,7 @@ return data points from a range of timestamps in each time series.
The parameters are mostly the same except that the multiple time series
commands don't take a key name as the first parameter. Instead, you
specify a filter expression to include only time series with
-specific labels. (See [Adding data points](#adding-data-points)
+specific labels. (See [Creating a time series](#creating-a-time-series)
above to learn how to add labels to a time series.) The filter expressions
use a simple syntax that lets you include or exclude time series based on
the presence or value of a label. See the description in the
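
As a sketch of the filter syntax (the `location` label and its value are hypothetical), a multi-series query selects only the series whose labels match the filter expression:

```bash
# Return samples from every series labeled location=UK, over the
# full time range (- and + mean minimum and maximum timestamp).
> TS.MRANGE - + FILTER location=UK
```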
@@ -527,13 +530,13 @@ OK
## Compaction

-Aggregation queries let you extract the important information from a large data set
+[Aggregation](#aggregation) queries let you extract the important information from a large data set
into a smaller, more manageable set. If you are continually adding new data to a
time series as it is generated, you may need to run the same aggregation
regularly on the latest data. Instead of running the query manually
each time, you can add a *compaction rule* to a time series to compute an
aggregation incrementally on data as it arrives. The values from the
-aggregation buckets are then added to a separate time series, leaving the original
+aggregation buckets are stored in a separate time series, leaving the original
series unchanged.
Use [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}}) to create a
@@ -597,10 +600,12 @@ value for the first bucket and adds it to the compacted series.
The general strategy is that the rule does not add data to the
compaction for the latest bucket in the source series, but will add and
update the compacted data for any previous buckets. This reflects the
-typical usage pattern of adding data samples sequentially in real time.
-Note that earlier buckets are not "closed" when you add data to a later
+typical usage pattern of adding data samples sequentially in real time
+(an aggregate value typically isn't correct until its bucket period is over).
+But note that earlier buckets are not "closed" when you add data to a later
bucket. If you add or [delete](#deleting-data-points) data in a bucket before
-the latest one, thecompaction rule will update the compacted data for that bucket.
+the latest one, the compaction rule will still update the compacted data for
+that bucket.
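
As a minimal sketch of the commands involved (the destination key name `thermometer:1:avg` is illustrative, not from the original text), a rule that compacts `thermometer:1` into hourly averages could be created like this:

```bash
# The destination series must exist before the rule is created.
> TS.CREATE thermometer:1:avg
OK
# Average samples from thermometer:1 into hour-long (3600000 ms)
# buckets and write each bucket's result to thermometer:1:avg.
> TS.CREATERULE thermometer:1 thermometer:1:avg AGGREGATION avg 3600000
OK
```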
## Deleting data points
@@ -668,4 +673,5 @@ find projects that help you integrate RedisTimeSeries with other tools, includin
## More information
-The other pages in this section describe RedisTimeSeries concepts in more detail:
+The other pages in this section describe RedisTimeSeries concepts in more detail.
+See also the [time series command reference]({{< relref "/commands/" >}}?group=timeseries).
@@ -46,11 +46,11 @@ The graphs and tables below make these key points:
- We've observed a maximum 95% drop in the achievable ops/sec even at 99% out-of-order ingestion. (Again, reducing the chunk size can cut the impact in half.)