
Commit 2ab2ec4

DOC-5424 fixed images and moved lone reference page up
1 parent: 2b8117d

15 files changed, +22 -34 lines changed

content/develop/data-types/timeseries/_index.md

Lines changed: 15 additions & 9 deletions
@@ -45,7 +45,10 @@ for full installation instructions.
 
 ## Creating a time series
 
-You can create a new empty time series with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}}) command, specifying a key name. If you use [`TS.ADD`]({{< relref "commands/ts.add/" >}}) to add data to a time series key that does not exist, it is automatically created.
+You can create a new empty time series with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}})
+command, specifying a key name. Alternatively, if you use [`TS.ADD`]({{< relref "commands/ts.add/" >}})
+to add data to a time series key that does not exist, it is automatically created (see
+[Adding data points](#adding-data-points) below for more information about `TS.ADD`).
 
 ```bash
 > TS.CREATE thermometer:1
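For context on the `TS.ADD` auto-creation behavior this hunk describes, a minimal sketch (the key name `thermometer:2`, the timestamp, and the value are invented for illustration; the reply shown is approximate):

```bash
# TS.ADD creates the time series automatically if the key does not exist yet.
> TS.ADD thermometer:2 1 10.8
(integer) 1
```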
@@ -64,7 +67,7 @@ to support Unix timestamps, measured in milliseconds since the
 [Unix epoch](https://en.wikipedia.org/wiki/Unix_time). However, you can interpret
 the timestamps in any way you like (for example, as the number of days since a given start date).
 When you create a time series, you can specify a maximum retention period for the
-data, relative to the last reported timestamp. A retention period of `0` means
+data, relative to the last reported timestamp. A retention period of zero means
 the data does not expire.
 
 ```bash
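As a hedged illustration of the retention behavior described above (the key name and period are assumptions; `RETENTION` is specified in milliseconds, so `86400000` is one day, and `0` means the data never expires):

```bash
# Keep samples for at most 24 hours after the last reported timestamp.
> TS.CREATE thermometer:3 RETENTION 86400000
OK
```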
@@ -242,7 +245,7 @@ return data points from a range of timestamps in each time series.
 The parameters are mostly the same except that the multiple time series
 commands don't take a key name as the first parameter. Instead, you
 specify a filter expression to include only time series with
-specific labels. (See [Adding data points](#adding-data-points)
+specific labels. (See [Creating a time series](#creating-a-time-series)
 above to learn how to add labels to a time series.) The filter expressions
 use a simple syntax that lets you include or exclude time series based on
 the presence or value of a label. See the description in the
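A minimal sketch of the label filter syntax this passage refers to (the `location` label, its values, and the key names are invented for the example):

```bash
# Attach labels when creating the series, then query them together by label.
> TS.CREATE thermometer:4 LABELS location UK
OK
> TS.CREATE thermometer:5 LABELS location US
OK
# Return the full range of samples from every series labelled location=UK.
> TS.MRANGE - + FILTER location=UK
```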
@@ -527,13 +530,13 @@ OK
 
 ## Compaction
 
-Aggregation queries let you extract the important information from a large data set
+[Aggregation](#aggregation) queries let you extract the important information from a large data set
 into a smaller, more manageable set. If you are continually adding new data to a
 time series as it is generated, you may need to run the same aggregation
 regularly on the latest data. Instead of running the query manually
 each time, you can add a *compaction rule* to a time series to compute an
 aggregation incrementally on data as it arrives. The values from the
-aggregation buckets are then added to a separate time series, leaving the original
+aggregation buckets are stored in a separate time series, leaving the original
 series unchanged.
 
 Use [`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}}) to create a
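A minimal sketch of such a rule, assuming the destination series is created first and using an invented one-hour (3600000 ms) average bucket:

```bash
# The compacted (destination) series must exist before the rule is created.
> TS.CREATE thermometer:1:avg
OK
> TS.CREATERULE thermometer:1 thermometer:1:avg AGGREGATION avg 3600000
OK
```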
@@ -597,10 +600,12 @@ value for the first bucket and adds it to the compacted series.
 The general strategy is that the rule does not add data to the
 compaction for the latest bucket in the source series, but will add and
 update the compacted data for any previous buckets. This reflects the
-typical usage pattern of adding data samples sequentially in real time.
-Note that earlier buckets are not "closed" when you add data to a later
+typical usage pattern of adding data samples sequentially in real time
+(an aggregate value typically isn't correct until its bucket period is over).
+But note that earlier buckets are not "closed" when you add data to a later
 bucket. If you add or [delete](#deleting-data-points) data in a bucket before
-the latest one, thecompaction rule will update the compacted data for that bucket.
+the latest one, the compaction rule will still update the compacted data for
+that bucket.
 
 ## Deleting data points
 
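To illustrate the bucket behavior this hunk clarifies, a hedged sketch with an invented 10-millisecond average bucket (keys, timestamps, values, and reply formatting are assumptions): the first bucket reaches the compacted series only after a later bucket receives data.

```bash
# Assumes sensor:raw and sensor:avg already exist, linked by a 10 ms avg rule.
> TS.CREATERULE sensor:raw sensor:avg AGGREGATION avg 10
OK
> TS.ADD sensor:raw 1 10
(integer) 1
> TS.ADD sensor:raw 5 20
(integer) 5
# Bucket [0,10) is still the latest, so sensor:avg has no samples yet.
> TS.ADD sensor:raw 11 30
(integer) 11
# Bucket [0,10) is no longer the latest; its average (15) is now compacted.
> TS.RANGE sensor:avg - +
1) 1) (integer) 0
   2) 15
```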

@@ -668,4 +673,5 @@ find projects that help you integrate RedisTimeSeries with other tools, includin
 
 ## More information
 
-The other pages in this section describe RedisTimeSeries concepts in more detail:
+The other pages in this section describe RedisTimeSeries concepts in more detail.
+See also the [time series command reference]({{< relref "/commands/" >}}?group=timeseries).
Lines changed: 7 additions & 7 deletions
@@ -12,7 +12,7 @@ categories:
 description: 'Out-of-order / backfilled ingestion performance considerations
 
 '
-linkTitle: Out-of-order / backfilled ingestion performance considerations
+linkTitle: Out-of-order/backfill performance
 title: Out-of-order / backfilled ingestion performance considerations
 weight: 5
 ---
@@ -46,11 +46,11 @@ The graphs and tables below make these key points:
 
 - We've observed a maximum 95% drop in the achievable ops/sec even at 99% out-of-order ingestion. (Again, reducing the chunk size can cut the impact in half.)
 
-<img src="images/compressed-overall-ops-sec-vs-out-of-order-percentage.png" alt="compressed-overall-ops-sec-vs-out-of-order-percentage"/>
+{{< image filename="/images/timeseries/compressed-overall-ops-sec-vs-out-of-order-percentage.webp" alt="compressed-overall-ops-sec-vs-out-of-order-percentage" >}}
 
-<img src="images/compressed-overall-p50-lat-vs-out-of-order-percentage.png" alt="compressed-overall-p50-lat-vs-out-of-order-percentage"/>
+{{< image filename="/images/timeseries/compressed-overall-p50-lat-vs-out-of-order-percentage.webp" alt="compressed-overall-p50-lat-vs-out-of-order-percentage" >}}
 
-<img src="images/compressed-out-of-order-overhead-table.png" alt="compressed-out-of-order-overhead-table"/>
+{{< image filename="/images/timeseries/compressed-out-of-order-overhead-table.webp" alt="compressed-out-of-order-overhead-table" >}}
 
 ## Uncompressed chunks out-of-order/backfilled impact analysis
 
@@ -63,8 +63,8 @@ Apart from that, we can observe the following key take-aways:
 
 - We've observed a maximum 45% drop in the achievable ops/sec, even at 99% out-of-order ingestion.
 
-<img src="images/uncompressed-overall-ops-sec-vs-out-of-order-percentage.png" alt="uncompressed-overall-ops-sec-vs-out-of-order-percentage"/>
+{{< image filename="/images/timeseries/uncompressed-overall-ops-sec-vs-out-of-order-percentage.webp" alt="uncompressed-overall-ops-sec-vs-out-of-order-percentage" >}}
 
-<img src="images/uncompressed-overall-p50-lat-vs-out-of-order-percentage.png" alt="uncompressed-overall-p50-lat-vs-out-of-order-percentage"/>
+{{< image filename="/images/timeseries/uncompressed-overall-p50-lat-vs-out-of-order-percentage.webp" alt="uncompressed-overall-p50-lat-vs-out-of-order-percentage" >}}
 
-<img src="images/uncompressed-out-of-order-overhead-table.png" alt="uncompressed-out-of-order-overhead-table"/>
+{{< image filename="/images/timeseries/uncompressed-out-of-order-overhead-table.webp" alt="uncompressed-out-of-order-overhead-table" >}}

content/develop/data-types/timeseries/reference/_index.md

Lines changed: 0 additions & 18 deletions
This file was deleted.
