
Commit 644d71a

DOC-5424 initial tidy up
1 parent ac3291f commit 644d71a

File tree

4 files changed

+191
-450
lines changed


content/develop/data-types/timeseries/_index.md

Lines changed: 191 additions & 14 deletions
* Compaction for automatically updated aggregated timeseries
* Secondary indexing for time series entries. Each time series has labels (field-value pairs), which allow you to query by label

## Creating a timeseries
A new timeseries can be created with the [`TS.CREATE`]({{< relref "commands/ts.create/" >}}) command. For example, to create a timeseries named `sensor1`, run the following:

```
TS.CREATE sensor1
```

You can prevent your timeseries from growing indefinitely by setting the `RETENTION` option, a maximum age for samples (in milliseconds) relative to the last event time. The default retention is `0`, which means the series is never trimmed.

```
TS.CREATE sensor1 RETENTION 2678400000
```

This creates a timeseries called `sensor1` and trims it to samples from the last month (2678400000 ms = 31 days).
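
You can check the settings of an existing series with the [`TS.INFO`]({{< relref "commands/ts.info/" >}}) command, whose reply includes the configured retention period along with other metadata about the series:

```
TS.INFO sensor1
```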


## Adding data points

To add a new data point to a timeseries, use the [`TS.ADD`]({{< relref "commands/ts.add/" >}}) command:

```
TS.ADD key timestamp value
```

The `timestamp` argument is the UNIX timestamp of the sample in milliseconds and `value` is the numeric data value of the sample.

Example:

```
TS.ADD sensor1 1626434637914 26
```

To **add a data point with the current timestamp**, use a `*` instead of a specific timestamp:

```
TS.ADD sensor1 * 26
```
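
Note that `TS.ADD` replies with the timestamp of the inserted sample. This is especially useful with `*`, because the reply tells you which timestamp the server assigned (the value shown below is illustrative):

```
TS.ADD sensor1 * 26
(integer) 1626434637914
```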

You can **append data points to multiple timeseries** at the same time with the [`TS.MADD`]({{< relref "commands/ts.madd/" >}}) command:

```
TS.MADD key timestamp value [key timestamp value ...]
```
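
For example, assuming a second series `sensor2` already exists (the keys and values here are illustrative), the following appends one sample to each series in a single round trip:

```
TS.MADD sensor1 1626434637914 26 sensor2 1626434637914 19
```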
77+
78+
79+
## Deleting data points
80+
Data points between two timestamps (inclusive) can be deleted with the [`TS.DEL`]({{< relref "commands/ts.del/" >}}) command:
81+
```
82+
TS.DEL key fromTimestamp toTimestamp
83+
```
84+
Example:
85+
```
86+
TS.DEL sensor1 1000 2000
87+
```
88+
89+
To delete a single timestamp, use it as both the "from" and "to" timestamp:
90+
```
91+
TS.DEL sensor1 1000 1000
92+
```
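
`TS.DEL` replies with the number of samples that were actually removed, so you can tell whether any data fell within the given range (the reply shown is illustrative):

```
TS.DEL sensor1 1000 2000
(integer) 2
```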

**Note:** When a sample is deleted, the data in all downsampled timeseries is recalculated for the affected bucket. However, if part of the bucket has already been removed because it falls outside the retention period, the full bucket cannot be recalculated, so in that case the delete operation is refused.


## Labels

Labels are key-value metadata attached to a timeseries, allowing you to group and filter series. They can be either string or numeric values and are added to a timeseries on creation:

```
TS.CREATE sensor1 LABELS region east
```
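
A series can carry several labels at once. The label names and values below (`region`, `area_id`) are only illustrative, but series created like this can later be matched by label filters:

```
TS.CREATE sensor2 LABELS region east area_id 32
```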



## Compaction

Another useful feature of Redis Time Series is compacting data by creating a compaction rule ([`TS.CREATERULE`]({{< relref "commands/ts.createrule/" >}})). For example, if you have collected more than one billion data points in a day, you could aggregate the data by every minute in order to downsample it, reducing the dataset size to 24 * 60 = 1,440 data points. You can choose one of the many available aggregation types to combine multiple data points from a given minute into a single one. The currently supported aggregation types are: `avg`, `sum`, `min`, `max`, `range`, `count`, `first`, `last`, `std.p`, `std.s`, `var.p`, `var.s`, and `twa`.

It's important to point out that the original timeseries is not rewritten; the compaction happens in a new series, while the original one stays the same. To prevent the original timeseries from growing indefinitely, you can use the retention option, which trims it down to a certain period of time.

**NOTE:** You need to create the destination (compacted) timeseries before creating the rule.

```
TS.CREATERULE sourceKey destKey AGGREGATION aggregationType bucketDuration
```

Example:

```
TS.CREATE sensor1_compacted # Create the destination timeseries first
TS.CREATERULE sensor1 sensor1_compacted AGGREGATION avg 60000 # Create the rule
```

With this rule in place, data points added to the `sensor1` timeseries will be grouped into buckets of 60 seconds (60000 ms), averaged, and saved to the `sensor1_compacted` timeseries.
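
As a sketch of the rule in action (timestamps and values illustrative), you can add raw samples to the source series and then read the averaged buckets back from the compacted series:

```
TS.ADD sensor1 1626434637914 26
TS.ADD sensor1 1626434657914 28
TS.RANGE sensor1_compacted - +
```

Note that a bucket is written to the compacted series only once it is closed, that is, when a sample arrives whose timestamp belongs to a later bucket.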


## Filtering

You can filter your timeseries by value, timestamp, and labels:

### Filtering by label

You can retrieve data points from multiple timeseries in the same query by using label filters. For example:

```
TS.MRANGE - + FILTER area_id=32
```

This query returns data from all sensors (timeseries) that have a label `area_id` with a value of `32`. The results are grouped by timeseries.

You can also use the [`TS.MGET`]({{< relref "commands/ts.mget/" >}}) command to get the last sample of each series that matches the filter:

```
TS.MGET FILTER area_id=32
```

### Filtering by value

You can filter by value across a single timeseries or multiple timeseries:

```
TS.RANGE sensor1 - + FILTER_BY_VALUE 25 30
```

This command returns all data points whose value lies between 25 and 30, inclusive.

To achieve the same filtering across multiple series, combine filtering by value with filtering by label:

```
TS.MRANGE - + FILTER_BY_VALUE 20 30 FILTER region=east
```

### Filtering by timestamp

To retrieve the data points at specific timestamps in one or multiple timeseries, use the `FILTER_BY_TS` argument:

Filter on one timeseries:

```
TS.RANGE sensor1 - + FILTER_BY_TS 1626435230501 1626443276598
```

Filter on multiple timeseries:

```
TS.MRANGE - + FILTER_BY_TS 1626435230501 1626443276598 FILTER region=east
```


## Aggregation

It's possible to combine the values of one or more timeseries by applying aggregation functions:

```
TS.RANGE ... AGGREGATION aggType bucketDuration ...
```

For example, to find the average temperature per hour in the `sensor1` series, run:

```
TS.RANGE sensor1 - + AGGREGATION avg 3600000
```

To achieve the same across multiple sensors from the area with id of 32, run:

```
TS.MRANGE - + AGGREGATION avg 3600000 FILTER area_id=32
```

### Aggregation bucket alignment

When aggregating, the aggregation buckets are by default aligned to timestamp 0, as follows:

```
TS.RANGE sensor3 10 70 AGGREGATION min 25
```

```
Value:        |    (1000)    (2000)    (3000)    (4000)    (5000)    (6000)    (7000)
Timestamp:    |-----|10|-----|20|-----|30|-----|40|-----|50|-----|60|-----|70|--->

Bucket(25ms): |___________________||___________________||____________________________|
                        V                    V                         V
              min(1000, 2000)=1000  min(3000, 4000)=3000  min(5000, 6000, 7000)=5000
```

This returns the following data points: 1000, 3000, and 5000.
205+
206+
You can choose to align the buckets to the start or end of the queried interval as so:
207+
```
208+
TS.RANGE sensor3 10 70 + AGGREGATION min 25 ALIGN start
209+
```
210+
211+
```
212+
Value: | (1000) (2000) (3000) (4000) (5000) (6000) (7000)
213+
Timestamp: |-------|10|-------|20|-------|30|-------|40|-------|50|-------|60|-------|70|--->
214+
215+
Bucket(25ms): |__________________________||_________________________||___________________________|
216+
V V V
217+
min(1000, 2000, 3000)=1000 min(4000, 5000)=4000 min(6000, 7000)=6000
218+
```
219+
The result array will contain the following datapoints: 1000, 4000 and 6000
220+
221+
222+
### Aggregation across timeseries
223+
224+
By default, results of multiple timeseries will be grouped by timeseries, but (since v1.6) you can use the `GROUPBY` and `REDUCE` options to group them by label and apply an additional aggregation.
225+
226+
To find minimum temperature per region, for example, we can run:
227+
228+
```
229+
TS.MRANGE - + FILTER region=(east,west) GROUPBY region REDUCE min
230+
```
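
Other reducers work the same way. For example, assuming the same `region` labels, the maximum per region would be:

```
TS.MRANGE - + FILTER region=(east,west) GROUPBY region REDUCE max
```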

## Using with other metrics tools

… find projects that help you integrate RedisTimeSeries with other tools, including:

2. [Grafana 7.1+](https://github.com/RedisTimeSeries/grafana-redis-datasource), using the [Redis Data Source](https://redislabs.com/blog/introducing-the-redis-data-source-plug-in-for-grafana/).
3. [Telegraf](https://github.com/influxdata/telegraf). Download the plugin from [InfluxData](https://portal.influxdata.com/downloads/).
4. StatsD and Graphite exports using the Graphite protocol.

content/develop/data-types/timeseries/clients.md

Lines changed: 0 additions & 28 deletions
This file was deleted.

content/develop/data-types/timeseries/development.md

Lines changed: 0 additions & 109 deletions
This file was deleted.
