
Commit a96dd5b

Merge branch 'main' into 286-gs
2 parents 67b1fd3 + b80639c commit a96dd5b

8 files changed (+64, -123 lines)


explore-analyze/find-and-organize/data-views.md

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ A {{data-source}} can match one rollup index. For a combination rollup {{data-s
 rollup_logstash,kibana_sample_data_logs
 ```
 
-For an example, refer to [Create and visualize rolled up data](../../manage-data/lifecycle/rollup.md#rollup-data-tutorial).
+For an example, refer to [Create and visualize rolled up data](/manage-data/lifecycle/rollup/getting-started-kibana.md#rollup-data-tutorial).
 
 
### Use {{data-sources}} with {{ccs}} [management-cross-cluster-search]

manage-data/lifecycle/rollup.md

Lines changed: 43 additions & 10 deletions
@@ -2,23 +2,56 @@
 mapped_urls:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-rollup.html
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-overview.html
-  - https://www.elastic.co/guide/en/kibana/current/data-rollups.html
 ---
 
 # Rollup
 
-% What needs to be done: Refine
+::::{admonition} Deprecated in 8.11.0.
+:class: warning
 
-% GitHub issue: docs-projects#377
+Rollups will be removed in a future version. Please [migrate](/manage-data/lifecycle/rollup/migrating-from-rollup-to-downsampling.md) to [downsampling](/manage-data/data-store/index-types/downsampling-time-series-data-stream.md) instead.
+::::
 
-% Scope notes: Combine linked resources.
+Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of archiving massive amounts of data. For example, your system may be generating 500 documents every second. That will generate 43 million documents per day and nearly 16 billion documents a year. Retention periods are thus driven by financial realities rather than by the usefulness of extensive historical data.
 
-% Use migrated content from existing pages that map to this page:
+While your analysts and data scientists may wish you stored that data indefinitely for analysis, time is never-ending and so your storage requirements will continue to grow without bound. Retention policies are therefore often dictated by the simple calculation of storage costs over time, and what the organization is willing to pay to retain historical data. Often these policies start deleting data after a few months or years.
 
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-rollup.md
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/rollup-overview.md
-% - [ ] ./raw-migrated-files/kibana/kibana/data-rollups.md
+Storage cost is a fixed quantity. It takes X money to store Y data. But the utility of a piece of data often changes with time. Sensor data gathered at millisecond granularity is extremely useful right now, reasonably useful if from a few weeks ago, and only marginally useful if older than a few months.
 
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+So while the cost of storing a millisecond of sensor data from ten years ago is fixed, the value of that individual sensor reading often diminishes with time. It’s not useless — it could easily contribute to a useful analysis — but its reduced value often leads to deletion rather than paying the fixed storage cost.
 
-$$$rollup-data-tutorial$$$
+
+## Rollup stores historical data at reduced granularity [_rollup_stores_historical_data_at_reduced_granularity]
+
+That’s where Rollup comes into play. The Rollup functionality summarizes old, high-granularity data into a reduced granularity format for long-term storage. By "rolling" the data up into a single summary document, historical data can be compressed greatly compared to the raw data.
+
+For example, consider the system that’s generating 43 million documents every day. The second-by-second data is useful for real-time analysis, but historical analysis looking over ten years of data is likely to be working at a larger interval, such as hourly or daily trends.
+
+If we compress the 43 million documents into hourly summaries, we can save vast amounts of space. The Rollup feature automates this process of summarizing historical data.
+
+Details about setting up and configuring Rollup are covered in the [Create Job API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-put-job).
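For orientation, a sketch of what such a rollup job configuration might look like is shown below. The job, index, and field names (`sensor`, `sensor-*`, `timestamp`, `temperature`, `voltage`, `node`) are illustrative assumptions, not part of this commit:

```console
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",        // roll up documents from any matching index
  "rollup_index": "sensor_rollup",    // write the summary documents here
  "cron": "*/30 * * * * ?",           // how often the job checks for new data
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "60m"         // store hourly summaries
    },
    "terms": { "fields": [ "node" ] }
  },
  "metrics": [
    { "field": "temperature", "metrics": [ "min", "max", "sum" ] },
    { "field": "voltage", "metrics": [ "avg" ] }
  ]
}
```

The job is then started with `POST _rollup/job/sensor/_start`.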
+
+
+## Rollup uses standard Query DSL [_rollup_uses_standard_query_dsl]
+
+The Rollup feature exposes a new search endpoint (`/_rollup_search` vs the standard `/_search`) which knows how to search over rolled-up data. Importantly, this endpoint accepts 100% normal {{es}} Query DSL. Your application does not need to learn a new DSL to inspect historical data; it can simply reuse existing queries and dashboards.
+
+There are some limitations to the functionality available: not all queries and aggregations are supported, certain search features (highlighting, etc.) are disabled, and available fields depend on how the rollup was configured. These limitations are covered in more detail in [Rollup Search limitations](/manage-data/lifecycle/rollup/rollup-search-limitations.md).
+
+But if your queries, aggregations, and dashboards only use the available functionality, redirecting them to historical data is trivial.
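As a minimal sketch (assuming the illustrative `sensor_rollup` index and `temperature` field from above), a rollup search is an ordinary aggregation request sent to the `_rollup_search` endpoint:

```console
GET /sensor_rollup/_rollup_search
{
  "size": 0,                          // rolled-up data returns no hits, only aggregations
  "aggregations": {
    "max_temperature": {
      "max": { "field": "temperature" }
    }
  }
}
```

The aggregation syntax is exactly what you would send to `_search`; only the endpoint changes.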
+
+
+## Rollup merges "live" and "rolled" data [_rollup_merges_live_and_rolled_data]
+
+A useful feature of Rollup is the ability to query both "live" realtime data and historical "rolled" data in a single query.
+
+For example, your system may keep a month of raw data. After a month, it is rolled up into historical summaries using Rollup and the raw data is deleted.
+
+If you were to query the raw data, you’d only see the most recent month. And if you were to query the rolled up data, you would only see data older than a month. The RollupSearch endpoint, however, supports querying both at the same time. It will take the results from both data sources and merge them together. If there is overlap between the "live" and "rolled" data, live data is preferred to increase accuracy.
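A sketch of such a combined query, assuming a live index pattern `sensor-*` and a rollup index `sensor_rollup` (both names are illustrative): list both in the same `_rollup_search` request and the endpoint merges the results, preferring live data where the two overlap.

```console
GET sensor-*,sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_temperature": {
      "avg": { "field": "temperature" }   // computed across live and rolled data together
    }
  }
}
```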
+
+
+## Rollup is multi-interval aware [_rollup_is_multi_interval_aware]
+
+Finally, Rollup is capable of intelligently utilizing the best interval available. If you’ve worked with summarizing features of other products, you’ll find that they can be limiting. If you configure rollups at daily intervals, your queries and charts can only work with daily intervals. If you need a monthly interval, you have to create another rollup that explicitly stores monthly averages, and so on.
+
+The Rollup feature stores data in such a way that queries can identify the smallest available interval and use that for their processing. If you store rollups at a daily interval, queries can be executed on daily or longer intervals (weekly, monthly, etc.) without the need to explicitly configure a new rollup job. This helps alleviate one of the major disadvantages of a rollup system: reduced flexibility relative to raw data.
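For example, a sketch of a query that aggregates the illustrative hourly sensor rollups into weekly buckets, without configuring any additional job:

```console
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "weekly": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "7d"          // coarser than the stored 60m interval
      },
      "aggregations": {
        "avg_temperature": { "avg": { "field": "temperature" } }
      }
    }
  }
}
```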

manage-data/lifecycle/rollup/getting-started-with-rollups.md renamed to manage-data/lifecycle/rollup/getting-started-api.md

Lines changed: 8 additions & 12 deletions
@@ -1,22 +1,18 @@
 ---
-navigation_title: "Getting started"
+navigation_title: "Get started using the API"
 mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-getting-started.html
 ---
 
-
-
-# Getting started with rollups [rollup-getting-started]
-
+# Get started with rollups using the API
 
 ::::{admonition} Deprecated in 8.11.0.
 :class: warning
 
-Rollups will be removed in a future version. Please [migrate](migrating-from-rollup-to-downsampling.md) to [downsampling](../../data-store/index-types/downsampling-time-series-data-stream.md) instead.
+Rollups will be removed in a future version. Please [migrate](migrating-from-rollup-to-downsampling.md) to [downsampling](/manage-data/data-store/index-types/downsampling-time-series-data-stream.md) instead.
 ::::
 
-
-::::{warning}
+::::{warning}
 From 8.15.0 invoking the put job API in a cluster with no rollup usage will fail with a message about Rollup’s deprecation and planned removal. A cluster either needs to contain a rollup job or a rollup index in order for the put job API to be allowed to execute.
 ::::
 
@@ -35,7 +31,7 @@ Imagine you have a series of daily indices that hold sensor data (`sensor-2017-0
 ```
 
 
-## Creating a rollup job [_creating_a_rollup_job]
+## Creating a rollup job [_creating_a_rollup_job]
 
 We’d like to roll up these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval one hour or greater. A rollup job might look like this:
 
@@ -109,7 +105,7 @@ After you execute the above command and create the job, you’ll receive the fol
 ```
 
 
-## Starting the job [_starting_the_job]
+## Starting the job [_starting_the_job]
 
 After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows you to stop them later as a way to temporarily pause, without deleting the configuration).
 
@@ -120,7 +116,7 @@ POST _rollup/job/sensor/_start
 ```
 
 
-## Searching the rolled results [_searching_the_rolled_results]
+## Searching the rolled results [_searching_the_rolled_results]
 
 After the job has run and processed some data, we can use the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to… it just happens to run on the rolled up data instead.
 
@@ -275,7 +271,7 @@ Which returns a corresponding response:
 In addition to being more complicated (date histogram and a terms aggregation, plus an additional average metric), you’ll notice the date_histogram uses a `7d` interval instead of `60m`.
 
 
-## Conclusion [_conclusion]
+## Conclusion [_conclusion]
 
 This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the [REST API](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-api-quickref.html) for an overview of what is available.
 

raw-migrated-files/kibana/kibana/data-rollups.md renamed to manage-data/lifecycle/rollup/getting-started-kibana.md

Lines changed: 10 additions & 10 deletions
@@ -1,12 +1,17 @@
-# Rollup Jobs [data-rollups]
+---
+navigation_title: "Get started in Kibana"
+mapped_pages:
+  - https://www.elastic.co/guide/en/kibana/current/data-rollups.html
+---
+
+# Get started with rollups in {{kib}}
 
 ::::{admonition} Deprecated in 8.11.0.
 :class: warning
 
-Rollups are deprecated and will be removed in a future version. Use [downsampling](../../../manage-data/data-store/index-types/downsampling-time-series-data-stream.md) instead.
+Rollups are deprecated and will be removed in a future version. Use [downsampling](/manage-data/data-store/index-types/downsampling-time-series-data-stream.md) instead.
 ::::
 
-
 A rollup job is a periodic task that aggregates data from indices specified by an index pattern, and then rolls it into a new index. Rollup indices are a good way to compactly store months or years of historical data for use in visualizations and reports.
 
 You can go to the **Rollup Jobs** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
@@ -16,16 +21,12 @@ You can go to the **Rollup Jobs** page using the navigation menu or the [global
 :class: screenshot
 :::
 
-Before using this feature, you should be familiar with how rollups work. [Rolling up historical data](../../../manage-data/lifecycle/rollup.md) is a good source for more detailed information.
-
-
 ## Required permissions [_required_permissions_4]
 
 The `manage_rollup` cluster privilege is required to access **Rollup jobs**.
 
 To add the privilege, go to the **Roles** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
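If you prefer the API over the **Roles** UI, a role granting this privilege can be sketched as follows (the role name `rollup_admin` is illustrative):

```console
PUT _security/role/rollup_admin
{
  "cluster": [ "manage_rollup" ]    // cluster privilege required for the Rollup Jobs UI
}
```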
 
-
 ## Create a rollup job [create-and-manage-rollup-job]
 
 {{kib}} makes it easy for you to create a rollup job by walking you through the process. You fill in the name, data flow, and how often you want to roll up the data. Then you define a date histogram aggregation for the rollup job and optionally define terms, histogram, and metrics aggregations.
@@ -37,7 +38,6 @@ When defining the index pattern, you must enter a name that is different than th
 :class: screenshot
 :::
 
-
 ## Start, stop, and delete rollup jobs [manage-rollup-job]
 
 Once you’ve saved a rollup job, you’ll see it on the **Rollup Jobs** overview page, where you can drill down for further investigation. The **Manage** menu enables you to start, stop, and delete the rollup job. You must first stop a rollup job before deleting it.
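The same lifecycle operations are available through the rollup APIs; a sketch using an illustrative job named `sensor`:

```console
// Begin processing data
POST _rollup/job/sensor/_start

// Pause the job (a job must be stopped before it can be deleted)
POST _rollup/job/sensor/_stop

// Delete the stopped job's configuration
DELETE _rollup/job/sensor
```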
@@ -79,11 +79,11 @@ As you walk through the **Create rollup job** UI, enter the data:
 | Histogram interval | 1000 |
 | Metrics | bytes (average) |
 
-On the **Review and save*** page, click ***Start job now*** and ***Save**.
+On the **Review and save** page, click **Start job now** and **Save**.
 
 The terms, histogram, and metrics fields reflect the key information to retain in the rolled up data: where visitors are from (geo.src), what operating system they are using (machine.os.keyword), and how much data is being sent (bytes).
 
-You can now use the rolled up data for analysis at a fraction of the storage cost of the original index. The original data can live side by side with the new rollup index, or you can remove or archive it using [{{ilm}} ({{ilm-init}})](../../../manage-data/lifecycle/index-lifecycle-management.md).
+You can now use the rolled up data for analysis at a fraction of the storage cost of the original index. The original data can live side by side with the new rollup index, or you can remove or archive it using [{{ilm}} ({{ilm-init}})](/manage-data/lifecycle/index-lifecycle-management.md).
 
 
### Visualize the rolled up data [_visualize_the_rolled_up_data]

manage-data/toc.yml

Lines changed: 2 additions & 1 deletion
@@ -141,7 +141,8 @@ toc:
   - file: lifecycle/curator.md
   - file: lifecycle/rollup.md
     children:
-      - file: lifecycle/rollup/getting-started-with-rollups.md
+      - file: lifecycle/rollup/getting-started-api.md
+      - file: lifecycle/rollup/getting-started-kibana.md
       - file: lifecycle/rollup/understanding-groups.md
      - file: lifecycle/rollup/rollup-aggregation-limitations.md
       - file: lifecycle/rollup/rollup-search-limitations.md

raw-migrated-files/elasticsearch/elasticsearch-reference/rollup-overview.md

Lines changed: 0 additions & 58 deletions
This file was deleted.
