
Commit 0f87d9f

Merge branch 'main' into add-elastic-new-intro-section
2 parents 4a3e353 + cd9745a commit 0f87d9f

15 files changed: +387 −27 lines

deploy-manage/deploy/elastic-cloud/regions.md

Lines changed: 9 additions & 1 deletion

@@ -53,4 +53,12 @@ The following GCP regions are currently available:
 | :--- | :--- |
 | asia-south1 | Mumbai |
 | europe-west1 | Belgium |
-| us-central1 | Iowa |
+| us-central1 | Iowa |
+
+
+## Marketplaces
+
+When procuring {{ecloud}} through [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-voru33wi6xs7k), [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/elastic.ec-azure-pp?tab=overview), or [GCP Marketplace](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud), only the regions corresponding to the same cloud service provider can be used. This ensures that you can enjoy the benefits of the marketplace, such as {{ecloud}} contributing towards your spend commitment with cloud providers.
+
+You can implement a multi-cloud strategy by creating a separate {{ecloud}} organization, either from another marketplace, or directly at [cloud.elastic.co](https://cloud.elastic.co).
+For example, if you have created a project in `eu-central-1` after signing up on AWS Marketplace, you can provision another project in GCP `europe-west1` by signing up for a second {{ecloud}} organization on GCP Marketplace, using another email address.

explore-analyze/machine-learning/nlp/ml-nlp-import-model.md

Lines changed: 6 additions & 0 deletions

@@ -10,6 +10,12 @@ products:

 # Import the trained model and vocabulary [ml-nlp-import-model]

+::::{warning}
+PyTorch models can execute code on your {{es}} server, exposing your cluster to potential security vulnerabilities.
+
+**Only use models from trusted sources and never use models from unverified or unknown providers.**
+::::
+
 ::::{important}
 If you want to install a trained model in a restricted or closed network, refer to [these instructions](eland://reference/machine-learning.md#ml-nlp-pytorch-air-gapped).
 ::::

explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md

Lines changed: 8 additions & 0 deletions

@@ -11,6 +11,14 @@ products:

 # Compatible third party models [ml-nlp-model-ref]

+::::{warning}
+PyTorch models can execute code on your {{es}} server, exposing your cluster to potential security vulnerabilities.
+
+**Only use models from trusted sources and never use models from unverified or unknown providers.**
+
+The models listed on this page are all from a trusted source – Hugging Face.
+::::
+
 ::::{note}
 The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
 ::::

manage-data/data-store/data-streams/modify-data-stream.md

Lines changed: 1 addition & 0 deletions

@@ -251,6 +251,7 @@ If wanted, you can [roll over the data stream](../data-streams/use-data-stream.m

 To apply static setting changes to existing backing indices, you must create a new data stream and reindex your data into it. See [Use reindex to change mappings or settings](../data-streams/modify-data-stream.md#data-streams-use-reindex-to-change-mappings-settings).

+See [this video](https://www.youtube.com/watch?v=fHL7SkQr7Wc) for a walkthrough of updating [`number_of_shards`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#index-number-of-shards).

 ### Use reindex to change mappings or settings [data-streams-use-reindex-to-change-mappings-settings]
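The create-and-reindex workflow described in this hunk can be sketched as a reindex request body. This is a hedged illustration, not part of the diff: the stream names are hypothetical, and the destination data stream is assumed to already match an index template that carries the changed static setting.

```python
import json

# Hypothetical names: reindex the existing "my-data-stream" into
# "new-data-stream", whose index template carries the changed static
# setting (for example, a different number_of_shards).
reindex_body = {
    "source": {"index": "my-data-stream"},
    "dest": {
        "index": "new-data-stream",
        # Required when the destination is a data stream: documents can
        # only be appended with "create", never overwritten.
        "op_type": "create",
    },
}

# The body would be sent as: POST /_reindex
print(json.dumps(reindex_body, indent=2))
```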

Lines changed: 323 additions & 0 deletions

@@ -0,0 +1,323 @@
---
navigation_title: "Quickstart"
applies_to:
  stack: ga
  serverless: ga
products:
  - id: elasticsearch
---

# Quickstart: Time series data stream basics

Use this quickstart to set up a time series data stream (TSDS), ingest a few documents, and run a basic query. These high-level steps help you see how a TSDS works, so you can decide whether it's right for your data.

A _time series_ is a sequence of data points collected at regular time intervals. For example, you might track CPU usage or stock price over time. This quickstart uses simplified weather sensor readings to show how a TSDS helps you analyze metrics data over time.

## Prerequisites

* Access to [{{dev-tools-app}} Console](/explore-analyze/query-filter/tools/console.md) in {{kib}}, or another way to make {{es}} API requests
* Cluster and index permissions:
  * [Cluster privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster): `manage_index_templates`
  * [Index privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices): `create_doc` and `create_index`
* Familiarity with [time series data stream concepts](time-series-data-stream-tsds.md) and [{{es}} index and search basics](/solutions/search/get-started.md)

You can follow this guide using any {{es}} deployment.
To see all deployment options, refer to [](/deploy-manage/deploy.md#choosing-your-deployment-type).
To get started quickly, spin up a cluster [locally in Docker](/deploy-manage/deploy/self-managed/local-development-installation-quickstart.md).

## Create and query a TSDS

:::::{stepper}
::::{step} Create an index template

To create a data stream, you need an index template to base it on. The template defines the data stream structure and settings. (For this quickstart, you don't need to understand template details.)

A TSDS uses _dimension_ fields and _metric_ fields. [Dimensions](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md#time-series-dimension) are used to uniquely identify the time series and are typically based on a descriptive property like `location`. [Metrics](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md#time-series-metric) are measurements that change over time.

Use an [`_index_template` request](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template) to create a template with two identifying dimension fields and two metric fields for weather measurements:

```console
PUT _index_template/quickstart-tsds-template
{
  "index_patterns": ["quickstart-*"],
  "data_stream": { }, # Indicates this is a data stream, not a regular index.
  "priority": 100,
  "template": {
    "settings": {
      "index.mode": "time_series" # The required index mode for TSDS.
    },
    "mappings": {
      "properties": {
        "sensor_id": {
          "type": "keyword",
          "time_series_dimension": true # Defines a dimension field.
        },
        "location": {
          "type": "keyword",
          "time_series_dimension": true # Another dimension field.
        },
        "temperature": {
          "type": "half_float",
          "time_series_metric": "gauge" # A supported field type for metrics.
        },
        "humidity": {
          "type": "half_float",
          "time_series_metric": "gauge" # A second measurement.
        },
        "@timestamp": {
          "type": "date"
        }
      }
    }
  }
}
```

This example defines a `@timestamp` field for illustration purposes. In most cases, you can use the default `@timestamp` field (which has a default type of `date`) instead of defining a timestamp in the mapping.

You should get a response of `"acknowledged": true` that confirms the template was created.

::::

::::{step} Create a data stream and add sample data

In this step, create a new data stream called `quickstart-weather` based on the index template defined in Step 1. You can create the data stream and add documents in a single API call.

Use a [`_bulk` API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) request to add multiple documents at once. Make sure to adjust the timestamps to within a few minutes of the current time.

% TODO simplify timestamps

```console
PUT quickstart-weather/_bulk
{ "create":{ } }
{ "@timestamp": "2025-09-08T21:25:00.000Z", "sensor_id": "STATION-0001", "location": "base", "temperature": 26.7, "humidity": 49.9 }
{ "create":{ } }
{ "@timestamp": "2025-09-08T21:26:00.000Z", "sensor_id": "STATION-0002", "location": "base", "temperature": 27.2, "humidity": 50.1 }
{ "create":{ } }
{ "@timestamp": "2025-09-08T21:35:00.000Z", "sensor_id": "STATION-0003", "location": "base", "temperature": 28.1, "humidity": 48.7 }
{ "create":{ } }
{ "@timestamp": "2025-09-08T21:27:00.000Z", "sensor_id": "STATION-0004", "location": "satellite", "temperature": 32.4, "humidity": 88.9 }
{ "create":{ } }
{ "@timestamp": "2025-09-08T21:36:00.000Z", "sensor_id": "STATION-0005", "location": "satellite", "temperature": 32.3, "humidity": 87.5 }
```

The response shows five sample weather data documents.

:::{dropdown} Example response

```console-result
{
  "errors": false,
  "took": 0,
  "items": [
    {
      "create": {
        "_index": ".ds-quickstart-weather-2025.09.08-000001",
        "_id": "cFJZQJlNh-Xl8V_rAAABmSs3x-A",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 2,
          "failed": 0
        },
        "_seq_no": 0,
        "_primary_term": 1,
        "status": 201
      }
    },
    {
      "create": {
        "_index": ".ds-quickstart-weather-2025.09.08-000001",
        "_id": "c-wsTT0T4CtI3hOuAAABmSs4skA",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 2,
          "failed": 0
        },
        "_seq_no": 1,
        "_primary_term": 1,
        "status": 201
      }
    },
    {
      "create": {
        "_index": ".ds-quickstart-weather-2025.09.08-000001",
        "_id": "Hdee5vMpBvZymWvHAAABmStA76A",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 2,
          "failed": 0
        },
        "_seq_no": 2,
        "_primary_term": 1,
        "status": 201
      }
    },
    {
      "create": {
        "_index": ".ds-quickstart-weather-2025.09.08-000001",
        "_id": "e3Z2UirUQldsjLr2AAABmSs5nKA",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 2,
          "failed": 0
        },
        "_seq_no": 3,
        "_primary_term": 1,
        "status": 201
      }
    },
    {
      "create": {
        "_index": ".ds-quickstart-weather-2025.09.08-000001",
        "_id": "N3-RYtQAp6JEsLRNAAABmStB2gA",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 2,
          "failed": 0
        },
        "_seq_no": 4,
        "_primary_term": 1,
        "status": 201
      }
    }
  ]
}
```
:::

:::{tip}
If you get an error about timestamp values, check the error response for the valid timestamp range. For more details, refer to [](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md#tsds-accepted-time-range).
:::
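
If you script the bulk request instead of typing it into Console, you can generate timestamps inside the accepted range programmatically. A minimal Python sketch (illustrative, not part of the quickstart) that only builds the newline-delimited bulk body; any HTTP client could then `PUT` it to `quickstart-weather/_bulk`:

```python
import json
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# The same five sample readings as above: (sensor_id, location, temperature, humidity).
readings = [
    ("STATION-0001", "base", 26.7, 49.9),
    ("STATION-0002", "base", 27.2, 50.1),
    ("STATION-0003", "base", 28.1, 48.7),
    ("STATION-0004", "satellite", 32.4, 88.9),
    ("STATION-0005", "satellite", 32.3, 87.5),
]

lines = []
for i, (sensor, location, temperature, humidity) in enumerate(readings):
    # Spread the readings over the last few minutes so they fall inside
    # the accepted TSDS time range.
    ts = now - timedelta(minutes=len(readings) - i)
    lines.append(json.dumps({"create": {}}))
    lines.append(json.dumps({
        "@timestamp": ts.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
        "sensor_id": sensor,
        "location": location,
        "temperature": temperature,
        "humidity": humidity,
    }))

# The bulk body is newline-delimited JSON and must end with a newline.
payload = "\n".join(lines) + "\n"
```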

::::
::::{step} Run a query

Now that your data stream has some documents, you can use the [`_search` endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) to query the data. This sample aggregation shows average temperature for each location, in hourly buckets. (You don't need to understand the details of aggregations to follow this example.)

```console
POST quickstart-weather/_search
{
  "size": 0,
  "aggs": {
    "by_location": {
      "terms": {
        "field": "location" # The location dimension defined in the template.
      },
      "aggs": {
        "avg_temp_per_hour": {
          "date_histogram": {
            "field": "@timestamp",
            "fixed_interval": "1h"
          },
          "aggs": {
            "avg_temp": {
              "avg": {
                "field": "temperature" # A metric field defined in the template.
              }
            }
          }
        }
      }
    }
  }
}
```

:::{dropdown} Example response

```console-result
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 5,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "by_location": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "base",
          "doc_count": 3,
          "avg_temp_per_hour": {
            "buckets": [
              {
                "key_as_string": "2025-09-08T21:00:00.000Z",
                "key": 1757365200000,
                "doc_count": 3,
                "avg_temp": {
                  "value": 27.333333333333332
                }
              }
            ]
          }
        },
        {
          "key": "satellite",
          "doc_count": 2,
          "avg_temp_per_hour": {
            "buckets": [
              {
                "key_as_string": "2025-09-08T21:00:00.000Z",
                "key": 1757365200000,
                "doc_count": 2,
                "avg_temp": {
                  "value": 32.359375
                }
              }
            ]
          }
        }
      ]
    }
  }
}
```
:::
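
As a quick sanity check, the bucket averages in the example response can be reproduced client-side. A small Python sketch (illustrative, not part of the quickstart): the `base` average matches the response exactly, while the `satellite` average comes out as 32.35 rather than the reported 32.359375, because `temperature` is mapped as `half_float` and each stored value is rounded to half precision before the aggregation runs.

```python
from collections import defaultdict

# The five sample readings from the bulk request: (location, temperature).
# All fall into the same hourly date_histogram bucket.
readings = [
    ("base", 26.7),
    ("base", 27.2),
    ("base", 28.1),
    ("satellite", 32.4),
    ("satellite", 32.3),
]

temps = defaultdict(list)
for location, temperature in readings:
    temps[location].append(temperature)

# Per-location average, mirroring the terms + avg aggregation.
averages = {loc: sum(vals) / len(vals) for loc, vals in temps.items()}
```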

:::{tip}
You can also try this aggregation in a [data view](/explore-analyze/find-and-organize/data-views.md) in {{kib}}.
:::

::::
:::::

## Next steps

This quickstart introduced the basics of time series data streams. To learn more, explore these topics:

* [](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md)
* [](/manage-data/data-store/data-streams/set-up-tsds.md)

For more information about the APIs used in this quickstart, review the {{es}} API reference documentation:

* [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk)
* [Index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template)
* [Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)

manage-data/data-store/data-streams/time-series-data-stream-tsds.md

Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@ products:
   - id: elasticsearch
 ---

-# Time series data stream (TSDS) [tsds]
+# Time series data streams [tsds]

 A time series data stream (TSDS) models timestamped metrics data as one or more time series.

@@ -219,4 +219,4 @@ Internally, each TSDS backing index uses [index sorting](elasticsearch://referen

 ## What’s next? [tsds-whats-next]

-Now that you know the basics, you’re ready to [create a TSDS](../data-streams/time-series-data-stream-tsds.md) or [convert an existing data stream to a TSDS](../data-streams/time-series-data-stream-tsds.md).
+Now that you know the basics, you’re ready to [create a TSDS](../data-streams/set-up-tsds.md) or [convert an existing data stream to a TSDS](../data-streams/set-up-tsds.md#convert-existing-data-stream-to-tsds).
