
Commit 5fd6954

Merge branch 'main' into host-schema-handling
2 parents 552b9f8 + a842bc9 commit 5fd6954

30 files changed, +791 -219 lines changed


deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md

Lines changed: 4 additions & 2 deletions
@@ -15,7 +15,7 @@ products:
 This guide compares {{ech}} deployments with {{serverless-full}} projects, highlighting key features and capabilities across different project types. Use this information to understand what's available in each deployment option or to plan migrations between platforms.

 :::{note}
-The information below reflects our strategic goals, plans and objectives and includes estimated release dates, anticipated features and functions, and proposed descriptions for commercial features. All details are for information only and are subject to change in our discretion. Information may be updated, added, or removed from this document as features or products become available, canceled, or postponed.
+The following information reflects our strategic goals, plans and objectives and includes estimated release dates, anticipated features and functions, and proposed descriptions for commercial features. All details are for information only and are subject to change in our discretion. Information might be updated, added, or removed from this document as features or products become available, canceled, or postponed.
 :::

 ## Architectural differences
@@ -37,6 +37,7 @@ The information below reflects our strategic goals, plans and objectives and inc
 | **Cross-origin resource sharing (CORS)** | Supported | Not available. Browser-based applications must route requests through a backend proxy server. |

 In Serverless, Elastic automatically manages:
+
 * Cluster scaling and optimization
 * Node management and allocation
 * Shard distribution and replication
@@ -107,13 +108,14 @@ This table compares Observability capabilities between {{ech}} deployments and O
 | **Feature** | {{ech}} | Serverless Observability Complete projects | Serverless notes |
 |---------|----------------------|-----------------------------------|------------------|
 | [**AI Assistant**](/solutions/observability/observability-ai-assistant.md) ||| |
-| **APM integration** ||| Use **Managed Intake Service** (supports Elastic APM and OTLP protocols) |
+| **APM integration** ||| Use **Managed Intake Service** (supports Elastic APM and OTLP protocols) <br> Refer to [Managed OTLP endpoint](opentelemetry://reference/motlp.md) for OTLP data ingestion |
 | [**APM Agent Central Configuration**](/solutions/observability/apm/apm-agent-central-configuration.md) ||| Not available in Serverless |
 | [**APM Tail-based sampling**](/solutions/observability/apm/transaction-sampling.md#apm-tail-based-sampling) ||| - Not available in Serverless <br>- Consider **OpenTelemetry** tail sampling processor as an alternative |
 | [**Android agent/SDK instrumentation**](opentelemetry://reference/edot-sdks/android/index.md) ||| |
 | [**AWS Firehose integration**](/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md) ||| |
 | [**Custom roles for Kibana Spaces**](/deploy-manage/manage-spaces.md#spaces-control-user-access) ||| |
 | [**Data stream lifecycle**](/manage-data/lifecycle/data-stream.md) ||| Primary lifecycle management method in Serverless |
+| [**EDOT Cloud Forwarder**](opentelemetry://reference/edot-cloud-forwarder/index.md) ||| |
 | **[Elastic Serverless Forwarder](elastic-serverless-forwarder://reference/index.md)** ||| |
 | **[Elastic Synthetics Private Locations](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-add)** ||| |
 | **[Fleet Agent policies](/reference/fleet/agent-policy.md)** ||| |
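
As a side note on the OTLP row above (not part of the diff): OTLP senders are usually pointed at the Managed OTLP endpoint through the standard OpenTelemetry exporter environment variables. A minimal sketch follows; the endpoint URL and API key are placeholders, so copy the real values from your own Serverless project.

```sh
# Sketch only: standard OpenTelemetry exporter settings aimed at a managed OTLP
# endpoint. Both values below are placeholders, not real project credentials.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://my-project.ingest.us-east-1.aws.elastic.cloud:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey <your-api-key>"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```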

deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md

Lines changed: 7 additions & 7 deletions
@@ -4,20 +4,20 @@ mapped_pages:
 applies_to:
   deployment:
     ess: ga
-    serverless: unavailable
+    serverless: ga
 products:
   - id: cloud-hosted
+  - id: cloud-serverless
 ---

 # Google Cloud Platform Marketplace [ec-billing-gcp]

-Subscribe to {{ecloud}} directly from the Google Cloud Platform (GCP). You then have the convenience of viewing your {{ecloud}} subscription as part of your GCP bill, and you do not have to supply any additional credit card information to Elastic.
+Subscribe to {{ecloud}} directly from the Google Cloud Platform (GCP). You then have the convenience of viewing your {{ecloud}} subscription as part of your GCP bill, and you do not have to supply any additional credit card information to Elastic. Your investment in Elastic draws against your cloud purchase commitment.

 Some differences exist when you subscribe to {{ecloud}} through the GCP Marketplace:

-* There is no trial period. Billing starts when you subscribe to {{ecloud}}.
-* Existing {{ecloud}} organizations cannot be converted to use the GCP Marketplace.
-* Pricing for an {{ecloud}} subscription through the GCP Marketplace follows the pricing outlined on the [{{ecloud}}](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based the {{ecloud}} [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md).
+* New {{ecloud}} customers obtain a 7-day trial period. During this period, you can use a single deployment and three projects of {{ecloud}}. After this period, usage-based billing starts unless you delete your cloud resources. Note that once customers unsubscribe from the GCP offer, their trial ends immediately. Even if they resubscribe, they cannot resume the trial.
+* Pricing for an {{ecloud}} subscription through the GCP Marketplace follows the pricing outlined on the [{{ecloud}}](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based on the {{ecloud}} [billing dimensions](../../cloud-organization/billing.md#pricing-model).
 * To access your billing information at any time go to **Account & Billing**. You can also go to **Account & Billing** and then **Usage** to view your usage hours and units per hour.

 ::::{important}
@@ -47,15 +47,15 @@ To subscribe to {{ecloud}} through the GCP Marketplace:
 You are ready to [create your first deployment](create-an-elastic-cloud-hosted-deployment.md).


-If you have existing deployments that you want to migrate to your new marketplace account, we recommend using a custom repository to take a snapshot. Then restore that snapshot to a new deployment in your new marketplace account. Check [Snapshot and restore with custom repositories](../../tools/snapshot-and-restore/elastic-cloud-hosted.md) for details.
+If you plan to use {{ech}} and have existing deployments that you want to migrate to your new marketplace account, we recommend using a custom repository to take a snapshot. Then restore that snapshot to a new deployment in your new marketplace account. Check [Snapshot and restore with custom repositories](../../tools/snapshot-and-restore/elastic-cloud-hosted.md) for details.

 ::::{tip}
 Your new account is automatically subscribed to the Enterprise subscription level. You can [change your subscription level](../../cloud-organization/billing/manage-subscription.md).
 ::::



-## Changes to your Billing Account [ec-billing-gcp-account-change]
+## Changes to your billing account [ec-billing-gcp-account-change]

 ::::{important}
 To prevent downtime, do not remove the currently used billing account before the switch to the new billing account has been confirmed by Elastic.
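
For the snapshot-based migration recommended in the second hunk, a minimal sketch of the flow using the {{es}} snapshot APIs might look like the following. The deployment URLs, repository name, bucket, and API keys are hypothetical placeholders; adapt them to your own custom repository.

```sh
# 1. In the old deployment: register a custom repository and take a snapshot.
curl -X PUT "https://old-deployment.es.example.com/_snapshot/migration_repo" \
  -H "Authorization: ApiKey $OLD_DEPLOYMENT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "type": "gcs", "settings": { "bucket": "my-migration-bucket", "client": "default" } }'

curl -X PUT "https://old-deployment.es.example.com/_snapshot/migration_repo/pre-migration?wait_for_completion=true" \
  -H "Authorization: ApiKey $OLD_DEPLOYMENT_API_KEY"

# 2. In the new marketplace deployment: register the same repository, then restore.
curl -X POST "https://new-deployment.es.example.com/_snapshot/migration_repo/pre-migration/_restore" \
  -H "Authorization: ApiKey $NEW_DEPLOYMENT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "indices": "*", "include_global_state": false }'
```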

deploy-manage/deploy/self-managed/_snippets/start-local.md

Lines changed: 1 addition & 1 deletion
@@ -36,4 +36,4 @@ For more detailed information about the `start-local` setup, refer to the [READM

 ## Next steps [local-dev-next-steps]

-Use our [quick start guides](/solutions/search/api-quickstarts.md) to learn the basics of {{es}}.
+Use our [quick start guides](/solutions/search/get-started/quickstarts.md) to learn the basics of {{es}}.
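
For context on this snippet: the `start-local` setup is typically launched with a single command before working through the quick start guides. A rough sketch (the exact script URL and options are documented in the README referenced in the hunk header above):

```sh
# Spin up a local {{es}} and {{kib}} for development and testing only.
curl -fsSL https://elastic.co/start-local | sh
```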

deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md

Lines changed: 50 additions & 10 deletions
@@ -17,7 +17,7 @@ sub:

 # Install {{es}} with RPM [rpm]

-The RPM package for {{es}} can be [downloaded from our website](#install-rpm) or from our [RPM repository](#rpm-repo). It can be used to install {{es}} on any RPM-based system such as OpenSuSE, SLES, Centos, Red Hat, and Oracle Enterprise.
+The RPM package for {{es}} can be [downloaded from our website](#install-rpm) or from our [RPM repository](#rpm-repo). It can be used to install {{es}} on any RPM-based system such as openSUSE, SUSE Linux Enterprise Server (SLES), CentOS, Red Hat Enterprise Linux (RHEL), and Oracle Linux.

 ::::{note}
 RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5. Refer to [Install {{es}} from archive on Linux or MacOS](install-elasticsearch-from-archive-on-linux-macos.md) instead.
@@ -48,14 +48,35 @@ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

 ## Step 2: Install {{es}}

-You have several options for installing the {{es}} RPM package:
+You have two options for installing the {{es}} RPM package:

 * [From the RPM repository](#rpm-repo)
 * [Manually](#install-rpm)

 ### Install from the RPM repository [rpm-repo]

-Create a file called `elasticsearch.repo` in the `/etc/yum.repos.d/` directory for RedHat based distributions, or in the `/etc/zypp/repos.d/` directory for OpenSuSE based distributions, containing:
+1. Define a repository for {{es}}.
+
+::::{tab-set}
+:group:linux-distros
+:::{tab-item} RedHat distributions
+:sync: rhel
+For RedHat based distributions, create a file called `elasticsearch.repo` in the `/etc/yum.repos.d/` directory and include the following configuration:
+
+```ini subs=true
+[elasticsearch]
+name={{es}} repository for 9.x packages
+baseurl=https://artifacts.elastic.co/packages/9.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=0
+type=rpm-md
+```
+:::
+
+:::{tab-item} openSUSE distributions
+:sync: suse
+For openSUSE based distributions, create a file called `elasticsearch.repo` in the `/etc/zypp/repos.d/` directory and include the following configuration:

 ```ini subs=true
 [elasticsearch]
@@ -67,19 +88,38 @@ enabled=0
 autorefresh=1
 type=rpm-md
 ```
-And your repository is ready for use. You can now install {{es}} with one of the following commands:
+:::
+::::
+
+2. Install {{es}} from the repository you defined earlier.
+
+::::{tab-set}
+:group:linux-distros
+:::{tab-item} RedHat distributions
+:sync: rhel
+If you use Fedora, or Red Hat Enterprise Linux 8 and later, enter the following command:
+
+```sh
+sudo dnf install --enablerepo=elasticsearch elasticsearch
+```
+
+If you use CentOS, or Red Hat Enterprise Linux 7 and earlier, enter the following command:
+```sh
+sudo yum install --enablerepo=elasticsearch elasticsearch
+```
+:::
+:::{tab-item} openSUSE distributions
+:sync: suse
+Enter the following command:

 ```sh
-sudo yum install --enablerepo=elasticsearch elasticsearch <1>
-sudo dnf install --enablerepo=elasticsearch elasticsearch <2>
 sudo zypper modifyrepo --enable elasticsearch && \
 sudo zypper install elasticsearch; \
-sudo zypper modifyrepo --disable elasticsearch <3>
+sudo zypper modifyrepo --disable elasticsearch
 ```
+:::
+::::

-1. Use `yum` on CentOS and older Red Hat based distributions.
-2. Use `dnf` on Fedora and other newer Red Hat distributions.
-3. Use `zypper` on OpenSUSE based distributions.

 ### Download and install the RPM manually [install-rpm]
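
Not part of this commit, but a typical follow-up once the RPM package is installed on a systemd-based distribution is to enable and start the service. A minimal sketch:

```sh
# Reload unit files, then enable and start {{es}} as a systemd service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service

# Quick check that the service came up.
sudo systemctl status elasticsearch.service
```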

deploy-manage/kibana-reporting-configuration.md

Lines changed: 2 additions & 2 deletions
@@ -167,12 +167,12 @@ PUT <kibana host>:<port>/api/security/role/custom_reporting_user

 If you are using an external identity provider, such as LDAP or Active Directory, you can assign roles to individual users or groups of users. Role mappings are configured in [`config/role_mapping.yml`](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md).

-For example, assign the `kibana_admin` and `reporting_user` roles to the Bill Murray user:
+For example, assign the `kibana_admin` and `custom_reporting_user` roles to the Bill Murray user:

 ```yaml
 kibana_admin:
   - "cn=Bill Murray,dc=example,dc=com"
-reporting_user:
+custom_reporting_user:
   - "cn=Bill Murray,dc=example,dc=com"
 ```
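
The hunk header above references `PUT <kibana host>:<port>/api/security/role/custom_reporting_user`. As a hedged sketch of what such a call can look like (the index pattern is a placeholder, and the exact feature-privilege keys can vary between versions, so treat the body as illustrative only):

```sh
# Illustrative only: create a custom reporting role through the {{kib}} role API.
# Adjust the index names and feature privileges to your own setup.
curl -X PUT "${KIBANA_URL}/api/security/role/custom_reporting_user" \
  -u elastic \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
    "elasticsearch": {
      "indices": [ { "names": [ "my-source-index-*" ], "privileges": [ "read" ] } ]
    },
    "kibana": [
      { "base": [], "feature": { "dashboard": [ "minimal_read", "generate_report" ] }, "spaces": [ "*" ] }
    ]
  }'
```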

deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md

Lines changed: 4 additions & 2 deletions
@@ -115,8 +115,10 @@ $$$built-in-roles-remote-monitoring-agent$$$ `remote_monitoring_agent`
 $$$built-in-roles-remote-monitoring-collector$$$ `remote_monitoring_collector`
 : Grants the minimum privileges required to collect monitoring data for the {{stack}}.

-$$$built-in-roles-reporting-user$$$ `reporting_user`
-: Grants the necessary privileges required to use {{reporting}} features in {{kib}}, including generating and downloading reports. This role implicitly grants access to all {{kib}} reporting features, with each user having access only to their own reports. Note that reporting users should also be assigned additional roles that grant read access to the [indices](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) that will be used to generate reports.
+$$$built-in-roles-reporting-user$$$ `reporting_user` {applies_to}`stack: deprecated 9.0`
+: This role is deprecated. Use [{{kib}} feature privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md#kibana-feature-privileges) instead.
+
+  Grants the necessary privileges required to use {{reporting}} features in {{kib}}, including generating and downloading reports. This role implicitly grants access to all {{kib}} reporting features, with each user having access only to their own reports. Note that reporting users should also be assigned additional roles that grant read access to the [indices](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) that will be used to generate reports.

 $$$built-in-roles-rollup-admin$$$ `rollup_admin`
 : Grants `manage_rollup` cluster privileges, which enable you to manage and execute all rollup actions.
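
Not part of the diff, but to illustrate how the replacement roles are attached to a user in the native realm, here is a minimal sketch using the {{es}} security API. The user name, password, and the `custom_reporting_user` role are placeholders.

```sh
# Create (or update) a native-realm user with a custom reporting role instead of
# the deprecated `reporting_user` built-in role. All values are placeholders.
curl -X PUT "https://localhost:9200/_security/user/reporting_analyst" \
  -u elastic \
  -H "Content-Type: application/json" \
  -d '{
    "password": "<a-strong-password>",
    "roles": [ "kibana_admin", "custom_reporting_user", "viewer" ],
    "full_name": "Reporting Analyst"
  }'
```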

docset.yml

Lines changed: 1 addition & 0 deletions
@@ -200,6 +200,7 @@ subs:
   ilm: "index lifecycle management"
   ilm-cap: "Index lifecycle management"
   ilm-init: "ILM"
+  dlm-init: "DLM"
   search-snap: "searchable snapshot"
   search-snaps: "searchable snapshots"
   search-snaps-cap: "Searchable snapshots"

manage-data/ingest/transform-enrich.md

Lines changed: 5 additions & 2 deletions
@@ -25,14 +25,14 @@ Note that you can also perform transforms on existing {{es}} indices to pivot da
 : You can use [{{agent}} processors](/reference/fleet/agent-processors.md) to sanitize or enrich raw data at the source. Use {{agent}} processors if you need to control what data is sent across the wire, or if you need to enrich the raw data with information available on the host.

 {{es}} ingest pipelines
-: You can use [{{es}} ingest pipelines](transform-enrich/ingest-pipelines.md) to enrich incoming data or normalize field data before the data is indexed. {{es}} ingest pipelines enable you to manipulate the data as it comes in. This approach helps you avoid adding processing overhead to the hosts from which you’re collecting data.
+: You can use [{{es}} ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md) to enrich incoming data or normalize field data before the data is indexed. {{es}} ingest pipelines enable you to manipulate the data as it comes in. This approach helps you avoid adding processing overhead to the hosts from which you’re collecting data.

 : When you define a pipeline, you can configure one or more processors to operate on the incoming data. A typical use case is to transform specific strings to lowercase, or to sort the elements of incoming arrays into a given order. This section describes:
 * How to create, view, edit, and delete an ingest pipeline
 * How to set up processors to transform the data
 * How to test a pipeline before putting it into production.

-: You can try out the [Parse logs](transform-enrich/example-parse-logs.md) example which shows you how to set up in ingest pipeline to transform incoming server logs into a standard format.
+: You can try out the [Parse logs](/manage-data/ingest/transform-enrich/example-parse-logs.md) example, which shows you how to set up an ingest pipeline to transform incoming server logs into a standard format.

 : The {{es}} enrich processor enables you to add data from existing indices to your incoming data, based on an enrich policy. The enrich policy contains a set of rules to match incoming documents to the fields containing the data to add. Refer to [Data enrichment](transform-enrich/data-enrichment.md) to learn how to set up an enrich processor. You can also try out a few examples that show how to enrich data based on geographic location, exact values such as email addresses or IDs, or a range of values such as a date or set of IP addresses.

@@ -41,6 +41,9 @@ Note that you can also perform transforms on existing {{es}} indices to pivot da

 : If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](logstash://reference/index.md) and other [{{ls}} filters](logstash-docs-md://lsr/filter-plugins.md) to [extend Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}.

+Ingest lag
+: Calculate the time it takes for data to travel from its source to {{es}}. This is key for monitoring performance and finding bottlenecks in your data pipelines. Learn how in [Calculate ingest lag](https://www.elastic.co/blog/calculating-ingest-lag-and-storing-ingest-time-in-elasticsearch-to-improve-observability).
+
 Index mapping
 : Index mapping lets you control the structure that incoming data has within an {{es}} index. You can define all of the fields that are included in the index and their respective data types. For example, you can set fields for dates, numbers, or geolocations, and define the fields to have specific formats.
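
To make the ingest-pipeline and ingest-lag points above concrete, here is a minimal sketch of a pipeline that lowercases a field and stamps each document with its ingest time (the pipeline and field names are illustrative, not from this commit):

```sh
# Create a small ingest pipeline: lowercase the `message` field and record the
# ingest timestamp, which can later be compared with the event timestamp to
# estimate ingest lag.
curl -X PUT "https://localhost:9200/_ingest/pipeline/normalize-and-stamp" \
  -u elastic \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Lowercase message and record ingest time",
    "processors": [
      { "lowercase": { "field": "message" } },
      { "set": { "field": "event.ingested", "value": "{{_ingest.timestamp}}" } }
    ]
  }'
```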
