Commit 0850bd5

Merge branch 'main' into on-week-geo
2 parents 92554aa + 4f97142 commit 0850bd5

92 files changed: +424 / -156 lines


deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md

Lines changed: 3 additions & 3 deletions
@@ -274,10 +274,10 @@ $$$azure-integration-billing-different-deployments$$$How do I get different Elas
 : See [Integrated billing](#ec-azure-integration-billing-summary). To have different Elastic deployment/resources costs reported to different Azure subscriptions, they need to be in separate {{ecloud}} organizations. To create a separate {{ecloud}} organization from an Azure subscription, you will need to subscribe as a user who is not already part of an existing {{ecloud}} organization.

 $$$azure-integration-billing-elastic-costs$$$Why can’t I see Elastic resources costs in Azure Cost Explorer?
-: The costs associated with Elastic resources (deployments) are reported under unassigned in the Azure Portal. Refer to [Understand your Azure external services charges](https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/understand-azure-marketplace-charges) in the Microsoft Documentation to understand Elastic resources/deployments costs. For granular Elastic resources costs, refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md).
+: The costs associated with Elastic resources (deployments) are reported under unassigned in the Azure Portal. Refer to [Understand your Azure external services charges](https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/understand-azure-marketplace-charges) in the Microsoft Documentation to understand Elastic resources/deployments costs. For granular Elastic resources costs, refer to [Monitor and analyze your account usage](../../cloud-organization/billing/monitor-analyze-usage.md).

 $$$azure-integration-billing-deployments$$$Why don’t I see my individual Elastic resources (deployments) in the Azure Marketplace Invoice?
-: The way Azure Marketplace Billing Integration works today, the costs for Elastic resources (deployments) are reported for an {{ecloud}} organization as a single line item, reported against the Marketplace SaaS ID. This includes the Elastic deployments created using the Azure Portal, API, SDK, or CLI, and also the Elastic deployments created directly from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) in the respective {{ecloud}} organization. For granular Elastic resources costs refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md). As well, for more detail refer to [Integrated billing](#ec-azure-integration-billing-summary).
+: The way Azure Marketplace Billing Integration works today, the costs for Elastic resources (deployments) are reported for an {{ecloud}} organization as a single line item, reported against the Marketplace SaaS ID. This includes the Elastic deployments created using the Azure Portal, API, SDK, or CLI, and also the Elastic deployments created directly from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) in the respective {{ecloud}} organization. For granular Elastic resources costs refer to [Monitor and analyze your account usage](../../cloud-organization/billing/monitor-analyze-usage.md). As well, for more detail refer to [Integrated billing](#ec-azure-integration-billing-summary).

 :::{image} /deploy-manage/images/cloud-ec-azure-billing-example.png
 :alt: Example billing report in the {{ecloud}} console

@@ -289,7 +289,7 @@ $$$azure-integration-billing-instance-values$$$Why can’t I find Instance ID an

 For instance: Elastic Organization `Org1` is associated with a Marketplace SaaS (Microsoft.SaaS) asset `AzureElastic_GUID_NAME`. The Elastic resources (`Microsoft.Elastic`) `E1`, `E2`, and `E3` within `Org1` are all mapped to `AzureElastic_GUID_NAME`.

-`Microsoft.SaaS` (Instance name) asset is shown in the Azure Marketplace invoice and represents costs related to an {{ecloud}} organization and not individual Elastic resources (deployments). To see the cost breakdown for individual Elastic resources (deployments), refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md).
+`Microsoft.SaaS` (Instance name) asset is shown in the Azure Marketplace invoice and represents costs related to an {{ecloud}} organization and not individual Elastic resources (deployments). To see the cost breakdown for individual Elastic resources (deployments), refer to [Monitor and analyze your account usage](../../cloud-organization/billing/monitor-analyze-usage.md).

 :::{image} /deploy-manage/images/cloud-ec-azure-missing-instance-id.png
 :alt: Instance ID not found error in Azure console

deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md

Lines changed: 4 additions & 1 deletion
@@ -117,7 +117,9 @@ This table compares Observability capabilities between {{ech}} deployments and O
 | [**AWS Firehose integration**](/solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md) ||| |
 | [**Custom roles for Kibana Spaces**](/deploy-manage/manage-spaces.md#spaces-control-user-access) ||| |
 | [**Data stream lifecycle**](/manage-data/lifecycle/data-stream.md) ||| Primary lifecycle management method in Serverless |
-| [**EDOT Cloud Forwarder**](opentelemetry://reference/edot-cloud-forwarder/index.md) ||| |
+| [**EDOT Central Configuration**](opentelemetry://reference/central-configuration.md) ||| |
+| [**EDOT Cloud Forwarder**](opentelemetry://reference/edot-cloud-forwarder/index.md) ||| |
+| [**EDOT Tail-based sampling**](elastic-agent://reference/edot-collector/config/tail-based-sampling.md) ||| |
 | **[Elastic Serverless Forwarder](elastic-serverless-forwarder://reference/index.md)** ||| |
 | **[Elastic Synthetics Private Locations](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-add)** ||| |
 | **[Fleet Agent policies](/reference/fleet/agent-policy.md)** ||| |

@@ -127,6 +129,7 @@ This table compares Observability capabilities between {{ech}} deployments and O
 | **[Kibana Alerts](/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md)** ||| |
 | **[LogsDB index mode](/manage-data/data-store/data-streams/logs-data-stream.md)** ||| - Reduces storage footprint <br> - Enabled by default <br>- Cannot be disabled |
 | **[Logs management](/solutions/observability/logs.md)** ||| |
+| **[Managed OTLP Endpoint](opentelemetry:///reference/motlp.md)** ||| |
 | **[Metrics monitoring](/solutions/observability/apm/metrics.md)** ||| |
 | **[Observability SLO](/solutions/observability/incident-management/service-level-objectives-slos.md)** ||| |
 | [**Real User Monitoring (RUM)**](/solutions/observability/applications/user-experience.md) || **Planned** | Anticipated in a future release |

explore-analyze/machine-learning/anomaly-detection/ml-ad-algorithms.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ products:

 # Anomaly detection algorithms [ml-ad-algorithms]

-The {{anomaly-detect}} {{ml-features}} use a bespoke amalgamation of different techniques such as clustering, various types of time series decomposition, Bayesian distribution modeling, and correlation analysis. These analytics provide sophisticated real-time automated {{anomaly-detect}} for time series data.
+The {{anomaly-detect}} {{ml-features}} use a combination of advanced mathematical techniques such as clustering, various types of time series decomposition, Bayesian distribution modeling, and correlation analysis. These analytics provide sophisticated real-time automated {{anomaly-detect}} for time series data.

 The {{ml}} analytics statistically model the time-based characteristics of your data by observing historical behavior and adapting to new data. The model represents a baseline of normal behavior and can therefore be used to determine how anomalous new events are.

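The "baseline of normal behavior" idea in this file can be illustrated with a toy sketch. This is not Elastic's actual algorithm (which, per the text, combines clustering, time series decomposition, and Bayesian modeling); it is only a minimal stand-in showing how a learned baseline lets you score how anomalous a new event is. All values are made up:

```python
import statistics

# Toy baseline model: learn the mean and standard deviation of historical
# values, then score new events by how far they sit from that baseline
# (a plain z-score). Real anomaly detection adapts this model over time.
history = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

baseline_mean = statistics.mean(history)
baseline_stdev = statistics.stdev(history)

def anomaly_score(value):
    """Distance from the baseline in standard deviations."""
    return abs(value - baseline_mean) / baseline_stdev

typical = anomaly_score(10.1)  # near the baseline: low score
outlier = anomaly_score(25.0)  # far from the baseline: high score
```

A low score means the event fits the modeled normal behavior; a high score flags it for attention.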
manage-data/ingest/ingest-reference-architectures.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ You can host {{es}} on your own hardware or send your data to {{es}} on {{ecloud
 | --- | --- |
 | [*{{agent}} to Elasticsearch*](./ingest-reference-architectures/agent-to-es.md)<br><br>![Image showing {{agent}} collecting data and sending to {{es}}](/manage-data/images/ingest-ea-es.png "") | An [{{agent}} integration](https://docs.elastic.co/en/integrations) is available for your data source:<br><br>* Software components with [{{agent}} installed](./ingest-reference-architectures/agent-installed.md)<br>* Software components using [APIs for data collection](./ingest-reference-architectures/agent-apis.md)<br> |
 | [*{{agent}} to {{ls}} to Elasticsearch*](./ingest-reference-architectures/agent-ls.md)<br><br>![Image showing {{agent}} to {{ls}} to {{es}}](/manage-data/images/ingest-ea-ls-es.png "") | You need additional capabilities offered by {{ls}}:<br><br>* [**enrichment**](./ingest-reference-architectures/ls-enrich.md) between {{agent}} and {{es}}<br>* [**persistent queue (PQ) buffering**](./ingest-reference-architectures/lspq.md) to accommodate network issues and downstream unavailability<br>* [**proxying**](./ingest-reference-architectures/ls-networkbridge.md) in cases where {{agent}}s have network restrictions for connecting outside of the {{agent}} network<br>* data needs to be [**routed to multiple**](./ingest-reference-architectures/ls-multi.md) {{es}} clusters and other destinations depending on the content<br> |
-| [*{{agent}} to proxy to Elasticsearch*](./ingest-reference-architectures/agent-proxy.md)<br><br>![Image showing connections between {{agent}} and {{es}} using a proxy](/manage-data/images/ingest-ea-proxy-es.png "") | Agents have [network restrictions](./ingest-reference-architectures/agent-proxy.md) that prevent connecting outside of the {{agent}} network Note that [{{ls}} as proxy](./ingest-reference-architectures/ls-networkbridge.md) is one option.<br> |
+| [*{{agent}} to proxy to Elasticsearch*](./ingest-reference-architectures/agent-proxy.md)<br><br>![Image showing connections between {{agent}} and {{es}} using a proxy](/manage-data/images/ingest-ea-proxy-es.png "") | Agents have [network restrictions](./ingest-reference-architectures/agent-proxy.md) that prevent connecting outside of the {{agent}} network. [{{ls}} as proxy](./ingest-reference-architectures/ls-networkbridge.md) is one option.<br> |
 | [*{{agent}} to {{es}} with Kafka as middleware message queue*](./ingest-reference-architectures/agent-kafka-es.md)<br><br>![Image showing {{agent}} collecting data and using Kafka as a message queue enroute to {{es}}](/manage-data/images/ingest-ea-kafka.png "") | Kafka is your [middleware message queue](./ingest-reference-architectures/agent-kafka-es.md):<br><br>* [Kafka ES sink connector](./ingest-reference-architectures/agent-kafka-essink.md) to write from Kafka to {{es}}<br>* [{{ls}} to read from Kafka and route to {{es}}](./ingest-reference-architectures/agent-kafka-ls.md)<br> |
 | [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)<br><br>![Image showing {{ls}} collecting data and sending to {{es}}](/manage-data/images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](logstash-docs-md://lsr/input-plugins.md).<br> |
 | [*Elastic air-gapped architectures*](./ingest-reference-architectures/airgapped-env.md)<br><br>![Image showing {{stack}} in an air-gapped environment](/manage-data/images/ingest-ea-airgapped.png "") | You want to deploy {{agent}} and {{stack}} in an air-gapped environment (no access to outside networks)<br> |

manage-data/ingest/ingest-reference-architectures/agent-kafka-es.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ products:
 :::

 Ingest models
-: [{{agent}} to {{ls}} to Kafka to {{ls}} to {{es}}: Kafka as middleware message queue](agent-kafka-ls.md).<br> {{ls}} reads data from Kafka and routes it to {{es}} clusters (and/or other destinations).
+: [{{agent}} to {{ls}} to Kafka to {{ls}} to {{es}}: Kafka as middleware message queue](agent-kafka-ls.md).<br> {{ls}} reads data from Kafka and routes it to {{es}} clusters and other destinations.

 [{{agent}} to {{ls}} to Kafka to Kafka ES Sink to {{es}}: Kafka as middleware message queue](agent-kafka-essink.md).<br> Kafka ES sink connector reads from Kafka and writes to {{es}}.

manage-data/ingest/ingest-reference-architectures/ls-for-input.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ products:
 :::

 Ingest model
-: {{ls}} to collect data from sources not currently supported by {{agent}} and sending the data to {{es}}. Note that the data transformation still happens within the {{es}} ingest pipeline.
+: {{ls}} to collect data from sources not currently supported by {{agent}} and sending the data to {{es}}. The data transformation still happens within the {{es}} ingest pipeline.

 Use when
 : {{agent}} doesn’t currently support your data source.

manage-data/ingest/ingest-reference-architectures/ls-multi.md

Lines changed: 2 additions & 2 deletions
@@ -13,15 +13,15 @@ products:
 :::

 Ingest model
-: {{agent}} to {{ls}} to {{es}} clusters and/or additional destinations
+: {{agent}} to {{ls}} to {{es}} clusters and additional destinations

 Use when
 : Data collected by {{agent}} needs to be routed to different {{es}} clusters or non-{{es}} destinations depending on the content

 Example
 : Let’s take an example of a Windows workstation, for which we are collecting different types of logs using the System and Windows integrations. These logs need to be sent to different {{es}} clusters and to S3 for backup and a mechanism to send it to other destinations such as different SIEM solutions. In addition, the {{es}} destination is derived based on the type of datastream and an organization identifier.

-In such use cases, agents send the data to {{ls}} as a routing mechanism to different destinations. Note that the System and Windows integrations must be installed on all {{es}} clusters to which the data is routed.
+In such use cases, agents send the data to {{ls}} as a routing mechanism to different destinations. The System and Windows integrations must be installed on all {{es}} clusters to which the data is routed.


 Sample config

manage-data/ingest/ingest-reference-architectures/lspq.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Ingest model
 : {{agent}} to {{ls}} persistent queue to {{es}}

 Use when
-: Your data flow may encounter network issues, bursts of events, and/or downstream unavailability and you need the ability to buffer the data before ingestion.
+: Your data flow may encounter network issues, bursts of events, or downstream unavailability, and you need the ability to buffer the data before ingestion.


 ## Resources [lspq-resources]

manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md

Lines changed: 3 additions & 3 deletions
@@ -109,7 +109,7 @@ For this example, let’s create a new database *es_db* with table *es_table*, a

 There are two possible ways to address this:

-* You can use "soft deletes" in your source database. Essentially, a record is first marked for deletion through a boolean flag. Other programs that are currently using your source database would have to filter out "soft deletes" in their queries. The "soft deletes" are sent over to Elasticsearch, where they can be processed. After that, your source database and Elasticsearch must both remove these "soft deletes."
+* You can use "soft deletes" in your source database. Essentially, a record is first marked for deletion through a boolean flag. Other programs that are currently using your source database would have to filter out "soft deletes" in their queries. The "soft deletes" are sent over to Elasticsearch, where they can be processed. After that, your source database and Elasticsearch must both remove these "soft deletes".
 * You can periodically clear the Elasticsearch indices that are based off of the database, and then refresh Elasticsearch with a fresh ingest of the contents of the database.

 3. Log in to your MySQL server and add three records to your new database:
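The "soft delete" option described in the hunk above can be sketched in a few lines of Python. These are in-memory stand-ins for the source table and the Elasticsearch index; the names (`soft_delete`, `sync_deletes`) are illustrative, not part of the tutorial:

```python
# In-memory stand-ins for a source table and an Elasticsearch index.
# A row is never removed immediately; it is first flagged as deleted.
rows = [
    {"id": 1, "name": "Targaryen", "deleted": False},
    {"id": 2, "name": "Lannister", "deleted": False},
    {"id": 3, "name": "Stark", "deleted": False},
]
index = {r["id"]: r["name"] for r in rows}  # mirrors the table

def soft_delete(rows, row_id):
    """Mark a row for deletion through a boolean flag."""
    for row in rows:
        if row["id"] == row_id:
            row["deleted"] = True

def live_rows(rows):
    """What other consumers of the source database should see."""
    return [r for r in rows if not r["deleted"]]

def sync_deletes(rows, index):
    """Propagate soft deletes to the index, then purge them on both sides."""
    for r in [r for r in rows if r["deleted"]]:
        index.pop(r["id"], None)   # process the delete on the index side
    rows[:] = live_rows(rows)      # then remove the markers from the source

soft_delete(rows, 2)
sync_deletes(rows, index)
```

The key point is the ordering: the flag travels to the index first, and only afterwards do both sides drop the marked record.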
@@ -122,7 +122,7 @@ For this example, let’s create a new database *es_db* with table *es_table*, a
 (3,"Stark");
 ```

-4. Verify your data with a SQL statement:
+4. Verify your data with an SQL statement:

 ```txt
 select * from es_table;
@@ -364,7 +364,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch.
 }
 ```

-4. At this point, if you simply restart Logstash as is with your new output, then no MySQL data is sent to our Elasticsearch index.
+4. If you simply restart Logstash as is with your new output, then no MySQL data is sent to our Elasticsearch index.

 Why? Logstash retains the previous `sql_last_value` timestamp and sees that no new changes have occurred in the MySQL database since that time. Therefore, based on the SQL query that we configured, there’s no new data to send to Logstash.

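The `sql_last_value` behavior described in step 4 can be mimicked with a small Python sketch: an in-memory stand-in for the saved timestamp that Logstash's JDBC input tracks between runs. The function name and timestamps here are illustrative:

```python
from datetime import datetime

# Rows with a modification timestamp, as in the es_table example.
rows = [
    {"id": 1, "name": "Targaryen", "modified": datetime(2024, 1, 1, 9, 0)},
    {"id": 2, "name": "Lannister", "modified": datetime(2024, 1, 1, 9, 5)},
    {"id": 3, "name": "Stark",     "modified": datetime(2024, 1, 1, 9, 10)},
]

def fetch_since(rows, sql_last_value):
    """Mimic the JDBC input: select rows newer than the saved timestamp,
    then advance the timestamp to the newest row seen."""
    new = [r for r in rows if r["modified"] > sql_last_value]
    if new:
        sql_last_value = max(r["modified"] for r in new)
    return new, sql_last_value

last = datetime.min
batch, last = fetch_since(rows, last)   # first run: every row is "new"
again, last = fetch_since(rows, last)   # restart with no changes: empty
```

This is why a plain restart sends nothing: the saved timestamp already covers every existing row, so only rows modified after it would be fetched.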
manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ For the three following packages, you can create a working directory to install
 npm install @elastic/ecs-winston-format
 ```

-* [Got](https://www.npmjs.com/package/got): Got is a "Human-friendly and powerful HTTP request library for Node.js." - this plugin can be used to query the sample web server used in the tutorial. To install the Got package, run the following command in your working directory:
+* [Got](https://www.npmjs.com/package/got): Got is a "Human-friendly and powerful HTTP request library for Node.js" - this plugin can be used to query the sample web server used in the tutorial. To install the Got package, run the following command in your working directory:

 ```sh
 npm install got
