The Cloud ID reduces the number of steps required to start sending data from Beats or Logstash to your hosted Elasticsearch cluster on {{ece}}. Because we made it easier to send data, you can start exploring visualizations in Kibana on {{ece}} that much more quickly.
:alt: Exploring data from Beats or Logstash in Kibana after sending it to a hosted Elasticsearch cluster
:::
The Cloud ID works by assigning a unique ID to your hosted Elasticsearch cluster on {{ece}}. All deployments automatically get a Cloud ID.
You include your Cloud ID along with your {{ece}} user credentials (defined in `cloud.auth`) when you run Beats or Logstash locally, and then let {{ece}} handle all of the remaining connection details to send the data to your hosted cluster on {{ece}} safely and securely.
:alt: The Cloud ID and `elastic` user information shown when you create a deployment
:::
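To make the two settings concrete, here is a minimal sketch that runs Metricbeat against a hosted cluster. The Cloud ID and password are placeholders, and the same `cloud.id`/`cloud.auth` pair can be set in any Beat's YAML configuration or, as here, passed as command-line overrides:

```sh
# Placeholders: copy the real Cloud ID and elastic password from your own deployment.
CLOUD_ID='My_deployment:ZWNlLmV4YW1wbGUuY29tJGFiYzEyMyRkZWY0NTY='
CLOUD_AUTH='elastic:YOUR_PASSWORD'

# Load the index template and dashboards, then start shipping data.
./metricbeat setup -E cloud.id="$CLOUD_ID" -E cloud.auth="$CLOUD_AUTH"
./metricbeat -e    -E cloud.id="$CLOUD_ID" -E cloud.auth="$CLOUD_AUTH"
```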
## What are Beats and Logstash? [ece_what_are_beats_and_logstash]
Not sure why you need Beats or Logstash? Here’s what they do:
* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to Elasticsearch. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted Elasticsearch cluster on {{ece}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in Elasticsearch.
* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, in this case your hosted Elasticsearch cluster on {{ece}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion.
## Before you begin [ece_before_you_begin_16]
In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/feature-roles.md) documentation.
## Configure Beats with your Cloud ID [ece-cloud-id-beats]
The following example shows how you can send operational data from Metricbeat to {{ece}} by using the Cloud ID. Any of the available Beats will work, but we had to pick one for this example.
::::{tip}
For others, you can learn more about [getting started](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) with each Beat.
::::
To get started with Metricbeat and {{ece}}:
1. [Log into the Cloud UI](log-into-cloud-ui.md).
2. [Create a new deployment](create-deployment.md) and copy down the password for the `elastic` user.
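The remaining steps point Metricbeat itself at the deployment. A minimal sketch, with placeholder values for the Cloud ID and password:

```sh
# Add the Cloud ID and credentials to metricbeat.yml (values are placeholders).
cat >> metricbeat.yml <<'EOF'
cloud.id: "My_deployment:ZWNlLmV4YW1wbGUuY29tJGFiYzEyMyRkZWY0NTY="
cloud.auth: "elastic:YOUR_PASSWORD"
EOF

# Enable a module, load the index template and Kibana dashboards, and start Metricbeat.
./metricbeat modules enable system
./metricbeat setup
./metricbeat -e
```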
% note: this document is super outdated, the curl example doesn't use authentication and we add a note about Elasticsearch 5.0 or later to add user & password
# Find your endpoint URL [ece-connect]
To connect to your Elasticsearch cluster, you need to look up the Endpoint URLs:
1. [Log into the Cloud UI](log-into-cloud-ui.md), if you aren’t logged in already.
2. On the **Deployments** page, select one of your deployments.
Currently, we support the following ways of connecting to an Elasticsearch cluster:
RESTful API with JSON over HTTP and HTTPS
: Used by the `curl` command and most programming languages that aren’t Java. To interact with your cluster, use your Elasticsearch cluster endpoint information from the deployment overview page in the Cloud UI. Port 9200 is used for plain text, insecure HTTP connections, while port 9243 is used for HTTPS. Using HTTPS is generally recommended, as it is more secure.
If this is your first time using Elasticsearch, you can try out some `curl` commands to become familiar with the basics. If you’re on an operating system like macOS or Linux, you probably already have the `curl` command installed. For example, to connect to your cluster from the command line over HTTPS with the `curl` command:
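A minimal sketch, with a placeholder endpoint:

```sh
# Placeholder endpoint; copy the real HTTPS endpoint from the deployment overview page.
curl https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243
```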
If you created a cluster on Elasticsearch 5.0 or later or if you already enabled the security features, you must include authentication details with the `-u` parameter. For example: `curl -u elastic:W0UN0Rh9WX8eKeN69grVk3bX https://85943ce00a934471cb971009e73d2d39.us-east-1.aws.found.io:9243`. You can check [Get existing ECE security certificates](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md) for how to get the CA certificate (`elastic-ece-ca-cert.pem` in this example) and use it to connect to the Elasticsearch cluster.
::::
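Putting the note above into practice, an authenticated request that also verifies the proxy certificate might look like the following sketch; the password and endpoint are placeholders, and `elastic-ece-ca-cert.pem` is the CA certificate mentioned above:

```sh
# -u supplies the user credentials; --cacert verifies the ECE proxy certificate.
curl --cacert elastic-ece-ca-cert.pem \
     -u elastic:YOUR_PASSWORD \
     https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243
```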
## Ingest methods
There are several ways to connect to Elasticsearch, perform searches, insert data, and more. See the [ingesting data](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html) documentation.
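As a small illustration of the most direct route, the REST API, here is a hedged sketch that indexes a document and searches it back; the index name, endpoint, and password are placeholders:

```sh
# Index a document into a throwaway index.
curl -u elastic:YOUR_PASSWORD -X POST \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/my-index/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello from ECE", "@timestamp": "2025-01-01T00:00:00Z"}'

# Search it back.
curl -u elastic:YOUR_PASSWORD \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/my-index/_search?q=message:hello"
```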
You can apply this setting through [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings), as described in [](./configuring-audit-logs.md). Alternatively, you can modify `elasticsearch.yml` in all nodes and restart for the changes to take effect.
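A sketch of the dynamic route using the cluster update settings API; the endpoint and password are placeholders, and `xpack.security.audit.logfile.events.emit_request_body` is chosen purely as an illustrative audit setting, not necessarily the one this page refers to:

```sh
curl -u elastic:YOUR_PASSWORD -X PUT \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "xpack.security.audit.logfile.events.emit_request_body": true
    }
  }'
```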
::::{important}
No filtering is performed when auditing, so sensitive data might be audited in plain text when audit events include the request body. Also, the request body can contain malicious content that can break a parser consuming the audit logs.
`deploy-manage/monitor/logging-configuration/configuring-audit-logs.md`
* [{{es}} ignore policies settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/auding-settings.md#audit-event-ignore-policies): Use ignore policies for fine-grained control over which audit events are printed to the log file.
::::{tip}
In {{es}}, all auditing settings except `xpack.security.audit.enabled` are dynamic. This means you can configure them using the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings), allowing changes to take effect immediately without requiring a restart. This approach is faster and more convenient than modifying `elasticsearch.yml`.
::::
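For example, because these settings are dynamic, an ignore policy can be added on the fly; in this hedged sketch the endpoint, password, and the `noisy_system` policy name are illustrative:

```sh
# Drop audit events generated by a selected built-in user from the log file.
curl -u elastic:YOUR_PASSWORD -X PUT \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "xpack.security.audit.logfile.events.ignore_filters.noisy_system.users": ["kibana_system"]
    }
  }'
```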
For a complete description of event details and format, refer to the following resources:
`deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md`
::::{dropdown} **Using the API**
You can update a deployment using the appropriate trust settings for the {{es}} payload.
Establishing the trust between the two {{ece}} environments can be done using the [trust relationships API](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships). For example, the list of trusted environments can be obtained calling the [list trust relationships endpoint](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships):
```sh
curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST:12443//api/v1/regions/ece-region/platform/configuration/trust-relationships?include_certificate=false
```
`manage-data/data-store/index-basics.md`

Investigate your indices and perform operations from the **Indices** view.
:class: screenshot
:::
* To show details and perform operations, click the index name. To perform operations on multiple indices, select their checkboxes and then open the **Manage** menu. For more information on managing indices, refer to [Index APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices).
* To filter the list of indices, use the search bar or click a badge. Badges indicate if an index is a [follower index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-follow), a [rollup index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-get-rollup-index-caps), or [frozen](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-unfreeze).
* To drill down into the index [mappings](/manage-data/data-store/mapping.md), [settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-settings/index.md#index-modules-settings), and statistics, click an index name. From this view, you can navigate to **Discover** to further explore the documents in the index.
* To create new indices, use the **Create index** wizard.
| Integrations | Ingest data using a variety of Elastic integrations. |[Elastic Integrations](asciidocalypse://docs/integration-docs/docs/reference/ingestion-tools/index.md)|
| File upload | Upload data from a file and inspect it before importing it into {{es}}. |[Upload data files](/manage-data/ingest/upload-data-files.md)|
| APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. |[Document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)|
| OpenTelemetry | Collect and send your telemetry data to Elastic Observability |[Elastic Distributions of OpenTelemetry](https://github.com/elastic/opentelemetry?tab=readme-ov-file#elastic-distributions-of-opentelemetry)|
| Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. |[Fleet and {{agent}} overview](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) <br> [{{fleet}} and {{agent}} restrictions (Serverless)](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md) <br> [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)||
| {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. |[Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md)|
`manage-data/lifecycle/index-lifecycle-management.md`
* **Rollover**: Creates a new write index when the current one reaches a certain size, number of docs, or age.
* **Shrink**: Reduces the number of primary shards in an index.
* **Force merge**: Triggers a [force merge](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge) to reduce the number of segments in an index’s shards.
* **Delete**: Permanently remove an index, including all of its data and metadata.
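A hedged sketch of how these actions combine in a single policy; the policy name, thresholds, endpoint, and password are illustrative placeholders:

```sh
curl -u elastic:YOUR_PASSWORD -X PUT \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/_ilm/policy/logs-example-policy" \
  -H 'Content-Type: application/json' \
  -d '{
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
          }
        },
        "warm": {
          "min_age": "7d",
          "actions": {
            "shrink": { "number_of_shards": 1 },
            "forcemerge": { "max_num_segments": 1 }
          }
        },
        "delete": {
          "min_age": "90d",
          "actions": { "delete": {} }
        }
      }
    }
  }'
```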
`solutions/search/retrievers-examples.md`
## Example: Rerank results of an RRF retriever [retrievers-examples-text-similarity-reranker-on-top-of-rrf]
To demonstrate the full functionality of retrievers, the following examples also require access to a [semantic reranking model](/solutions/search/ranking/semantic-reranking.md) set up using the [Elastic inference APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference).
In this example we’ll set up a reranking service and use it with the `text_similarity_reranker` retriever to rerank our top results.
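A hedged sketch of the shape such a request takes, assuming an inference endpoint named `my-rerank-model` already exists and that the index, field names, query text, endpoint, and password are placeholders:

```sh
curl -u elastic:YOUR_PASSWORD -X POST \
  "https://ELASTICSEARCH_CLUSTER_ENDPOINT:9243/retrievers_example/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "retriever": {
      "text_similarity_reranker": {
        "retriever": {
          "rrf": {
            "retrievers": [
              { "standard": { "query": { "match": { "text": "artificial intelligence" } } } },
              { "standard": { "query": { "term": { "topic": "ai" } } } }
            ]
          }
        },
        "field": "text",
        "inference_id": "my-rerank-model",
        "inference_text": "What can AI do for me?",
        "rank_window_size": 10
      }
    }
  }'
```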