diff --git a/manage-data/ingest/ingesting-data-from-applications.md b/manage-data/ingest/ingesting-data-from-applications.md index a5e6f36a2b..2025e0a9be 100644 --- a/manage-data/ingest/ingesting-data-from-applications.md +++ b/manage-data/ingest/ingesting-data-from-applications.md @@ -15,4 +15,30 @@ mapped_urls: % Use migrated content from existing pages that map to this page: % - [ ] ./raw-migrated-files/cloud/cloud/ec-ingest-guides.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-ingest-guides.md \ No newline at end of file +% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-ingest-guides.md + + +The following tutorials demonstrate how you can use the Elasticsearch language clients, Filebeat, and Logstash to ingest data from an application into a deployment in {{ech}} or {{ece}}. + +[Ingest data with Node.js](ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md) +: Get Node.js application data securely into your {{ech}} or {{ece}} deployment, where it can then be searched and modified. + +[Ingest data with Python](ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md) +: Get Python application data securely into your {{ech}} or {{ece}} deployment, where it can then be searched and modified. + +[Ingest data from Beats with Logstash as a proxy](ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md) +: Get server metrics or other types of data from Filebeat and Metricbeat into Logstash as an intermediary, and then send that data to your {{ech}} or {{ece}} deployment. Using Logstash as a proxy limits your Elastic Stack traffic through a single, external-facing firewall exception or rule. + +[Ingest data from a relational database](ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md) +: Get data from a relational database into your {{ech}} or {{ece}} deployment using the Logstash JDBC input plugin. 
Logstash can be used as an efficient way to copy records and to receive updates from a relational database as changes happen, and then send the new data to a deployment. + +[Ingest logs from a Python application using Filebeat](ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md) +: Get logs from a Python application and deliver them securely into your {{ech}} or {{ece}} deployment. You’ll set up Filebeat to monitor an ECS-formatted log file, and then view real-time visualizations of the log events in Kibana as they occur. + +[Ingest logs from a Node.js web application using Filebeat](ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md) +: Get HTTP request logs from a Node.js web application and deliver them securely into your {{ech}} or {{ece}} deployment. You’ll set up Filebeat to monitor an ECS-formatted log file and then view real-time visualizations of the log events as HTTP requests occur on your Node.js web server. + +::::{tip} +You can use [Elasticsearch ingest pipelines](transform-enrich/ingest-pipelines.md) to preprocess incoming data. This enables you to optimize how your data is indexed, and simplifies tasks such as extracting error codes from a log file and mapping geographic locations to IP addresses. 
+:::: + diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md index cf8928ecc3..1cf6cb6716 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md @@ -4,7 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-beats-logstash.html --- -# Ingest data from Beats to Elastic Cloud with Logstash as a proxy +# Ingest data from Beats with Logstash as a proxy % What needs to be done: Refine @@ -57,13 +57,13 @@ $$$ece-beats-logstash-stdout$$$ $$$ece-beats-logstash-view-kibana$$$ -This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to Elasticsearch Service. Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: +This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to your {{ech}} or {{ece}} deployment. Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: -* You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. {{ls}} then acts as a proxy through your firewall to send the Beats data to Elasticsearch Service, as shown in the following diagram: +* You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. 
{{ls}} then acts as a proxy through your firewall to send the Beats data to your deployment, as shown in the following diagram: ![A diagram showing data from multiple Beats into Logstash](../../../images/cloud-ec-logstash-beats-dataflow.png "") -* This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with Elasticsearch Service. It’s common to have many Beats dispersed across a network, each installed close to the data that it monitors, and each Beat individually communicating with an Elasticsearch Service deployment. Multiple Beats support multiple servers. Rather than configure each Beat to send its data directly to Elasticsearch Service, you can use {{ls}} to proxy this traffic through one firewall exception or rule. +* This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with your {{ech}} or {{ece}} deployment. It’s common to have many Beats dispersed across a network, each installed close to the data that it monitors, and each Beat individually communicating with a deployment. Multiple Beats support multiple servers. Rather than configure each Beat to send its data directly to your {{ech}} or {{ece}} deployment, you can use {{ls}} to proxy this traffic through one firewall exception or rule. * This setup is not suitable in simple scenarios when there is only one or a couple of Beats in use. {{ls}} makes the most sense for proxying when there are many Beats. The configuration in this example makes use of the System module, available for both Filebeat and Metricbeat. Filebeat’s System sends server system log details (that is, login success/failures, sudo *superuser do* command usage, and other key usage details). Metricbeat’s System module sends memory, CPU, disk, and other server usage metrics. @@ -83,7 +83,7 @@ The configuration in this example makes use of the System module, available for 5. Select **Create deployment** and save your Elastic deployment credentials. 
You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -98,9 +98,9 @@ Prefer not to subscribe to yet another service? You can also get Elasticsearch S ## Connect securely [ec-beats-logstash-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. You must pass the Cloud ID that you can find in the cloud console. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to your {{ech}} or {{ece}} deployment, you can use a Cloud ID to specify the connection details. You must pass the Cloud ID that you can find in the cloud console. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}} or {{ece}}, you need to think about authentication. 
Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Set up {{ls}} [ec-beats-logstash-logstash] @@ -110,7 +110,7 @@ To connect to, stream data to, and issue queries with Elasticsearch Service, you ## Set up Metricbeat [ec-beats-logstash-metricbeat] -Now that {{ls}} is downloaded and your Elasticsearch Service deployment is set up, you can configure Metricbeat to send operational data to {{ls}}. +Now that {{ls}} is downloaded and your deployment is set up, you can configure Metricbeat to send operational data to {{ls}}. Install Metricbeat as close as possible to the service that you want to monitor. For example, if you have four servers with MySQL running, we recommend that you run Metricbeat on each server. This allows Metricbeat to access your service from *localhost*. This setup does not cause any additional network traffic and enables Metricbeat to collect metrics even in the event of network problems. Metrics from multiple Metricbeat instances are combined on the {{ls}} server. @@ -126,7 +126,7 @@ Metricbeat has [many modules](https://www.elastic.co/guide/en/beats/metricbeat/c **Load the Metricbeat Kibana dashboards** -Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the Elasticsearch Service deployment. +Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. 
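The Cloud ID used throughout these steps is a compact encoding of your deployment's endpoints: a human-readable label, a colon, and a base64 payload of the form `host$elasticsearch-uuid$kibana-uuid`. The sketch below illustrates the format with a made-up deployment (illustration only, not the clients' actual parsing code):

```python
import base64

def parse_cloud_id(cloud_id: str) -> dict:
    """Illustrative only: split a Cloud ID into its service endpoints."""
    # Everything before the first ":" is just a label for humans.
    label, _, payload = cloud_id.partition(":")
    host, es_uuid, kibana_uuid = base64.b64decode(payload).decode("utf-8").split("$")
    return {
        "label": label,
        "elasticsearch": f"https://{es_uuid}.{host}",
        "kibana": f"https://{kibana_uuid}.{host}",
    }

# A synthetic Cloud ID for a fictitious deployment:
demo_id = "my-deployment:" + base64.b64encode(
    b"example.cloud.es.io$esuuid123$kbuuid456"
).decode("ascii")
endpoints = parse_cloud_id(demo_id)
```

Real Cloud IDs may also carry a port in the host segment; Beats, {{ls}}, and the language clients handle the decoding for you when you pass `cloud.id` and `cloud.auth`, so you never need to take a Cloud ID apart yourself.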
Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the {{ech}} or {{ece}} deployment. ::::{note} Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been renamed to *data views*. To learn more, check the Kibana [What’s new in 8.0](https://www.elastic.co/guide/en/kibana/8.0/whats-new.html#index-pattern-rename) page. :::: @@ -142,7 +142,7 @@ sudo ./metricbeat setup \ -E cloud.auth=: <2> ``` -1. Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html) of the metricbeat.yml. @@ -230,7 +230,7 @@ sudo ./filebeat setup \ -E cloud.auth=: <2> ``` -1. Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. 
You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html) of the filebeat.yml. :::: @@ -418,7 +418,7 @@ Now, let’s try out the {{ls}} pipeline with the Metricbeats and Filebeats conf ## Output {{ls}} data to {{es}} [ec-beats-logstash-elasticsearch] -In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to {{es}}. You modify the *beats.conf* created earlier, and specify the output credentials needed for our Elasticsearch Service deployment. Then, you start {{ls}} to send the Beats data into {{es}}. +In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to {{es}}. You modify the *beats.conf* created earlier, and specify the output credentials needed for your {{ech}} or {{ece}} deployment. Then, you start {{ls}} to send the Beats data into {{es}}. 1. In your */logstash-/* folder, open *beats.conf* for editing. 2. Replace the *output {}* section of the JSON with the following code: @@ -436,7 +436,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t } ``` - 1. Use the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. + 1. Use the Cloud ID of your {{ech}} or {{ece}} deployment. 
You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. the default usename is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](https://www.elastic.co/guide/en/beats/filebeat/current/feature-roles.html) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](https://www.elastic.co/guide/en/beats/filebeat/current/feature-roles.html) documentation. @@ -449,14 +449,14 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t If you use Metricbeat version 8.13.1, the index created in {{es}} is named *metricbeat-8.13.1*. Similarly, using the 8.13.1 version of Filebeat, the {{es}} index is named *filebeat-8.13.1*. - * *cloud_id*: This is the ID that uniquely identifies your Elasticsearch Service deployment. - * *ssl*: This should be set to `true` so that Secure Socket Layer (SSL) certificates are used for secure communication between {{ls}} and your Elasticsearch Service deployment. - * *ilm_enabled*: Enables and disables Elasticsearch Service [index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md). + * *cloud_id*: This is the ID that uniquely identifies your {{ech}} or {{ece}} deployment. + * *ssl*: This should be set to `true` so that Secure Socket Layer (SSL) certificates are used for secure communication between {{ls}} and your {{ech}} or {{ece}} deployment. 
+ * *ilm_enabled*: Enables or disables [index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md). * *api_key*: If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here. -3. **Optional**: For additional security, you can generate an {{es}} API key through the Elasticsearch Service console and configure {{ls}} to use the new key to connect securely to the Elasticsearch Service. +3. **Optional**: For additional security, you can generate an {{es}} API key and configure {{ls}} to use the new key to connect securely to {{ech}} or {{ece}}. - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. For {{ech}}, log in to [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or for {{ece}}, log in to the admin console. 2. Select the deployment and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following: @@ -524,7 +524,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ./filebeat -c filebeat.yml ``` -7. {{ls}} now outputs the Filebeat and Metricbeat data to your Elasticsearch Service instance. +7. {{ls}} now outputs the Filebeat and Metricbeat data to your {{ech}} or {{ece}} deployment. ::::{note} In this guide, you manually launch each of the Elastic stack applications through the command line interface. In production, you may prefer to configure {{ls}}, Metricbeat, and Filebeat to run as System Services. Check the following pages for the steps to configure each application to run as a service: @@ -539,7 +539,7 @@ In this guide, you manually launch each of the Elastic stack applications throug ## View data in Kibana [ec-beats-logstash-view-kibana] -In this section, you log into Elasticsearch Service, open Kibana, and view the Kibana dashboards populated with our Metricbeat and Filebeat data. 
+In this section, you log in to {{ech}} or {{ece}}, open Kibana, and view the Kibana dashboards populated with our Metricbeat and Filebeat data. **View the Metricbeat dashboard** diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md index 4b44c2f773..46400a1403 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md @@ -4,7 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-db-logstash.html --- -# Ingest data from a relational database into Elastic Cloud +# Ingest data from a relational database % Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): @@ -40,7 +40,7 @@ $$$ece-db-logstash-pipeline$$$ $$$ece-db-logstash-prerequisites$$$ -This guide explains how to ingest data from a relational database into {{ess}} through [{{ls}}](https://www.elastic.co/guide/en/logstash/current/introduction.html), using the Logstash [JDBC input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an Elasticsearch Service deployment. +This guide explains how to ingest data from a relational database into your {{ech}} or {{ece}} deployment through [{{ls}}](https://www.elastic.co/guide/en/logstash/current/introduction.html), using the Logstash [JDBC input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html). 
It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment. The code and methods presented here have been tested with MySQL. They should work with other relational databases. @@ -66,7 +66,7 @@ For this tutorial you need a source MySQL instance for Logstash to read from. A 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -81,9 +81,9 @@ Prefer not to subscribe to yet another service? You can also get Elasticsearch S ## Connect securely [ec-db-logstash-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} or {{ece}} you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. 
Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. 1. [Download](https://www.elastic.co/downloads/logstash) and unpack Logstash on the local machine that hosts MySQL or another machine granted access to the MySQL machine. @@ -98,7 +98,7 @@ The Logstash JDBC input plugin does not include any database connection drivers. ## Prepare a source MySQL database [ec-db-logstash-database] -Let’s look at a simple database from which you’ll import data and send it to Elasticsearch Service. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer to Elasticsearch Service. +Let’s look at a simple database from which you’ll import data and send it to a {{ech}} or {{ece}} deployment. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer. ### Consider the database structure and design [ec-db-logstash-database-structure] @@ -311,7 +311,7 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB ## Output to Elasticsearch [ec-db-logstash-output] -In this section, we configure Logstash to send the MySQL data to Elasticsearch. 
We modify the configuration file created in the section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-pipeline) so that data is output directly to Elasticsearch. We start Logstash to send the data, and then log into Elasticsearch Service to verify the data in Kibana. +In this section, we configure Logstash to send the MySQL data to Elasticsearch. We modify the configuration file created in the section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-pipeline) so that data is output directly to Elasticsearch. We start Logstash to send the data, and then log into your deployment to verify the data in Kibana. 1. Open the `jdbc.conf` file in the Logstash folder for editing. 2. Update the output section with the one that follows: @@ -329,7 +329,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. } ``` - 1. Use the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. + 1. Use the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. 
We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](https://www.elastic.co/guide/en/logstash/current/ls-security.html) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](https://www.elastic.co/guide/en/logstash/current/ls-security.html) documentation. @@ -341,9 +341,9 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. api_key : If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here. -3. **Optional**: For additional security, you can generate an Elasticsearch API key through the Elasticsearch Service console and configure Logstash to use the new key to connect securely to Elasticsearch Service. +3. **Optional**: For additional security, you can generate an Elasticsearch API key through the {{ech}} or {{ece}} console and configure Logstash to use the new key to connect securely to your deployment. - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. For {{ech}}, log into [{{ecloud}}](https://cloud.elastic.co?page=docs&placement=docs-body), or for {{ece}}, log into the admin console. 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following: @@ -417,9 +417,9 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. bin/logstash -f jdbc.conf ``` -6. Logstash outputs the MySQL data to your Elasticsearch Service deployment. Let’s take a look in Kibana and verify that data: +6. Logstash outputs the MySQL data to your {{ech}} or {{ece}} deployment. Let’s take a look in Kibana and verify that data: - 1. 
Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. For {{ech}}, log into [{{ecloud}}](https://cloud.elastic.co?page=docs&placement=docs-body), or for {{ece}}, log into the admin console. 2. Select the deployment and go to **☰** > **Management** > **Dev Tools** 3. Copy and paste the following API GET request into the Console pane, and then click **▶**. This queries all records in the new `rdbms_idx` index. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md index 378831465e..6b449cbf24 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md @@ -8,9 +8,9 @@ mapped_urls: This guide tells you how to get started with: -* Securely connecting to Elasticsearch Service with Node.js +* Securely connecting to your {{ech}} or {{ece}} deployment with Node.js * Ingesting data into your deployment from your application -* Searching and modifying your data on Elasticsearch Service +* Searching and modifying your data If you are an Node.js application programmer who is new to the Elastic Stack, this content helps you get started more easily. @@ -28,7 +28,7 @@ If you are an Node.js application programmer who is new to the Elastic Stack, th 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? 
You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -91,9 +91,9 @@ The example here shows what the `config` package expects. You need to update `co ## About connecting securely [ec_about_connecting_securely] -When connecting to Elasticsearch Service use a Cloud ID to specify the connection details. You must pass the Cloud ID that is found in {{kib}} or the cloud console. +When connecting to {{ech}} or {{ece}}, use a Cloud ID to specify the connection details. You must pass the Cloud ID that is found in {{kib}} or the cloud console. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. 
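To make concrete what *basic authentication* means on the wire, here is a minimal stdlib sketch of the `Authorization` header it produces. The clients build this header for you from the credentials you supply, so this is illustration only, and the username and password below are placeholders:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # HTTP Basic auth: base64("user:password") sent in the
    # Authorization header on every request.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials, not a real deployment's:
header = basic_auth_header("elastic", "deployment-password")
```

Because the credentials are only encoded, not encrypted, they must always travel over TLS, which is why every connection in this guide uses HTTPS. API keys avoid sending the password at all, which is one reason they are preferred for production.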
### Basic authentication [ec_basic_authentication] @@ -290,19 +290,19 @@ const client = new Client({ }) ``` -Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on Elasticsearch Service, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). ### Best practices [ec_best_practices] Security -: When connecting to Elasticsearch Service, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. 
Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. +: When connecting to {{ech}} or {{ece}}, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. - Do not enable sniffing when using Elasticsearch Service, since the nodes are behind a load balancer. Elasticsearch Service takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. + Do not enable sniffing when using {{ech}} or {{ece}}, since the nodes are behind a load balancer. {{ech}} and {{ece}} take care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. -Connections -: If your application connecting to Elasticsearch Service runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/_others.html). +Connections ({{ech}} only) +: If your application connecting to {{ech}} runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/_others.html). Schema : When the example code was run an index mapping was created automatically. 
The field types were selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you are designing the schema for your production use cases. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md index ecb9630fb8..10348c7746 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md @@ -4,13 +4,13 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-python.html --- -# Ingest data with Python on Elastic Cloud +# Ingest data with Python This guide tells you how to get started with: -* Securely connecting to Elasticsearch Service with Python +* Securely connecting to {{ech}} or {{ece}} with Python * Ingesting data into your deployment from your application -* Searching and modifying your data on Elasticsearch Service +* Searching and modifying your data If you are a Python application programmer who is new to the Elastic Stack, this content can help you get started more easily. @@ -49,7 +49,7 @@ elasticsearch>=7.0.0,<8.0.0 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? 
You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -64,9 +64,9 @@ Prefer not to subscribe to yet another service? You can also get Elasticsearch S ## Connect securely [ec_connect_securely] -When connecting to Elasticsearch Service you need to use your Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} or {{ece}}, you need to use your Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ### Basic authentication [ec_basic_authentication_2] @@ -333,7 +333,7 @@ POST /_security/api_key } ``` -Edit the `example.ini` file you created earlier and add the `id` and `api_key` you just created. 
You should also remove the lines for `user` and `password` you added earlier after you have tested the `api_key`, and consider changing the `elastic` password using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +Edit the `example.ini` file you created earlier and add the `id` and `api_key` you just created. You should also remove the lines for `user` and `password` you added earlier after you have tested the `api_key`, and consider changing the `elastic` password using the [{{ech}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) or the {{ece}} admin console. ```sh [DEFAULT] @@ -351,7 +351,7 @@ es = Elasticsearch( ) ``` -Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on Elasticsearch Service, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. 
If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/examples.html). @@ -359,9 +359,9 @@ For more information on refreshing an index, searching, updating, and deleting, ### Best practices [ec_best_practices_2] Security -: When connecting to Elasticsearch Service, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. +: When connecting to {{ech}} or {{ece}}, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. - Do not enable sniffing when using Elasticsearch Service, since the nodes are behind a load balancer. Elasticsearch Service takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. + Do not enable sniffing when using {{ech}} or {{ece}}, since the nodes are behind a load balancer. 
{{ech}} and {{ece}} take care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. Schema diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md index d59d711ce7..30c258fa0e 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md @@ -49,7 +49,7 @@ $$$ece-node-logs-send-ess$$$ $$$ece-node-logs-view-kibana$$$ -This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an Elasticsearch Service deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](https://www.elastic.co/guide/en/ecs-logging/overview/{{ecs-logging}}/intro.html#_get_started). +This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an {{ech}} or {{ece}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. 
Check the list of [available ECS logging plugins](https://www.elastic.co/guide/en/ecs-logging/overview/{{ecs-logging}}/intro.html#_get_started). *Time required: 1.5 hours* @@ -96,7 +96,7 @@ For the three following packages, you can create a working directory to install 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -111,9 +111,9 @@ Prefer not to subscribe to yet another service? You can also get Elasticsearch S ## Connect securely [ec-node-logs-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} or {{ece}}, you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. 
API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Create a Node.js web application with logging [ec-node-logs-create-server-script] @@ -277,13 +277,13 @@ In this step, you’ll create a Node.js application that sends HTTP requests to ## Set up Filebeat [ec-node-logs-filebeat] -Filebeat offers a straightforward, easy to configure way to monitor your Node.js log files and port the log data into Elasticsearch Service. +Filebeat offers a straightforward, easy to configure way to monitor your Node.js log files and port the log data into your deployment. **Get Filebeat** [Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data. -**Configure Filebeat to access Elasticsearch Service** +**Configure Filebeat to access {{ech}} or {{ece}}** In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing. @@ -401,9 +401,9 @@ The Filebeat data view is now available in Elasticsearch. To verify: **Optional: Use an API key to authenticate** -For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the Elasticsearch Service console, and then configure Filebeat to use the new key to connect securely to the Elasticsearch Service deployment. +For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the {{ech}} or {{ece}} console, and then configure Filebeat to use the new key to connect securely to your deployment. 
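On the wire, an Elasticsearch API key is presented as an `Authorization: ApiKey <token>` header, where the token is the base64 encoding of `id:api_key`. A quick sketch with made-up credentials (substitute the `id` and `api_key` fields returned by the create-API-key request):

```python
import base64

# Hypothetical credentials -- for illustration only.
api_key_id = "nx12a3kBv5tt96lPFiVe"
api_key_secret = "wbuD_hjSTbitsbu0G-sPqQ"

# Elasticsearch expects base64("id:api_key") after the "ApiKey" scheme.
token = base64.b64encode(f"{api_key_id}:{api_key_secret}".encode()).decode()
headers = {"Authorization": f"ApiKey {token}"}
print(headers["Authorization"])
```

Filebeat and the clients build this header for you from their API key settings; the sketch just shows what they send, which is handy when debugging with `curl`.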
-1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. For {{ech}}, log in to [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or for {{ece}}, log in to the admin console. 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following request: @@ -502,9 +502,9 @@ node webrequests.js Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕ . After that, make sure that the *log.json* file is generated as expected and is populated with several log entries. -**Verify the log entries in Elasticsearch Service** +**Verify the log entries** -The next step is to confirm that the log data has successfully found it’s way into Elasticsearch Service. +The next step is to confirm that the log data has successfully found its way into {{ech}} or {{ece}}. 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. @@ -567,5 +567,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them. -You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an Elasticsearch Service deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md#ec-ingest-methods) to learn all about working in Elasticsearch Service. 
+You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md#ec-ingest-methods) to learn all about ingesting data. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md index 1671f91e74..dd77f31624 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md @@ -60,7 +60,7 @@ python -m pip install ecs-logging 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ::: :::{tab-item} Elastic Cloud Enterprise @@ -75,9 +75,9 @@ Prefer not to subscribe to yet another service? 
You can also get Elasticsearch S ## Connect securely [ec_connect_securely_2] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} or {{ece}}, you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Create a Python script with logging [ec-python-logs-create-script] @@ -159,13 +159,13 @@ In this step, you’ll create a Python script that generates logs in JSON format ## Set up Filebeat [ec-python-logs-filebeat] -Filebeat offers a straightforward, easy to configure way to monitor your Python log files and port the log data into Elasticsearch Service. +Filebeat offers a straightforward, easy to configure way to monitor your Python log files and port the log data into your deployment. **Get Filebeat** [Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data. 
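The Filebeat wiring described in the next step condenses to a few lines of *filebeat.yml*. This is a sketch, not the tutorial's exact file: `cloud.id` and `cloud.auth` are real Filebeat settings, but the input configuration varies by Filebeat version (this uses the `filestream` input and `ndjson` parser available in recent releases), and the path and credentials are placeholders.

```yaml
filebeat.inputs:
  - type: filestream
    paths:
      - /path/to/elvis.json   # the ECS-formatted log file the tutorial generates
    parsers:
      - ndjson:
          overwrite_keys: true
          add_error_key: true
          expand_keys: true

# Placeholder credentials -- use your own Cloud ID plus either
# "user:password" or an API key, as described in this section.
cloud.id: "my-deployment:<base64-encoded-host-and-uuids>"
cloud.auth: "elastic:<password>"
```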
-**Configure Filebeat to access Elasticsearch Service** +**Configure Filebeat to access {{ech}} or {{ece}}** In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing. @@ -285,9 +285,9 @@ Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been rena **Optional: Use an API key to authenticate** -For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), and then configure Filebeat to use the new key to connect securely to the Elasticsearch Service deployment. +For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the {{ech}} or {{ece}} console, and then configure Filebeat to use the new key to connect securely to your deployment. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. For {{ech}}, log in to [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or for {{ece}}, log in to the admin console. 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following request: @@ -380,9 +380,9 @@ python elvis.py Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕ . After that, make sure that the *elvis.json* file is generated as expected and is populated with several log entries. -**Verify the log entries in Elasticsearch Service** +**Verify the log entries** -The next step is to confirm that the log data has successfully found it’s way into Elasticsearch Service. +The next step is to confirm that the log data has successfully found its way into your deployment. 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. 
Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. @@ -446,5 +446,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal. -You now know how to monitor log files from a Python application, deliver the log event data securely into an Elasticsearch Service deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md#ec-ingest-methods) to learn all about working in Elasticsearch Service. +You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md#ec-ingest-methods) to learn all about ingesting data.
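The ECS-formatted entries that the Python tutorial's `elvis.py` writes look roughly like the record below. This is a hand-rolled approximation using only the standard library — field names follow ECS, but a real application should use the `ecs-logging` library rather than building records by hand, and the exact field set and `ecs.version` value depend on the library release.

```python
import datetime
import json

def ecs_line(message: str, level: str = "info") -> str:
    """Build one newline-delimited JSON log entry in an ECS-like shape."""
    record = {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc)
            .strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
        "log.level": level,
        "message": message,
        "ecs": {"version": "1.6.0"},  # illustrative version, not authoritative
    }
    return json.dumps(record)

print(ecs_line("heartbeat"))
```

Because each line is self-contained JSON with ECS field names, Filebeat can forward it with JSON parsing enabled and Kibana can filter and visualize on `log.level`, `message`, and `@timestamp` without any extra mapping work.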