From 2d4be1adc8a8ecc087c928ab2194fe7f6530ab1f Mon Sep 17 00:00:00 2001
From: wajihaparvez
Date: Thu, 20 Feb 2025 15:44:00 -0500
Subject: [PATCH 1/2] [Docs] General cleanup of Manage data docs

---
 .../data-streams/logs-data-stream.md          |  6 ++---
 .../data-store/data-streams/set-up-tsds.md    |  2 +-
 manage-data/data-store/mapping.md             | 24 -------------------
 .../ignore-missing-component-templates.md     | 11 ---------
 ...icsearch-service-with-logstash-as-proxy.md | 16 +++++++------
 ...nal-database-into-elasticsearch-service.md |  2 +-
 ...-from-python-application-using-filebeat.md |  2 +-
 manage-data/ingest/tools.md                   | 22 -----------------
 manage-data/ingest/upload-data-files.md       |  7 ------
 9 files changed, 15 insertions(+), 77 deletions(-)

diff --git a/manage-data/data-store/data-streams/logs-data-stream.md b/manage-data/data-store/data-streams/logs-data-stream.md
index 59d823aaa6..f52081ea13 100644
--- a/manage-data/data-store/data-streams/logs-data-stream.md
+++ b/manage-data/data-store/data-streams/logs-data-stream.md
@@ -65,7 +65,7 @@ In `logsdb` index mode, indices are sorted by the fields `host.name` and `@times
 
 * To prioritize the latest data, `host.name` is sorted in ascending order and `@timestamp` is sorted in descending order.
 
-You can override the default sort settings by manually configuring `index.sort.field` and `index.sort.order`. For more details, see [*Index Sorting*](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-settings/index-sorting-settings.md).
+You can override the default sort settings by manually configuring `index.sort.field` and `index.sort.order`. For more details, see [Index Sorting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-settings/index-sorting-settings.md).
 
 To modify the sort configuration of an existing data stream, update the data stream’s component templates, and then perform or wait for a [rollover](../data-streams.md#data-streams-rollover).
 
@@ -176,6 +176,6 @@ The `logsdb` index mode uses the following settings:
 
 * **`index.mapping.total_fields.ignore_dynamic_beyond_limit`**: `true`
 
-## Notes about upgrading to Logsdb [upgrade-to-logsdb-notes]
+% ## Notes about upgrading to Logsdb [upgrade-to-logsdb-notes]
 
-TODO: add notes.
+% TODO: add notes.
diff --git a/manage-data/data-store/data-streams/set-up-tsds.md b/manage-data/data-store/data-streams/set-up-tsds.md
index 66a3ed10e0..780cd45a89 100644
--- a/manage-data/data-store/data-streams/set-up-tsds.md
+++ b/manage-data/data-store/data-streams/set-up-tsds.md
@@ -220,7 +220,7 @@ The reasons for this is that each component template needs to be valid on its ow
 Now that you’ve set up your TSDS, you can manage and use it like a regular data stream. For more information, refer to:
 
-* [*Use a data stream*](use-data-stream.md)
+* [Use a data stream](use-data-stream.md)
 * [Change mappings and settings for a data stream](modify-data-stream.md#data-streams-change-mappings-and-settings)
 * [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream)
 
diff --git a/manage-data/data-store/mapping.md b/manage-data/data-store/mapping.md
index 6ecca8abd1..948c3865fe 100644
--- a/manage-data/data-store/mapping.md
+++ b/manage-data/data-store/mapping.md
@@ -6,30 +6,6 @@ mapped_urls:
 
 # Mapping
 
-% What needs to be done: Refine
-
-% GitHub issue: docs-projects#370
-
-% Scope notes: Use the content in the linked source and add links to the relevent reference content.
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/mapping.md
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md
-% Notes: redirect only
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$mapping-limit-settings$$$
-
-$$$updating-field-mappings$$$
-
-$$$mapping-manage-update$$$
-
-$$$mapping-dynamic$$$
-
-$$$mapping-explicit$$$
-
 Mapping is the process of defining how a document and the fields it contains are stored and indexed. Each document is a collection of fields, which each have their own [data type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/field-data-types.md).
 
 When mapping your data, you create a mapping definition, which contains a list of fields that are pertinent to the document. A mapping definition also includes [metadata fields](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/document-metadata-fields.md), like the `_source` field, which customize how a document’s associated metadata is handled.
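For context on the mapping page touched above: an explicit mapping is declared at index creation time. A minimal sketch, in which the index name `my-index-000001` and the three fields are placeholders rather than anything this page prescribes:

```console
PUT /my-index-000001
{
  "mappings": {
    "properties": {
      "age":     { "type": "integer" },
      "email":   { "type": "keyword" },
      "created": { "type": "date" }
    }
  }
}
```

Documents indexed into `my-index-000001` are then validated against these field types, while any fields not listed are still added by dynamic mapping, which is enabled by default.
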
diff --git a/manage-data/data-store/templates/ignore-missing-component-templates.md b/manage-data/data-store/templates/ignore-missing-component-templates.md
index 7570bb0281..634e77b42f 100644
--- a/manage-data/data-store/templates/ignore-missing-component-templates.md
+++ b/manage-data/data-store/templates/ignore-missing-component-templates.md
@@ -6,17 +6,6 @@ mapped_urls:
 
 # Ignore missing component templates
 
-% What needs to be done: Refine
-
-% GitHub issue: docs-projects#372
-
-% Scope notes: Combine with "Usage example" subpage.
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/ignore_missing_component_templates.md
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/_usage_example.md
-
 When an index template references a component template that might not exist, the `ignore_missing_component_templates` configuration option can be used. With this option, every time a data stream is created based on the index template, the existence of the component template will be checked. If it exists, it will be used to form the index’s composite settings. If it is missing, it will be ignored. The following example demonstrates how this configuration option works.
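The usage example that page builds toward looks roughly like the following sketch, where the `logs-foo*` names are placeholders: the index template composes one component template that exists and tolerates one that may only be added later.

```console
PUT _component_template/logs-foo_component1
{
  "template": {
    "mappings": {
      "properties": {
        "host.name": { "type": "keyword" }
      }
    }
  }
}

PUT _index_template/logs-foo
{
  "index_patterns": ["logs-foo-*"],
  "data_stream": {},
  "composed_of": ["logs-foo_component1", "logs-foo_component2"],
  "ignore_missing_component_templates": ["logs-foo_component2"],
  "priority": 500
}
```

Creating a data stream that matches `logs-foo-*` succeeds even though `logs-foo_component2` does not exist yet; backing indices created after it exists pick up its settings.
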
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
index 378fe0dfad..c054a3f62e 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
@@ -172,7 +172,7 @@ Loaded dashboards
 2. Scroll down to the *Elasticsearch Output* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
 3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts*, as follows:
 
-```txt
+```yaml
 # ---------------- Logstash Output -----------------
 output.logstash:
   # The Logstash hosts
@@ -219,19 +219,21 @@ The system module is now enabled in Filebeat and it will be used the next time F
 
 **Load the Filebeat Kibana dashboards**
 
-Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view *filebeat-**, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet.
+Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view `filebeat-*`, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet.
 
 1. Open a command line instance and then go to */filebeat-/*
 2. Run the following command:
 
-```txt
+```sh
 sudo ./filebeat setup \
   -E cloud.id= \ <1>
   -E cloud.auth=: <2>
 ```
 
 1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important}
+2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.
+
+::::{important}
 Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the filebeat.yml.
 ::::
 
@@ -265,7 +267,7 @@ The data views for *filebeat-** and *metricbeat-** are now available in {{es}}.
 2. Scroll down to the *Outputs* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
 3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts* as follows:
 
-```txt
+```yaml
 # ---------------- Logstash Output -----------------
 output.logstash:
   # The Logstash hosts
@@ -283,7 +285,7 @@ Now the Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline
 1. In */logstash-/*, create a new text file named *beats.conf*.
 2. Copy and paste the following code into the new text file. This code creates a {{ls}} pipeline that listens for connections from Beats on port 5044 and writes to standard out (typically to your terminal) with formatting provided by the {{ls}} rubydebug output plugin.
 
-   ```txt
+   ```json
    input {
      beats{port => 5044} <1>
    }
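Before pointing Beats at this pipeline, the configuration file can be validated first. A hedged sketch, assuming {{ls}} is run from its install directory and the pipeline was saved as *beats.conf* as described in the steps above:

```sh
# Check the pipeline configuration for syntax errors without starting it
bin/logstash -f beats.conf --config.test_and_exit

# Start the pipeline, reloading it automatically when beats.conf changes
bin/logstash -f beats.conf --config.reload.automatic
```

If the first command reports `Configuration OK`, the pipeline is safe to start.
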
@@ -423,7 +425,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t
 1. In your */logstash-/* folder, open *beats.conf* for editing.
 2. Replace the *output {}* section of the JSON with the following code:
 
-   ```txt
+   ```json
    output {
      elasticsearch {
        index => "%{[@metadata][beat]}-%{[@metadata][version]}"
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
index fd78194bf8..9674689b6f 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
@@ -316,7 +316,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch.
 1. Open the `jdbc.conf` file in the Logstash folder for editing.
 2. Update the output section with the one that follows:
 
-   ```txt
+   ```json
    output {
      elasticsearch {
        index => "rdbms_idx"
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
index bfd7a25f58..07edaa03b7 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
@@ -169,7 +169,7 @@ Filebeat offers a straightforward, easy to configure way to monitor your Python
 
 In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing.
 
-```txt
+```yaml
 # =============================== Elastic Cloud ================================
 
 # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
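To make the *Elastic Cloud* section of *filebeat.yml* shown above concrete: uncommented, the two settings might look like the following sketch. Both values are placeholders, not a real deployment; `cloud.id` comes from your deployment details, and `cloud.auth` is a `username:password` pair.

```yaml
cloud.id: "my-deployment:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRhYmMxMjMkZGVmNDU2"
cloud.auth: "elastic:YOUR_PASSWORD"
```

With these set, Filebeat needs no explicit `output.elasticsearch.hosts` entry for the cloud deployment.
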
diff --git a/manage-data/ingest/tools.md b/manage-data/ingest/tools.md
index 1e858d96ad..a5a296bd45 100644
--- a/manage-data/ingest/tools.md
+++ b/manage-data/ingest/tools.md
@@ -10,28 +10,6 @@ mapped_urls:
 
 # Ingest tools overview
 
-% What needs to be done: Finish draft
-
-% GitHub issue: docs-projects#327
-
-% Scope notes: Read more about the scope in the tracking issue
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md
-% Notes: These are resources to pull from, but this new "Ingest tools overiew" page will not be a replacement for any of these old AsciiDoc pages. File upload: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#upload-data-kibana https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-file-upload.html API: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#_add_data_with_programming_languages https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-api.html OpenTelemetry: https://github.com/elastic/opentelemetry Fleet and Agent: https://www.elastic.co/guide/en/fleet/current/fleet-overview.html https://www.elastic.co/guide/en/serverless/current/fleet-and-elastic-agent.html Logstash: https://www.elastic.co/guide/en/logstash/current/introduction.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html https://www.elastic.co/guide/en/serverless/current/logstash-pipelines.html Beats: https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-beats.html APM: /solutions/observability/apps/application-performance-monitoring-apm.md Application logging: https://www.elastic.co/guide/en/observability/current/application-logs.html ECS logging: https://www.elastic.co/guide/en/observability/current/logs-ecs-application.html Elastic serverless forwarder for AWS: https://www.elastic.co/guide/en/esf/current/aws-elastic-serverless-forwarder.html Integrations: https://www.elastic.co/guide/en/integrations/current/introduction.html Search connectors: https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-integrations-connector-client.html Web crawler: https://github.com/elastic/crawler/tree/main/docs
-% - [This comparison page is being moved to the reference section, so I'm linking to that from the current page - Wajiha] ./raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
-% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
-% - [x] https://www.elastic.co/customer-success/data-ingestion
-% - [x] https://github.com/elastic/ingest-docs/pull/1373
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-% These IDs are from content that I'm not including on this current page. I've resolved them by changing the internal links to anchor links where needed. - Wajiha
-
-$$$supported-outputs-beats-and-agent$$$
-
-$$$additional-capabilities-beats-and-agent$$$
-
 Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools. Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.
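Whichever tool the overview above steers you toward, every path ends at the {{es}} APIs, so the smallest end-to-end ingestion test is a direct API call. A minimal sketch using the `_bulk` endpoint; the index name and document are placeholders:

```console
POST _bulk
{ "index": { "_index": "my-index-000001" } }
{ "message": "hello world", "@timestamp": "2025-02-20T15:44:00.000Z" }

GET my-index-000001/_search?q=message:hello
```

Each `_bulk` action line is followed by its source document on the next line, which is what makes the endpoint efficient for batches.
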
diff --git a/manage-data/ingest/upload-data-files.md b/manage-data/ingest/upload-data-files.md index 212fe3842d..0d5e6328f5 100644 --- a/manage-data/ingest/upload-data-files.md +++ b/manage-data/ingest/upload-data-files.md @@ -6,13 +6,6 @@ mapped_urls: # Upload data files [upload-data-kibana] -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md -% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md - % Note from David: I've removed the ID $$$upload-data-kibana$$$ from manage-data/ingest.md as those links should instead point to this page. So, please ensure that the following ID is included on this page. I've added it beside the title. You can upload files, view their fields and metrics, and optionally import them to {{es}} with the Data Visualizer. From 091e745e5bab5d309cd0a6d84b7b0b3367dac58d Mon Sep 17 00:00:00 2001 From: wajihaparvez Date: Thu, 20 Feb 2025 16:07:15 -0500 Subject: [PATCH 2/2] Fix link --- troubleshoot/elasticsearch/troubleshooting-searches.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/troubleshoot/elasticsearch/troubleshooting-searches.md b/troubleshoot/elasticsearch/troubleshooting-searches.md index 6af0aa1440..12fac25569 100644 --- a/troubleshoot/elasticsearch/troubleshooting-searches.md +++ b/troubleshoot/elasticsearch/troubleshooting-searches.md @@ -108,7 +108,7 @@ GET /my-index-000001/_analyze } ``` -To change the mapping of an existing field, refer to [Changing the mapping of a field](../../manage-data/data-store/mapping.md#updating-field-mappings). +To change the mapping of an existing field, refer to [Manage and update mappings](../../manage-data/data-store/mapping.md#mapping-manage-update). ## Check the field’s values [troubleshooting-check-field-values]
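Related to the link updated in the troubleshooting page above: before changing the mapping of an existing field, you can inspect its current mapping with the get field mapping API. A sketch, where `my-index-000001` and `message` are placeholders:

```console
GET /my-index-000001/_mapping/field/message
```

The response returns only that field's mapping definition, which is quicker to read than the full index mapping when diagnosing search problems.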