diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
index dc4d4ca217..2f6c60edd9 100644
--- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
+++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
@@ -8,7 +8,7 @@ mapped_pages:

# Logstash plugins [k8s-logstash-plugins]

-The power of {{ls}} is in the plugins--[inputs](logstash://reference/input-plugins.md), [outputs](logstash://reference/output-plugins.md), [filters](logstash://reference/filter-plugins.md), and [codecs](logstash://reference/codec-plugins.md).
+The power of {{ls}} is in the plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and [codecs](logstash-docs-md://lsr/codec-plugins.md).

In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.

@@ -90,7 +90,7 @@ spec:

**Static read-only files**

-Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash://reference/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.
+Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash-docs-md://lsr/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md), a dictionary for [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.

::::{tip}
In the plugin documentation, these plugin settings are typically identified by `path` or an `array` of `paths`.
::::

@@ -99,7 +99,7 @@ In the plugin documentation, these plugin settings are typically identified by `

To use these in your manifest, create a ConfigMap or Secret representing the asset, a Volume in your `podTemplate.spec` containing the ConfigMap or Secret, and mount that Volume with a VolumeMount in your `podTemplateSpec.container` section of your {{ls}} resource.

-This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](logstash://reference/plugins-filters-ruby.md) plugin.
+This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md) plugin.

First, create the ConfigMap.

@@ -143,7 +143,7 @@ spec:

### Larger read-only assets (1 MiB+) [k8s-logstash-working-with-plugins-large-ro]

-Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) dictionary.
+Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md) dictionary.

You can add files using:

@@ -239,7 +239,7 @@ After you build and deploy the custom image, include it in the {{ls}} manifest.

### Writable storage [k8s-logstash-working-with-plugins-writable]

-Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](logstash://reference/plugins-outputs-file.md).
+Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](logstash-docs-md://lsr/plugins-outputs-file.md).

{{ls}} on ECK by default supplies a small 1.5 GiB (gibibyte) default persistent volume to each pod. This volume is called `logstash-data` and is located at `/usr/logstash/data`, and is typically the default location for most plugin use cases. This volume is stable across restarts of {{ls}} pods and is suitable for many use cases.

@@ -333,7 +333,7 @@ spec:

::::{admonition} Horizontal scaling for {{ls}} plugins
* Not all {{ls}} deployments can be scaled horizontally by increasing the number of {{ls}} Pods defined in the {{ls}} resource. Depending on the types of plugins in a {{ls}} installation, increasing the number of pods may cause data duplication, data loss, incorrect data, or may waste resources with pods unable to be utilized correctly.
-* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.
+* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.

::::

@@ -345,12 +345,12 @@ spec:

* They **must** specify `pipeline.workers=1` for any pipelines that use them.
* The number of pods cannot be scaled above 1.
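
For context, an aggregating filter of the kind described above might look like the following minimal sketch. The `transaction_id` task key and the aggregation code are hypothetical, and any pipeline running a filter like this must set `pipeline.workers: 1` and keep the {{ls}} resource at a single pod:

```json
filter {
  aggregate {
    task_id => "%{transaction_id}"   # events sharing this ID are correlated in the same in-memory map
    code => "map['event_count'] ||= 0; map['event_count'] += 1"
    push_map_as_event_on_timeout => true   # emit one aggregated event per task when the timeout fires
    timeout => 120                   # flush the aggregated map if no new events arrive within two minutes
  }
}
```

Because the map lives in the memory of a single worker on a single pod, adding workers or pods would split related events across aggregation maps and produce incorrect results.
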
-Examples of aggregating filters include [`logstash-filter-aggregate`](logstash://reference/plugins-filters-aggregate.md), [`logstash-filter-csv`](logstash://reference/plugins-filters-csv.md) when `autodetect_column_names` set to `true`, and any [`logstash-filter-ruby`](logstash://reference/plugins-filters-ruby.md) implementations that perform aggregations.
+Examples of aggregating filters include [`logstash-filter-aggregate`](logstash-docs-md://lsr/plugins-filters-aggregate.md), [`logstash-filter-csv`](logstash-docs-md://lsr/plugins-filters-csv.md) when `autodetect_column_names` is set to `true`, and any [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md) implementations that perform aggregations.

### Input plugins: events pushed to {{ls}} [k8s-logstash-inputs-data-pushed]

-{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash://reference/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash://reference/plugins-inputs-http.md).
+{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash-docs-md://lsr/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash-docs-md://lsr/plugins-inputs-http.md).

### Input plugins: {{ls}} maintains state [k8s-logstash-inputs-local-checkpoints]

@@ -361,16 +361,16 @@ Note that plugins that retrieve data from external sources, and require some lev

Input plugins that include configuration settings such as `sincedb`, `checkpoint` or `sql_last_run_metadata` may fall into this category.

-Examples of these plugins include [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash://reference/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash://reference/plugins-inputs-file.md).
+Examples of these plugins include [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash-docs-md://lsr/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash-docs-md://lsr/plugins-inputs-file.md).

### Input plugins: external source stores state [k8s-logstash-inputs-external-state]

{{ls}} installations that use input plugins that retrieve data from an external source, and **rely on the external source to store state** can scale based on the parameters of the external source.

-For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.
+For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.

-Examples of these plugins include [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash://reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash://reference/plugins-inputs-kinesis.md).
+Examples of these plugins include [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash-docs-md://lsr/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash-docs-md://lsr/plugins-inputs-kinesis.md).

@@ -390,12 +390,12 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin

### {{ls}} integration plugin [k8s-logstash-plugin-considerations-ls-integration]

-When your pipeline uses the [`Logstash integration`](logstash://reference/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash://reference/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
+When your pipeline uses the [`Logstash integration`](logstash-docs-md://lsr/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash-docs-md://lsr/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.

### Elasticsearch output plugin [k8s-logstash-plugin-considerations-es-output]

-The [`elasticsearch output`](logstash://reference/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.
+The [`elasticsearch output`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.

You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md)

@@ -419,7 +419,7 @@ stringData:

### Elastic_integration filter plugin [k8s-logstash-plugin-considerations-integration-filter]

-The [`elastic_integration filter`](logstash://reference/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables.
+The [`elastic_integration filter`](logstash-docs-md://lsr/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables.

```json
elastic_integration {

@@ -448,7 +448,7 @@ stringData:

### Elastic Agent input and Beats input plugins [k8s-logstash-plugin-considerations-agent-beats]

-When you use the [Elastic Agent input](logstash://reference/plugins-inputs-elastic_agent.md) or the [Beats input](logstash://reference/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
+When you use the [Elastic Agent input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or the [Beats input](logstash-docs-md://lsr/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.

@@ -456,7 +456,7 @@ When you use the [Elastic Agent input](logstash://reference/plugins-inputs-elast

If you need plugins in addition to those included in the standard {{ls}} distribution, you can add them. Create a custom Docker image that includes the installed plugins, using the `bin/logstash-plugin install` utility to add more plugins to the image so that they can be used by {{ls}} pods.

-This sample Dockerfile installs the [`logstash-filter-tld`](logstash://reference/plugins-filters-tld.md) plugin to the official {{ls}} Docker image:
+This sample Dockerfile installs the [`logstash-filter-tld`](logstash-docs-md://lsr/plugins-filters-tld.md) plugin to the official {{ls}} Docker image:

```shell
FROM docker.elastic.co/logstash/logstash:8.16.1

diff --git a/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md b/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md
index 5688c99052..d2bb694835 100644
--- a/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md
+++ b/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md
@@ -18,7 +18,7 @@ applies_to:

Learn how to set up disaster recovery between two clusters based on bi-directional {{ccr}}. The following tutorial is designed for data streams which support [update by query](../../../manage-data/data-store/data-streams/use-data-stream.md#update-docs-in-a-data-stream-by-query) and [delete by query](../../../manage-data/data-store/data-streams/use-data-stream.md#delete-docs-in-a-data-stream-by-query). You can only perform these actions on the leader index.

-This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash://reference/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
+This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.

* Setting up a remote cluster on `clusterA` and `clusterB`.
* Setting up bi-directional cross-cluster replication with exclusion patterns.
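
The load-balancing behavior called out above comes from the `hosts` array of the {{es}} output. A minimal sketch, with hypothetical cluster URLs and assuming the data stream naming used later in the tutorial:

```json
output {
  elasticsearch {
    # requests are load balanced across every host listed in this array
    hosts => ["https://clusterA.example.com:9243", "https://clusterB.example.com:9243"]
    data_stream => "true"   # write into a data stream rather than a classic index
  }
}
```
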
diff --git a/docset.yml b/docset.yml
index 76359d5414..31132dc7b5 100644
--- a/docset.yml
+++ b/docset.yml
@@ -52,6 +52,7 @@ cross_links:
  - integrations
  - kibana
  - logstash
+  - logstash-docs-md
  - search-ui
  - security-docs

diff --git a/explore-analyze/query-filter/tools/grok-debugger.md b/explore-analyze/query-filter/tools/grok-debugger.md
index 53c17e1945..c2ced6d664 100644
--- a/explore-analyze/query-filter/tools/grok-debugger.md
+++ b/explore-analyze/query-filter/tools/grok-debugger.md
@@ -10,7 +10,7 @@ mapped_pages:

You can build and debug grok patterns in the {{kib}} **Grok Debugger** before you use them in your data processing pipelines. Grok is a pattern matching syntax that you can use to parse arbitrary text and structure it. Grok is good for parsing syslog, apache, and other webserver logs, mysql logs, and in general, any log format that is written for human consumption.

-Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash://reference/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).
+Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash-docs-md://lsr/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).

The {{stack}} ships with more than 120 reusable grok patterns. For a complete list of patterns, see [{{es}} grok patterns](https://github.com/elastic/elasticsearch/tree/master/libs/grok/src/main/resources/patterns) and [{{ls}} grok patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns).

diff --git a/explore-analyze/scripting/grok.md b/explore-analyze/scripting/grok.md
index 2b5e389f89..0d6136b071 100644
--- a/explore-analyze/scripting/grok.md
+++ b/explore-analyze/scripting/grok.md
@@ -46,7 +46,7 @@ The first value is a number, followed by what appears to be an IP address. You c

To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.

-The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash://reference/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
+The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs_compatibility`](logstash-docs-md://lsr/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.

New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible.
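
As a minimal illustration of switching modes, a grok filter can opt into the ECS-compliant pattern set per plugin instance; the log format matched here is hypothetical:

```json
filter {
  grok {
    ecs_compatibility => "v1"   # use the ECS-compliant pattern definitions instead of the legacy set
    match => { "message" => "%{IPORHOST:clientip} %{WORD:verb} %{URIPATHPARAM:request}" }
  }
}
```
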
diff --git a/manage-data/data-store/data-streams/set-up-data-stream.md b/manage-data/data-store/data-streams/set-up-data-stream.md
index 4fbd22defe..79d5b98aae 100644
--- a/manage-data/data-store/data-streams/set-up-data-stream.md
+++ b/manage-data/data-store/data-streams/set-up-data-stream.md
@@ -21,7 +21,7 @@ You can also [convert an index alias to a data stream](#convert-index-alias-to-d

::::{important}
If you use {{fleet}}, {{agent}}, or {{ls}}, skip this tutorial. They all set up data streams for you.

-For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash://reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
+For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.

::::

diff --git a/manage-data/ingest/ingest-reference-architectures.md b/manage-data/ingest/ingest-reference-architectures.md
index 2147be6b1a..c873ab9338 100644
--- a/manage-data/ingest/ingest-reference-architectures.md
+++ b/manage-data/ingest/ingest-reference-architectures.md
@@ -24,6 +24,6 @@ You can host {{es}} on your own hardware or send your data to {{es}} on {{ecloud

| [*{{agent}} to {{ls}} to Elasticsearch*](./ingest-reference-architectures/agent-ls.md)<br><br>![Image showing {{agent}} to {{ls}} to {{es}}](/manage-data/images/ingest-ea-ls-es.png "") | You need additional capabilities offered by {{ls}}:<br><br>* [**enrichment**](./ingest-reference-architectures/ls-enrich.md) between {{agent}} and {{es}}<br>* [**persistent queue (PQ) buffering**](./ingest-reference-architectures/lspq.md) to accommodate network issues and downstream unavailability<br>* [**proxying**](./ingest-reference-architectures/ls-networkbridge.md) in cases where {{agent}}s have network restrictions for connecting outside of the {{agent}} network<br>* data needs to be [**routed to multiple**](./ingest-reference-architectures/ls-multi.md) {{es}} clusters and other destinations depending on the content<br> |
| [*{{agent}} to proxy to Elasticsearch*](./ingest-reference-architectures/agent-proxy.md)<br><br>![Image showing connections between {{agent}} and {{es}} using a proxy](/manage-data/images/ingest-ea-proxy-es.png "") | Agents have [network restrictions](./ingest-reference-architectures/agent-proxy.md) that prevent connecting outside of the {{agent}} network. Note that [{{ls}} as proxy](./ingest-reference-architectures/ls-networkbridge.md) is one option.<br> |
| [*{{agent}} to {{es}} with Kafka as middleware message queue*](./ingest-reference-architectures/agent-kafka-es.md)<br><br>![Image showing {{agent}} collecting data and using Kafka as a message queue enroute to {{es}}](/manage-data/images/ingest-ea-kafka.png "") | Kafka is your [middleware message queue](./ingest-reference-architectures/agent-kafka-es.md):<br><br>* [Kafka ES sink connector](./ingest-reference-architectures/agent-kafka-essink.md) to write from Kafka to {{es}}<br>* [{{ls}} to read from Kafka and route to {{es}}](./ingest-reference-architectures/agent-kafka-ls.md)<br> |
-| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)<br><br>![Image showing {{ls}} collecting data and sending to {{es}}](/manage-data/images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](logstash://reference/input-plugins.md).<br> |
+| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)<br><br>![Image showing {{ls}} collecting data and sending to {{es}}](/manage-data/images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](logstash-docs-md://lsr/input-plugins.md).<br> |
| [*Elastic air-gapped architectures*](./ingest-reference-architectures/airgapped-env.md)<br><br>![Image showing {{stack}} in an air-gapped environment](/manage-data/images/ingest-ea-airgapped.png "") | You want to deploy {{agent}} and {{stack}} in an air-gapped environment (no access to outside networks)<br> |
diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
index 7730aca771..e534cfa71d 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md
@@ -32,8 +32,8 @@ Info on {{agent}} and agent integrations:

Info on {{ls}} and {{ls}} plugins:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
-* [{{ls}} {{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
-* [{{ls}} Kafka output](logstash://reference/plugins-outputs-kafka.md)
+* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
+* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)

Info on {{es}}:

diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
index 240224e840..68cec26f22 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md
@@ -32,10 +32,10 @@ Info on {{agent}} and agent integrations:

Info on {{ls}} and {{ls}} Kafka plugins:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
-* [{{ls}} {{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
-* [{{ls}} Kafka input](logstash://reference/plugins-inputs-kafka.md)
-* [{{ls}} Kafka output](logstash://reference/plugins-outputs-kafka.md)
-* [{{ls}} Elasticsearch output](logstash://reference/plugins-outputs-elasticsearch.md)
+* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
+* [{{ls}} Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md)
+* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)
+* [{{ls}} Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

Info on {{es}}:

diff --git a/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md b/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md
index 3ffb00369b..7a3462fc3d 100644
--- a/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md
+++ b/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md
@@ -30,5 +30,5 @@ Info for air-gapped environments:

## Geoip database management in air-gapped environments [ls-geoip]

-The [{{ls}} geoip filter](logstash://reference/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash://reference/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.
+The [{{ls}} geoip filter](logstash-docs-md://lsr/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash-docs-md://lsr/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.
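
For air-gapped deployments that manage updates manually, a minimal sketch of a geoip filter pointed at a locally maintained database file; the field name and path are hypothetical:

```json
filter {
  geoip {
    source => "[client][ip]"                        # field holding the IP address to look up
    database => "/etc/logstash/GeoLite2-City.mmdb"  # locally managed database; setting an explicit path disables automatic updates
  }
}
```
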
diff --git a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
index a475c79f90..fdf53d0af3 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md
@@ -35,10 +35,10 @@ Info on configuring {{agent}}:

For info on {{ls}} for enriching data, check out these sections in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current):

-* [{{ls}} {{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
+* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} plugins for enriching data](logstash://reference/lookup-enrichment.md)
-* [Logstash filter plugins](logstash://reference/filter-plugins.md)
-* [{{ls}} {{es}} output](logstash://reference/plugins-outputs-elasticsearch.md)
+* [Logstash filter plugins](logstash-docs-md://lsr/filter-plugins.md)
+* [{{ls}} {{es}} output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

Info on {{es}}:

diff --git a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
index ebbf7f995e..8fb6acee81 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md
@@ -29,8 +29,8 @@ Info on {{ls}} and {{ls}} input and output plugins:

* [{{ls}} plugin support matrix](https://www.elastic.co/support/matrix#logstash_plugins)
* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
-* [{{ls}} input plugins](logstash://reference/input-plugins.md)
-* [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md)
+* [{{ls}} input plugins](logstash-docs-md://lsr/input-plugins.md)
+* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

Info on {{es}} and ingest pipelines:

diff --git a/manage-data/ingest/ingest-reference-architectures/ls-multi.md b/manage-data/ingest/ingest-reference-architectures/ls-multi.md
index 0753b7e46f..7209969882 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-multi.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-multi.md
@@ -62,8 +62,8 @@ Info on configuring {{agent}}:

Info on {{ls}} and {{ls}} outputs:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
-* [{{ls}} {{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md)
-* [{{ls}} output plugins](logstash://reference/output-plugins.md)
+* [{{ls}} {{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
+* [{{ls}} output plugins](logstash-docs-md://lsr/output-plugins.md)

Info on {{es}}:

diff --git a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
index 63587b3153..4b36c3ed4f 100644
--- a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
+++ b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md
@@ -29,7 +29,7 @@ Info on configuring {{agent}}:

Info on {{ls}} and {{ls}} plugins:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
-* [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md)
+* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

Info on {{es}}:

diff --git a/manage-data/ingest/ingest-reference-architectures/lspq.md b/manage-data/ingest/ingest-reference-architectures/lspq.md
index 39906ebf61..c9fab17c3e 100644
--- a/manage-data/ingest/ingest-reference-architectures/lspq.md
+++ b/manage-data/ingest/ingest-reference-architectures/lspq.md
@@ -25,8 +25,8 @@ Info on configuring {{agent}}:

For info on {{ls}} plugins:

-* [{{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
-* [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md)
+* [{{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
+* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

For info on using {{ls}} for buffering and data resiliency, check out this section in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current):

diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
index 598b069306..744b35c900 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md
@@ -8,7 +8,7 @@ applies_to:

# Ingest data from a relational database

-This guide explains how to ingest data from a relational database into {{ecloud}} through [{{ls}}](logstash://reference/index.md), using the Logstash [JDBC input plugin](logstash://reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment.
+This guide explains how to ingest data from a relational database into {{ecloud}} through [{{ls}}](logstash://reference/index.md), using the Logstash [JDBC input plugin](logstash-docs-md://lsr/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment.

The code and methods presented here have been tested with MySQL. They should work with other relational databases.

@@ -202,13 +202,13 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB

: The Logstash JDBC plugin does not come packaged with JDBC driver libraries. The JDBC driver library must be passed explicitly into the plugin using the `jdbc_driver_library` configuration option.

tracking_column
-: This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](logstash://reference/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`.
+: This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](logstash-docs-md://lsr/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`.

unix_ts_in_secs
: The field generated by the SELECT statement, which contains the `modification_time` as a standard [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) (seconds since the epoch). The field is referenced by the `tracking column`. A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UMT and the local timezone.

sql_last_value
-: This is a [built-in parameter](logstash://reference/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch.
+: This is a [built-in parameter](logstash-docs-md://lsr/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch.

schedule
: This uses cron syntax to specify how often Logstash should poll MySQL for changes. The specification `*/5 * * * * *` tells Logstash to contact MySQL every 5 seconds. Input from this plugin can be scheduled to run periodically according to a specific schedule. This scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler). The syntax is cron-like with some extensions specific to Rufus (for example, timezone support).

diff --git a/manage-data/ingest/ingesting-timeseries-data.md b/manage-data/ingest/ingesting-timeseries-data.md
index d82a5af2e4..95a6efbd08 100644
--- a/manage-data/ingest/ingesting-timeseries-data.md
+++ b/manage-data/ingest/ingesting-timeseries-data.md
@@ -47,10 +47,10 @@ In addition to supporting upstream OTel development, Elastic provides [Elastic D

## Logstash [ingest-logstash]

-[{{ls}}](https://www.elastic.co/guide/en/logstash/current) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](logstash://reference/input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](logstash://reference/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](logstash://reference/output-plugins.md).
+[{{ls}}](https://www.elastic.co/guide/en/logstash/current) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](logstash-docs-md://lsr/input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](logstash-docs-md://lsr/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](logstash-docs-md://lsr/output-plugins.md).

Many users never need to use {{ls}}, but it’s available if you need it for:

-* **Data collection** (if an Elastic integration isn’t available). {{agent}} and Elastic [integrations](https://docs.elastic.co/en/integrations/all_integrations) provide many features out-of-the-box, so be sure to search or browse integrations for your data source. If you don’t find an Elastic integration for your data source, check {{ls}} for an [input plugin](logstash://reference/input-plugins.md) for your data source.
+* **Data collection** (if an Elastic integration isn’t available). {{agent}} and Elastic [integrations](https://docs.elastic.co/en/integrations/all_integrations) provide many features out-of-the-box, so be sure to search or browse integrations for your data source. If you don’t find an Elastic integration for your data source, check {{ls}} for an [input plugin](logstash-docs-md://lsr/input-plugins.md) for your data source.
* **Additional processing.** One of the most common {{ls}} use cases is [extending Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md). You can take advantage of the extensive, built-in capabilities of Elastic Agent and Elastic Integrations, and then use {{ls}} for additional data processing before sending the data on to {{es}}.
* **Advanced use cases.** {{ls}} can help with advanced use cases, such as when you need [persistence or buffering](/manage-data/ingest/ingest-reference-architectures/lspq.md), additional [data enrichment](/manage-data/ingest/ingest-reference-architectures/ls-enrich.md), [proxying](/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md) as a way to bridge network connections, or the ability to route data to [multiple destinations](/manage-data/ingest/ingest-reference-architectures/ls-multi.md).

diff --git a/manage-data/ingest/transform-enrich.md b/manage-data/ingest/transform-enrich.md
index c8b286845c..ddb45a3932 100644
--- a/manage-data/ingest/transform-enrich.md
+++ b/manage-data/ingest/transform-enrich.md
@@ -36,7 +36,7 @@ Finally, to help ensure optimal query results, you may want to customize how tex

{{ls}} and the {{ls}} `elastic_integration filter`
: If you're using {{ls}} as your primary ingest tool, you can take advantage of its built-in pipeline capabilities to transform your data. You configure a pipeline by stringing together a series of input, output, filtering, and optional codec plugins to manipulate all incoming data.

-: If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/) and other [{{ls}} filters](logstash://reference/filter-plugins.md) to [extend Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}.
+: If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/) and other [{{ls}} filters](logstash-docs-md://lsr/filter-plugins.md) to [extend Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}.

Index mapping
: Index mapping lets you control the structure that incoming data has within an {{es}} index. You can define all of the fields that are included in the index and their respective data types. For example, you can set fields for dates, numbers, or geolocations, and define the fields to have specific formats.

diff --git a/reference/fleet/agent-processors.md b/reference/fleet/agent-processors.md
index 13626e568e..006b9d1088 100644
--- a/reference/fleet/agent-processors.md
+++ b/reference/fleet/agent-processors.md
@@ -95,7 +95,7 @@ The {{stack}} provides several options for processing data collected by {{agent}

| Sanitize or enrich raw data at the source | Use an {{agent}} processor |
| Convert data to ECS, normalize field data, or enrich incoming data | Use [ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md#pipelines-for-fleet-elastic-agent) |
| Define or alter the schema at query time | Use [runtime fields](/manage-data/data-store/mapping/runtime-fields.md) |
-| Do something else with your data | Use [Logstash plugins](logstash://reference/filter-plugins.md) |
+| Do something else with your data | Use [Logstash plugins](logstash-docs-md://lsr/filter-plugins.md) |

## How are {{agent}} processors different from {{ls}} plugins or ingest pipelines? [how-different]

diff --git a/reference/fleet/logstash-output.md b/reference/fleet/logstash-output.md
index 4aa6977b8b..4d8300994d 100644
--- a/reference/fleet/logstash-output.md
+++ b/reference/fleet/logstash-output.md
@@ -60,7 +60,7 @@ output {

3. The API Key used by {{ls}} to ship data to the destination data streams.

-For more information about configuring {{ls}}, refer to [Configuring {{ls}}](logstash://reference/creating-logstash-pipeline.md) and [{{agent}} input plugin](logstash://reference/plugins-inputs-elastic_agent.md).
+For more information about configuring {{ls}}, refer to [Configuring {{ls}}](logstash://reference/creating-logstash-pipeline.md) and [{{agent}} input plugin](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md).

## {{ls}} output configuration settings [_ls_output_configuration_settings]

@@ -88,7 +88,7 @@ The `logstash` output supports the following settings, grouped by category. Many

When sending data to a secured cluster through the `logstash` output, {{agent}} can use SSL/TLS. For a list of available settings, refer to [SSL/TLS](/reference/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/fleet/elastic-agent-ssl-configuration.md#client-ssl-options).

::::{note}
-To use SSL/TLS, you must also configure the [{{agent}} input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md) to use SSL/TLS.
+To use SSL/TLS, you must also configure the [{{agent}} input plugin for {{ls}}](logstash-docs-md://lsr/plugins-inputs-beats.md) to use SSL/TLS.
::::

diff --git a/reference/fleet/secure-logstash-connections.md b/reference/fleet/secure-logstash-connections.md
index c88f914c02..25b7fcbf69 100644
--- a/reference/fleet/secure-logstash-connections.md
+++ b/reference/fleet/secure-logstash-connections.md
@@ -155,8 +155,8 @@ output {

To learn more about the {{ls}} configuration, refer to:

-* [{{agent}} input plugin](logstash://reference/plugins-inputs-elastic_agent.md)
-* [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md)
+* [{{agent}} input plugin](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
+* [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)
* [Secure your connection to {{es}}](logstash://reference/secure-connection.md)

When you’re done configuring the pipeline, restart {{ls}}:

diff --git a/solutions/observability/apps/configure-logstash-output.md b/solutions/observability/apps/configure-logstash-output.md
index 678928008e..10d0625b78 100644
--- a/solutions/observability/apps/configure-logstash-output.md
+++ b/solutions/observability/apps/configure-logstash-output.md
@@ -51,7 +51,7 @@ To enable the {{ls}} output in APM Server, edit the `apm-server.yml` file to:

Finally, you must create a {{ls}} configuration pipeline that listens for incoming APM Server connections and indexes received events into {{es}}.

-1. Use the [Elastic Agent input plugin](logstash://reference/plugins-inputs-elastic_agent.md) to configure {{ls}} to receive events from the APM Server. A minimal `input` config might look like this:
+1. Use the [Elastic Agent input plugin](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) to configure {{ls}} to receive events from the APM Server. A minimal `input` config might look like this:

    ```json
    input {

@@ -61,7 +61,7 @@ Finally, you must create a {{ls}} configuration pipeline that listens for incomi
    }
    ```

-2. Use the [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md) to send events to {{es}} for indexing. A minimal `output` config might look like this:
+2. Use the [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) to send events to {{es}} for indexing. A minimal `output` config might look like this:

    ```json
    output {

@@ -74,7 +74,7 @@ Finally, you must create a {{ls}} configuration pipeline that listens for incomi
    ```

    1. Enables indexing into {{es}} data streams.
-    2. This example assumes you’re sending data to {{ecloud}}. If you’re using a self-hosted version of {{es}}, use `hosts` instead. See [{{es}} output plugin](logstash://reference/plugins-outputs-elasticsearch.md) for more information.
+    2. This example assumes you’re sending data to {{ecloud}}. If you’re using a self-hosted version of {{es}}, use `hosts` instead. See [{{es}} output plugin](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) for more information.

Here’s what your basic {{ls}} configuration file will look like when we put everything together:

@@ -120,7 +120,7 @@ As an example, you might want to use {{ls}} to route all `metrics` events to the

However, if, when you combine all `metrics` events, there are events that have the `data_stream.dataset` field set to different values, indexing will fail with a message stating that the field does not accept any other values. For example, the error might say something like `failed to parse field [data_stream.dataset] of type [constant_keyword]` or `[constant_keyword] field [data_stream.dataset] only accepts values that are equal to the value defined in the mappings`. This is because the `data_stream.dataset` field’s mapping is set to `constant_keyword`, which expects all values of the fields in the index to be the same.

-To prevent losing data due to failed indexing, add a [Logstash mutate filter](logstash://reference/plugins-filters-mutate.md) to update the value of `data_stream.dataset`. Then, you can send all metrics events to one custom metrics data stream:
+To prevent losing data due to failed indexing, add a [Logstash mutate filter](logstash-docs-md://lsr/plugins-filters-mutate.md) to update the value of `data_stream.dataset`. Then, you can send all metrics events to one custom metrics data stream:

```json
filter {

@@ -254,7 +254,7 @@ This parameter’s value will be assigned to the `metadata.beat` field. It can t

#### `ssl` [_ssl_2]

-Configuration options for SSL parameters like the root CA for {{ls}} connections. See [SSL/TLS output settings](ssltls-output-settings.md) for more information. To use SSL, you must also configure the [{{beats}} input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md) to use SSL/TLS.
+Configuration options for SSL parameters like the root CA for {{ls}} connections. See [SSL/TLS output settings](ssltls-output-settings.md) for more information. To use SSL, you must also configure the [{{beats}} input plugin for {{ls}}](logstash-docs-md://lsr/plugins-inputs-beats.md) to use SSL/TLS.

#### `timeout` [_timeout_2]

@@ -328,7 +328,7 @@ To use SSL mutual authentication:

    For more information about these configuration options, see [SSL/TLS output settings](ssltls-output-settings.md).

-3. Configure {{ls}} to use SSL. In the {{ls}} config file, specify the following settings for the [{{beats}} input plugin for {{ls}}](logstash://reference/plugins-inputs-beats.md):
+3. Configure {{ls}} to use SSL. In the {{ls}} config file, specify the following settings for the [{{beats}} input plugin for {{ls}}](logstash-docs-md://lsr/plugins-inputs-beats.md):

    * `ssl`: When set to true, enables {{ls}} to use SSL/TLS.
    * `ssl_certificate_authorities`: Configures {{ls}} to trust any certificates signed by the specified CA.

@@ -350,7 +350,7 @@ To use SSL mutual authentication:

    }
    ```

-    For more information about these options, see the [documentation for the {{beats}} input plugin](logstash://reference/plugins-inputs-beats.md).
+    For more information about these options, see the [documentation for the {{beats}} input plugin](logstash-docs-md://lsr/plugins-inputs-beats.md).

diff --git a/solutions/observability/apps/configure-redis-output.md b/solutions/observability/apps/configure-redis-output.md
index e97ed2c765..ba4d32d41a 100644
--- a/solutions/observability/apps/configure-redis-output.md
+++ b/solutions/observability/apps/configure-redis-output.md
@@ -19,7 +19,7 @@ The Redis output is not yet supported by {{fleet}}-managed APM Server.

::::

-The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with the [Redis input plugin](logstash://reference/plugins-inputs-redis.md) for {{ls}}.
+The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with the [Redis input plugin](logstash-docs-md://lsr/plugins-inputs-redis.md) for {{ls}}.

To use this output, edit the APM Server configuration file to disable the {{es}} output by commenting it out, and enable the Redis output by adding `output.redis`.
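
For reference, a minimal sketch of the matching {{ls}} side of that pairing; the host and key are hypothetical and must mirror the APM Server's `output.redis` settings:

```json
input {
  redis {
    host => "redis.example.internal"  # hypothetical Redis host
    data_type => "list"               # "list" or "channel", matching the APM Server output type
    key => "apm"                      # list name or channel to consume, matching the APM Server output
  }
}
```
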
diff --git a/troubleshoot/elasticsearch/mapping-explosion.md b/troubleshoot/elasticsearch/mapping-explosion.md
index 27c1c8a06c..c3748dd348 100644
--- a/troubleshoot/elasticsearch/mapping-explosion.md
+++ b/troubleshoot/elasticsearch/mapping-explosion.md
@@ -66,6 +66,6 @@ Mapping explosion is not easily resolved, so it is better prevented via the abov

* Disable [dynamic mappings](../../manage-data/data-store/mapping.md).
* [Reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) into an index with a corrected mapping, either via [index template](../../manage-data/data-store/templates.md) or [explicitly set](../../manage-data/data-store/mapping.md).
* If index is unneeded and/or historical, consider [deleting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete).
-* [Export](logstash://reference/plugins-inputs-elasticsearch.md) and [re-import](logstash://reference/plugins-outputs-elasticsearch.md) data into a mapping-corrected index after [pruning](logstash://reference/plugins-filters-prune.md) problematic fields via Logstash.
+* [Export](logstash-docs-md://lsr/plugins-inputs-elasticsearch.md) and [re-import](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) data into a mapping-corrected index after [pruning](logstash-docs-md://lsr/plugins-filters-prune.md) problematic fields via Logstash.

[Splitting index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-split) would not resolve the core issue.

diff --git a/troubleshoot/ingest/logstash/kafka.md b/troubleshoot/ingest/logstash/kafka.md
index 462e5aaf30..873b027cd0 100644
--- a/troubleshoot/ingest/logstash/kafka.md
+++ b/troubleshoot/ingest/logstash/kafka.md
@@ -67,7 +67,7 @@ From a performance perspective, decreasing the `max_poll_records` value is prefe

By default, the kafka input plugin checks connectivity and validates the schema registry during plugin registration before events are processed. In some circumstances, this process may fail when it tries to validate an authenticated schema registry, causing the plugin to crash.

-The plugin offers a `schema_registry_validation` setting to change the default behavior. This setting allows the plugin to skip validation during registration, which allows the plugin to continue and events to be processed. See the [kafka input plugin documentation](logstash://reference/plugins-inputs-kafka.md#plugins-inputs-kafka-schema_registry_validation) for more information about the plugin and other configuration options.
+The plugin offers a `schema_registry_validation` setting to change the default behavior. This setting allows the plugin to skip validation during registration, which allows the plugin to continue and events to be processed. See the [kafka input plugin documentation](logstash-docs-md://lsr/plugins-inputs-kafka.md#plugins-inputs-kafka-schema_registry_validation) for more information about the plugin and other configuration options.

::::{note}
An incorrectly configured schema registry will still stop the plugin from processing events.
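
A minimal sketch of a kafka input that skips registry validation at registration time; the broker, topic, and registry URL are hypothetical:

```json
input {
  kafka {
    bootstrap_servers => "kafka.example.internal:9092"
    topics => ["events"]
    schema_registry_url => "https://registry.example.internal:8081"
    schema_registry_validation => "skip"   # bypass the connectivity and validation check during plugin registration
  }
}
```
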