Merged
Changes from 4 commits
20 changes: 10 additions & 10 deletions deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
Original file line number Diff line number Diff line change
@@ -8,7 +8,7 @@ mapped_pages:

# Logstash plugins [k8s-logstash-plugins]

The power of {{ls}} is in the plugins--[inputs](logstash://reference/input-plugins.md), [outputs](logstash://reference/output-plugins.md), [filters](logstash://reference/filter-plugins.md), and [codecs](logstash://reference/codec-plugins.md).
The power of {{ls}} is in the plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and [codecs](logstash-docs-md://lsr/codec-plugins.md).

In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—​including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.

@@ -90,7 +90,7 @@ spec:

**Static read-only files**

Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash://reference/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.
Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash-docs-md://lsr/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md), a dictionary for [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md), or the location of a SQL statement for [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.
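For example, a dictionary for `logstash-filter-translate` could be provided through a ConfigMap and mounted into the {{ls}} Pods. A minimal sketch, in which the ConfigMap name, file name, and dictionary contents are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: translate-dictionary        # hypothetical name
data:
  http-status.yml: |
    "200": "OK"
    "404": "Not Found"
```

In the {{ls}} resource, mount this ConfigMap with a volume and volume mount under the pod template so that pipelines can reference the file, for example in the `dictionary_path` setting of `logstash-filter-translate`.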

::::{tip}
In the plugin documentation, these plugin settings are typically identified by `path` or an `array` of `paths`.
@@ -333,7 +333,7 @@ spec:

::::{admonition} Horizontal scaling for {{ls}} plugins
* Not all {{ls}} deployments can be scaled horizontally by increasing the number of {{ls}} Pods defined in the {{ls}} resource. Depending on the types of plugins in a {{ls}} installation, increasing the number of Pods may cause data duplication, data loss, or incorrect data, or may waste resources on Pods that cannot be utilized correctly.
* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.
* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md), which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for Pod scaling associated with that plugin.

::::

@@ -350,7 +350,7 @@ Examples of aggregating filters include [`logstash-filter-aggregate`](logstash:/

### Input plugins: events pushed to {{ls}} [k8s-logstash-inputs-data-pushed]

{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash://reference/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash://reference/plugins-inputs-http.md).
{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash-docs-md://lsr/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash-docs-md://lsr/plugins-inputs-http.md).


### Input plugins: {{ls}} maintains state [k8s-logstash-inputs-local-checkpoints]
@@ -361,16 +361,16 @@ Note that plugins that retrieve data from external sources, and require some lev

Input plugins that include configuration settings such as `sincedb`, `checkpoint` or `sql_last_run_metadata` may fall into this category.

Examples of these plugins include [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash://reference/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash://reference/plugins-inputs-file.md).
Examples of these plugins include [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash-docs-md://lsr/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash-docs-md://lsr/plugins-inputs-file.md).


### Input plugins: external source stores state [k8s-logstash-inputs-external-state]

{{ls}} installations that use input plugins that retrieve data from an external source, and **rely on the external source to store state** can scale based on the parameters of the external source.

For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.
For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.
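As a sketch, with a hypothetical broker address, topic, and consumer group: all {{ls}} Pods running this pipeline join the same consumer group, so only as many Pods as the topic has partitions can actively consume.

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # hypothetical broker address
    topics => ["events"]                # with, say, 8 partitions...
    group_id => "logstash"              # ...at most 8 Pods in this group receive data
  }
}
```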

Examples of these plugins include [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash://reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash://reference/plugins-inputs-kinesis.md).
Examples of these plugins include [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash-docs-md://lsr/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash-docs-md://lsr/plugins-inputs-kinesis.md).



@@ -390,12 +390,12 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin

### {{ls}} integration plugin [k8s-logstash-plugin-considerations-ls-integration]

When your pipeline uses the [`Logstash integration`](logstash://reference/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash://reference/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
When your pipeline uses the [`Logstash integration`](logstash-docs-md://lsr/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash-docs-md://lsr/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
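A sketch of such an output definition, with a hypothetical downstream host name:

```
output {
  logstash {
    hosts => ["logstash-receiver:9800"]   # hypothetical downstream Logstash service
    keepalive => false                    # avoid pinning all traffic to a single Pod
  }
}
```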


### Elasticsearch output plugin [k8s-logstash-plugin-considerations-es-output]

The [`elasticsearch output`](logstash://reference/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.
The [`elasticsearch output`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured so that {{ls}} can communicate with {{es}}.

You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md).

@@ -448,7 +448,7 @@ stringData:

### Elastic Agent input and Beats input plugins [k8s-logstash-plugin-considerations-agent-beats]

When you use the [Elastic Agent input](logstash://reference/plugins-inputs-elastic_agent.md) or the [Beats input](logstash://reference/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
When you use the [Elastic Agent input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or the [Beats input](logstash-docs-md://lsr/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
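For example, in a Filebeat configuration the `ttl` setting might look like this (the host address and `ttl` value are illustrative):

```yaml
output.logstash:
  hosts: ["logstash-ls-api:5044"]   # hypothetical Logstash service address
  loadbalance: true
  ttl: 60s                          # periodically re-establish connections so load re-distributes
```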



@@ -18,7 +18,7 @@ applies_to:

Learn how to set up disaster recovery between two clusters based on bi-directional {{ccr}}. The following tutorial is designed for data streams which support [update by query](../../../manage-data/data-store/data-streams/use-data-stream.md#update-docs-in-a-data-stream-by-query) and [delete by query](../../../manage-data/data-store/data-streams/use-data-stream.md#delete-docs-in-a-data-stream-by-query). You can only perform these actions on the leader index.

This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash://reference/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) can be load balanced across a specified array of hosts. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
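A sketch of such an output, with hypothetical host addresses for the two clusters; events are load balanced across the array:

```
output {
  elasticsearch {
    # Logstash distributes events across the hosts in this array
    hosts => ["https://cluster-a.example.com:9200", "https://cluster-b.example.com:9200"]
  }
}
```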

* Setting up a remote cluster on `clusterA` and `clusterB`.
* Setting up bi-directional cross-cluster replication with exclusion patterns.
1 change: 1 addition & 0 deletions docset.yml
@@ -1,4 +1,4 @@
project: 'Elastic documentation'

Check notice on line 1 in docset.yml (GitHub Actions / preview / build): Substitution key 'reports-app' is not used in any file
Check notice on line 1 in docset.yml (GitHub Actions / preview / build): Substitution key 'release-date' is not used in any file
max_toc_depth: 2

features:
@@ -52,6 +52,7 @@
- integrations
- kibana
- logstash
- logstash-docs-md
- search-ui
- security-docs

2 changes: 1 addition & 1 deletion explore-analyze/query-filter/tools/grok-debugger.md
@@ -10,7 +10,7 @@ mapped_pages:

You can build and debug grok patterns in the {{kib}} **Grok Debugger** before you use them in your data processing pipelines. Grok is a pattern matching syntax that you can use to parse arbitrary text and structure it. Grok is good for parsing syslog, apache, and other webserver logs, mysql logs, and in general, any log format that is written for human consumption.

Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash://reference/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).
Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash-docs-md://lsr/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).

The {{stack}} ships with more than 120 reusable grok patterns. For a complete list of patterns, see [{{es}} grok patterns](https://github.com/elastic/elasticsearch/tree/master/libs/grok/src/main/resources/patterns) and [{{ls}} grok patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns).
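As a quick sketch, a pattern like the following could be tested in the Grok Debugger and then used in a {{ls}} pipeline (the sample log line and field names are illustrative):

```
filter {
  grok {
    # parses a line like: 55.3.244.1 GET /index.html 15824 0.043
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
```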

2 changes: 1 addition & 1 deletion explore-analyze/scripting/grok.md
@@ -46,7 +46,7 @@ The first value is a number, followed by what appears to be an IP address. You c

To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.

The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash://reference/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs_compatibility`](logstash-docs-md://lsr/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
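For example (a sketch; the match pattern shown is illustrative), switching a `grok` filter to the ECS-compliant pattern set:

```
filter {
  grok {
    ecs_compatibility => "v1"   # use the ECS-compliant pattern definitions
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:event_message}" }
  }
}
```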

New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible.

@@ -21,7 +21,7 @@ You can also [convert an index alias to a data stream](#convert-index-alias-to-d
::::{important}
If you use {{fleet}}, {{agent}}, or {{ls}}, skip this tutorial. They all set up data streams for you.

For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash://reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
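A sketch of the relevant `elasticsearch` output settings (the host, dataset, and namespace values are hypothetical):

```
output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    data_stream => "true"
    data_stream_type => "logs"
    data_stream_dataset => "myapp"       # hypothetical dataset name
    data_stream_namespace => "default"
  }
}
```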

::::

2 changes: 1 addition & 1 deletion manage-data/ingest/ingest-reference-architectures.md
@@ -24,6 +24,6 @@ You can host {{es}} on your own hardware or send your data to {{es}} on {{ecloud
| [*{{agent}} to {{ls}} to Elasticsearch*](./ingest-reference-architectures/agent-ls.md)<br><br>![Image showing {{agent}} to {{ls}} to {{es}}](/manage-data/images/ingest-ea-ls-es.png "") | You need additional capabilities offered by {{ls}}:<br><br>* [**enrichment**](./ingest-reference-architectures/ls-enrich.md) between {{agent}} and {{es}}<br>* [**persistent queue (PQ) buffering**](./ingest-reference-architectures/lspq.md) to accommodate network issues and downstream unavailability<br>* [**proxying**](./ingest-reference-architectures/ls-networkbridge.md) in cases where {{agent}}s have network restrictions for connecting outside of the {{agent}} network<br>* data needs to be [**routed to multiple**](./ingest-reference-architectures/ls-multi.md) {{es}} clusters and other destinations depending on the content<br> |
| [*{{agent}} to proxy to Elasticsearch*](./ingest-reference-architectures/agent-proxy.md)<br><br>![Image showing connections between {{agent}} and {{es}} using a proxy](/manage-data/images/ingest-ea-proxy-es.png "") | Agents have [network restrictions](./ingest-reference-architectures/agent-proxy.md) that prevent connecting outside of the {{agent}} network. Note that [{{ls}} as proxy](./ingest-reference-architectures/ls-networkbridge.md) is one option.<br> |
| [*{{agent}} to {{es}} with Kafka as middleware message queue*](./ingest-reference-architectures/agent-kafka-es.md)<br><br>![Image showing {{agent}} collecting data and using Kafka as a message queue enroute to {{es}}](/manage-data/images/ingest-ea-kafka.png "") | Kafka is your [middleware message queue](./ingest-reference-architectures/agent-kafka-es.md):<br><br>* [Kafka ES sink connector](./ingest-reference-architectures/agent-kafka-essink.md) to write from Kafka to {{es}}<br>* [{{ls}} to read from Kafka and route to {{es}}](./ingest-reference-architectures/agent-kafka-ls.md)<br> |
| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)<br><br>![Image showing {{ls}} collecting data and sending to {{es}}](/manage-data/images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](logstash://reference/input-plugins.md).<br> |
| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)<br><br>![Image showing {{ls}} collecting data and sending to {{es}}](/manage-data/images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](logstash-docs-md://lsr/input-plugins.md).<br> |
| [*Elastic air-gapped architectures*](./ingest-reference-architectures/airgapped-env.md)<br><br>![Image showing {{stack}} in an air-gapped environment](/manage-data/images/ingest-ea-airgapped.png "") | You want to deploy {{agent}} and {{stack}} in an air-gapped environment (no access to outside networks)<br> |

@@ -32,8 +32,8 @@ Info on {{agent}} and agent integrations:
Info on {{ls}} and {{ls}} plugins:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
* [{{ls}} {{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka output](logstash://reference/plugins-outputs-kafka.md)
* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)

Info on {{es}}:

@@ -32,10 +32,10 @@ Info on {{agent}} and agent integrations:
Info on {{ls}} and {{ls}} Kafka plugins:

* [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current)
* [{{ls}} {{agent}} input](logstash://reference/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka input](logstash://reference/plugins-inputs-kafka.md)
* [{{ls}} Kafka output](logstash://reference/plugins-outputs-kafka.md)
* [{{ls}} Elasticsearch output](logstash://reference/plugins-outputs-elasticsearch.md)
* [{{ls}} {{agent}} input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md)
* [{{ls}} Kafka input](logstash-docs-md://lsr/plugins-inputs-kafka.md)
* [{{ls}} Kafka output](logstash-docs-md://lsr/plugins-outputs-kafka.md)
* [{{ls}} Elasticsearch output](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md)

Info on {{es}}:

@@ -30,5 +30,5 @@ Info for air-gapped environments:

## Geoip database management in air-gapped environments [ls-geoip]

The [{{ls}} geoip filter](logstash://reference/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash://reference/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.
The [{{ls}} geoip filter](logstash-docs-md://lsr/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash-docs-md://lsr/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.
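For example, to manage the database manually, a pipeline can point the filter at a locally provisioned database file, which disables automatic updates for that plugin instance (the field name and file path are hypothetical):

```
filter {
  geoip {
    source => "[client][ip]"                        # hypothetical source field
    database => "/etc/logstash/GeoLite2-City.mmdb"  # manually managed database file
  }
}
```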
