32 changes: 16 additions & 16 deletions deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md
@@ -8,7 +8,7 @@ mapped_pages:

# Logstash plugins [k8s-logstash-plugins]

-The power of {{ls}} is in the plugins--[inputs](logstash://reference/input-plugins.md), [outputs](logstash://reference/output-plugins.md), [filters](logstash://reference/filter-plugins.md), and [codecs](logstash://reference/codec-plugins.md).
+The power of {{ls}} is in the plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and [codecs](logstash-docs-md://lsr/codec-plugins.md).

In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.

@@ -90,7 +90,7 @@ spec:

**Static read-only files**

-Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash://reference/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.
+Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash-docs-md://lsr/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.

::::{tip}
In the plugin documentation, these plugin settings are typically identified by `path` or an `array` of `paths`.
@@ -99,7 +99,7 @@ In the plugin documentation, these plugin settings are typically identified by `

To use these in your manifest, create a ConfigMap or Secret representing the asset, a Volume in your `podTemplate.spec` containing the ConfigMap or Secret, and mount that Volume with a VolumeMount in your `podTemplateSpec.container` section of your {{ls}} resource.

-This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](logstash://reference/plugins-filters-ruby.md) plugin.
+This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md) plugin.

First, create the ConfigMap.
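The full manifest is collapsed in this diff view; as a minimal sketch, a ConfigMap carrying a ruby script for the filter might look like this (object and file names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ruby-drop-percentage
data:
  drop_percentage.rb: |
    # Script API for logstash-filter-ruby: register receives script_params,
    # and filter must return an array of events to keep.
    def register(params)
      @drop_percentage = params["percentage"] || 0.1
    end

    def filter(event)
      return [] if rand <= @drop_percentage
      [event]
    end
```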

@@ -143,7 +143,7 @@ spec:

### Larger read-only assets (1 MiB+) [k8s-logstash-working-with-plugins-large-ro]

-Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) dictionary.
+Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md) dictionary.

You can add files using:

@@ -239,7 +239,7 @@ After you build and deploy the custom image, include it in the {{ls}} manifest.
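
A minimal sketch of referencing such an image (registry, name, and tag are illustrative):

```yaml
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash-sample
spec:
  version: 8.16.1
  count: 1
  # Custom image built from the official Logstash base image
  image: my-registry.example.com/logstash-with-jdbc-drivers:8.16.1
```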

### Writable storage [k8s-logstash-working-with-plugins-writable]

-Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](logstash://reference/plugins-outputs-file.md).
+Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](logstash-docs-md://lsr/plugins-outputs-file.md).

{{ls}} on ECK by default supplies a small 1.5 GiB (gibibyte) persistent volume to each pod. This volume, named `logstash-data`, is mounted at `/usr/share/logstash/data`, the default location for most plugin use cases. It is stable across restarts of {{ls}} pods and is suitable for many use cases.
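
If a plugin needs more space than the default claim provides, the `logstash-data` volume can be enlarged by redeclaring it under `volumeClaimTemplates`; a minimal sketch, assuming a 10Gi request:

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: logstash-data  # reuses the default volume name to override its size
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```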

@@ -333,7 +333,7 @@ spec:

::::{admonition} Horizontal scaling for {{ls}} plugins
* Not all {{ls}} deployments can be scaled horizontally by increasing the number of {{ls}} Pods defined in the {{ls}} resource. Depending on the types of plugins in a {{ls}} installation, increasing the number of pods may cause data duplication, data loss, incorrect data, or may waste resources with pods unable to be utilized correctly.
-* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.
+* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.

::::

@@ -345,12 +345,12 @@ spec:
* They **must** specify `pipeline.workers=1` for any pipelines that use them (see the sketch after the examples below).
* The number of pods cannot be scaled above 1.

-Examples of aggregating filters include [`logstash-filter-aggregate`](logstash://reference/plugins-filters-aggregate.md), [`logstash-filter-csv`](logstash://reference/plugins-filters-csv.md) when `autodetect_column_names` set to `true`, and any [`logstash-filter-ruby`](logstash://reference/plugins-filters-ruby.md) implementations that perform aggregations.
+Examples of aggregating filters include [`logstash-filter-aggregate`](logstash-docs-md://lsr/plugins-filters-aggregate.md), [`logstash-filter-csv`](logstash-docs-md://lsr/plugins-filters-csv.md) when `autodetect_column_names` set to `true`, and any [`logstash-filter-ruby`](logstash-docs-md://lsr/plugins-filters-ruby.md) implementations that perform aggregations.
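
A minimal sketch of pinning such a pipeline to a single worker on a single pod (the pipeline contents are illustrative):

```yaml
spec:
  count: 1  # aggregating filters: do not scale beyond one pod
  pipelines:
    - pipeline.id: aggregating
      pipeline.workers: 1  # required for pipelines that use aggregating filters
      config.string: |
        input { beats { port => 5044 } }
        filter {
          aggregate {
            task_id => "%{request_id}"
            code => "map['count'] ||= 0; map['count'] += 1"
          }
        }
        output { stdout {} }
```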


### Input plugins: events pushed to {{ls}} [k8s-logstash-inputs-data-pushed]

-{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash://reference/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash://reference/plugins-inputs-http.md).
+{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash-docs-md://lsr/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash-docs-md://lsr/plugins-inputs-http.md).
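
As a sketch, a Beats listener scaled to three pods behind a Service might look like this (service name and port are illustrative):

```yaml
spec:
  count: 3  # push-based inputs: pods can scale freely
  services:
    - name: beats
      service:
        spec:
          type: ClusterIP
          ports:
            - port: 5044
              name: "filebeat"
              protocol: TCP
```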


### Input plugins: {{ls}} maintains state [k8s-logstash-inputs-local-checkpoints]
@@ -361,16 +361,16 @@ Note that plugins that retrieve data from external sources, and require some lev

Input plugins that include configuration settings such as `sincedb`, `checkpoint` or `sql_last_run_metadata` may fall into this category.

-Examples of these plugins include [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash://reference/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash://reference/plugins-inputs-file.md).
+Examples of these plugins include [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash-docs-md://lsr/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash-docs-md://lsr/plugins-inputs-file.md).


### Input plugins: external source stores state [k8s-logstash-inputs-external-state]

{{ls}} installations that use input plugins that retrieve data from an external source, and **rely on the external source to store state** can scale based on the parameters of the external source.

-For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.
+For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.

-Examples of these plugins include [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash://reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash://reference/plugins-inputs-kinesis.md).
+Examples of these plugins include [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash-docs-md://lsr/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash-docs-md://lsr/plugins-inputs-kinesis.md).
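
As a sketch, with a topic of 12 partitions the pod count can usefully scale up to 12, with every pod joining the same consumer group (broker, topic, and group names are illustrative):

```json
input {
  kafka {
    bootstrap_servers => "kafka.default.svc:9092"
    topics => ["events"]
    group_id => "logstash"   # all pods join the same consumer group
    consumer_threads => 1    # partitions are spread across pods and threads
  }
}
```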



@@ -390,12 +390,12 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin

### {{ls}} integration plugin [k8s-logstash-plugin-considerations-ls-integration]

-When your pipeline uses the [`Logstash integration`](logstash://reference/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash://reference/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
+When your pipeline uses the [`Logstash integration`](logstash-docs-md://lsr/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash-docs-md://lsr/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
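
A sketch of such an output definition (the host address is illustrative):

```json
output {
  logstash {
    hosts => ["logstash-receiver-ls-api.default.svc:9800"]
    keepalive => false  # rebalance connections across pods instead of pinning one
  }
}
```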


### Elasticsearch output plugin [k8s-logstash-plugin-considerations-es-output]

-The [`elasticsearch output`](logstash://reference/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.
+The [`elasticsearch output`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.

You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md).
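
The full example is collapsed in this diff view; a minimal sketch of such a role, held in a Secret whose `roles.yml` entry the {{es}} resource can reference under `spec.auth.roles` (role and index names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logstash-writer-role
stringData:
  roles.yml: |
    logstash_writer:
      cluster: ["monitor", "manage_index_templates"]
      indices:
        - names: ["logs-*"]
          privileges: ["create_index", "create", "write", "manage"]
```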

@@ -419,7 +419,7 @@ stringData:

### Elastic_integration filter plugin [k8s-logstash-plugin-considerations-integration-filter]

-The [`elastic_integration filter`](logstash://reference/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables.
+The [`elastic_integration filter`](logstash-docs-md://lsr/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables.

```json
elastic_integration {
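  # The rest of this example is collapsed in this diff view; typically the
  # {{es}} connection details follow, which can come from an ElasticsearchRef
  # or from environment variables.
}
```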
@@ -448,15 +448,15 @@ stringData:

### Elastic Agent input and Beats input plugins [k8s-logstash-plugin-considerations-agent-beats]

-When you use the [Elastic Agent input](logstash://reference/plugins-inputs-elastic_agent.md) or the [Beats input](logstash://reference/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
+When you use the [Elastic Agent input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or the [Beats input](logstash-docs-md://lsr/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
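
On the Beats side, a sketch of the corresponding output settings (the host is illustrative); note that `ttl` takes effect only when async pipelining is disabled:

```yaml
output.logstash:
  hosts: ["logstash-ls-api.default.svc:5044"]
  ttl: 60s        # recycle connections so load spreads across pods
  pipelining: 0   # ttl is not honored while async pipelining is enabled
```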



## Adding custom plugins [k8s-logstash-working-with-custom-plugins]

If you need plugins in addition to those included in the standard {{ls}} distribution, you can add them. Create a custom Docker image that includes the installed plugins, using the `bin/logstash-plugin install` utility to add more plugins to the image so that they can be used by {{ls}} pods.

-This sample Dockerfile installs the [`logstash-filter-tld`](logstash://reference/plugins-filters-tld.md) plugin to the official {{ls}} Docker image:
+This sample Dockerfile installs the [`logstash-filter-tld`](logstash-docs-md://lsr/plugins-filters-tld.md) plugin to the official {{ls}} Docker image:

```shell
FROM docker.elastic.co/logstash/logstash:8.16.1
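# The rest of this example is collapsed in this diff view; presumably the
# plugin install step, along the lines of:
RUN bin/logstash-plugin install logstash-filter-tld
```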
@@ -18,7 +18,7 @@ applies_to:

Learn how to set up disaster recovery between two clusters based on bi-directional {{ccr}}. The following tutorial is designed for data streams which support [update by query](../../../manage-data/data-store/data-streams/use-data-stream.md#update-docs-in-a-data-stream-by-query) and [delete by query](../../../manage-data/data-store/data-streams/use-data-stream.md#delete-docs-in-a-data-stream-by-query). You can only perform these actions on the leader index.

-This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash://reference/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
+This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
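
A sketch of such an output, with events load balanced across the two clusters (host addresses are illustrative):

```json
output {
  elasticsearch {
    # events are distributed across all listed hosts
    hosts => ["https://cluster-a.example.com:9243", "https://cluster-b.example.com:9243"]
  }
}
```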

* Setting up a remote cluster on `clusterA` and `clusterB`.
* Setting up bi-directional cross-cluster replication with exclusion patterns.
1 change: 1 addition & 0 deletions docset.yml
@@ -1,4 +1,4 @@
project: 'Elastic documentation'

max_toc_depth: 2

features:
@@ -52,6 +52,7 @@
- integrations
- kibana
- logstash
+- logstash-docs-md
- search-ui
- security-docs

2 changes: 1 addition & 1 deletion explore-analyze/query-filter/tools/grok-debugger.md
@@ -10,7 +10,7 @@ mapped_pages:

You can build and debug grok patterns in the {{kib}} **Grok Debugger** before you use them in your data processing pipelines. Grok is a pattern matching syntax that you can use to parse arbitrary text and structure it. Grok is good for parsing syslog, apache, and other webserver logs, mysql logs, and in general, any log format that is written for human consumption.

-Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash://reference/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).
+Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash-docs-md://lsr/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).

The {{stack}} ships with more than 120 reusable grok patterns. For a complete list of patterns, see [{{es}} grok patterns](https://github.com/elastic/elasticsearch/tree/master/libs/grok/src/main/resources/patterns) and [{{ls}} grok patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns).

2 changes: 1 addition & 1 deletion explore-analyze/scripting/grok.md
@@ -46,7 +46,7 @@ The first value is a number, followed by what appears to be an IP address. You c

To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.

-The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash://reference/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
+The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash-docs-md://lsr/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
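
A sketch of selecting the ECS pattern set in a grok filter (the pattern itself is illustrative):

```json
filter {
  grok {
    ecs_compatibility => "v1"  # use the ECS-compliant pattern definitions
    match => { "message" => "%{HTTPD_COMBINEDLOG}" }
  }
}
```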

New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible.

@@ -21,7 +21,7 @@ You can also [convert an index alias to a data stream](#convert-index-alias-to-d
::::{important}
If you use {{fleet}}, {{agent}}, or {{ls}}, skip this tutorial. They all set up data streams for you.

-For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash://reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
+For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.

::::
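
For reference, a sketch of those data stream settings in an `elasticsearch` output (all values are illustrative):

```json
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.com:9200"]
    data_stream => "true"
    data_stream_type => "logs"
    data_stream_dataset => "myapp"
    data_stream_namespace => "default"
  }
}
```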
