deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md (+9 −9)
@@ -90,7 +90,7 @@ spec:
 **Static read-only files**

-Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash://reference/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash://reference/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.
+Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](logstash-docs-md://lsr/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](logstash-docs-md://lsr/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest.

 ::::{tip}
 In the plugin documentation, these plugin settings are typically identified by `path` or an `array` of `paths`.
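The paragraph changed in this hunk says to make small read-only files available to the {{ls}} resource in your manifest, but the hunk doesn't show what that looks like. As an illustrative sketch only (the ConfigMap name, volume name, and mount path are hypothetical, not taken from the diff), a custom grok-patterns mount on an ECK-managed {{ls}} resource might look like:

```yaml
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart                       # hypothetical resource name
spec:
  podTemplate:
    spec:
      containers:
        - name: logstash
          volumeMounts:
            - name: patterns             # expose the pattern files read-only
              mountPath: /usr/share/logstash/patterns
              readOnly: true
      volumes:
        - name: patterns
          configMap:
            name: custom-grok-patterns   # hypothetical ConfigMap holding pattern files
```

The same pattern applies to a translate-filter dictionary or a JDBC SQL statement file: ship the file in a ConfigMap or Secret and reference the mount path from the plugin's `path`-style setting.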
@@ -333,7 +333,7 @@ spec:
 ::::{admonition} Horizontal scaling for {{ls}} plugins
 * Not all {{ls}} deployments can be scaled horizontally by increasing the number of {{ls}} Pods defined in the {{ls}} resource. Depending on the types of plugins in a {{ls}} installation, increasing the number of pods may cause data duplication, data loss, incorrect data, or may waste resources with pods unable to be utilized correctly.

-* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.
+* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin.

 ::::
@@ -350,7 +350,7 @@ Examples of aggregating filters include [`logstash-filter-aggregate`](logstash:/
 ### Input plugins: events pushed to {{ls}} [k8s-logstash-inputs-data-pushed]

-{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash://reference/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash://reference/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash://reference/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash://reference/plugins-inputs-http.md).
+{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](logstash-docs-md://lsr/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](logstash-docs-md://lsr/plugins-inputs-tcp.md), and [`logstash-input-http`](logstash-docs-md://lsr/plugins-inputs-http.md).

 ### Input plugins: {{ls}} maintains state [k8s-logstash-inputs-local-checkpoints]
@@ -361,16 +361,16 @@ Note that plugins that retrieve data from external sources, and require some lev
 Input plugins that include configuration settings such as `sincedb`, `checkpoint` or `sql_last_run_metadata` may fall into this category.

-Examples of these plugins include [`logstash-input-jdbc`](logstash://reference/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash://reference/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash://reference/plugins-inputs-file.md).
+Examples of these plugins include [`logstash-input-jdbc`](logstash-docs-md://lsr/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](logstash-docs-md://lsr/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](logstash-docs-md://lsr/plugins-inputs-file.md).

 ### Input plugins: external source stores state [k8s-logstash-inputs-external-state]

 {{ls}} installations that use input plugins that retrieve data from an external source, and **rely on the external source to store state** can scale based on the parameters of the external source.

-For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.
+For example, a {{ls}} installation that uses a [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data.

-Examples of these plugins include [`logstash-input-kafka`](logstash://reference/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash://reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash://reference/plugins-inputs-kinesis.md).
+Examples of these plugins include [`logstash-input-kafka`](logstash-docs-md://lsr/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](logstash://reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](logstash-docs-md://lsr/plugins-inputs-kinesis.md).
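The Kafka guidance changed above (pods can scale only up to the partition count, since a partition has at most one consumer per consumer group) can be illustrated with a minimal, hypothetical pipeline fragment — broker address, topic, and group name are all made up for the sketch:

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # hypothetical broker address
    topics => ["events"]                # suppose this topic has 4 partitions
    group_id => "logstash"              # every Logstash pod joins this consumer group
  }
}
# With 4 partitions, at most 4 pods in group "logstash" receive data
# concurrently; any additional pod sits idle until a partition frees up.
```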
@@ -390,12 +390,12 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin
-When your pipeline uses the [`Logstash integration`](logstash://reference/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash://reference/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.
+When your pipeline uses the [`Logstash integration`](logstash-docs-md://lsr/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash-docs-md://lsr/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod.

-The [`elasticsearch output`](logstash://reference/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.
+The [`elasticsearch output`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}.

 You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md)
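The `keepalive=>false` advice changed in this hunk could look like the following in a pipeline definition (the downstream Service name is illustrative, not from the diff):

```
output {
  logstash {
    hosts => ["logstash-receiver:9800"]  # hypothetical downstream Logstash Service
    keepalive => false                   # don't pin the connection, so requests
                                         # spread across all receiver pods
  }
}
```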
@@ -448,7 +448,7 @@ stringData:
 ### Elastic Agent input and Beats input plugins [k8s-logstash-plugin-considerations-agent-beats]

-When you use the [Elastic Agent input](logstash://reference/plugins-inputs-elastic_agent.md) or the [Beats input](logstash://reference/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
+When you use the [Elastic Agent input](logstash-docs-md://lsr/plugins-inputs-elastic_agent.md) or the [Beats input](logstash-docs-md://lsr/plugins-inputs-beats.md), set the [`ttl`](beats://reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately.
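On the Beat side, the `ttl` setting mentioned in this hunk might be applied as follows — a sketch only, with a made-up Service name; note that Filebeat's docs state `ttl` is honored only when pipelining is disabled:

```yaml
# filebeat.yml (illustrative)
output.logstash:
  hosts: ["logstash-ls-beats:5044"]  # hypothetical Logstash Service
  ttl: 60s                           # re-establish the connection every 60s so
                                     # new Logstash pods start receiving load
  pipelining: 0                      # ttl requires the synchronous client
```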
deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md (+1 −1)
@@ -18,7 +18,7 @@ applies_to:
 Learn how to set up disaster recovery between two clusters based on bi-directional {{ccr}}. The following tutorial is designed for data streams which support [update by query](../../../manage-data/data-store/data-streams/use-data-stream.md#update-docs-in-a-data-stream-by-query) and [delete by query](../../../manage-data/data-store/data-streams/use-data-stream.md#delete-docs-in-a-data-stream-by-query). You can only perform these actions on the leader index.

-This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash://reference/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.
+This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial.

 * Setting up a remote cluster on `clusterA` and `clusterB`.
 * Setting up bi-directional cross-cluster replication with exclusion patterns.
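The load-balancing feature this hunk refers to — the {{ls}} output to {{es}} spreading events across an array of hosts — might be sketched like this (cluster endpoints are hypothetical placeholders, not values from the tutorial):

```
output {
  elasticsearch {
    hosts => ["https://clusterA:9200", "https://clusterB:9200"]  # illustrative endpoints
    # Logstash balances events across the hosts array; if one cluster
    # becomes unreachable, events keep flowing to the remaining host.
  }
}
```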
explore-analyze/query-filter/tools/grok-debugger.md (+1 −1)
@@ -10,7 +10,7 @@ mapped_pages:
 You can build and debug grok patterns in the {{kib}} **Grok Debugger** before you use them in your data processing pipelines. Grok is a pattern matching syntax that you can use to parse arbitrary text and structure it. Grok is good for parsing syslog, apache, and other webserver logs, mysql logs, and in general, any log format that is written for human consumption.

-Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash://reference/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).
+Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](elasticsearch://reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](logstash-docs-md://lsr/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md).

 The {{stack}} ships with more than 120 reusable grok patterns. For a complete list of patterns, see [{{es}} grok patterns](https://github.com/elastic/elasticsearch/tree/master/libs/grok/src/main/resources/patterns) and [{{ls}} grok patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns).
explore-analyze/scripting/grok.md (+1 −1)
@@ -46,7 +46,7 @@ The first value is a number, followed by what appears to be an IP address. You c
 To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema.

-The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash://reference/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.
+The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](logstash-docs-md://lsr/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes.

 New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible.
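The mode switch this hunk links to (the grok filter's `ecs_compatibility` setting — note the link text in the docs spells it `ecs-compatability`) might be used like this; the log pattern itself is an illustrative example, not from the diff:

```
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    ecs_compatibility => "v1"   # select the ECS-compliant pattern definitions
                                # ("disabled" keeps the legacy field names)
  }
}
```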
manage-data/data-store/data-streams/set-up-data-stream.md (+1 −1)
@@ -21,7 +21,7 @@ You can also [convert an index alias to a data stream](#convert-index-alias-to-d
 ::::{important}
 If you use {{fleet}}, {{agent}}, or {{ls}}, skip this tutorial. They all set up data streams for you.

-For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash://reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
+For {{fleet}} and {{agent}}, check out this [data streams documentation](/reference/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin.
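The data-stream settings this hunk links to might look like the following in an `elasticsearch` output block — endpoint and dataset name are hypothetical placeholders:

```
output {
  elasticsearch {
    hosts => ["https://es:9200"]    # hypothetical endpoint
    data_stream => "true"           # write to a data stream, not a plain index
    data_stream_type => "logs"
    data_stream_dataset => "myapp"  # illustrative dataset name
    data_stream_namespace => "default"
  }
}
```

Together these settings target the data stream `logs-myapp-default`, following the `type-dataset-namespace` naming scheme.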
manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md (+1 −1)
@@ -30,5 +30,5 @@ Info for air-gapped environments:
 ## Geoip database management in air-gapped environments [ls-geoip]

-The [{{ls}} geoip filter](logstash://reference/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash://reference/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.
+The [{{ls}} geoip filter](logstash-docs-md://lsr/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](logstash-docs-md://lsr/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info.