21 changes: 16 additions & 5 deletions docs/plugins/filters/elastic_integration.asciidoc
@@ -5,10 +5,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v9.0.0
:release_date: 2025-04-28
:changelog_url: https://github.com/elastic/logstash-filter-elastic_integration/blob/v9.0.0/CHANGELOG.md
:include_path: ../include
:version: v9.2.0
:release_date: 2025-10-02
:changelog_url: https://github.com/elastic/logstash-filter-elastic_integration/blob/v9.2.0/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
@@ -19,7 +19,7 @@ END - GENERATED VARIABLES, DO NOT EDIT!

=== {elastic-integration-name} filter plugin

include::{include_path}/plugin_header.asciidoc[]
include::{include_path}/plugin_header-nonstandard.asciidoc[]

.Elastic Enterprise License
****
@@ -361,6 +361,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
| <<plugins-{type}s-{plugin}-hosts>> |<<array,array>>|No
| <<plugins-{type}s-{plugin}-password>> | <<password,password>>|No
| <<plugins-{type}s-{plugin}-pipeline_name>> | <<string,string>>|No
| <<plugins-{type}s-{plugin}-proxy>> | <<uri,uri>>|No
| <<plugins-{type}s-{plugin}-ssl_certificate>> | <<path,path>>|No
| <<plugins-{type}s-{plugin}-ssl_certificate_authorities>> |<<array,array>>|No
| <<plugins-{type}s-{plugin}-ssl_enabled>> | <<boolean,boolean>>|No
@@ -495,6 +496,16 @@ A password when using HTTP Basic Authentication to connect to {es}.
* When present, the event's initial pipeline will _not_ be auto-detected from the event's data stream fields.
* Value may be a {logstash-ref}/event-dependent-configuration.html#sprintf[sprintf-style] template; if any referenced fields cannot be resolved the event will not be routed to an ingest pipeline.
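
For example, a minimal sketch (the `[@metadata][target_pipeline]` field name is illustrative, not part of the plugin) that routes each event to the ingest pipeline named in that event:

[source,ruby]
filter {
  elastic_integration {
    pipeline_name => "%{[@metadata][target_pipeline]}"
  }
}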

[id="plugins-{type}s-{plugin}-proxy"]
===== `proxy`

* Value type is <<uri,uri>>
* There is no default value for this setting.

Address of the HTTP forward proxy used to connect to the {es} cluster.
An empty string is treated as if the proxy was not set.
Environment variables may be used to set this value, e.g. `proxy => '${LS_PROXY:}'`.
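
For example, a minimal sketch (the host and the `LS_PROXY` variable name are placeholders) that reads the proxy address from an environment variable:

[source,ruby]
filter {
  elastic_integration {
    hosts => ["https://es.example.com:9200"]
    proxy => "${LS_PROXY:}"
  }
}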

[id="plugins-{type}s-{plugin}-ssl_certificate"]
===== `ssl_certificate`

146 changes: 136 additions & 10 deletions docs/plugins/filters/elasticsearch.asciidoc
@@ -5,10 +5,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v4.2.0
:release_date: 2025-05-07
:changelog_url: https://github.com/logstash-plugins/logstash-filter-elasticsearch/blob/v4.2.0/CHANGELOG.md
:include_path: ../include
:version: v4.3.1
:release_date: 2025-09-23
:changelog_url: https://github.com/logstash-plugins/logstash-filter-elasticsearch/blob/v4.3.1/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
@@ -55,7 +55,7 @@ if [type] == "end" {

The example below reproduces the above example but utilises the query_template.
This query_template represents a full Elasticsearch query DSL and supports the
standard Logstash field substitution syntax. The example below issues
standard {ls} field substitution syntax. The example below issues

Suggested change
standard {ls} field substitution syntax. The example below issues
standard {{ls}} field substitution syntax. The example below issues

the same query as the first example but uses the template shown.

[source,ruby]
@@ -119,6 +119,110 @@ Authentication to a secure Elasticsearch cluster is possible using _one_ of the
Authorization to a secure Elasticsearch cluster requires `read` permission at index level and `monitoring` permissions at cluster level.
The `monitoring` permission at cluster level is necessary to perform periodic connectivity checks.

[id="plugins-{type}s-{plugin}-esql"]
==== {esql} support

.Technical Preview
****
The {esql} feature that allows using ES|QL queries with this plugin is in Technical Preview.
Configuration options and implementation details are subject to change in minor releases without being preceded by deprecation warnings.
****

{es} Query Language ({esql}) provides a SQL-like interface for querying your {es} data.

To use {esql}, this plugin needs to be installed in {ls} 8.17.4 or newer, and must be connected to {es} 8.11 or newer.

To configure an {esql} query in the plugin, set `query_type => 'esql'` and provide your {esql} query in the `query` parameter.

IMPORTANT: We recommend understanding {ref}/esql-limitations.html[{esql} current limitations] before using it in production environments.

The following is a basic {esql} query that sets the food name on the transaction event based on the upstream event's food ID:
[source, ruby]
filter {
  elasticsearch {
    hosts => [ 'https://..']
    api_key => '....'
    query_type => 'esql'
    query => '
      FROM food-index
      | WHERE id == ?food_id
    '
    query_params => {
      "food_id" => "[food][id]"
    }
  }
}

Set `config.support_escapes: true` in `logstash.yml` if you need to escape special characters in the query.

In the resulting event, the plugin sets the total result size in the `[@metadata][total_values]` field.

[id="plugins-{type}s-{plugin}-esql-event-mapping"]
===== Mapping {esql} result to {ls} event
{esql} returns query results in a structured tabular format, where data is organized into _columns_ (fields) and _values_ (entries).
The plugin maps each value entry to an event, populating corresponding fields.
For example, a query might produce a table like:

[cols="2,1,1,1,2",options="header"]
|===
|`timestamp` |`user_id` | `action` | `status.code` | `status.desc`

|2025-04-10T12:00:00 |123 |login |200 | Success
|2025-04-10T12:05:00 |456 |purchase |403 | Forbidden (unauthorized user)
|===

In this case, the plugin creates two JSON-like objects, as shown below, and places them in the `target` field of the event if `target` is defined.
If `target` is not defined, the plugin places only the _first_ result at the root of the event.
[source, json]
[
{
"timestamp": "2025-04-10T12:00:00",
"user_id": 123,
"action": "login",
"status": {
"code": 200,
"desc": "Success"
}
},
{
"timestamp": "2025-04-10T12:05:00",
"user_id": 456,
"action": "purchase",
"status": {
"code": 403,
"desc": "Forbidden (unauthorized user)"
}
}
]

NOTE: If your index has a mapping with sub-objects where `status.code` and `status.desc` are actually dotted fields, they appear in {ls} events as a nested structure.

[id="plugins-{type}s-{plugin}-esql-multifields"]
===== Conflict on multi-fields

An {esql} query fetches all parent and sub-fields if your {es} index has https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/multi-fields[multi-fields] or https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/subobjects[subobjects].
Since {ls} events cannot contain a parent field's concrete value and sub-field values together, the plugin ignores the sub-fields with a warning and includes only the parent.
We recommend using the `RENAME` (or `DROP`, to avoid the warning) keyword in your {esql} query to explicitly rename fields when you want sub-fields included in the event.

This is a common occurrence if your template or mapping follows the pattern of always indexing strings as a "text" (`field`) + "keyword" (`field.keyword`) multi-field.
In this case, use `KEEP field` if the string is identical and there is only one sub-field, as the engine will optimize the query and retrieve the keyword; otherwise, use `KEEP field.keyword | RENAME field.keyword AS field`.
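
For example, assuming a hypothetical `message` field mapped as `text` with a `message.keyword` sub-field, either query shape below avoids the conflict:

[source,ruby]
# identical text/keyword pair with a single sub-field: keep the parent and let the engine fetch the keyword
query => 'FROM my-index | KEEP message'
# otherwise: keep the keyword sub-field and rename it back to the parent name
query => 'FROM my-index | KEEP message.keyword | RENAME message.keyword AS message'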

To illustrate the situation with an example, assume your mapping has a `time` field with `time.min` and `time.max` sub-fields, as follows:
[source, json]
"properties": {
  "time": { "type": "long" },
  "time.min": { "type": "long" },
  "time.max": { "type": "long" }
}

The {esql} result will contain all three fields, but the plugin cannot map them into the {ls} event.
To avoid this, you can use the `RENAME` keyword to rename the `time` parent field so that all three fields have unique names.
[source, ruby]
...
query => 'FROM my-index | RENAME time AS time.current'
...

For a comprehensive {esql} syntax reference and best practices, see the https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-syntax.html[{esql} documentation].

[id="plugins-{type}s-{plugin}-options"]
==== Elasticsearch Filter Configuration Options

@@ -144,6 +248,8 @@ NOTE: As of version `4.0.0` of this plugin, a number of previously deprecated se
| <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
| <<plugins-{type}s-{plugin}-proxy>> |<<uri,uri>>|No
| <<plugins-{type}s-{plugin}-query>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-query_type>> |<<string,string>>, one of `["dsl", "esql"]`|No
| <<plugins-{type}s-{plugin}-query_params>> |<<hash,hash>> or <<array,array>>|No
| <<plugins-{type}s-{plugin}-query_template>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-result_size>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-retry_on_failure>> |<<number,number>>|No
@@ -340,11 +446,30 @@ environment variables e.g. `proxy => '${LS_PROXY:}'`.
* Value type is <<string,string>>
* There is no default value for this setting.

Elasticsearch query string. More information is available in the
{ref}/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch query
string documentation].
Use either `query` or `query_template`.
The query to be executed.
The accepted query shape is a DSL query string or {esql}.
For a DSL query string, use either `query` or `query_template`.
Read the {ref}/query-dsl-query-string-query.html[{es} query string documentation] or {ref}/esql.html[{es} {esql} documentation] for more information.

[id="plugins-{type}s-{plugin}-query_type"]
===== `query_type`

* Value can be `dsl` or `esql`
* Default value is `dsl`

Defines the <<plugins-{type}s-{plugin}-query>> shape.
When `dsl`, the query shape must be a valid {es} JSON-style string.
When `esql`, the query shape must be a valid {esql} string, and the `index`, `query_template`, and `sort` parameters are not allowed.
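
For example, a minimal sketch (the index name and query are illustrative) that switches the plugin into {esql} mode:

[source,ruby]
filter {
  elasticsearch {
    hosts => [ 'https://..']
    query_type => 'esql'
    query => 'FROM my-index | LIMIT 1'
  }
}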

[id="plugins-{type}s-{plugin}-query_params"]
===== `query_params`

* Value type is <<hash,hash>> or <<array,array>>. When an array is provided, the array elements are `key`-`value` pairs.
* There is no default value for this setting.

Named parameters in {esql} to send to {es} together with the <<plugins-{type}s-{plugin}-query>>.
See {ref}/esql-rest.html#esql-rest-params[Passing parameters to a query] for more information.
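
For example, reusing the earlier food-index sketch, the named parameter `?food_id` in the query is bound to a value taken from an event field:

[source,ruby]
query => 'FROM food-index | WHERE id == ?food_id'
query_params => { "food_id" => "[food][id]" }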

[id="plugins-{type}s-{plugin}-query_template"]
===== `query_template`
@@ -541,8 +666,9 @@ Tags the event on failure to look up previous log event information. This can be

Define the target field for placing the result data.
If this setting is omitted, the target will be the root (top level) of the event.
Setting a `target` is highly recommended when using `query_type=>'esql'` so that all query results are placed in the event.

The destination fields specified in <<plugins-{type}s-{plugin}-fields>>, <<plugins-{type}s-{plugin}-aggregation_fields>>, and <<plugins-{type}s-{plugin}-docinfo_fields>> are relative to this target.
When `query_type=>'dsl'`, the destination fields specified in <<plugins-{type}s-{plugin}-fields>>, <<plugins-{type}s-{plugin}-aggregation_fields>>, and <<plugins-{type}s-{plugin}-docinfo_fields>> are relative to this target.

For example, if you want the data to be put in the `operation` field:
[source,ruby]
8 changes: 4 additions & 4 deletions docs/plugins/filters/jdbc_static.asciidoc
@@ -6,10 +6,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v5.6.0
:release_date: 2025-05-30
:changelog_url: https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.6.0/CHANGELOG.md
:include_path: ../include
:version: v5.6.1
:release_date: 2025-09-30
:changelog_url: https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.6.1/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
8 changes: 4 additions & 4 deletions docs/plugins/filters/jdbc_streaming.asciidoc
@@ -6,10 +6,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v5.6.0
:release_date: 2025-05-30
:changelog_url: https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.6.0/CHANGELOG.md
:include_path: ../include
:version: v5.6.1
:release_date: 2025-09-30
:changelog_url: https://github.com/logstash-plugins/logstash-integration-jdbc/blob/v5.6.1/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
33 changes: 25 additions & 8 deletions docs/plugins/filters/translate.asciidoc
@@ -5,10 +5,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v3.4.2
:release_date: 2023-06-14
:changelog_url: https://github.com/logstash-plugins/logstash-filter-translate/blob/v3.4.2/CHANGELOG.md
:include_path: ../include
:version: v3.5.0
:release_date: 2025-08-04
:changelog_url: https://github.com/logstash-plugins/logstash-filter-translate/blob/v3.5.0/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
@@ -25,12 +25,12 @@ A general search and replace tool that uses a configured hash
and/or a file to determine replacement values. Currently supported are
YAML, JSON, and CSV files. Each dictionary item is a key value pair.

You can specify dictionary entries in one of two ways:
You can specify dictionary entries in one of two ways:

* The `dictionary` configuration item can contain a hash representing
the mapping.
the mapping.
* An external file (readable by logstash) may be specified in the
`dictionary_path` configuration item.
`dictionary_path` configuration item.

These two methods may not be used in conjunction; it will produce an error.

@@ -110,6 +110,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ
| <<plugins-{type}s-{plugin}-refresh_behaviour>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-target>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-yaml_dictionary_code_point_limit>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-yaml_load_strategy>> |<<string,string>>, one of `["one_shot", "streaming"]`|No
|=======================================================================

Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
@@ -158,7 +159,7 @@ NOTE: It is an error to specify both `dictionary` and `dictionary_path`.
* There is no default value for this setting.

The full path of the external dictionary file. The format of the table should be
a standard YAML, JSON, or CSV.
a standard YAML, JSON, or CSV.

Specify any integer-based keys in quotes. The value taken from the event's
`source` setting is converted to a string. The lookup dictionary keys must also
@@ -433,5 +434,21 @@ the filter will succeed. This will clobber the old value of the source field!
The maximum number of code points in the YAML file in `dictionary_path`. Please be aware that the byte limit depends on the encoding.
This setting applies to YAML files only. A YAML file over the limit throws an exception.

[id="plugins-{type}s-{plugin}-yaml_load_strategy"]
===== `yaml_load_strategy`

* Value can be any of: `one_shot`, `streaming`
* Default value is `one_shot`

How to load and parse the YAML file. This setting defaults to `one_shot`, which loads the entire
YAML file into the parser in one go, emitting the final dictionary from the fully parsed YAML document.

Setting this to `streaming` instead instructs the parser to emit one "YAML element" at a time, constructing the dictionary
during parsing. This mode drastically reduces the amount of memory required to load or refresh the dictionary, and it is also faster.

Due to underlying implementation differences, this mode only supports basic types such as arrays, objects, strings, numbers, and booleans, and does not support tags.

If you have many translate filters with large YAML documents, consider changing this setting to `streaming`.
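
For example, a minimal sketch (the dictionary path, source, and target fields are illustrative) that loads a large YAML dictionary in streaming mode:

[source,ruby]
filter {
  translate {
    source => "[http][response][status_code]"
    target => "[http][response][status_message]"
    dictionary_path => "/etc/logstash/status_codes.yml"
    yaml_load_strategy => "streaming"
  }
}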

[id="plugins-{type}s-{plugin}-common-options"]
include::{include_path}/{type}.asciidoc[]
8 changes: 4 additions & 4 deletions docs/plugins/filters/xml.asciidoc
@@ -5,10 +5,10 @@
///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: v4.3.1
:release_date: 2025-04-22
:changelog_url: https://github.com/logstash-plugins/logstash-filter-xml/blob/v4.3.1/CHANGELOG.md
:include_path: ../include
:version: v4.3.2
:release_date: 2025-07-24
:changelog_url: https://github.com/logstash-plugins/logstash-filter-xml/blob/v4.3.2/CHANGELOG.md
:include_path: ../../../../logstash/docs/include

Suggested change
:include_path: ../../../../logstash/docs/include
:include_path: ../include

///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////