verification_mode: full
----

== Kafka output and using Logstash to index data to Elasticsearch

If you are considering using {ls} to ship data from `kafka` to {es}, be aware
that Elastic does not currently test this kind of setup.

The structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec.

For more details, refer to the {ls} output for {agent} documentation.

[source,yaml]
----
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
----
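For the {es} side of such a pipeline, the output can reuse the same settings shown in the {ls} output examples in these docs. The sketch below is illustrative only; the host, API key, and CA path are placeholders you must replace for your environment:

[source,yaml]
----
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    api_key => "<api_key>"
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----

With `data_stream => true`, events are routed to data streams; keeping `ecs_compatibility` disabled on the input side helps ensure the documents arrive with the structure {agent} originally produced.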

== Kafka output configuration settings

The `kafka` output supports the following settings, grouped by category.
Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently.

// =============================================================================

|===
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example configures a {ls} pipeline that listens on port `5044` for
incoming {agent} connections and routes received events to {es}.

The {ls} pipeline definition below is only an example; adapt it to your environment.

[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none # don't modify the events' schema at all
    # or minimal change, add only ssl and source metadata
    # enrich => [ssl_peer_metadata, source_metadata]
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API Key used by {ls} to ship data to the destination data streams.
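If you do not already have an API key, one way to create it is with the {es} create API key endpoint. The sketch below is an assumption-laden example: the key name, role name, and index patterns are placeholders, not prescribed values.

[source,console]
----
POST /_security/api_key
{
  "name": "logstash-output",
  "role_descriptors": {
    "logstash_writer": {
      "cluster": ["monitor"],
      "indices": [
        {
          "names": ["logs-*-*", "metrics-*-*"],
          "privileges": ["auto_configure", "create_doc"]
        }
      ]
    }
  }
}
----

Note that the {ls} `elasticsearch` output expects the `api_key` value in `id:api_key` form, combining the `id` and `api_key` fields from the response.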

For more information about configuring {ls}, refer to
{logstash-ref}/configuration.html[Configuring {ls}] and

Specify these settings to send data over a secure connection to Kafka. In the {fleet} <<output-settings,Output settings>>, make sure that the Kafka output type is selected.


[discrete]
== General settings

Before using the {ls} output, you need to make sure that for any integrations that
have been added to your {agent} policy, the integration assets are installed on the
destination cluster.

To learn how to generate certificates, refer to <<secure-logstash-connections>>.
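The linked section is the authoritative guide. Purely as an illustrative sketch (file names and subject names are placeholder assumptions), a self-signed CA and server certificate, with the key converted to PKCS#8 as the pipeline example below expects, could be produced with `openssl`:

[source,shell]
----
# Illustrative only -- refer to the certificate guide linked above.
# Create a self-signed CA.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=Example CA" -keyout ca.key -out ca.crt
# Create a server key and certificate signing request.
openssl req -newkey rsa:4096 -nodes \
  -subj "/CN=logstash.example.com" -keyout server.key -out server.csr
# Sign the server certificate with the CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt
# Convert the server key to PKCS#8, as `ssl_key` expects.
openssl pkcs8 -topk8 -nocrypt -in server.key -out server.pkcs8.key
----

With `ssl_verify_mode => "force_peer"`, the {agent} side also needs a client certificate signed by a CA that {ls} trusts; that step is covered in the linked guide.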

To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example configures a {ls} pipeline that listens on port `5044` for
incoming {agent} connections and routes received events to {es}.

The {ls} pipeline definition below is an example. Refer to the `Additional Logstash
configuration required` steps shown when you create the {ls} output in the {fleet} Outputs page.

[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none # don't modify the events' schema at all
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API Key obtained from the {ls} output creation steps in Fleet.

[cols="2*<a"]
|===
|
include::../elastic-agent/configuration/outputs/output-shared-settings.asciidoc[]

|===

:type!: