diff --git a/docs/reference/ea-integration-tutorial.md b/docs/reference/ea-integration-tutorial.md
new file mode 100644
index 00000000000..7e6e587f51f
--- /dev/null
+++ b/docs/reference/ea-integration-tutorial.md
@@ -0,0 +1,187 @@
+[[ea-integrations-tutorial]]
+=== Tutorial: Using the {ls} `elastic_integration` filter to extend Elastic {integrations}
+++++
+Tutorial: {ls} `elastic_integration` filter
+++++
+
+You can use {ls} to transform events collected by {agent} and paired with an {integrations-docs}[Elastic integration].
+You get the benefits of Elastic integrations--such as the simplicity of ingesting data from a wide variety of data
+sources and ensuring compliance with the {ecs-ref}/index.html[Elastic Common Schema (ECS)]--combined with the extra
+processing power of {ls}.
+
+This new functionality is made possible by the <<plugins-filters-elastic_integration,`elastic_integration`>> plugin.
+When you include the `elastic_integration` filter in your configuration, {ls} reads certain field values generated by the {agent},
+and uses them to apply the transformations from Elastic integrations.
+This allows you to further process events in the Logstash pipeline before sending them to their
+configured destinations.
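+
+For orientation, here is a minimal sketch of the pipeline shape; the port and connection settings are placeholders, and the full samples later in this tutorial fill them in:
+
+[source,txt]
+-----
+input {
+  elastic_agent { port => 5055 }   # events arrive from Elastic Agent
+}
+
+filter {
+  elastic_integration {
+    # connection settings for the cluster that hosts your integrations
+  }
+}
+
+output {
+  elasticsearch {
+    # connection settings for your destination cluster
+  }
+}
+-----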
+
+This tutorial walks you through adding the {integrations-docs}/crowdstrike-intro[CrowdStrike integration] and sending the data to {ess} or self-managed {es}.
+
+
+[[ea-integrations-prereqs]]
+==== Prerequisites
+
+You need:
+
+* A working {es} cluster
+* A {ls} instance
+* {fleet-server}
+* An {fleet-guide}/elastic-agent-installation.html[{agent} installed] on the hosts you want to collect data from, and configured to {fleet-guide}/logstash-output.html[send output to {ls}]
+* An active Elastic Enterprise https://www.elastic.co/subscriptions[subscription]
+* A user configured with the privileges required by the <<plugins-filters-elastic_integration,`elastic_integration` filter>>
+
+NOTE: Even though the focus of this tutorial is {fleet}-managed agents, you can use the `elastic_integration` filter and this
+general approach with {fleet-guide}/elastic-agent-configuration.html[self-managed agents].
+
+
+[[ea-integrations-process-overview]]
+==== Process overview
+
+* <<ea-integrations-fleet>>
+* <<ea-integrations-create-policy>>
+* <<ea-integrations-pipeline>>
+
+[discrete]
+[[ea-integrations-fleet]]
+=== Configure {fleet} to send data from {agent} to {ls}
+
+. For {fleet}-managed agents, go to {kib} and navigate to *Fleet > Settings*.
+
+. Create a new output and specify {ls} as the output type.
+
+. Add the {ls} hosts (domain names or IP addresses) that the {agent} should send data to.
+
+. Add the client SSL certificate and the client SSL certificate key to the configuration. The matching TLS settings on the {ls} side are shown in the sketch after these steps.
+
+. Click *Save and apply settings* in the bottom right-hand corner of the page.
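+
+The certificates you add here are presented by {agent} when it connects, so the `elastic_agent` input in your {ls} pipeline needs matching TLS settings.
+Here's a sketch, assuming certificates at hypothetical paths; the option names follow recent versions of the <<plugins-inputs-elastic_agent,`elastic_agent` input>>, so check the docs for your version:
+
+[source,txt]
+-----
+input {
+  elastic_agent {
+    port => 5055
+    ssl_enabled => true
+    # server certificate and key (PKCS#8) that Elastic Agent uses to verify Logstash
+    ssl_certificate => "/path/to/logstash.crt"
+    ssl_key => "/path/to/logstash.pkcs8.key"
+    # CA used to verify the client certificate presented by Elastic Agent
+    ssl_certificate_authorities => ["/path/to/ca.crt"]
+    ssl_client_authentication => "required"
+  }
+}
+-----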
+
+[discrete]
+[[ea-integrations-create-policy]]
+=== Create an {agent} policy with the necessary integrations
+
+. In {kib}, navigate to *Fleet > Agent policies*, and select *Create agent policy*.
+
+. Give this policy a name, and then select *Advanced options*.
+
+. Change the *Output for integrations* setting to the {ls} output you created.
+
+. Click *Create agent policy*.
+
+. Select the policy name, and click *Add integration*.
++
+This step takes you to the integrations browser, where you can select an integration that has everything
+necessary to _integrate_ the data source with your other data in the {stack}.
+We'll use CrowdStrike as our example in this tutorial.
+
+. On the *CrowdStrike* integration overview page, click *Add CrowdStrike* to configure the integration.
+
+. Configure the integration to collect the data you need.
+In step 2 at the bottom of the page (*Where to add this integration?*), make sure that the *Existing hosts* option
+is selected, and that the selected agent policy is the one you created with the {ls} output.
+This policy should be selected by default.
+
+. Click *Save and continue*.
++
+You have the option to add the {agent} to your hosts.
+If you haven't already, {fleet-guide}/elastic-agent-installation.html[install the {agent}] on the host where you want to collect data.
+
+
+[discrete]
+[[ea-integrations-pipeline]]
+=== Configure {ls} to use the `elastic_integration` filter plugin
+
+. Create a new {logstash-ref}/configuration.html[{ls} pipeline].
+. Be sure to include these plugins:
+
+* <<plugins-inputs-elastic_agent,`elastic_agent` input>>
+* <<plugins-filters-elastic_integration,`elastic_integration` filter>>
+* <<plugins-outputs-elasticsearch,`elasticsearch` output>>
+
+Note that every event sent from the {agent} to {ls} contains specific meta-fields.
+{ls} expects events to contain a top-level `data_stream` field with `type`, `dataset`, and `namespace` sub-fields.
+
+{ls} uses this information and its connection to {es} to determine which integrations to apply to the event before sending the event to its destination output.
+{ls} frequently synchronizes with {es} to ensure that it has the most recent versions of the enabled integrations.
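+
+These `data_stream` sub-fields are also available to your own filters and conditionals once the `elastic_integration` filter has run.
+For example, here's a sketch that tags CrowdStrike events for extra downstream processing; the dataset prefix is illustrative, and connection settings are omitted:
+
+[source,txt]
+-----
+filter {
+  elastic_integration {
+    # connection settings for your cluster go here
+  }
+
+  # illustrative only: tag events from CrowdStrike data streams
+  if [data_stream][dataset] =~ /^crowdstrike\./ {
+    mutate { add_tag => ["crowdstrike"] }
+  }
+}
+-----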
+
+
+[discrete]
+[[ea-integrations-ess-sample]]
+==== Sample configuration: output to {ess}
+
+This sample illustrates using the `elastic_agent` input and the `elastic_integration` filter for processing in {ls}, and then sending the output to {ess}.
+
+Check out the <<plugins-filters-elastic_integration,`elastic_integration` filter documentation>> for the full list of configuration options.
+
+[source,txt]
+-----
+input {
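+  # receives events from Elastic Agent; match this port to the one in your Fleet Logstash output hosts setting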
+ elastic_agent { port => 5055 }
+}
+
+filter {
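+  # connects to your deployment to determine which integrations to apply; both values below are placeholders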
+ elastic_integration {
+ cloud_id => "your-cloud:id"
+ api_key => "your-api-key"
+ }
+}
+
+output {
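+  # stdout is optional; useful for verifying events during setup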
+ stdout {}
+ elasticsearch {
+ cloud_id => "your-cloud:id"
+ api_key => "your-api-key"
+ }
+}
+-----
+
+All processing occurs in {ls} before events are forwarded to {ess}.
+
+[discrete]
+[[ea-integrations-es-sample]]
+==== Sample configuration: output to self-managed {es}
+
+This sample illustrates using the `elastic_agent` input and the `elastic_integration` filter for processing in {ls}, and then sending the output to {es}.
+
+Check out the <<plugins-filters-elastic_integration,`elastic_integration` filter documentation>> for the full list of configuration options.
+
+Check out <> for more info.
+
+[source,txt]
+-----
+input {
+ elastic_agent { port => 5055 }
+}
+
+filter {
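+  # the username/password below are placeholders; see the note after this sample about required privileges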
+ elastic_integration {
+ hosts => ["{es-host}:9200"]
+ ssl_enabled => true
+ ssl_certificate_authorities => "/usr/share/logstash/config/certs/ca-cert.pem"
+ username => "elastic"
+ password => "changeme"
+ }
+}
+
+output {
+ stdout {
+    codec => rubydebug # to inspect data stream events while you test
+  }
+  # send the processed events to {es}
+ elasticsearch {
+ hosts => "{es-host}:9200"
+ user => "elastic"
+ password => "changeme"
+ ssl_certificate_authorities => "/usr/share/logstash/config/certs/ca-cert.pem"
+ }
+}
+-----
+
+Note that the user credentials that you specify in the `elastic_integration` filter must have sufficient privileges to get information about {es} and the integrations that you are using.
+
+If your {agent} and {ls} pipeline are configured correctly, then events go to {ls} for processing before {ls} forwards them on to {es}.
+
+Check out the <> page for troubleshooting guidance if you run into issues.
+
+
+
+