diff --git a/README.md b/README.md index 94738d69b..0cc6da7a2 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,17 @@ +> [!IMPORTANT] +> Starting with Elastic Stack v9.0.0 the content in this repo has moved. +> +> **If you need to update the docs for a version prior to 9.0.0**, you can open a PR in this repo targeting the specific version branch and backport to other branches using the relevant `backport-{branch}` labels. +> +> **If you're updating the docs for version 9.0 or later**, you can open a PR in the content's new home: [elastic/docs-content](https://github.com/elastic/docs-content). + # ingest-docs + Home for Elastic ingest documentation ## Backporting -Pull requests should be tagged with the target version of the Elastic Stack along with any relevant backport labels. In general, we only backport documentation changes to [live stack versions](https://github.com/elastic/docs/blob/master/conf.yaml#L80). For manual backports, we recommend using the [backport tool](https://github.com/sqren/backport) to easily open backport PRs. If you need help, ping **[ingest-docs](https://github.com/orgs/elastic/teams/ingest-docs)** and we'd be happy to handle the backport process for you. +Pull requests should be tagged with the target version of the Elastic Stack along with any relevant backport labels. In general, we only backport documentation changes to [live stack versions](https://github.com/elastic/docs/blob/master/conf.yaml#L80). If you need help, ping **[ingest-docs](https://github.com/orgs/elastic/teams/ingest-docs)** and we'd be happy to handle the backport process for you. 
## License diff --git a/docs/en/ingest-guide/index.asciidoc b/docs/en/ingest-guide/index.asciidoc deleted file mode 100644 index eff3fb7bd..000000000 --- a/docs/en/ingest-guide/index.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[] -include::{docs-root}/shared/attributes.asciidoc[] - -:doctype: book - -[[ingest-guide]] -= Elastic Ingest Overview - -include::ingest-intro.asciidoc[] -include::ingest-tools.asciidoc[] -include::ingest-additional-proc.asciidoc[] -include::ingest-solutions.asciidoc[] diff --git a/docs/en/ingest-guide/ingest-additional-proc.asciidoc b/docs/en/ingest-guide/ingest-additional-proc.asciidoc deleted file mode 100644 index 23d8bf54b..000000000 --- a/docs/en/ingest-guide/ingest-additional-proc.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[ingest-addl-proc]] -== Additional ingest processing - -You can start with {agent} and Elastic {integrations-docs}[integrations], and still -take advantage of additional processing options if you need them. - -{agent} processors:: -You can use link:{fleet-guide}/elastic-agent-processor-configuration.html[{agent} processors] to sanitize or enrich raw data at the source. -Use {agent} processors if you need to control what data is sent across the wire, or if you need to enrich the raw data with information available on the host. - -{es} ingest pipelines:: -You can use {es} link:{ref}/[ingest pipelines] to enrich incoming data or normalize field data before the data is indexed. -{es} ingest pipelines enable you to manipulate the data as it comes in. -This approach helps you avoid adding processing overhead to the hosts from which you're collecting data. - -{es} runtime fields:: -You can use {es} link:{ref}/runtime.html[runtime fields] to define or alter the schema at query time. 
-You can start working with your data without needing to understand how it is -structured, add fields to existing documents without reindexing your data, -override the value returned from an indexed field, and/or define fields for a -specific use without modifying the underlying schema. - -{ls} `elastic_integration` filter:: -You can use the {ls} link:{logstash-ref}/[`elastic_integration` filter] and -other link:{logstash-ref}/filter-plugins.html[{ls} filters] to -link:{logstash-ref}/ea-integrations.html[extend Elastic integrations] by -transforming data before it goes to {es}. diff --git a/docs/en/ingest-guide/ingest-faq.asciidoc b/docs/en/ingest-guide/ingest-faq.asciidoc deleted file mode 100644 index dea6534b2..000000000 --- a/docs/en/ingest-guide/ingest-faq.asciidoc +++ /dev/null @@ -1,77 +0,0 @@ -[[ingest-faq]] -== Frequently Asked Questions - -Q: What Elastic products and tools are available for ingesting data into Elasticsearch? - -Q: What's the best option for ingesting data? - -Q: What's the role of Logstash `filter-elastic-integration`? - - - -.WORK IN PROGRESS -**** -Temporary parking lot to capture outstanding questions and notes. -**** - - - -Also cover (here or in general outline): - -- https://www.elastic.co/guide/en/kibana/master/connect-to-elasticsearch.html#_add_sample_data[Sample data] -- OTel -- Beats -- Use case: GeoIP -- Airgapped -- Place for table, also adding use case + products (Exp: Logstash for multi-tenant) -- Role of LS in general content use cases - - - -[discrete] -=== Questions to answer: - -* Messaging for data sources that don't have an integration - - We're deemphasizing beats in preparation for deprecation - - We're not quite there with OTel yet - * How should we handle this in the near term? - Probably doesn't make sense to either ignore or jump them straight to Logstash - -* Should we mention Fleet and Stand-alone agent? -** If so, when, where, and how?
-* How does this relate to Ingest Architectures -* Enrichment for general content - -* How to message current vs. desired state. - Especially Beats and OTel. -* HOW TO MESSAGE OTel - Current state. Future state. -* Consistent use of terminology vs. matching users' vocabulary (keywords) - -[discrete] -==== Random - -* DocsV3 - need for a sheltered space to develop new content -** Related: https://github.com/elastic/docsmobile/issues/708 -** Need a place to incubate a new doc (previews, links, etc.) -** Refine messaging in private - - -[discrete] -=== Other resources to use, reference, reconcile - -* Timeseries decision tree (needs updates) -* PM's video -** Needs an update. (We might relocate content before updating.) -* PM's product table -** Needs an update. (We might relocate content before updating.) -** Focuses on Agent over integrations. -** Same link text resolves to different locations. -** Proposal: Harvest the good and possibly repurpose the table format. -* Ingest Reference architectures -* Linkable content such as beats? Solutions ingest resources? - -* https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html[Starting with the Elastic Platform and Solutions] -* https://www.elastic.co/guide/en/observability/current/observability-get-started.html[Get started with Elastic Observability] -* https://www.elastic.co/guide/en/security/current/ingest-data.html[Ingest data into Elastic Security] -* - diff --git a/docs/en/ingest-guide/ingest-intro.asciidoc b/docs/en/ingest-guide/ingest-intro.asciidoc deleted file mode 100644 index ce3022a13..000000000 --- a/docs/en/ingest-guide/ingest-intro.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[ingest-intro]] -== Ingesting data into {es} - -Bring your data! -Whether you call it _adding_, _indexing_, or _ingesting_ data, you have to get -the data into {es} before you can search it, visualize it, and use it for insights.
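The "get the data into {es}" step above ultimately boils down to an HTTP request with a JSON body. A minimal sketch for illustration — the index name and fields here are made up, not from the docs:

```python
import json

# Hypothetical document for a general-content use case. In a live deployment
# this JSON body would be sent to Elasticsearch (e.g. PUT my-index/_doc/1),
# or submitted through one of the Elasticsearch language clients.
document = {
    "title": "Getting started with ingest",
    "body": "Index a document, then search, visualize, and analyze it.",
    "tags": ["ingest", "elasticsearch"],
}

payload = json.dumps(document)  # the request body is plain JSON
print(payload)
```

The same body works unchanged whether you send it with `curl`, a language client, or the {kib} Dev Tools console.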
- -Our ingest tools are flexible, and support a wide range of scenarios. -We can help you with everything from popular and straightforward use cases, all -the way to advanced use cases that require additional processing in order to modify or -reshape your data before it goes to {es}. - -You can ingest: - -* **General content** (data without timestamps), such as HTML pages, catalogs, and files -* **Timestamped (time series) data**, such as logs, metrics, and traces for Elastic Security, Observability, Search solutions, or for your own custom solutions - -[discrete] -[[ingest-general]] -=== Ingesting general content - -Elastic offers tools designed to ingest specific types of general content. -The content type determines the best ingest option. - -* To index **documents** directly into {es}, use the {es} link:{ref}/docs.html[document APIs]. -* To send **application data** directly to {es}, use an link:https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} -language client]. -* To index **web page content**, use the Elastic link:https://www.elastic.co/web-crawler[web crawler]. -* To sync **data from third-party sources**, use link:{ref}/es-connectors.html[connectors]. - A connector syncs content from an original data source to an {es} index. - Using connectors you can create _searchable_, read-only replicas of your data sources. -* To index **single files** for testing in a non-production environment, use the {kib} link:{kibana-ref}/connect-to-elasticsearch.html#upload-data-kibana[file uploader]. - -If you would like to try things out before you add your own data, try using our {kibana-ref}/connect-to-elasticsearch.html#_add_sample_data[sample data]. - -[discrete] -[[ingest-timestamped]] -=== Ingesting time-stamped data - -[ingest-best-timestamped] -.What's the best approach for ingesting time-stamped data? -**** -The best approach for ingesting data is the _simplest option_ that _meets your needs_ and _satisfies your use case_.
- -In most cases, the _simplest option_ for ingesting timestamped data is using {agent} paired with an Elastic integration. - -* Install {fleet-guide}[Elastic Agent] on the computer(s) from which you want to collect data. -* Add the {integrations-docs}[Elastic integration] for the data source to your deployment. - -Integrations are available for many popular platforms and services, and are a -good place to start for ingesting data into Elastic solutions--Observability, -Security, and Search--or your own search application. - -Check out the {integrations-docs}/all_integrations[Integration quick reference] -to search for available integrations. -If you don't find an integration for your data source or if you need -additional processing to extend the integration, we still have you covered. -Check out <> for a sneak peek. -**** diff --git a/docs/en/ingest-guide/ingest-solutions.asciidoc b/docs/en/ingest-guide/ingest-solutions.asciidoc deleted file mode 100644 index b76f3dd5c..000000000 --- a/docs/en/ingest-guide/ingest-solutions.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[[ingest-for-solutions]] -== Ingesting data for Elastic solutions - -Elastic solutions--Security, Observability, and Search--are loaded with features -and functionality to help you get value and insights from your data. -{fleet-guide}[Elastic Agent] and {integrations-docs}[Elastic integrations] can help, and are the best place to start. - -When you use integrations with solutions, you have an integrated experience that offers -easier implementation and decreases the time it takes to get insights and value from your data. - -[ingest-process-overview] -.High-level overview -**** -To use {fleet-guide}[Elastic Agent] and {integrations-docs}[Elastic integrations] -with Elastic solutions: - -1. Create an link:https://www.elastic.co/cloud[{ecloud}] deployment for your solution. - If you don't have an {ecloud} account, you can sign up for a link:https://cloud.elastic.co/registration[free trial] to get started. 
-2. Add the {integrations-docs}[Elastic integration] for your data source to the deployment. -3. link:{fleet-guide}/elastic-agent-installation.html[Install {agent}] on the systems whose data you want to collect. -**** - -NOTE: {serverless-docs}[Elastic serverless] makes using solutions even easier. -Sign up for a link:{serverless-docs}/general/sign-up-trial[free trial], and check it out. - - -[discrete] -[[ingest-for-search]] -=== Ingesting data for Search - -{es} is the magic behind Search and our other solutions. -The solution gives you more pre-built components to get you up and running quickly for common use cases. - -**Resources** - -* link:{fleet-guide}/elastic-agent-installation.html[Install {agent}] -* link:https://www.elastic.co/integrations/data-integrations?solution=search[Elastic Search for integrations] -* link:{ref}[{es} Guide] -** link:{ref}/docs.html[{es} document APIs] -** link:https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} language clients] -** link:https://www.elastic.co/web-crawler[Elastic web crawler] -** link:{ref}/es-connectors.html[Elastic connectors] - - -[discrete] -[[ingest-for-obs]] -=== Ingesting data for Observability - -With link:https://www.elastic.co/observability[Elastic Observability], you can -monitor and gain insights into logs, metrics, and application traces. -The guides and resources in this section illustrate how to ingest data and use -it with the Observability solution. 
- - -**Guides for popular Observability use cases** - -* link:{estc-welcome}/getting-started-observability.html[Monitor applications and systems with Elastic Observability] -* link:https://www.elastic.co/guide/en/observability/current/logs-metrics-get-started.html[Get started with logs and metrics] -** link:https://www.elastic.co/guide/en/observability/current/logs-metrics-get-started.html#add-system-integration[Step 1: Add the {agent} System integration] -** link:https://www.elastic.co/guide/en/observability/current/logs-metrics-get-started.html#add-agent-to-fleet[Step 2: Install and run {agent}] - -* link:{serverless-docs}/observability/what-is-observability-serverless[Observability] on link:{serverless-docs}[{serverless-full}]: -** link:{serverless-docs}/observability/quickstarts/monitor-hosts-with-elastic-agent[Monitor hosts with {agent} ({serverless-short})] -** link:{serverless-docs}/observability/quickstarts/k8s-logs-metrics[Monitor your K8s cluster with {agent} ({serverless-short})] - -**Resources** - -* link:{fleet-guide}/elastic-agent-installation.html[Install {agent}] -* link:https://www.elastic.co/integrations/data-integrations?solution=observability[Elastic Observability integrations] - -[discrete] -[[ingest-for-security]] -=== Ingesting data for Security - -You can detect and respond to threats when you use -link:https://www.elastic.co/security[Elastic Security] to analyze and take -action on your data. -The guides and resources in this section illustrate how to ingest data and use it with the Security solution. 
- -**Guides for popular Security use cases** - -* link:https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-siem-security.html[Use Elastic Security for SIEM] -* link:https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-endpoint-security.html[Protect hosts with endpoint threat intelligence from Elastic Security] - -**Resources** - -* link:{fleet-guide}/elastic-agent-installation.html[Install {agent}] -* link:https://www.elastic.co/integrations/data-integrations?solution=search[Elastic Security integrations] -* link:{security-guide}/es-overview.html[Elastic Security documentation] - - -[discrete] -[[ingest-for-custom]] -=== Ingesting data for your own custom search solution - -Elastic solutions can give you a head start for common use cases, but you are not at all limited. -You can still do your own thing with a custom solution designed by _you_. - -Bring your ideas and use {es} and the {stack} to store, search, and visualize your data. - -**Resources** - -* link:{fleet-guide}/elastic-agent-installation.html[Install {agent}] -* link:{ref}[{es} Guide] -** link:{ref}/docs.html[{es} document APIs] -** link:https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} language clients] -** link:https://www.elastic.co/web-crawler[Elastic web crawler] -** link:{ref}/es-connectors.html[Elastic connectors] -* link:{estc-welcome}/getting-started-general-purpose.html[Tutorial: Get started with vector search and generative AI] - diff --git a/docs/en/ingest-guide/ingest-static.asciidoc b/docs/en/ingest-guide/ingest-static.asciidoc deleted file mode 100644 index 162bd243c..000000000 --- a/docs/en/ingest-guide/ingest-static.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[intro-general]] -== Ingesting general content - -Describe general content (non-timestamped) and give examples.
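As one concrete example of ingesting general content, here is a hedged sketch of what indexing a hypothetical product catalog looks like at the API level, using the `_bulk` request format (the index name and items are invented for illustration):

```python
import json

# Hypothetical catalog items: general content with no timestamps.
items = [
    {"name": "Trail backpack", "price": 79.99},
    {"name": "Camp stove", "price": 45.50},
]

# The Elasticsearch _bulk API takes newline-delimited JSON: an action line
# followed by the document itself, with a trailing newline at the end.
lines = []
for item in items:
    lines.append(json.dumps({"index": {"_index": "product-catalog"}}))
    lines.append(json.dumps(item))
bulk_body = "\n".join(lines) + "\n"
print(bulk_body)
```

A language client or connector builds an equivalent body for you; this just shows the wire format.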
- -.WORK IN PROGRESS -**** -Progressive disclosure: Start with basic use cases and work up to advanced processing - -Possibly repurpose and use ingest decision tree with Beats removed? -**** - -[discrete] -=== Basic use cases - -* {es} document APIs for documents. -* Elastic language clients for application data. -* Elastic web crawler for web page content. -* Connectors for data from third-party sources, such as Slack, etc. -* Kibana file uploader for individual files. -* LOGSTASH??? -** ToDO: Check out Logstash enterprisesearch-integration - -* To index **documents** directly into {es}, use the {es} document APIs. -* To send **application data** directly to {es}, use an Elastic language client. -* To index **web page content**, use the Elastic web crawler. -* To sync **data from third-party sources**, use connectors. -* To index **single files** for testing, use the Kibana file uploader. - -[discrete] -=== Advanced use cases: Data enrichment and transformation - -Tools for enriching ingested data: - -- Logstash - GEOIP enrichment. Other examples? -** Use enterprisesearch input -> Filter(s) -> ES or enterprisesearch output -- What else? - - diff --git a/docs/en/ingest-guide/ingest-timestamped.asciidoc b/docs/en/ingest-guide/ingest-timestamped.asciidoc deleted file mode 100644 index a73fe30c9..000000000 --- a/docs/en/ingest-guide/ingest-timestamped.asciidoc +++ /dev/null @@ -1,104 +0,0 @@ -[[intro-timeseries]] -== Ingesting timeseries data - -.WORK IN PROGRESS -**** -Progressive disclosure: Start with basic use cases and work up to advanced processing - -Possibly repurpose and use ingest decision tree with Beats removed? -**** - -Timestamped data: -The preferred way to index timestamped data is to use Elastic Agent. Elastic Agent is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, and forward data from remote services or hardware. 
Each Elastic Agent-based integration includes default ingestion rules, dashboards, and visualizations to start analyzing your data right away. Fleet Management enables you to centrally manage all of your deployed Elastic Agents from Kibana. - -If no Elastic Agent integration is available for your data source, use Beats to collect your data. Beats are data shippers designed to collect and ship a particular type of data from a server. You install a separate Beat for each type of data to collect. Modules that provide default configurations, Elasticsearch ingest pipeline definitions, and Kibana dashboards are available for some Beats, such as Filebeat and Metricbeat. No Fleet management capabilities are provided for Beats. - -If neither Elastic Agent nor Beats supports your data source, use Logstash. Logstash is an open source data collection engine with real-time pipelining capabilities that supports a wide variety of data sources. You might also use Logstash to persist incoming data to ensure data is not lost if there's an ingestion spike, or if you need to send the data to multiple destinations. - ----> Basic diagram - -[discrete] -=== Basic use case: Integrations to ES - -Reiterate Integrations as basic ingest use case - -ToDo: evaluate terminology (basic???) - - -[discrete] -=== Advanced use case: Integration to Logstash to ES - -Highlight logstash-filter-elastic_agent capabilities - - -[discrete] -=== Other advanced use cases (from decision tree) - -* Agent + Agent processors??? -* Agent + Runtime fields??? - - - -// CONTENT LIFTED FROM former `TOOLS` topic - - -[discrete] -=== Elastic Agent and Elastic integrations -The best choice for ingesting data is the _simplest option_ that _meets your needs_ and _satisfies your use case_. -For many popular ingest scenarios, the best option is Elastic Agent and Elastic integrations. - -* Elastic Agent installed on the endpoints where you want to collect data.
-Elastic Agent collects the data from one or more endpoints, and forwards the data to the service or location where it is used. -* An Elastic integration to receive that data from agents - -TIP: Start here! -Elastic Agent for data collection paired with Elastic integrations is the best ingest option for most use cases. - - -[discrete] -=== OTel -Coming on strong. Where are we now, and cautiously explain where we're going in the near term. - -OpenTelemetry is a leader for collecting Observability data - -Elastic is a supporting member. -We're contributing to the OTel project, and are using elastic/opentelemetry for specialized development not applicable to upstream. - -* https://www.elastic.co/guide/en/observability/current/apm-open-telemetry.html - -Contributing to upstream and doing our own work specific to Elastic -* https://github.com/open-telemetry/opentelemetry-collector-contrib -* https://github.com/elastic/opentelemetry - -[discrete] -=== Logstash - -{ls} is an open source data collection engine with real-time pipelining capabilities. -It supports a wide variety of data sources, and can dynamically unify data from disparate sources and normalize the data into destinations of your choice. - -{ls} can collect data using a variety of {ls} input plugins, enrich and transform the data with {ls} filter plugins, and output the data to {es} and other destinations using the {ls} output plugins. - -You can use Logstash to extend Beats for advanced use cases, such as data routed to multiple destinations or when you need to make your data persistent. - -* {ls} input for when no integration is available -* {ls} integrations filter for advanced processing - -TIP: If an integration is available for your data source, start with Elastic Agent + integration.
- -Use Logstash if there's no integration for your data source or for advanced processing: - -Use {ls} when: - -* no integration (use Logstash input) -* an Elastic integration exists, but you need advanced processing between the Elastic integration and {es}: - -Advanced use cases solved by {ls}: - -* {ls} for https://www.elastic.co/guide/en/ingest/current/ls-enrich.html[data enrichment] before sending data to {es} -* https://www.elastic.co/guide/en/ingest/current/lspq.html[{ls} Persistent Queue (PQ) for buffering] -* https://www.elastic.co/guide/en/ingest/current/ls-networkbridge.html[{ls} as a proxy] when there are network restrictions that prevent connections between Elastic Agent and {es} -* https://www.elastic.co/guide/en/ingest/current/ls-multi.html[{ls} for routing data to multiple {es} clusters and additional destinations] -* https://www.elastic.co/guide/en/ingest/current/agent-proxy.html[{ls} as a proxy] - diff --git a/docs/en/ingest-guide/ingest-tools.asciidoc b/docs/en/ingest-guide/ingest-tools.asciidoc deleted file mode 100644 index 28a876338..000000000 --- a/docs/en/ingest-guide/ingest-tools.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[ingest-tools]] -== Tools for ingesting time-series data - - -Elastic and others offer tools to help you get your data from the original data source into {es}. -Some tools are designed for particular data sources, and others are multi-purpose. - -// Iterative messaging as our recommended strategy morphs. -// This section is the summary. "Here's the story _now_." -// Hint at upcoming changes, but do it cautiously and responsibly. -// Modular and co-located to make additions/updates/deprecations easier as our story matures. - - -In this section, we'll help you determine which option is best for you. 
- -* <> -* <> -* <> -* <> - -[discrete] -[[ingest-ea]] -=== {agent} and Elastic integrations - -A single link:{fleet-guide}[{agent}] can collect multiple types of data when it is link:{fleet-guide}/elastic-agent-installation.html[installed] on a host computer. -You can use standalone {agent}s and manage them locally on the systems where they are installed, or you can manage all of your agents and policies with the link:{fleet-guide}/manage-agents-in-fleet.html[Fleet UI in {kib}]. - -Use {agent} with one of hundreds of link:{integrations-docs}[Elastic integrations] to simplify collecting, transforming, and visualizing data. -Integrations include default ingestion rules, dashboards, and visualizations to help you start analyzing your data right away. -Check out the {integrations-docs}/all_integrations[Integration quick reference] to search for available integrations that can reduce your time to value. - -{agent} is the best option for collecting timestamped data for most data sources -and use cases. -If your data requires additional processing before going to {es}, you can use -link:{fleet-guide}/elastic-agent-processor-configuration.html[{agent} -processors], link:{logstash-ref}[{ls}], or additional processing features in -{es}. -Check out <> to see options. - -Ready to try link:{fleet-guide}[{agent}]? Check out the link:{fleet-guide}/elastic-agent-installation.html[installation instructions]. - -[discrete] -[[ingest-beats]] -=== {beats} - -link:{beats-ref}/beats-reference.html[Beats] are the original Elastic lightweight data shippers, and their capabilities live on in Elastic Agent. -When you use Elastic Agent, you're getting core Beats functionality, but with more added features. - - -Beats require that you install a separate Beat for each type of data you want to collect. -A single Elastic Agent installed on a host can collect and transport multiple types of data. - -**Best practice:** Use link:{fleet-guide}[{agent}] whenever possible. 
-If your data source is not yet supported by {agent}, use {beats}. -Check out the {beats} and {agent} link:{fleet-guide}/beats-agent-comparison.html#additional-capabilities-beats-and-agent[comparison] for more info. -When you are ready to upgrade, check out link:{fleet-guide}/migrate-beats-to-agent.html[Migrate from {beats} to {agent}]. - -[discrete] -[[ingest-otel]] -=== OpenTelemetry (OTel) collectors - -link:https://opentelemetry.io/docs[OpenTelemetry] is a vendor-neutral observability framework for collecting, processing, and exporting telemetry data. -Elastic is a member of the Cloud Native Computing Foundation (CNCF) and active contributor to the OpenTelemetry project. - -In addition to supporting upstream OTel development, Elastic provides link:https://github.com/elastic/opentelemetry[Elastic Distributions of OpenTelemetry], specifically designed to work with Elastic Observability. -We're also expanding link:{fleet-guide}[{agent}] to use OTel collection. - -[discrete] -[[ingest-logstash]] -=== Logstash - -link:{logstash-ref}[{ls}] is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. -{ls} can _collect data_ from a wide variety of data sources with {ls} link:{logstash-ref}/input-plugins.html[input -plugins], _enrich and transform_ the data with {ls} link:{logstash-ref}/filter-plugins.html[filter plugins], and _output_ the -data to {es} and other destinations with the {ls} link:{logstash-ref}/output-plugins.html[output plugins]. - -Many users never need to use {ls}, but it's available if you need it for: - -* **Data collection** (if an Elastic integration isn't available). -{agent} and Elastic {integrations-docs}/all_integrations[integrations] provide many features out-of-the-box, so be sure to search or browse integrations for your data source. -If you don't find an Elastic integration for your data source, check {ls} for an {logstash-ref}/input-plugins.html[input plugin] for your data source. 
-* **Additional processing.** One of the most common {ls} use cases is link:{logstash-ref}/ea-integrations.html[extending Elastic integrations]. -You can take advantage of the extensive, built-in capabilities of Elastic Agent and Elastic Integrations, and -then use {ls} for additional data processing before sending the data on to {es}. -* **Advanced use cases.** {ls} can help with advanced use cases, such as when you need -link:{ingest-guide}/lspq.html[persistence or buffering], -additional link:{ingest-guide}/ls-enrich.html[data enrichment], -link:{ingest-guide}/ls-networkbridge.html[proxying] as a way to bridge network connections, or the ability to route data to -link:{ingest-guide}/ls-multi.html[multiple destinations]. diff --git a/docs/en/ingest-management/agent-policies-environment-variables.asciidoc b/docs/en/ingest-management/agent-policies-environment-variables.asciidoc deleted file mode 100644 index 8ffa6aa8e..000000000 --- a/docs/en/ingest-management/agent-policies-environment-variables.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[fleet-agent-environment-variables]] -= Set environment variables in an {agent} policy - -As an advanced use case, you may wish to configure environment variables in your {agent} policy. This is useful, for example, if there are configuration details about the system on which {agent} is running that you may not know in advance. As a solution, you may want to configure environment variables to be interpreted by {agent} at runtime, using information from the running environment. - -For {fleet}-managed {agents}, you can configure environment variables using the <>. Refer to <> in the standalone {agent} documentation for more detail. 
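Conceptually, {agent} substitutes `${VAR}` references in the policy with values from its runtime environment. The toy function below illustrates only that substitution pattern — it is not {agent}'s actual implementation, and the policy line is invented:

```python
import re

def interpolate(text: str, env: dict) -> str:
    """Replace ${NAME} placeholders with values from an environment mapping."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

# A made-up policy fragment referencing an environment variable.
policy_line = "hosts: ['${ES_HOST}:9200']"
resolved = interpolate(policy_line, {"ES_HOST": "10.0.0.5"})
print(resolved)  # hosts: ['10.0.0.5:9200']
```

Because the substitution happens at runtime on the host, the same policy can be shipped to many systems that each supply their own values.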
\ No newline at end of file diff --git a/docs/en/ingest-management/agent-policies.asciidoc b/docs/en/ingest-management/agent-policies.asciidoc deleted file mode 100644 index 03c6ad4f7..000000000 --- a/docs/en/ingest-management/agent-policies.asciidoc +++ /dev/null @@ -1,562 +0,0 @@ -:y: image:images/green-check.svg[yes] -:n: image:images/red-x.svg[no] - -[[agent-policy]] -= {agent} policies - -++++ -Policies -++++ - -A policy is a collection of inputs and settings that defines the data to be collected -by an {agent}. Each {agent} can only be enrolled in a single policy. - -Within an {agent} policy is a set of individual integration policies. -These integration policies define the settings for each input type. -The available settings in an integration depend on the version of -the integration in use. - -{fleet} uses {agent} policies in two ways: - -* Policies are stored in a plain-text YAML file and sent to each {agent} to configure its inputs. -* Policies provide a visual representation of an {agent}'s configuration -in the {fleet} UI. - -[discrete] -[[policy-benefits]] -== Policy benefits - -{agent} policies have many benefits that allow you to: - -* Apply a logical grouping of inputs aimed for a particular set of hosts. -* Maintain flexibility in large-scale deployments by quickly testing changes before rolling them out. -* Provide a way to group and manage larger swaths of your infrastructure landscape. - -For example, it might make sense to create a policy per operating system type: -Windows, macOS, and Linux hosts. -Or, organize policies by functional groupings of how the hosts are -used: IT email servers, Linux servers, user workstations, etc. -Or perhaps by user categories: engineering department, marketing department, etc. - -[discrete] -[[agent-policy-types]] -== Policy types - -In most use cases, {fleet} provides complete central management of {agent}s.
-However, some use cases, like running in Kubernetes or using our hosted {ess} on {ecloud}, -require {agent} infrastructure management outside of {fleet}. -With this in mind, there are two types of {agent} policies: - -* **regular policy**: The default use case, where {fleet} provides full central -management for {agent}s. Users can manage {agent} infrastructure by adding, -removing, or upgrading {agent}s. Users can also manage {agent} configuration by updating -the {agent} policy. - -* **hosted policy**: A policy where _something else_ provides central management for {agent}s. -For example, in Kubernetes, adding, removing, and upgrading {agent}s should be configured directly in Kubernetes. -Allowing {fleet} users to manage {agent}s would conflict with any Kubernetes configuration. -+ -TIP: Hosted policies also apply when using our hosted {ess} on {ecloud}. -{ecloud} is responsible for hosting {agent}s and assigning them to a policy. -Platform operators, who create and manage Elastic deployments, can add, upgrade, -and remove {agent}s through the {ecloud} console. - -Hosted policies display a lock icon in the {fleet} UI, and actions are restricted. -The following table illustrates the {fleet} user actions available to different policy types: - -[options="header"] -|=== -|{fleet} user action |Regular policy |Hosted policy - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|<> -|{y} -|{n} - -|=== - -See also the <> for an {agent} policy. - -[discrete] -[[create-a-policy]] -== Create a policy - -To manage your {agent}s and the data they collect, create a new policy: - -In {fleet}, open the **Agent policies** tab and click **Create agent policy**. - -. Name your policy. All other fields are optional and can be modified later.
-By default, each policy enables the _system_ integration, which collects system information and metrics.
-. Create the agent policy:
-* To use the UI, click **Create agent policy**.
-* To use the {fleet} API, click **Preview API request** and run the
-request.
-
-Also see <>.
-
-[discrete]
-[[add-integration]]
-== Add an integration to a policy
-
-An {agent} policy consists of one or more integrations that are applied to the agents enrolled in that policy.
-When you add an integration, the policy created for that integration can be shared with multiple {agent} policies.
-This reduces the number of integration policies that you need to actively manage.
-
-To add a new integration to one or more {agent} policies:
-
-. In {fleet}, click **Agent policies**.
-Click the name of a policy you want to add an integration to.
-. Click **Add **.
-. The Integrations page shows {agent} integrations along with other types, such as {beats}. Scroll down and select **Elastic Agent only** to view only integrations that work with {agent}.
-. You can opt to install an {agent} if you haven't already, or choose **Add integration only** to proceed.
-. In Step 1 on the **Add ** page, you can select the configuration settings specific to the integration.
-. In Step 2 on the page, you have two options:
-.. If you'd like to create a new policy for your {agent}s, on the **New hosts** tab specify a name for the new agent policy and choose whether or not to collect system logs and metrics.
-Collecting logs and metrics will add the System integration to the new agent policy.
-.. If you already have an {agent} policy created, on the **Existing hosts** tab use the drop-down menu to specify one or more agent policies that you'd like to add the integration to. Please note that this feature, known as "reusable integrations", requires an link:https://www.elastic.co/subscriptions[Enterprise subscription].
-. Click **Save and continue** to confirm your settings.
-
-This action installs the integration and adds it to the {agent} policies that you specified.
-{fleet} distributes the new integration policy to all {agent}s that are enrolled in the agent policies.
-
-You can update the settings for an installed integration at any time:
-
-. In {kib}, go to the **Integrations** page.
-. On the **Integration policies** tab, for the integration that you'd like to update, open the **Actions** menu and select **Edit integration**.
-. On the **Edit ** page you can update any configuration settings and also update the list of {agent} policies to which the integration is added.
-+
-If you clear the **Agent policies** field, the integration will be removed from any {agent} policies to which it had been added.
-+
-To identify any integrations that have been "orphaned", that is, not associated with any {agent} policies, check the **Agent policies** column on the **Integration policies** tab.
-Any integrations that are installed but not associated with an {agent} policy are labeled as `No agent policies`.
-
-[discrete]
-[[apply-a-policy]]
-== Apply a policy
-
-You can apply policies to one or more {agent}s.
-To apply a policy:
-
-. In {fleet}, click **Agents**.
-
-. Select the {agent}s you want to assign to the new policy.
-+
-After selecting one or more {agent}s, click **Assign to new policy** under the
-Actions menu.
-+
-[role="screenshot"]
-image::images/apply-agent-policy.png[Assign to new policy dropdown]
-+
-Unable to select multiple agents? Confirm that your subscription level supports
-selective agent policy reassignment in {fleet}. For more information, refer to
-{subscriptions}[{stack} subscriptions].
-
-. Select the {agent} policy from the dropdown list, and click **Assign policy**.
-
-The {agent} status indicator and {agent} logs indicate that the policy is being applied.
-It may take a few minutes for the policy change to complete before the {agent} status updates to "Healthy".
-
-[discrete]
-[[policy-edit-or-delete]]
-== Edit or delete an integration policy
-
-Integrations can easily be reconfigured or deleted.
-To edit or delete an integration policy:
-
-. In {fleet}, click **Agent policies**.
-Click the name of the policy you want to edit or delete.
-
-. Search or scroll to a specific integration.
-Open the **Actions** menu and select **Edit integration** or **Delete integration**.
-+
-Editing or deleting an integration takes effect immediately and cannot be undone.
-If you make a mistake, you can always re-configure or re-add an integration.
-
-Any saved changes are immediately distributed and applied to all {agent}s enrolled in the given policy.
-
-To update any secret values in an integration policy, refer to <>.
-
-[discrete]
-[[copy-policy]]
-== Copy a policy
-
-Policy definitions are stored in a plain-text YAML file that can be downloaded or copied to another policy:
-
-. In {fleet}, click **Agent policies**.
-Click the name of the policy you want to copy or download.
-
-. To copy a policy, click **Actions -> Copy policy**.
-Name the new policy, and provide a description.
-The exact policy definition is copied to the new policy.
-+
-Alternatively, view and download the policy definition by clicking **Actions -> View policy**.
-
-[discrete]
-[[policy-main-settings]]
-== Edit or delete a policy
-
-You can change high-level configurations like a policy's name, description, default namespace,
-and agent monitoring status as necessary:
-
-. In {fleet}, click **Agent policies**.
-Click the name of the policy you want to edit or delete.
-
-. Click the **Settings** tab, make changes, and click **Save changes**.
-+
-Alternatively, click **Delete policy** to delete the policy.
-Existing data is not deleted.
-Any agents assigned to a policy must be unenrolled or assigned to a different policy before a policy can be deleted.
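As noted above, a policy definition downloaded or viewed through **Actions -> View policy** is plain YAML. The following is a minimal illustrative sketch only; the policy ID, output host, and data stream values are placeholders, not output generated by {fleet}:

```yaml
# Illustrative sketch of an {agent} policy definition.
# IDs, hosts, and data streams below are placeholder values.
id: example-policy
outputs:
  default:
    type: elasticsearch
    hosts:
      - "https://elasticsearch.example.com:9200"
inputs:
  - id: system-metrics
    type: system/metrics
    data_stream:
      namespace: default
    streams:
      - metricsets:
          - cpu
        data_stream:
          dataset: system.cpu
```

A real definition contains additional settings (agent monitoring, fleet connectivity, and one block per integration policy), but follows this overall shape of outputs plus a list of inputs.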
-
-[discrete]
-[[add-custom-fields]]
-== Add custom fields
-
-Use this setting to add a custom field and value set to all data collected from the {agents} enrolled in an {agent} policy.
-Custom fields are useful when you want to identify or visualize all of the data from a group of agents, and possibly manipulate the data downstream.
-
-To add a custom field:
-
-. In {fleet}, click **Agent policies**.
-Select the name of the policy you want to edit.
-
-. Click the **Settings** tab and scroll to **Custom fields**.
-
-. Click **Add field**.
-
-. Specify a field name and value.
-+
-[role="screenshot"]
-image::images/agent-policy-custom-field.png[Screen capture showing the UI to add a custom field and value]
-
-. Click **Add another field** for additional fields. Click **Save changes** when you're done.
-
-To edit a custom field:
-
-. In {fleet}, click **Agent policies**.
-Select the name of the policy you want to edit.
-
-. Click the **Settings** tab and scroll to **Custom fields**. Any custom fields that have been configured are shown.
-
-. Click the edit icon to update a field or click the delete icon to remove it.
-
-Note that adding custom fields is not supported for a small set of inputs:
-
-* `apm`
-* `cloudbeat` and all `cloudbeat/*` inputs
-* `cloud-defend`
-* `fleet-server`
-* `pf-elastic-collector`, `pf-elastic-symbolizer`, and `pf-host-agent`
-* `endpoint` inputs. Instead, use the advanced settings (`*.advanced.document_enrichment.fields`) of the {elastic-defend} Integration.
-
-
-[discrete]
-[[change-policy-enable-agent-monitoring]]
-== Configure agent monitoring
-
-Use these settings to collect monitoring logs and metrics from {agent}. All monitoring data will be written to the specified **Default namespace**.
-
-. In {fleet}, click **Agent policies**.
-Select the name of the policy you want to edit.
-
-. Click the **Settings** tab and scroll to **Agent monitoring**.
-
-.
Select whether to collect agent logs, agent metrics, or both, from the {agents} that use the policy.
-+
-When this setting is enabled, an {agent} integration is created automatically.
-
-. Expand the **Advanced monitoring options** section to access <>.
-
-. Save your changes for the updated monitoring settings to take effect.
-
-[discrete]
-[[advanced-agent-monitoring-settings]]
-=== Advanced agent monitoring settings
-
-**HTTP monitoring endpoint**
-
-Enabling this setting exposes a `/liveness` API endpoint that you can use to monitor {agent} health according to the following HTTP codes:
-
-* `200`: {agent} is healthy. The endpoint returns a `200` OK status as long as {agent} is responsive and can process configuration changes.
-* `500`: A component or unit is in a failed state.
-* `503`: The agent coordinator is unresponsive.
-
-You can pass a `failon` parameter to the `/liveness` endpoint to determine what component state will result in a `500` status. For example, `curl 'localhost:6792/liveness?failon=degraded'` will return `500` if a component is in a degraded state.
-
-The possible values for `failon` are:
-
-* `degraded`: Return an error if a component is in a degraded state or failed state, or if the agent coordinator is unresponsive.
-* `failed`: Return an error if a unit is in a failed state, or if the agent coordinator is unresponsive.
-* `heartbeat`: Return an error only if the agent coordinator is unresponsive.
-
-If no `failon` parameter is provided, the default `failon` behavior is `heartbeat`.
-
-The HTTP monitoring endpoint can also be link:https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request[used with Kubernetes], for example to restart the container.
-
-When you enable this setting, you need to provide the host URL and port where the endpoint can be accessed. Using the default `localhost` is recommended.
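As a sketch of the Kubernetes usage mentioned above, a liveness probe against this endpoint might look as follows. The port `6792` is taken from the `curl` example above; the other values are illustrative, and for a probe to work the endpoint must be bound to an address reachable by the kubelet rather than `localhost`:

```yaml
# Illustrative Kubernetes liveness probe against the {agent}
# monitoring endpoint. Port and timings are placeholder values.
livenessProbe:
  httpGet:
    path: /liveness?failon=degraded
    port: 6792
  initialDelaySeconds: 30
  periodSeconds: 30
```

With `failon=degraded`, Kubernetes restarts the container whenever a component is degraded or failed, or the coordinator stops responding.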
- -When the HTTP monitoring endpoint is enabled you can also select to **Enable profiling at `/debug/pprof`**. This controls whether the {agent} exposes the `/debug/pprof/` endpoints together with the monitoring endpoints. - -The heap profiles available from `/debug/pprof/` are included in <> by default. CPU profiles are also included when the `--cpu-profile` option is included. For full details about the profiles exposed by `/debug/pprof/` refer to the link:https://pkg.go.dev/net/http/pprof[pprof package documentation]. - -Profiling at `/debug/pprof` is disabled by default. Data produced by these endpoints can be useful for debugging but present a security risk. It's recommended to leave this option disabled if the monitoring endpoint is accessible over a network. - -**Diagnostics rate limiting** - -You can set a rate limit for the action handler for diagnostics requests coming from {fleet}. The setting affects only {fleet}-managed {agents}. By default, requests are limited to an interval of `1m` and a burst value of `1`. This setting does not affect diagnostics collected through the CLI. - -**Diagnostics file upload** - -This setting configures retries for the file upload client handling diagnostics requests coming from {fleet}. The setting affects only {fleet}-managed {agents}. By default, a maximum of `10` retries are allowed with an initial duration of `1s` and a backoff duration of `1m`. The client may retry failed requests with exponential backoff. - -[discrete] -[[change-policy-output]] -== Change the output of a policy - -Assuming your {subscriptions}[{stack} subscription level] supports per-policy -outputs, you can change the output of a policy to send data to a different -output. - -. In {fleet}, click **Settings** and view the list of available outputs. -If necessary, click **Add output** to add a new output with the settings you -require. For more information, refer to <>. - -. Click **Agent policies**. 
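The retry schedule implied by those defaults can be sketched as follows. This is only an illustration of exponential backoff with a `1s` initial delay, a `1m` ceiling, and `10` retries; `backoff_schedule` is a hypothetical helper, not part of {agent}:

```python
def backoff_schedule(initial=1.0, ceiling=60.0, retries=10):
    """Illustrative exponential backoff: double the delay each retry, capped at the ceiling."""
    delays, delay = [], initial
    for _ in range(retries):
        delays.append(delay)
        delay = min(delay * 2, ceiling)
    return delays

# Delays grow 1s, 2s, 4s, ... and level off at the 60s ceiling.
print(backoff_schedule())
```

The actual client may jitter or reset these delays; the point is that repeated failures quickly back off to roughly one attempt per minute.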
-Click the name of the policy you want to change, then click **Settings**. - -. Set **Output for integrations** and (optionally) **Output for agent monitoring** -to use a different output, for example, {ls}. You might need to scroll down to -see these options. -+ -Unable to select a different output? Confirm that your subscription level -supports per-policy outputs in {fleet}. -+ -[role="screenshot"] -image::images/agent-output-settings.png[Screen capture showing the {ls} output policy selected in an agent policy] - -. Save your changes. - -Any {agent}s enrolled in the agent policy will begin sending data to the -specified outputs. - -[discrete] -[[add-fleet-server-to-policy]] -== Add a {fleet-server} to a policy - -If you want to connect multiple agents to a specific on-premises {fleet-server}, -you can add that {fleet-server} to a policy. - -[role="screenshot"] -image::images/add-fleet-server-to-policy.png[Screen capture showing how to add a {fleet-server} to a policy when creating or updating the policy.] - -When the policy is saved, all agents assigned to the policy are configured -to use the new {fleet-server} as the controller. - -Make sure that the {agent}s assigned to this policy all have connectivity to the {fleet-server} -that you added. Lack of connectivity will prevent the {agent} -from checking in with the {fleet-server} and receiving policy updates, but the agents -will still forward data to the cluster. - -[discrete] -[[agent-policy-secret-values]] -== Configure secret values in a policy - -When you create an integration policy you often need to provide sensitive information such as an API key or a password. To help ensure that data can't be accessed inappropriately, any secret values used in an integration policy are stored separately from other policy details. - -As well, after you've saved a secret value in {fleet}, the value is hidden in both the {fleet} UI and in the agent policy definition. 
When you view the agent policy (**Actions -> View policy**), an environment variable is displayed in place of any secret values, for example `${SECRET_0}`.
-
-WARNING: In order for sensitive values to be stored secretly in {fleet}, all configured {fleet-server}s must be on version 8.10.0 or higher.
-
-Though secret values stored in {fleet} are hidden, they can be updated. To update a secret value in an integration policy:
-
-. In {fleet}, click **Agent policies**.
-Select the name of the policy you want to edit.
-
-. Search or scroll to a specific integration.
-Open the **Actions** menu and select **Edit integration**. Any secret information is marked as being hidden.
-
-. Click the link to replace the secret value with a new one.
-+
-[role="screenshot"]
-image::images/fleet-policy-hidden-secret.png[Screen capture showing a hidden secret value as part of an integration policy]
-// This graphic should be updated once a higher resolution version is available.
-
-. Click **Save integration**. The original secret value is overwritten in the policy.
-
-[discrete]
-[[agent-policy-limit-cpu]]
-== Set the maximum CPU usage
-
-You can limit the amount of CPU consumed by {agent}. This parameter limits the number of operating system threads that can be executing Go code simultaneously in each Go process. You can specify an integer value of `0` or greater. The default value is `0`, which means all available CPUs.
-
-This limit applies independently to the agent and each underlying Go process that it supervises. For example, if {agent} is configured to supervise two {beats} with a CPU usage limit of `2` set in the policy, then the total CPU limit is six, where each of the three processes (one {agent} and two {beats}) may execute independently on two CPUs.
-
-This setting is similar to the {beats} {filebeat-ref}/configuration-general-options.html#_max_procs[`max_procs`] setting.
For more detail, refer to the link:https://pkg.go.dev/runtime#GOMAXPROCS[GOMAXPROCS] function in the Go runtime documentation. - -. In {fleet}, click **Agent policies**. -Select the name of the policy you want to edit. - -. Click the **Settings** tab and scroll to **Advanced settings**. - -. Set **Limit CPU usage** as needed. For example, to limit Go processes supervised by {agent} to two operating system threads each, set this value to `2`. - -[discrete] -[[agent-policy-log-level]] -== Set the {agent} log level - -You can set the minimum log level that {agents} using the selected policy will send to the configured output. The default setting is `info`. - -. In {fleet}, click **Agent policies**. -Select the name of the policy you want to edit. - -. Click the **Settings** tab and scroll to **Advanced settings**. - -. Set the **Agent logging level**. - -. Save your changes. - -You can also set the log level for an individual agent: - -. In {fleet}, click **Agents**. -Under the **Host** header, select the {agent} you want to edit. - -. On the **Logs** tab, set the **Agent logging level** and apply your changes. Or, you can choose to reset the agent to use the logging level specified in the agent policy. - -[discrete] -[[agent-binary-download-settings]] -== Change the {agent} binary download location - -{agent}s must be able to access the {artifact-registry} to download -binaries during upgrades. By default {agent}s download artifacts from the -artifact registry at `https://artifacts.elastic.co/downloads/`. - -For {agent}s that cannot access the internet, you can specify agent binary -download settings, and then configure agents to download their artifacts from -the alternate location. For more information about running {agent}s in a -restricted environment, refer to <>. - -To change the binary download location: - -. In {fleet}, click **Agent policies**. -Select the name of the policy you want to edit. - -. 
Click the **Settings** tab and scroll to **Agent binary download**.
-
-. Specify the address where you are hosting the artifacts repository or select the default to use the location specified in the {fleet} <>.
-
-[discrete]
-[[fleet-agent-hostname-format-settings]]
-== Set the {agent} host name format
-
-The **Host name format** setting controls the format of information provided about the current host through the <> key, in events produced by {agent}.
-
-. In {fleet}, click **Agent policies**.
-Select the name of the policy you want to edit.
-
-. Click the **Settings** tab and scroll to **Host name format**.
-
-. Select one of the following:
-
-** **Hostname**: Information about the current host is in a non-fully-qualified format (`somehost`, rather than `somehost.example.com`). This is the default reporting format.
-
-** **Fully Qualified Domain Name (FQDN)**: Information about the current host is in FQDN format (`somehost.example.com` rather than `somehost`). This helps you to distinguish between hosts on different domains that have similar names. The fully qualified hostname allows each host to be more easily identified when viewed in {kib}, for example.
-
-. Save your changes.
-
-NOTE: FQDN reporting is not currently supported in APM.
-
-For FQDN reporting to work as expected, the hostname of the current host must either:
-
-* Have a CNAME entry defined in DNS.
-* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
-
-If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host in a non-fully-qualified format.
-
-
-[discrete]
-[[fleet-agent-unenrollment-timeout]]
-== Set an unenrollment timeout for inactive agents
-
-You can configure a length of time after which any inactive {agent}s are automatically unenrolled and their API keys invalidated.
-This setting is useful when you have agents running in an ephemeral environment, such as Docker or {k8s}, and you want to prevent inactive agents from consuming unused API keys. - - -To configure an unenrollment timeout for inactive agents: - -. In {fleet}, click **Agent policies**. -Select the name of the policy you want to edit. - -. Click the **Settings** tab and scroll to **Inactive agent unenrollment timeout**. - -. Specify an unenrollment timeout period in seconds. - -. Save your changes. - -After you set an unenrollment timeout, any inactive agents are unenrolled automatically after the specified period of time. -The unenroll task runs every ten minutes, and it unenrolls a maximum of one thousand agents at a time. - -[discrete] -[[agent-policy-scale]] -== Policy scaling recommendations - -A single instance of {fleet} supports a maximum of 1000 {agent} policies. If more policies are configured, UI performance might be impacted. The maximum number of policies is not affected by the number of spaces in which the policies are used. - -If you are using {agent} with link:{serverless-docs}[{serverless-full}], the maximum supported number of {agent} policies is 500. diff --git a/docs/en/ingest-management/beats-agent-comparison.asciidoc b/docs/en/ingest-management/beats-agent-comparison.asciidoc deleted file mode 100644 index b90c8d23f..000000000 --- a/docs/en/ingest-management/beats-agent-comparison.asciidoc +++ /dev/null @@ -1,346 +0,0 @@ -:y: image:images/green-check.svg[yes] -:n: image:images/red-x.svg[no] - -[[beats-agent-comparison]] -= {beats} and {agent} capabilities - -Elastic provides two main ways to send data to {es}: - -* *{beats}* are lightweight data shippers that send operational data to -{es}. Elastic provides separate {beats} for different types of data, such as -logs, metrics, and uptime. Depending on what data you want to collect, you may -need to install multiple shippers on a single host. 
- -* *{agent}* is a single agent for logs, metrics, security data, and threat -prevention. The {agent} can be deployed in two different modes: - -** *Managed by {fleet}* -- The {agent} policies and lifecycle are centrally managed by the {fleet} app in {kib}. The Integrations app also lets you centrally add integrations with other popular services and systems. This is the recommended option for most users. - -** *Standalone mode* -- All policies are applied to the {agent} manually as a YAML file. This is intended for more advanced users. -See <> for more information. - -The method you use depends on your use case, which features you need, and -whether you want to centrally manage your agents. - -{beats} and {agent} can both send data directly to {es} or via {ls}, where you -can further process and enhance the data, before visualizing it in {kib}. - -This article summarizes the features and functionality you need to be aware of -before adding new {agent}s or replacing your current {beats} with {agent}s. -Starting in version 7.14.0, {agent} is generally available (GA). - -[discrete] -[[choosing-between-agent-and-beats]] -== Choosing between {agent} and {beats} - -{agent} is a single binary designed to provide the same functionality that the various {beats} provide today. However, some functionality gaps are being addressed as we strive to reach feature parity. - -The following steps will help you determine if {agent} can support your use case: - -. Determine if the integrations you need are supported and Generally Available -(GA) on {agent}. To find out if an integration is GA, see the -{integrations-docs}/all_integrations[{integrations} quick reference table]. - -. If the integration is available, check <> to see whether the required output is also supported. - -. Review <> to determine if any features required by your deployment are supported. {agent} should support most of the features available on {beats} and is updated for each release. 
-
-If all three checks pass, {agent} is suitable for your deployment. If any of them fail, continue using {beats}, and review future updates or contact us in the {forum}[discuss forum].
-
-[discrete]
-[[supported-inputs-beats-and-agent]]
-== Supported inputs
-
-For {agent}s that are centrally managed by {fleet}, data collection is
-further simplified and defined by integrations. In this model, the decision on
-the inputs is driven by the integration you want to collect data from. The
-complexity of configuration details of various inputs is driven centrally by
-{fleet} and specifically by the integrations.
-
-To find out if an integration is GA, see the
-{integrations-docs}/all_integrations[{integrations} quick reference table].
-
-
-[discrete]
-[[supported-outputs-beats-and-agent]]
-== Supported outputs
-
-The following table shows the outputs supported by the {agent} in {version}:
-
-
-NOTE: {elastic-defend} and APM Server have a different output matrix.
-
-[options="header"]
-|===
-|Output |{beats} |{fleet}-managed {agent} |Standalone {agent}
-
-|{es} Service
-|{y}
-|{y}
-|{y}
-
-|{es}
-|{y}
-|{y}
-|{y}
-
-|{ls}
-|{y}
-|{y}
-|{y}
-
-|Kafka
-|{y}
-|{y}
-|{y}
-
-|Remote {es}
-|{y}
-|{y}
-|{y}
-
-|Redis
-|{y}
-|{n}
-|{n}
-
-|File
-|{y}
-|{n}
-|{n}
-
-|Console
-|{y}
-|{n}
-|{n}
-|===
-
-
-[discrete]
-[[supported-configurations]]
-== Supported configurations
-
-[options="header"]
-|===
-|{beats} configuration |{agent} support
-
-|{filebeat-ref}/configuration-filebeat-modules.html[Modules]
-|Supported via integrations.
-
-|{filebeat-ref}/advanced-settings.html[Input setting overrides]
-|Not configurable. Set to default values.
-
-|{filebeat-ref}/configuration-general-options.html[General settings]
-| Many of these global settings are now internal to the agent and should not be modified, to ensure proper operation.
- -|{filebeat-ref}/configuration-path.html[Project paths] -|{agent} configures these paths to provide a simpler and more streamlined -configuration experience. - -|{filebeat-ref}/filebeat-configuration-reloading.html[External configuration file loading] -|Config is distributed via policy. - -|{filebeat-ref}/_live_reloading.html[Live reloading] -|Related to the config file reload. - -|{filebeat-ref}/configuring-output.html[Outputs] -|Configured through {fleet}. See <>. - -|{filebeat-ref}/configuration-ssl.html[SSL] -|Supported - -|{filebeat-ref}/ilm.html[{ilm-cap}] -|Enabled by default although the Agent uses <>. - -|{filebeat-ref}/configuration-template.html[{es} index template loading] -|No longer applicable - -|{filebeat-ref}/setup-kibana-endpoint.html[{kib} endpoint] -|New {agent} workflow doesn’t need this. - -|{filebeat-ref}/configuration-dashboards.html[{kib} dashboard loading] -|New {agent} workflow doesn’t need this. - -|{filebeat-ref}/defining-processors.html[Processors] -|Processors can be defined at the integration level. Global processors, configured at the policy level, are currently under consideration. - -|{filebeat-ref}/configuration-autodiscover.html[Autodiscover] -|Autodiscover is facilitated through <>. {agent} does not support hints-based autodiscovery. - -|{filebeat-ref}/configuring-internal-queue.html[Internal queues] -|{fleet}-managed {agent} and Standalone {agent} both support configuration of the internal memory -queues by an end user. Neither support configuration of the internal disk queues by an end user. - -|{filebeat-ref}/elasticsearch-output.html#_loadbalance[Load balance output hosts] -|Within the {fleet} UI, you can add YAML settings to configure multiple hosts -per output type, which enables load balancing. 
-
-|{filebeat-ref}/configuration-logging.html[Logging]
-|Supported
-
-|{filebeat-ref}/http-endpoint.html[HTTP Endpoint]
-|Supported
-
-|{filebeat-ref}/regexp-support.html[Regular expressions]
-|Supported
-|===
-
-[discrete]
-[[additional-capabilities-beats-and-agent]]
-== Capabilities comparison
-
-The following table shows a comparison of capabilities supported by {beats} and the {agent} in {version}:
-
-
-[options="header"]
-|===
-|Item |{beats} |{fleet}-managed {agent} |Standalone {agent} |Description
-
-|Single agent for all use cases
-|{n}
-|{y}
-|{y}
-|{agent} provides logs, metrics, and more. You'd need to install multiple {beats} for these use cases.
-
-|Install integrations from web UI or API
-|{n}
-|{y}
-|{y}
-|{agent} integrations are installed with a convenient web UI or API, but {beats} modules are installed with a CLI. This installs {es} assets such as index templates and ingest pipelines, and {kib} assets such as dashboards.
-
-|Configure from web UI or API
-|{n}
-|{y}
-|{y} (optional)
-|{fleet}-managed {agent} integrations can be configured in the web UI or API. Standalone {agent} can use the web UI, API, or YAML. {beats} can only be configured via YAML files.
-
-|Central, remote agent policy management
-|{n}
-|{y}
-|{n}
-|{agent} policies can be centrally managed through {fleet}. You have to manage {beats} configuration yourself or with a third-party solution.
-
-|Central, remote agent binary upgrades
-|{n}
-|{y}
-|{n}
-|{agent}s can be remotely upgraded through {fleet}. You have to upgrade {beats} yourself or with a third-party solution.
-
-|Add {kib} and {es} assets for a single integration or module
-|{n}
-|{y}
-|{y}
-|{agent} integrations allow you to add {kib} and {es} assets for a single integration at a time. {beats} installs hundreds of assets for all modules at once.
-
-|Auto-generated {es} API keys
-|{n}
-|{y}
-|{n}
-|{fleet} can automatically generate API keys with limited permissions for each {agent}, which can be individually revoked.
Standalone {agent} and {beats} require you to create and manage credentials, and users often share them across hosts.
-
-|Auto-generate minimal {es} permissions
-|{n}
-|{y}
-|{n}
-|{fleet} can automatically give {agent}s minimal output permissions based on the inputs running. With standalone {agent} and {beats}, users often give overly broad permissions because it's more convenient.
-
-|Data streams support
-|{y}
-|{y}
-|{y}
-|Both {beats} (default as of version 8.0) and {agent}s use <> with easier index life cycle management and the https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme[data stream naming scheme].
-
-|Variables and input conditions
-|{n}
-|{y} (limited)
-|{y}
-|{agent} offers {fleet-guide}/dynamic-input-configuration.html[variables and input conditions] to dynamically adjust based on the local host environment. Users can configure these directly in YAML for standalone {agent} or using the {fleet} API for {fleet}-managed {agent}. The Integrations app allows users to enter variables, and we are considering a https://github.com/elastic/kibana/issues/108525[UI to edit conditions]. {beats} only offers static configuration.
-
-|Allow non-superusers to manage assets and agents
-|{y}
-|{y}
-|{y} (it's optional)
-|Starting with version 8.1.0, a superuser role is no longer required to use the {fleet} and Integrations apps and corresponding APIs. These apps are optional for standalone {agent}. {beats} offers {filebeat-ref}/feature-roles.html[finer grained] roles.
-
-|Air-gapped network support
-|{y}
-|{y} (with limits)
-|{y}
-|The {integrations} and {fleet} apps can be deployed in an air-gapped environment with a {fleet-guide}/air-gapped.html#air-gapped-diy-epr[self-managed deployment of the {package-registry}]. {fleet}-managed {agent}s require a connection to our artifact repository for agent binary upgrades. However, the policy can be modified to have agents point to a local server in order to fetch the agent binary.
- -|Run without root on host -|{y} -|{y} -|{y} -|{fleet}-managed {agent}s, Standalone {agent}s, and {beats} require root permission only if they're configured to capture data that requires that level of permission. - -|Multiple outputs -|{y} -|{y} -|{y} -|The policy for a single {fleet}-managed {agent} can specify multiple outputs. - -|Separate monitoring cluster -|{y} -|{y} -|{y} -|{fleet}-managed {agent}s, Standalone {agent} and {beats} can send to a remote monitoring cluster. - -|Secret management -|{y} -|{n} -|{n} -|{agent} stores credentials in the agent policy. We are considering adding https://github.com/elastic/integrations/issues/244[keystore support]. {beats} allows users to access credentials in a local {filebeat-ref}/keystore.html[keystore]. - -|Progressive or canary deployments -|{y} -|{n} -|{y} -|{fleet} does not have a feature to deploy an {agent} policy update progressively but we are considering https://github.com/elastic/kibana/issues/108267[improved support]. With standalone {agent} and {beats} you can deploy configuration files progressively using third party solutions. - -|Multiple configurations per host -|{y} -|{n} (uses input conditions instead) -|{n} (uses input conditions instead) -|{agent} uses a single {agent} policy for configuration, and uses {fleet-guide}/dynamic-input-configuration.html[variables and input conditions] to adapt on a per-host basis. {beats} supports multiple configuration files per host, enabling third party solutions to deploy files hierarchically or in multiple groups, and enabling finer-grained access control to those files. - -|Compatible with version control and infrastructure as code solutions -|{y} -|{n} (only via API) -|{y} -|{fleet} stores the agent policy in {es}. It does not integrate with external version control or infrastructure as code solutions, but we are considering https://github.com/elastic/kibana/issues/108524[improved support]. 
However, {beats} and {agent} in standalone mode use a YAML file that is compatible with these solutions. - -|Spooling data to local disk -|{y} -|{n} -|{n} -|This feature is currently being link:https://github.com/elastic/elastic-agent/issues/3490[considered for development]. -|=== - -[discrete] -[[agent-monitoring-support]] -== {agent} monitoring support - -You configure the collection of agent metrics in the agent policy. If metrics -collection is selected (the default), all {agent}s enrolled in the policy will -send metrics data to {es} (the output is configured globally). - -The following image shows the *Agent monitoring* settings for the default agent -policy: - -[role="screenshot"] -image::images/agent-monitoring-settings.png[Screen capture of agent monitoring settings in the default agent policy] - -There are also pre-built dashboards for agent metrics that you can access -under *Assets* in the {agent} integration: - -[role="screenshot"] -image::images/agent-monitoring-assets.png[Screen capture of {agent} monitoring assets] - -The *[{agent}] Agent metrics* dashboard shows an aggregated view of agent metrics: - -[role="screenshot"] -image::images/agent-metrics-dashboard.png[Screen capture showing {agent} metrics] - -For more information, refer to <>. diff --git a/docs/en/ingest-management/commands.asciidoc b/docs/en/ingest-management/commands.asciidoc deleted file mode 100644 index a56492a4d..000000000 --- a/docs/en/ingest-management/commands.asciidoc +++ /dev/null @@ -1,1391 +0,0 @@ -:global-flags-link: For more flags, see <>. - -[[elastic-agent-cmd-options]] -= {agent} command reference - -++++ -Command reference -++++ - -{agent} provides commands for running {agent}, managing {fleet-server}, and -doing common tasks. The commands listed here apply to both <> -and <> {agent}. 
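As a quick orientation before diving into the individual commands, you can list the available subcommands and their flags directly from the binary. This is a sketch; it assumes the `elastic-agent` binary is installed and on your `PATH`:

```shell
# List all elastic-agent subcommands with a short description of each
elastic-agent help

# Show the flags accepted by a specific subcommand, for example enroll
elastic-agent help enroll
```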
- -[IMPORTANT] -.Restrictions -==== -Note the following restrictions for running {agent} commands: - -* You might need to log in as a root user (or Administrator on Windows) to -run the commands described here. After the {agent} service is installed and running, -make sure you run these commands without prepending them with `./` to avoid -invoking the wrong binary. -* Running {agent} commands using the Windows PowerShell ISE is not supported. -==== - -* <> -* <> -* <> -* <> -* <> -* <> preview:[] -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -//* <> - -++++ -
-++++
-
-[discrete]
-[[elastic-agent-diagnostics-command]]
-== elastic-agent diagnostics

-Gather diagnostics information from the {agent} and the components and units it's running.
-This command produces an archive that contains:
-
-* version.txt - version information
-* pre-config.yaml - pre-configuration before variable substitution
-* variables.yaml - current variable contexts from providers
-* computed-config.yaml - configuration after variable substitution
-* components-expected.yaml - expected computed components model from the computed-config.yaml
-* components-actual.yaml - actual running components model as reported by the runtime manager
-* state.yaml - current state information of all running components
-* Components Directory - diagnostic information from each running component:
-** goroutine.txt - goroutine dump
-** heap.txt - memory allocation of live objects
-** allocs.txt - a sampling of past memory allocations
-** threadcreate.txt - stack traces that led to the creation of new OS threads
-** block.txt - stack traces that led to blocking on synchronization primitives
-** mutex.txt - stack traces of holders of contended mutexes
-** Unit Directory - If a given unit provides specific diagnostics, it will be placed here.
-
-Note that *credentials may not be redacted* in the archive; they may appear in plain text in the configuration or policy files inside the archive.
-
-This command is intended for debugging purposes only. The output format and structure of the archive may change between releases.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent diagnostics [--file ]
-                          [--cpu-profile]
-                          [--exclude-events]
-                          [--help]
-                          [global-flags]
-----
-
-[discrete]
-=== Options
-
-`--file`::
-Specifies the output archive name. Defaults to `elastic-agent-diagnostics-.zip`, where the timestamp is the current time in UTC.
-
-`--help`::
-Show help for the `diagnostics` command.
-
-`--cpu-profile`::
-Additionally runs a 30-second CPU profile on each running component.
This will generate an additional `cpu.pprof` file for each component. - -`--p`:: -Alias for `--cpu-profile`. - -`--exclude-events`:: -Exclude the events log files from the diagnostics archive. - -{global-flags-link} - -[discrete] -=== Example - -[source,shell] ----- -elastic-agent diagnostics ----- - -++++ -
-++++ - -[discrete] -[[elastic-agent-enroll-command]] -== elastic-agent enroll - -//MAINTAINERs: There's a GitHub issue open to consolidate the enroll and install -//entries here, but for now, make sure the syntax stays in sync. - -Enroll the {agent} in {fleet}. - -Use this command to enroll the {agent} in {fleet} without installing the -agent as a service. You will need to do this if you installed -the {agent} from a DEB or RPM package and plan to use systemd commands to -start and manage the service. This command is also useful for testing -{agent} prior to installing it. - -If you've already installed {agent}, use this command to modify the settings that {agent} runs with. - -TIP: To enroll an {agent} _and_ install it as a service, use the -<> instead. Installing as a service is the most common scenario. - -We recommend that you run the `enroll` (or `install`) command as the root user because some -integrations require root privileges to collect sensitive data. This command -overwrites the `elastic-agent.yml` file in the agent directory. - -This command includes optional flags to set up <>. - -IMPORTANT: This command enrolls the {agent} in {fleet}; it does not start the -agent. To start the agent, either <>, if one exists, or use the <> -to start the agent from a terminal. 
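For example, on a host where {agent} was installed from a DEB or RPM package, the typical sequence is to enroll first and then start the service with systemd. This is a sketch; the {fleet-server} URL and enrollment token below are placeholders:

```shell
# Enroll without installing; the package already set up the service unit
sudo elastic-agent enroll \
  --url=https://fleet-server:8220 \
  --enrollment-token=<enrollment-token>

# Start the service now and enable it at boot
sudo systemctl enable --now elastic-agent
```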
- - -[discrete] -=== Synopsis - -// tag::enroll[] - -To enroll the {agent} in {fleet}: - -[source,shell] ----- -elastic-agent enroll --url - --enrollment-token - [--ca-sha256 ] - [--certificate-authorities ] - [--daemon-timeout ] - [--delay-enroll] - [--elastic-agent-cert ] - [--elastic-agent-cert-key ] - [--elastic-agent-cert-key-passphrase ] - [--force] - [--header ] - [--help] - [--insecure ] - [--proxy-disabled] - [--proxy-header ] - [--proxy-url ] - [--staging ] - [--tag ] - [global-flags] ----- - -// end::enroll[] - -To enroll the {agent} in {fleet} and set up {fleet-server}: - -[source,shell] ----- -elastic-agent enroll --fleet-server-es - --fleet-server-service-token - [--fleet-server-service-token-path ] - [--ca-sha256 ] - [--certificate-authorities ] - [--daemon-timeout ] - [--delay-enroll] - [--elastic-agent-cert ] - [--elastic-agent-cert-key ] - [--elastic-agent-cert-key-passphrase ] - [--fleet-server-cert ] <1> - [--fleet-server-cert-key ] - [--fleet-server-cert-key-passphrase ] - [--fleet-server-client-auth ] - [--fleet-server-es-ca ] - [--fleet-server-es-ca-trusted-fingerprint ] <2> - [--fleet-server-es-cert ] - [--fleet-server-es-cert-key ] - [--fleet-server-es-insecure] - [--fleet-server-host ] - [--fleet-server-policy ] - [--fleet-server-port ] - [--fleet-server-timeout ] - [--force] - [--header ] - [--help] - [--proxy-disabled] - [--proxy-header ] - [--proxy-url ] - [--staging ] - [--tag ] - [--url ] <3> - [global-flags] ----- -<1> If no `fleet-server-cert*` flags are specified, {agent} auto-generates a -self-signed certificate with the hostname of the machine. Remote {agent}s -enrolling into a {fleet-server} with self-signed certificates must specify -the `--insecure` flag. -<2> Required when using self-signed certificates with {es}. -<3> Required when enrolling in a {fleet-server} with custom certificates. The -URL must match the DNS name used to generate the certificate specified by -`--fleet-server-cert`. 
-
-For more information about custom certificates, refer to <>.
-
-[discrete]
-=== Options
-
-`--ca-sha256 `::
-Comma-separated list of certificate authority hash pins used for certificate
-verification.
-
-`--certificate-authorities `::
-Comma-separated list of root certificates used for server verification.
-
-`--daemon-timeout `::
-Timeout waiting for {agent} daemon.
-
-`--delay-enroll`::
-Delays enrollment to occur on first start of the {agent} service. This setting
-is useful when you don't want the {agent} to enroll until the next reboot or manual start of the service, for
-example, when you're preparing an image that includes {agent}.
-
-`--elastic-agent-cert`::
-Certificate to use as the client certificate for the {agent}'s connections to {fleet-server}.
-
-`--elastic-agent-cert-key`::
-Private key to use for the {agent}'s connections to {fleet-server}.
-
-`--elastic-agent-cert-key-passphrase`::
-The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}.
-The file must only contain the characters of the passphrase, no newline or extra non-printing characters.
-+
-This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase.
-
-`--enrollment-token `::
-Enrollment token to use to enroll {agent} into {fleet}. You can use
-the same enrollment token for multiple agents.
-
-`--fleet-server-cert `::
-Certificate to use for the exposed {fleet-server} HTTPS endpoint.
-
-`--fleet-server-cert-key `::
-Private key to use for the exposed {fleet-server} HTTPS endpoint.
-
-`--fleet-server-cert-key-passphrase `::
-Path to passphrase file for decrypting {fleet-server}'s private key if an encrypted private key is used.
-
-`--fleet-server-client-auth `::
-One of `none`, `optional`, or `required`. Defaults to `none`. {fleet-server}'s `client_authentication` option
-for client mTLS connections.
If `optional` or `required` is specified, client certificates are verified
-using the CAs specified in the `--certificate-authorities` flag.
-
-`--fleet-server-es `::
-Start a {fleet-server} process when {agent} is started, and connect to the
-specified {es} URL.
-
-`--fleet-server-es-ca `::
-Path to certificate authority to use to communicate with {es}.
-
-`--fleet-server-es-ca-trusted-fingerprint `::
-The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {es} certificates.
-This fingerprint will be used to verify self-signed certificates presented by {fleet-server} and any inputs started by {agent} for communication.
-This flag is required when using self-signed certificates with {es}.
-
-`--fleet-server-es-cert`::
-The path to the client certificate that {fleet-server} will use when connecting to {es}.
-
-`--fleet-server-es-cert-key`::
-The path to the private key that {fleet-server} will use when connecting to {es}.
-
-`--fleet-server-es-insecure`::
-Allows {fleet-server} to connect to {es} in the following situations:
-+
---
-* When connecting to an HTTP server.
-* When connecting to an HTTPS server and the certificate chain cannot be
-verified. The content is encrypted, but the certificate is not verified.
---
-+
-When this flag is used, certificate verification is disabled.
-
-`--fleet-server-host `::
-{fleet-server} HTTP binding host (overrides the policy).
-
-`--fleet-server-policy `::
-Used when starting a self-managed {fleet-server} to allow a specific policy to be used.
-
-`--fleet-server-port `::
-{fleet-server} HTTP binding port (overrides the policy).
-
-`--fleet-server-service-token `::
-Service token to use for communication with {es}.
-Mutually exclusive with `--fleet-server-service-token-path`.
-
-`--fleet-server-service-token-path `::
-Service token file to use for communication with {es}.
-Mutually exclusive with `--fleet-server-service-token`.
-
-`--fleet-server-timeout `::
-Timeout waiting for {fleet-server} to be ready to start enrollment.
-
-`--force`::
-Force overwrite of the current configuration without prompting for confirmation.
-This flag is helpful when using automation software or scripted deployments.
-+
-NOTE: If the {agent} is already installed on the host, using `--force` may
-result in unpredictable behavior with duplicate {agent}s appearing in {fleet}.
-
-`--header `::
-Headers used in communication with {es}.
-
-`--help`::
-Show help for the `enroll` command.
-
-`--insecure`::
-Allow the {agent} to connect to {fleet-server} over insecure connections. This
-setting is required in the following situations:
-+
---
-* When connecting to an HTTP server. The API keys are sent in clear text.
-* When connecting to an HTTPS server and the certificate chain cannot be
-verified. The content is encrypted, but the certificate is not verified.
-* When using self-signed certificates generated by {agent}.
---
-+
-We strongly recommend that you use a secure connection.
-
-`--proxy-disabled`::
-Disable proxy support, including environment variables.
-
-`--proxy-header `::
-Proxy headers used with the CONNECT request.
-
-`--proxy-url `::
-Configures the proxy URL.
-
-`--staging `::
-Configures the agent to download artifacts from a staging build.
-
-`--tag `::
-A comma-separated list of tags to apply to {fleet}-managed {agent}s. You can
-use these tags to filter the list of agents in {fleet}.
-+
-NOTE: Currently, there is no way to remove or edit existing tags. To change the
-tags, you must unenroll the {agent}, then re-enroll it using new tags.
-
-`--url `::
-{fleet-server} URL to use to enroll the {agent} into {fleet}.
- -{global-flags-link} - -[discrete] -=== Examples - -Enroll the {agent} in {fleet}: - -[source,shell] ----- -elastic-agent enroll \ - --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \ - --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== ----- - -Enroll the {agent} in {fleet} and set up {fleet-server}: - -[source,shell] ----- -elastic-agent enroll --fleet-server-es=http://elasticsearch:9200 \ - --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \ - --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 ----- - -Start {agent} with {fleet-server} (running on a custom CA). This example -assumes you've generated the certificates with the following names: - -* `ca.crt`: Root CA certificate -* `fleet-server.crt`: {fleet-server} certificate -* `fleet-server.key`: {fleet-server} private key -* `elasticsearch-ca.crt`: CA certificate to use to connect to {es} - -[source,shell] ----- -elastic-agent enroll \ - --url=https://fleet-server:8220 \ - --fleet-server-es=https://elasticsearch:9200 \ - --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \ - --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \ - --certificate-authorities=/path/to/ca.crt \ - --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \ - --fleet-server-cert=/path/to/fleet-server.crt \ - --fleet-server-cert-key=/path/to/fleet-server.key \ - --fleet-server-port=8220 ----- - -Then enroll another {agent} into the {fleet-server} started in the previous -example: - -[source,shell] ----- -elastic-agent enroll --url=https://fleet-server:8220 \ - --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \ - --certificate-authorities=/path/to/ca.crt ----- - -++++ -
-++++ - -[discrete] -[[elastic-agent-help-command]] -== elastic-agent help - -Show help for a specific command. - -[discrete] -=== Synopsis - -[source,shell] ----- -elastic-agent help [--help] [global-flags] ----- - -[discrete] -=== Options - -`command`:: -The name of the command. - -`--help`:: -Show help for the `help` command. - -{global-flags-link} - -[discrete] -=== Example - -[source,shell] ----- -elastic-agent help enroll ----- - -++++ -
-++++
-
-[discrete]
-[[elastic-agent-inspect-command]]
-== elastic-agent inspect
-
-Show the current {agent} configuration.
-
-If no parameters are specified, shows the full {agent} configuration.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent inspect [--help]
-elastic-agent inspect components [--show-config]
-                                 [--show-spec]
-                                 [--help]
-                                 [id]
-----
-
-[discrete]
-=== Options
-
-`components`:: Display the current configuration for the component. This command
-accepts additional flags:
-+
---
-`--show-config`::
-Use to display the configuration in all units.
-
-`--show-spec`::
-Use to get the input/output runtime specification for a component.
---
-
-`--help`::
-Show help for the `inspect` command.
-
-{global-flags-link}
-
-[discrete]
-=== Examples
-
-[source,shell]
-----
-elastic-agent inspect
-elastic-agent inspect components --show-config
-elastic-agent inspect components log-default
-----
-++++ - -[discrete] -[[elastic-agent-privileged-command]] -== elastic-agent privileged - -Run {agent} with full superuser privileges. -This is the usual, default running mode for {agent}. -The `privileged` command allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode. - -Refer to {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges] for more detail. - -[discrete] -=== Examples - -[source,shell] ----- -elastic-agent privileged ----- - -++++ -
-++++ - -[discrete] -[[elastic-agent-install-command]] -== elastic-agent install - -Install {agent} permanently on the system and manage it by using the system's -service manager. The agent will start automatically after installation is -complete. On Linux (tar package), this command requires a system and service -manager like systemd. - -IMPORTANT: If you installed {agent} from a DEB or RPM package, the `install` -command will skip the installation itself and function as an alias of the -<> instead. Note that after -an upgrade of the {agent} using DEB or RPM the {agent} service needs to be restarted. - -You must run this command as the root user (or Administrator on Windows) -to write files to the correct locations. This command overwrites the -`elastic-agent.yml` file in the agent directory. - -The syntax for running this command varies by platform. For platform-specific -examples, refer to <>. - -[discrete] -=== Synopsis - -To install the {agent} as a service, enroll it in {fleet}, and start the -`elastic-agent` service: - -[source,shell] ----- -elastic-agent install --url - --enrollment-token - [--base-path ] - [--ca-sha256 ] - [--certificate-authorities ] - [--daemon-timeout ] - [--delay-enroll] - [--elastic-agent-cert ] - [--elastic-agent-cert-key ] - [--elastic-agent-cert-key-passphrase ] - [--force] - [--header ] - [--help] - [--insecure ] - [--non-interactive] - [--privileged] - [--proxy-disabled] - [--proxy-header ] - [--proxy-url ] - [--staging ] - [--tag ] - [--unprivileged] - [global-flags] ----- - -To install the {agent} as a service, enroll it in {fleet}, and start -a `fleet-server` process alongside the `elastic-agent` service: - -[source,shell] ----- - -elastic-agent install --fleet-server-es - --fleet-server-service-token - [--fleet-server-service-token-path ] - [--base-path ] - [--ca-sha256 ] - [--certificate-authorities ] - [--daemon-timeout ] - [--delay-enroll] - [--elastic-agent-cert ] - [--elastic-agent-cert-key ] - 
[--elastic-agent-cert-key-passphrase ] - [--fleet-server-cert ] <1> - [--fleet-server-cert-key ] - [--fleet-server-cert-key-passphrase ] - [--fleet-server-client-auth ] - [--fleet-server-es-ca ] - [--fleet-server-es-ca-trusted-fingerprint ] <2> - [--fleet-server-es-cert ] - [--fleet-server-es-cert-key ] - [--fleet-server-es-insecure] - [--fleet-server-host ] - [--fleet-server-policy ] - [--fleet-server-port ] - [--fleet-server-timeout ] - [--force] - [--header ] - [--help] - [--non-interactive] - [--privileged] - [--proxy-disabled] - [--proxy-header ] - [--proxy-url ] - [--staging ] - [--tag ] - [--unprivileged] - [--url ] <3> - [global-flags] ----- -<1> If no `fleet-server-cert*` flags are specified, {agent} auto-generates a -self-signed certificate with the hostname of the machine. Remote {agent}s -enrolling into a {fleet-server} with self-signed certificates must specify -the `--insecure` flag. -<2> Required when using self-signed certificate on {es} side. -<3> Required when enrolling in a {fleet-server} with custom certificates. The -URL must match the DNS name used to generate the certificate specified by -`--fleet-server-cert`. - -For more information about custom certificates, refer to <>. - -[discrete] -=== Options - -`--base-path `:: -Install {agent} in a location other than the <>. -Specify the custom base path for the install. -+ -The `--base-path` option is not currently supported with {security-guide}/install-endpoint.html[{elastic-defend}]. - -`--ca-sha256 `:: -Comma-separated list of certificate authority hash pins used for certificate -verification. - -`--certificate-authorities `:: -Comma-separated list of root certificates used for server verification. - -`--daemon-timeout `:: -Timeout waiting for {agent} daemon. - -`--delay-enroll`:: -Delays enrollment to occur on first start of the {agent} service. 
This setting
-is useful when you don't want the {agent} to enroll until the next reboot or manual start of the service, for
-example, when you're preparing an image that includes {agent}.
-
-`--elastic-agent-cert`::
-Certificate to use as the client certificate for the {agent}'s connections to {fleet-server}.
-
-`--elastic-agent-cert-key`::
-Private key to use for the {agent}'s connections to {fleet-server}.
-
-`--elastic-agent-cert-key-passphrase`::
-The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}.
-The file must only contain the characters of the passphrase, no newline or extra non-printing characters.
-+
-This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase.
-
-`--enrollment-token `::
-Enrollment token to use to enroll {agent} into {fleet}. You can use
-the same enrollment token for multiple agents.
-
-`--fleet-server-cert `::
-Certificate to use for the exposed {fleet-server} HTTPS endpoint.
-
-`--fleet-server-cert-key `::
-Private key to use for the exposed {fleet-server} HTTPS endpoint.
-
-`--fleet-server-cert-key-passphrase `::
-Path to passphrase file for decrypting {fleet-server}'s private key if an encrypted private key is used.
-
-`--fleet-server-client-auth `::
-One of `none`, `optional`, or `required`. Defaults to `none`. {fleet-server}'s `client_authentication` option
-for client mTLS connections. If `optional` or `required` is specified, client certificates are verified
-using the CAs specified in the `--certificate-authorities` flag.
-
-`--fleet-server-es `::
-Start a {fleet-server} process when {agent} is started, and connect to the
-specified {es} URL.
-
-`--fleet-server-es-ca `::
-Path to certificate authority to use to communicate with {es}.
-
-`--fleet-server-es-ca-trusted-fingerprint `::
-The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {es} certificates.
-This fingerprint will be used to verify self-signed certificates presented by {fleet-server} and any inputs started by {agent} for communication.
-This flag is required when using self-signed certificates with {es}.
-
-`--fleet-server-es-cert`::
-The path to the client certificate that {fleet-server} will use when connecting to {es}.
-
-`--fleet-server-es-cert-key`::
-The path to the private key that {fleet-server} will use when connecting to {es}.
-
-`--fleet-server-es-insecure`::
-Allows {fleet-server} to connect to {es} in the following situations:
-+
---
-* When connecting to an HTTP server.
-* When connecting to an HTTPS server and the certificate chain cannot be
-verified. The content is encrypted, but the certificate is not verified.
---
-+
-When this flag is used, certificate verification is disabled.
-
-`--fleet-server-host `::
-{fleet-server} HTTP binding host (overrides the policy).
-
-`--fleet-server-policy `::
-Used when starting a self-managed {fleet-server} to allow a specific policy to be used.
-
-`--fleet-server-port `::
-{fleet-server} HTTP binding port (overrides the policy).
-
-`--fleet-server-service-token `::
-Service token to use for communication with {es}.
-Mutually exclusive with `--fleet-server-service-token-path`.
-
-`--fleet-server-service-token-path `::
-Service token file to use for communication with {es}.
-Mutually exclusive with `--fleet-server-service-token`.
-
-`--fleet-server-timeout `::
-Timeout waiting for {fleet-server} to be ready to start enrollment.
-
-`--force`::
-Force overwrite of the current configuration without prompting for confirmation.
-This flag is helpful when using automation software or scripted deployments.
-+
-NOTE: If the {agent} is already installed on the host, using `--force` may
-result in unpredictable behavior with duplicate {agent}s appearing in {fleet}.
-
-`--header `::
-Headers used in communication with {es}.
-
-`--help`::
-Show help for the `install` command.
-
-`--insecure`::
-Allow the {agent} to connect to {fleet-server} over insecure connections. This
-setting is required in the following situations:
-+
---
-* When connecting to an HTTP server. The API keys are sent in clear text.
-* When connecting to an HTTPS server and the certificate chain cannot be
-verified. The content is encrypted, but the certificate is not verified.
-* When using self-signed certificates generated by {agent}.
---
-+
-We strongly recommend that you use a secure connection.
-
-`--non-interactive`::
-Install {agent} in a non-interactive mode. This flag is helpful when
-using automation software or scripted deployments. If {agent} is
-already installed on the host, the installation will terminate.
-
-`--privileged`::
-Run {agent} with full superuser privileges.
-This is the usual, default running mode for {agent}.
-The `--privileged` option allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode.

-See the `--unprivileged` option and {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges] for more detail.
-
-`--proxy-disabled`::
-Disable proxy support, including environment variables.
-
-`--proxy-header `::
-Proxy headers used with the CONNECT request.
-
-`--proxy-url `::
-Configures the proxy URL.
-
-`--staging `::
-Configures the agent to download artifacts from a staging build.
-
-`--tag `::
-A comma-separated list of tags to apply to {fleet}-managed {agent}s. You can
-use these tags to filter the list of agents in {fleet}.
-+
-NOTE: Currently, there is no way to remove or edit existing tags. To change the
-tags, you must unenroll the {agent}, then re-enroll it using new tags.
-
-`--unprivileged`::
-Run {agent} without full superuser privileges.
-This option is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems.
-For details and limitations for running {agent} in this mode, refer to {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges]. -+ -Note that changing to `unprivileged` mode is prevented if the agent is currently enrolled in a policy that includes an integration that requires administrative access, such as the {elastic-defend} integration. -+ -preview:[] To run {agent} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, you can specify the user or group, and the password to use. -+ -For example: -+ -[source,shell] ----- -elastic-agent install --unprivileged --user="my.path\username" --password="mypassword" ----- -+ -[source,shell] ----- -elastic-agent install --unprivileged --group="my.path\groupname" --password="mypassword" ----- - -`--url `:: -{fleet-server} URL to use to enroll the {agent} into {fleet}. - -{global-flags-link} - -[discrete] -=== Examples - -Install the {agent} as a service, enroll it in {fleet}, and start the -`elastic-agent` service: - -[source,shell] ----- -elastic-agent install \ - --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \ - --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== ----- - -Install the {agent} as a service, enroll it in {fleet}, and start -a `fleet-server` process alongside the `elastic-agent` service: - -[source,shell] ----- -elastic-agent install --fleet-server-es=http://elasticsearch:9200 \ - --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \ - --fleet-server-policy=a35fd620-26f6-11ec-8bd9-3374690f57b6 ----- - -Start {agent} with {fleet-server} (running on a custom CA). 
This example -assumes you've generated the certificates with the following names: - -* `ca.crt`: Root CA certificate -* `fleet-server.crt`: {fleet-server} certificate -* `fleet-server.key`: {fleet-server} private key -* `elasticsearch-ca.crt`: CA certificate to use to connect to {es} - -[source,shell] ----- -elastic-agent install \ - --url=https://fleet-server:8220 \ - --fleet-server-es=https://elasticsearch:9200 \ - --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \ - --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \ - --certificate-authorities=/path/to/ca.crt \ - --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \ - --fleet-server-cert=/path/to/fleet-server.crt \ - --fleet-server-cert-key=/path/to/fleet-server.key \ - --fleet-server-port=8220 ----- - -Then install another {agent} and enroll it into the {fleet-server} started in -the previous example: - -[source,shell] ----- -elastic-agent install --url=https://fleet-server:8220 \ - --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \ - --certificate-authorities=/path/to/ca.crt ----- - -++++ -
-++++
-
-[discrete]
-[[elastic-agent-otel-command]]
-== elastic-agent otel
-
-preview::[]
-
-Run {agent} as an <>.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent otel [flags]
-elastic-agent otel [command]
-----
-
-NOTE: You can also run the `./otelcol` command, which calls `./elastic-agent otel` and passes any arguments to it.
-
-[discrete]
-=== Available commands
-
-`validate`::
-Validates the OpenTelemetry collector configuration without running the collector.
-
-[discrete]
-=== Flags
-
-`--config=file:/path/to/first --config=file:/path/to/second`::
-Locations of the config file(s). Note that only a single location can be set per flag entry; to specify multiple files, repeat the flag, for example `--config=file:/path/to/first --config=file:/path/to/second`.
-
-`--feature-gates flag`::
-Comma-delimited list of feature gate identifiers. Prefix with `-` to disable the feature. Prefixing with `+` or no prefix will enable the feature.
-
-`-h, --help`::
-Get help for the `otel` sub-command. Use `elastic-agent otel [command] --help` for more information about a command.
-
-`--set string`::
-Set an arbitrary component config property. The component has to be defined in the configuration file, and the flag has a higher precedence. Array configuration properties are overridden and maps are joined. For example, `--set=processors::batch::timeout=2s`.
-
-[discrete]
-=== Examples
-
-Run {agent} as an OTel Collector using the supplied `otel.yml` configuration file.
-[source,shell]
-----
-./elastic-agent otel --config otel.yml
-----
-
-Change the default verbosity setting in the {agent} OTel configuration from `detailed` to `normal`.
-
-[source,shell]
-----
-./elastic-agent otel --config otel.yml --set "exporters::debug::verbosity=normal"
-----
-
-++++
-
-++++ - -[discrete] -[[elastic-agent-restart-command]] -== elastic-agent restart - -Restart the currently running {agent} daemon. - -[discrete] -=== Synopsis - -[source,shell] ----- -elastic-agent restart [--help] [global-flags] ----- - -[discrete] -=== Options - -`--help`:: -Show help for the `restart` command. - -{global-flags-link} - -[discrete] -=== Examples - -[source,shell] ----- -elastic-agent restart ----- - -++++ -
-++++
-
-[discrete]
-[[elastic-agent-run-command]]
-== elastic-agent run
-
-Start the `elastic-agent` process.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent run [global-flags]
-----
-
-[discrete]
-[[elastic-agent-global-flags]]
-=== Global flags
-
-These flags are valid whenever you run `elastic-agent` on the command line.
-
-`-c <config>`::
-The configuration file to use. If not specified, {agent} uses
-`{path.config}/elastic-agent.yml`.
-
-`--e`::
-Log to stderr and disable syslog/file output.
-
-`--environment <environment>`::
-The environment in which the agent will run.
-
-`--path.config <path>`::
-The directory where {agent} looks for its configuration file. The default
-varies by platform.
-
-`--path.home <path>`::
-The root directory of {agent}. `path.home` determines the location of the
-configuration files and data directory.
-+
-If not specified, {agent} uses the current working directory.
-
-`--path.logs <path>`::
-Path to the log output for {agent}. The default varies by platform.
-
-`--v`::
-Set log level to INFO.
-
-[discrete]
-=== Example
-
-[source,shell]
-----
-elastic-agent run -c myagentconfig.yml
-----
-
-++++
-
-++++
-
-[discrete]
-[[elastic-agent-status-command]]
-== elastic-agent status
-
-Returns the current status of the running {agent} daemon and of each process
-in the {agent}. The last known status of the {fleet} server is also returned.
-The `output` option controls the level of detail and formatting of the information.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent status [--output <output>]
-                     [--help]
-                     [global-flags]
-----
-
-[discrete]
-=== Options
-
-`--output <output>`::
-Output the status information as `human` (the default),
-`full`, `json`, or `yaml`. `human` returns limited information
-when {agent} is in the `HEALTHY` state. If any components or units are
-not in a `HEALTHY` state, then full details are displayed for that
-component or unit. `full`, `json`, and `yaml` always return the
-full status information. Components map to individual processes
-running underneath {agent}, for example {filebeat} or {endpoint-sec}.
-Units map to discrete configuration units within that process, for
-example {filebeat} inputs or {metricbeat} modules.
-+
-When the output is `json` or `yaml`, status codes are returned as
-numerical values. The status codes can be mapped using the following
-table:
-+
---
-|===
-|Code |Status
-
-|0 |`STARTING`
-|1 |`CONFIGURING`
-|2 |`HEALTHY`
-|3 |`DEGRADED`
-|4 |`FAILED`
-|5 |`STOPPING`
-|6 |`UPGRADING`
-|7 |`ROLLBACK`
-|===
---
-
-`--help`::
-Show help for the `status` command.
-
-{global-flags-link}
-
-[discrete]
-=== Examples
-
-[source,shell]
-----
-elastic-agent status
-----
-
-++++
-
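When scripting against `elastic-agent status --output json` or `--output yaml`, the numeric codes from the table above can be mapped back to their state names. A minimal shell sketch (the `status_name` helper is hypothetical, not part of the agent CLI):

```shell
# Hypothetical helper: translate a numeric status code from
# `elastic-agent status --output json` into its state name,
# following the status-code table above.
status_name() {
  case "$1" in
    0) echo "STARTING" ;;
    1) echo "CONFIGURING" ;;
    2) echo "HEALTHY" ;;
    3) echo "DEGRADED" ;;
    4) echo "FAILED" ;;
    5) echo "STOPPING" ;;
    6) echo "UPGRADING" ;;
    7) echo "ROLLBACK" ;;
    *) echo "UNKNOWN" ;;
  esac
}

status_name 3   # prints DEGRADED
```

A wrapper like this is useful when feeding agent status into monitoring scripts that expect human-readable states.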
-++++ - -[discrete] -[[elastic-agent-uninstall-command]] -== elastic-agent uninstall - -Permanently uninstall {agent} from the system. - -You must run this command as the root user (or Administrator on Windows) -to remove files. - -[IMPORTANT] -==== -Be sure to run the `uninstall` command from a directory outside of where {agent} is installed. - -For example, on a Windows system the install location is `C:\Program Files\Elastic\Agent`. Run the uninstall command from `C:\Program Files\Elastic` or `\tmp`, or even your default home directory: - -[source,shell] ----- -C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall ----- - -==== - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/uninstall-widget.asciidoc[] - -[discrete] -=== Synopsis - -[source,shell] ----- -elastic-agent uninstall [--force] [--help] [global-flags] ----- - -[discrete] -=== Options - -`--force`:: -Uninstall {agent} and do not prompt for confirmation. This flag is helpful -when using automation software or scripted deployments. - -`--skip-fleet-audit`:: -Skip auditing with the {fleet-server}. - -`--help`:: -Show help for the `uninstall` command. - -{global-flags-link} - -[discrete] -=== Examples - -[source,shell] ----- -elastic-agent uninstall ----- - -++++ -
-++++ - -[discrete] -[[elastic-agent-unprivileged-command]] -== elastic-agent unprivileged - -Run {agent} without full superuser privileges. -This is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems. -For details and limitations for running {agent} in this mode, refer to {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges]. - -Note that changing a running {agent} to `unprivileged` mode is prevented if the agent is currently enrolled with a policy that contains the {elastic-defend} integration. - -preview:[] To run {agent} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, add either a `--user` or `--group` parameter together with a `--password` parameter. - -[discrete] -=== Examples - -Run {agent} without administrative privileges: - -[source,shell] ----- -elastic-agent unprivileged ----- - -preview:[] Run {agent} without administrative privileges, as a pre-existing user: - -[source,shell] ----- -elastic-agent unprivileged --user="my.pathl\username" --password="mypassword" ----- - -preview:[] Run {agent} without administrative privileges, as a pre-existing group: - -[source,shell] ----- -elastic-agent unprivileged --group="my.pathl\groupname" --password="mypassword" ----- - -++++ -
-++++
-
-[discrete]
-[[elastic-agent-upgrade-command]]
-== elastic-agent upgrade
-
-Upgrade the currently running {agent} to the specified version. This should only
-be used with agents running in standalone mode. Agents enrolled in {fleet}
-should be upgraded through {fleet}.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent upgrade <version> [--source-uri <uri>] [--help] [flags]
-----
-
-[discrete]
-=== Options
-
-`version`::
-The version of {agent} to upgrade to.
-
-`--source-uri <uri>`::
-The source URI to download the new version from. By default, {agent} uses the
-Elastic Artifacts URL.
-
-`--skip-verify`::
-Skip the package verification process. This option is not recommended as it is insecure.
-
-`--pgp-path <path>`::
-Use a locally stored copy of the PGP key to verify the upgrade package.
-
-`--pgp-uri <uri>`::
-Use the specified online PGP key to verify the upgrade package.
-
-`--help`::
-Show help for the `upgrade` command.
-
-For details about using the `--skip-verify`, `--pgp-path <path>`, and `--pgp-uri <uri>`
-package verification options, refer to <>.
-
-{global-flags-link}
-
-[discrete]
-=== Examples
-
-[source,shell]
-----
-elastic-agent upgrade 7.10.1
-----
-
-++++
-
-++++
-
-[discrete]
-[[elastic-agent-logs-command]]
-== elastic-agent logs
-
-Show the logs of the running {agent}.
-
-[discrete]
-=== Synopsis
-
-[source,shell]
-----
-elastic-agent logs [--follow] [--number <n>] [--component <name>] [--no-color] [--help] [global-flags]
-----
-
-[discrete]
-=== Options
-
-`--follow` or `-f`::
-Follow log updates until the command is interrupted (for example with `Ctrl-C`).
-
-`--number <n>` or `-n <n>`::
-The number of log lines to print. If log following is enabled, this affects the initial output.
-
-`--component <name>` or `-C <name>`::
-Filter logs based on the component name.
-
-`--no-color`::
-Disable coloring of log entries based on their log level.
-
-`--help`::
-Show help for the `logs` command.
-
-{global-flags-link}
-
-[discrete]
-=== Example
-
-[source,shell]
-----
-elastic-agent logs -n 100 -f -C "system/metrics-default"
-----
-
-++++
-
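To give a rough sense of what the `--component` filter does, the following sketch applies the same kind of filtering to a saved log file with `grep`. The sample log lines are invented for illustration and do not reflect real {agent} log output:

```shell
# Made-up sample of saved agent log lines; real output differs.
cat > /tmp/agent-sample.log <<'EOF'
system/metrics-default starting input
filestream-default harvester started
system/metrics-default collected metrics
EOF

# Keep only the lines for one component, roughly what `-C` does
# for the live daemon's log stream.
grep 'system/metrics-default' /tmp/agent-sample.log
```

On the live agent, prefer `elastic-agent logs -C <name>`, since it filters structured log entries rather than raw text.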
-++++ - -[discrete] -[[elastic-agent-version-command]] -== elastic-agent version - -Show the version of {agent}. - -[discrete] -=== Synopsis - -[source,shell] ----- -elastic-agent version [--help] [global-flags] ----- - -[discrete] -=== Options - -`--help`:: -Show help for the `version` command. - -{global-flags-link} - -[discrete] -=== Example - -[source,shell] ----- -elastic-agent version ----- - -++++ -
-++++ - -//// -//commenting out until we decide whether we want to expose this in public docs -[discrete] -[[elastic-agent-watch-command]] -== elastic-agent watch - -Watch the {agent} for failures and initiate rollback. - -[discrete] -=== Synopsis - -[source,shell] ----- -elastic-agent watch [--help] [global-flags] ----- - -[discrete] -=== Options - -`--help`:: -Show help for the `watch` command. - -{global-flags-link} - -[discrete] -=== Example - -[source,shell] ----- -elastic-agent watch ----- - -++++ -
-++++
-////
diff --git a/docs/en/ingest-management/create-agent-policies-no-UI.asciidoc b/docs/en/ingest-management/create-agent-policies-no-UI.asciidoc
deleted file mode 100644
index 33a267763..000000000
--- a/docs/en/ingest-management/create-agent-policies-no-UI.asciidoc
+++ /dev/null
@@ -1,88 +0,0 @@
-[[create-a-policy-no-ui]]
-= Create an agent policy without using the UI
-
-For use cases where you want to provide a default agent policy or support
-automation, you can set up an agent policy without using the {fleet} UI. To do
-this, either use the {fleet} API or add a preconfigured policy to {kib}:
-
-[discrete]
-[[use-api-to-create-policy]]
-== Option 1. Create an agent policy with the API
-
-[source,sh]
-----
-curl -u <username>:<password> --request POST \
-  --url <kibana_url>/api/fleet/agent_policies?sys_monitoring=true \
-  --header 'content-type: application/json' \
-  --header 'kbn-xsrf: true' \
-  --data '{"name":"Agent policy 1","namespace":"default","monitoring_enabled":["logs","metrics"]}'
-----
-
-In this API call:
-
-* `sys_monitoring=true` adds the system integration to the agent policy.
-* `monitoring_enabled` turns on {agent} monitoring.
-
-For more information, refer to <>.
-
-[discrete]
-[[use-preconfiguration-to-create-policy]]
-== Option 2. Create agent policies with preconfiguration
-
-Add preconfigured policies to the `kibana.yml` config file.
-
-For example, the following adds a {fleet-server} policy for
-a self-managed setup:
-
-[source,yaml]
-----
-xpack.fleet.packages:
-  - name: fleet_server
-    version: latest
-xpack.fleet.agentPolicies:
-  - name: Fleet Server policy
-    id: fleet-server-policy
-    namespace: default
-    package_policies:
-      - name: fleet_server-1
-        package:
-          name: fleet_server
-----
-
-The following example creates an agent policy for general use, and customizes the `period` setting for the `system.core` data stream. You can find all available inputs and variables in the **Integrations** app in {kib}.
- -[source,yaml] ----- -xpack.fleet.packages: - - name: system - version: latest - - name: elastic_agent - version: latest -xpack.fleet.agentPolicies: - - name: Agent policy 1 - id: agent-policy-1 - namespace: default - monitoring_enabled: - - logs - - metrics - package_policies: - - package: - name: system - name: System Integration 1 - id: preconfigured-system-1 - inputs: - system-system/metrics: - enabled: true - vars: - '[system.hostfs]': home/test - streams: - '[system.core]': - enabled: true - vars: - period: 20s - system-winlog: - enabled: false ----- - -For more information about preconfiguration settings, refer to the -{kibana-ref}/fleet-settings-kb.html[{kib} documentation]. diff --git a/docs/en/ingest-management/data-streams.asciidoc b/docs/en/ingest-management/data-streams.asciidoc deleted file mode 100644 index a113548d1..000000000 --- a/docs/en/ingest-management/data-streams.asciidoc +++ /dev/null @@ -1,1132 +0,0 @@ -[[data-streams]] -= Data streams - -{agent} uses data streams to store time series data across multiple indices -while giving you a single named resource for requests. -Data streams are well-suited for logs, metrics, traces, and other continuously generated data. -They offer a host of benefits over other indexing strategies: - -* *Reduced number of fields per index*: Indices only need to store a specific subset of your -data–meaning no more indices with hundreds of thousands of fields. -This leads to better space efficiency and faster queries. -As an added bonus, only relevant fields are shown in Discover. - -* *More granular data control*: For example, file system, load, CPU, network, and process metrics are sent -to different indices–each potentially with its own rollover, retention, and security permissions. - -* *Flexible*: Use the custom namespace component to divide and organize data in a way that -makes sense to your use case or company. 
-
-* *Fewer ingest permissions required*: Data ingestion only requires permissions to append data.
-
-[discrete]
-[[data-streams-naming-scheme]]
-= Data stream naming scheme
-
-{agent} uses the Elastic data stream naming scheme to name data streams.
-The naming scheme splits data into different streams based on the following components:
-
-`type`::
-A generic `type` describing the data, such as `logs`, `metrics`, `traces`, or `synthetics`.
-// Corresponds to the `data_stream.type` field.
-
-`dataset`::
-The `dataset` is defined by the integration and describes the ingested data and its structure for each index.
-For example, you might have a dataset for process metrics with a field describing whether the process is running or not,
-and another dataset for disk I/O metrics with a field describing the number of bytes read.
-// Corresponds to the `data_stream.dataset` field.
-
-`namespace`::
-A user-configurable arbitrary grouping, such as an environment (`dev`, `prod`, or `qa`),
-a team, or a strategic business unit.
-A `namespace` can be up to 100 bytes in length (multibyte characters will count toward this limit faster).
-Using a namespace makes it easier to search data from a given source by using a matching pattern.
-You can also use matching patterns to give users access to data when creating user roles.
-// Corresponds to the `data_stream.namespace` field.
-+
-By default, the namespace defined for an {agent} policy is propagated to all integrations in that policy. If you'd like to define a more granular namespace for a policy:
-
-. In {kib}, go to **Integrations**.
-. On the **Installed integrations** tab, select the integration that you'd like to update.
-. Open the **Integration policies** tab.
-. From the **Actions** menu next to the integration, select *Edit integration*.
-. Open the advanced options and update the **Namespace** field. Data streams from the integration will now use the specified namespace rather than the default namespace inherited from the {agent} policy.
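Putting the three components above together, composing a data stream name and enforcing the 100-byte namespace limit can be sketched in shell (the `make_stream_name` helper is hypothetical, purely for illustration):

```shell
# Hypothetical helper: build a data stream name from its three
# components and enforce the 100-byte namespace limit described above.
make_stream_name() {
  _type="$1"; _dataset="$2"; _namespace="$3"
  # wc -c counts bytes, so multibyte characters count more than once
  if [ "$(printf '%s' "$_namespace" | wc -c)" -gt 100 ]; then
    echo "namespace exceeds 100 bytes" >&2
    return 1
  fi
  printf '%s-%s-%s\n' "$_type" "$_dataset" "$_namespace"
}

make_stream_name logs nginx.access prod   # prints logs-nginx.access-prod
```

The same join pattern (`type-dataset-namespace`) is what you see in index names such as `metrics-system.cpu-default`.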
-
-The naming scheme separates each component with a `-` character:
-
-[source,text]
---
-<type>-<dataset>-<namespace>
---
-
-For example, if you've set up the Nginx integration with a namespace of `prod`,
-{agent} uses the `logs` type, `nginx.access` dataset, and `prod` namespace to store data in the following data stream:
-
-[source,text]
---
-logs-nginx.access-prod
---
-
-Alternatively, if you use the APM integration with a namespace of `dev`,
-{agent} stores data in the following data stream:
-
-[source,text]
---
-traces-apm-dev
---
-
-All data streams, and the pre-built dashboards that they ship with,
-are viewable on the {fleet} Data Streams page:
-
-[role="screenshot"]
-image::images/kibana-fleet-datastreams.png[Data streams page]
-
-TIP: If you're familiar with the concept of indices, you can think of each data stream as a separate index in {es}.
-Under the hood though, things are a bit more complex.
-All of the juicy details are available in {ref}/data-streams.html[{es} Data streams].
-
-[discrete]
-[[data-streams-data-view]]
-= {data-sources-cap}
-
-When searching your data in {kib}, you can use a {kibana-ref}/data-views.html[{data-source}]
-to search across all or some of your data streams.
-
-[discrete]
-[[data-streams-index-templates]]
-= Index templates
-
-An index template is a way to tell {es} how to configure an index when it is created.
-For data streams, the index template configures the stream's backing indices as they are created.
-
-{es} provides the following built-in, ECS-based templates: `logs-*-*`, `metrics-*-*`, and `synthetics-*-*`.
-{agent} integrations can also provide dataset-specific index templates, like `logs-nginx.access-*`.
-These templates are loaded when the integration is installed, and are used to configure the integration's data streams.
-
-[discrete]
-[[data-streams-index-templates-edit]]
-== Edit the {es} index template
-
-WARNING: Custom index mappings may conflict with the mappings defined by the integration
-and may break the integration in {kib}. Do not change or customize any default mappings.
-
-When you install an integration, {fleet} creates two default `@custom` component templates:
-
-* A `@custom` component template allowing customization across all documents of a given data stream type, named following the pattern: `<type>@custom`.
-* A `@custom` component template for each data stream, named following the pattern: `<type>-<dataset>@custom`.
-
-The `@custom` component template specific to a data stream takes precedence over the data stream type `@custom` component template.
-
-You can edit a `@custom` component template to customize your {es} indices:
-
-. Open {kib} and navigate to **{stack-manage-app}** > **Index Management** > **Data Streams**.
-. Find and click the name of the integration data stream, such as `logs-cisco_ise.log-default`.
-. Click the index template link for the data stream to see the list of associated component templates.
-. Navigate to **{stack-manage-app}** > **Index Management** > **Component Templates**.
-. Search for the name of the data stream's custom component template and click the edit icon.
-. Add any custom index settings, metadata, or mappings.
-For example, you may want to:
-
-* Customize the index lifecycle policy applied to a data stream.
-See {apm-guide-ref}/ilm-how-to.html#data-streams-custom-policy[Configure a custom index lifecycle policy] in the APM Guide for a walk-through.
-+
-Specify the lifecycle name in the **index settings**:
-+
-[source,json]
-----
-{
-  "index": {
-    "lifecycle": {
-      "name": "my_policy"
-    }
-  }
-}
-----
-
-* Change the number of {ref}/docs-replication.html[replicas] per index.
-Specify the number of replica shards in the **index settings**:
-+
-[source,json]
-----
-{
-  "index": {
-    "number_of_replicas": "2"
-  }
-}
-----
-
-Changes to component templates are not applied retroactively to existing indices.
-For changes to take effect, you must create a new write index for the data stream.
-You can do this with the {es} {ref}/indices-rollover-index.html[Rollover API].
-
-[discrete]
-[[data-streams-ilm]]
-= Index lifecycle management ({ilm-init})
-
-Use the {ref}/index-lifecycle-management.html[index lifecycle
-management] ({ilm-init}) feature in {es} to manage your {agent} data stream indices as they age.
-For example, create a new index after a certain period of time,
-or delete stale indices to enforce data retention standards.
-
-Installed integrations may have one or many associated data streams--each with an associated {ilm-init} policy.
-By default, these data streams use an {ilm-init} policy that matches their data type.
-For example, the data stream `metrics-system.logs-*`
-uses the metrics {ilm-init} policy as defined in the `metrics-system.logs` index template.
-
-Want to customize your index lifecycle management? See <>.
-
-[discrete]
-[[data-streams-pipelines]]
-= Ingest pipelines
-
-{agent} integration data streams ship with a default {ref}/ingest.html[ingest pipeline]
-that preprocesses and enriches data before indexing.
-The default pipeline should not be edited directly, as changes can easily break the functionality of the integration.
-
-Starting in version 8.4, all default ingest pipelines call a non-existent and non-versioned "`@custom`" ingest pipeline.
-If left uncreated, this pipeline has no effect on your data. However, if added to a data stream and customized,
-this pipeline can be used for custom data processing, adding fields, sanitizing data, and more.
-
-Starting in version 8.12, ingest pipelines can be configured to process events at various levels of customization.
- -NOTE: If you create a custom index pipeline, Elastic is not responsible for ensuring that it indexes and behaves as expected. Creating a custom pipeline involves custom processing of the incoming data, which should be done with caution and tested carefully. - -`global@custom`:: -Apply processing to all events -+ -For example, the following {ref}/put-pipeline-api.html[pipeline API] request adds a new field `my-global-field` for all events: -+ -[source,console] ----- -PUT _ingest/pipeline/global@custom -{ - "processors": [ - { - "set": { - "description": "Process all events", - "field": "my-global-field", - "value": "foo" - } - } - ] -} ----- - -`${type}`:: -Apply processing to all events of a given data type. -+ -For example, the following request adds a new field `my-logs-field` for all log events: -+ -[source,console] ----- -PUT _ingest/pipeline/logs@custom -{ - "processors": [ - { - "set": { - "description": "Process all log events", - "field": "my-logs-field", - "value": "foo" - } - } - ] -} ----- - -`${type}-${package}.integration`:: -Apply processing to all events of a given type in an integration -+ -For example, the following request creates a `logs-nginx.integration@custom` pipeline that adds a new field `my-nginx-field` for all log events in the Nginx integration: -+ -[source,console] ----- -PUT _ingest/pipeline/logs-nginx.integration@custom -{ - "processors": [ - { - "set": { - "description": "Process all nginx events", - "field": "my-nginx-field", - "value": "foo" - } - } - ] -} ----- -+ -Note that `.integration` is included in the pipeline pattern to avoid possible collision with existing dataset pipelines. - - -`${type}-${dataset}`:: -Apply processing to a specific dataset. 
-+
-For example, the following request creates a `metrics-system.cpu@custom` pipeline that adds a new field `my-system.cpu-field` for all CPU metrics events in the System integration:
-+
-[source,console]
-----
-PUT _ingest/pipeline/metrics-system.cpu@custom
-{
-  "processors": [
-    {
-      "set": {
-        "description": "Process all events in the system.cpu dataset",
-        "field": "my-system.cpu-field",
-        "value": "foo"
-      }
-    }
-  ]
-}
-----
-
-Custom pipelines can directly contain processors, or you can use the pipeline processor to call other pipelines that can be shared across multiple data streams or integrations.
-These pipelines will persist across all version upgrades.
-
-[[data-streams-pipelines-warning]]
-[WARNING]
-====
-If you have a custom pipeline defined that matches the naming scheme used for any {fleet} custom ingest pipelines, this can produce unintended results. For example, if you have a pipeline named like one of the following:
-
-* `global@custom`
-* `traces@custom`
-* `traces-apm@custom`
-
-The pipeline may be unexpectedly called for other data streams in other integrations. To avoid this problem, avoid the naming schemes defined above when naming your custom pipelines.
-
-Refer to the breaking change in the 8.12.0 Release Notes for more detail and workaround options.
-====
-
-See <> to get started.
-
-[[data-streams-ilm-tutorial]]
-== Tutorials: Customize data retention policies
-
-These tutorials explain how to apply a custom {ilm-init} policy to an integration's data stream.
-
-[discrete]
-[[data-streams-general-info]]
-== Before you begin
-
-For certain features you'll need to use a slightly different procedure to manage the index lifecycle:
-
-* APM: For versions 8.15 and later, refer to {observability-guide}/apm-ilm-how-to.html[Index lifecycle management].
-* Synthetic monitoring: Refer to {observability-guide}/synthetics-manage-retention.html[Manage data retention].
-* Universal Profiling: Refer to {observability-guide}/profiling-index-lifecycle-management.html[Universal Profiling index life cycle management].
-
-[discrete]
-[[data-streams-scenarios]]
-== Identify your scenario
-
-How you apply an ILM policy depends on your use case. Choose a scenario for the detailed steps.
-
-* **<>**: You want to apply an ILM policy to all logs or metrics data streams across all namespaces.
-
-* **<>**: You want to apply an ILM policy to selected data streams in an integration.
-
-* **<>**: You want to apply an ILM policy for data streams in a selected namespace in an integration.
-
-
-[[data-streams-scenario1]]
-== Scenario 1: Apply an ILM policy to all data streams generated from Fleet integrations across all namespaces
-
-++++
-<titleabbrev>Scenario 1: All data streams in all namespaces</titleabbrev>
-++++
-
-NOTE: This tutorial uses a `logs@custom` and a `metrics@custom` component template, which are available in versions 8.13 and later.
-For versions later than 8.4 and earlier than 8.13, you instead need to use the `@custom` component template for each data stream and add the ILM policy to that template.
-This needs to be done for every newly added integration.
-
-Mappings and settings for data streams can be customized through the creation of `*@custom` component templates, which are referenced by the index templates created by each integration.
-The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
-
-This tutorial explains how to apply a custom index lifecycle policy to all of the data streams associated with the `System` integration, as an example.
-Similar steps can be used for any other integration.
-Setting a custom index lifecycle policy must be done separately for all logs and for all metrics, as described in the following steps.
-
-[discrete]
-[id="data-streams-scenario1-step1"]
-== Step 1: Create an index lifecycle policy
-
-.
To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -. Click **Create policy**. - -Name your new policy. -For this tutorial, you can use `my-ilm-policy`. -Customize the policy to your liking, and when you're done, click **Save policy**. - -[discrete] -[id="data-streams-scenario1-step2"] -== Step 2: Create a component template for the `logs` index templates - -The **Index Templates** view in {kib} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices: - -. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -. Select **Index Templates**. -. Search for `system` to see all index templates associated with the System integration. -. Select any `logs-*` index template to view the associated component templates. For example, you can select the `logs-system.application` index template. -+ -[role="screenshot"] -image::images/component-templates-list.png[List of component templates available for the index template] - -. Select `logs@custom` in the list to view the component template properties. -. For a newly added integration, the component template won't exist yet. -Select **Create component template** to create it. -If the component template already exists, click **Manage** to update it. -. On the **Logistics** page, keep all defaults and click **Next**. -. On the **Index settings** page, in the **Index settings** field, specify the ILM policy that you created. For example: -+ -[source,json] ----- -{ - "index": { - "lifecycle": { - "name": "my-ilm-policy" - } - } -} ----- - -. Click **Next**. -. For both the **Mappings** and **Aliases** pages, keep all defaults and click **Next**. -. Finally, on the **Review** page, review the summary and request. 
If everything looks good, select **Create component template**.
-+
-[role="screenshot"]
-image::images/review-component-template01.png[Review details for the new component template]
-
-[discrete]
-[id="data-streams-scenario1-step3"]
-== Step 3: Roll over the data streams (optional)
-
-To confirm that the index template is using the `logs@custom` component template with your custom ILM policy:
-
-. Reopen the **Index Management** page and open the **Component Templates** tab.
-. Search for `logs@` and select the `logs@custom` component template.
-. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy.
-
-New ILM policies only take effect when new indices are created,
-so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB),
-or force a rollover of each data stream using the {ref}/indices-rollover-index.html[{es} rollover API].
-
-For example:
-
-[source,bash]
-----
-POST /logs-system.auth/_rollover/
-----
-
-[discrete]
-[id="data-streams-scenario1-step4"]
-== Step 4: Repeat these steps for the metrics data streams
-
-You've now applied a custom index lifecycle policy to all of the `logs-*` data streams in the `System` integration.
-For the metrics data streams, you can repeat steps 2 and 3, using a `metrics-*` index template and the `metrics@custom` component template.
-
-
-
-[[data-streams-scenario2]]
-== Scenario 2: Apply an ILM policy to specific data streams generated from Fleet integrations across all namespaces
-
-++++
-<titleabbrev>Scenario 2: Selected data streams in all namespaces</titleabbrev>
-++++
-
-Mappings and settings for data streams can be customized through the creation of `*@custom` component templates,
-which are referenced by the index templates created by each integration.
-The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
- -This tutorial explains how to apply a custom index lifecycle policy to the `logs-system.auth` data stream. - -[discrete] -[id="data-streams-scenario2-step1"] -== Step 1: Create an index lifecycle policy - -. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -. Click **Create policy**. - -Name your new policy. -For this tutorial, you can use `my-ilm-policy`. -Customize the policy to your liking, and when you're done, click **Save policy**. - -[discrete] -[id="data-streams-scenario2-step2"] -== Step 2: View index templates - -The **Index Templates** view in {kib} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices: - -. To open **Index Management**, find **Stack Management** in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field]. -. Select **Index Templates**. -. Search for `system` to see all index templates associated with the System integration. -. Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the `logs-system.auth` index template. -+ -[role="screenshot"] -image::images/index-template-system-auth.png[List of component templates available for the logs-system.auth index template] - -. In the **Summary**, select `logs-system.auth@custom` from the list to view the component template properties. -. For a newly added integration, the component template won't exist yet. -Select **Create component template** to create it. -If the component template already exists, click **Manage** to update it. -.. On the **Logistics** page, keep all defaults and click **Next**. -.. On the **Index settings** page, in the **Index settings** field, specify the ILM policy that you created. 
For example:
-+
-[source,json]
-----
-{
-  "index": {
-    "lifecycle": {
-      "name": "my-ilm-policy"
-    }
-  }
-}
-----
-
-.. Click **Next**.
-.. For both the **Mappings** and **Aliases** pages, keep all defaults and click **Next**.
-.. Finally, on the **Review** page, review the summary and request. If everything looks good, select **Create component template**.
-+
-[role="screenshot"]
-image::images/review-component-template02.png[Review details for the new component template]
-
-[discrete]
-[id="data-streams-scenario2-step3"]
-== Step 3: Roll over the data streams (optional)
-
-To confirm that the index template is using the `logs-system.auth@custom` component template with your custom ILM policy:
-
-. Reopen the **Index Management** page and open the **Component Templates** tab.
-. Search for `system` and select the `logs-system.auth@custom` component template.
-. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy.
-
-New ILM policies only take effect when new indices are created,
-so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB),
-or force a rollover of the data stream using the {ref}/indices-rollover-index.html[{es} rollover API]:
-
-[source,bash]
-----
-POST /logs-system.auth/_rollover/
-----
-
-[discrete]
-[id="data-streams-scenario2-step4"]
-== Step 4: Repeat these steps for other data streams
-
-You've now applied a custom index lifecycle policy to the `logs-system.auth` data stream in the `System` integration.
-Repeat these steps for any other data streams for which you'd like to configure a custom ILM policy.
- - - -[[data-streams-scenario3]] -== Scenario 3: Apply an ILM policy with integrations using multiple namespaces - -++++ -Scenario 3: Selected integrations and namespaces -++++ - -In this scenario, you have {agent}s collecting system metrics with the System integration in two environments--one with the namespace `development`, and one with `production`. - -**Goal:** Customize the {ilm-init} policy for the `system.network` data stream in the `production` namespace. -Specifically, apply the built-in `90-days-default` {ilm-init} policy so that data is deleted after 90 days. - -[NOTE] -==== -* This scenario involves cloning an index template. We strongly recommend repeating this procedure on every minor {stack} upgrade in order to avoid missing any possible changes to the structure of the managed index template(s) that are shipped with integrations. - -* If you cloned an index template to customize the data retention policy on an {es} version prior to 8.13, you must update the index template in the clone to use the `ecs@mappings` component template on {es} version 8.13 or later. See <> for the step-by-step instructions. -==== - -[discrete] -[[data-streams-ilm-one]] -== Step 1: View data streams - -The **Data Streams** view in {kib} shows you the data streams, -index templates, and {ilm-init} policies associated with a given integration. - -. Navigate to **{stack-manage-app}** > **Index Management** > **Data Streams**. -. Search for `system` to see all data streams associated with the System integration. -. Select the `metrics-system.network-{namespace}` data stream to view its associated index template and {ilm-init} policy. -As you can see, the data stream follows the <> and starts with its type, `metrics-`. 
-+
-[role="screenshot"]
-image::images/data-stream-info.png[Data streams info]
-
-[discrete]
-[[data-streams-ilm-two]]
-== Step 2: Create a component template
-
-For your changes to continue to be applied in future versions,
-you must put all custom index settings into a component template.
-The component template must follow the data stream naming scheme,
-and end with `@custom`:
-
-[source,text]
-----
-<type>-<dataset>-<namespace>@custom
-----
-
-For example, to create custom index settings for the `system.network` data stream with a namespace of `production`,
-the component template name would be:
-
-[source,text]
-----
-metrics-system.network-production@custom
-----
-
-. Navigate to **{stack-manage-app}** > **Index Management** > **Component Templates**.
-. Click **Create component template**.
-. Use the template above to set the name--in this case, `metrics-system.network-production@custom`. Click **Next**.
-. Under **Index settings**, set the {ilm-init} policy name under the `lifecycle.name` key:
-+
-[source,json]
-----
-{
-  "lifecycle": {
-    "name": "90-days-default"
-  }
-}
-----
-. Continue to **Review** and ensure your request looks similar to the image below.
-If it does, click **Create component template**.
-+
-[role="screenshot"]
-image::images/create-component-template.png[Create component template]
-
-[discrete]
-[[data-streams-ilm-three]]
-== Step 3: Clone and modify the existing index template
-
-Now that you've created a component template,
-you need to create an index template to apply the changes to the correct data stream.
-The easiest way to do this is to duplicate and modify the integration's existing index template.
-
-[WARNING]
-====
-Please note the following:
-* When duplicating the index template, do not change or remove any managed properties. This may result in problems when upgrading.
-* These steps assume that you want to have a namespace-specific ILM policy, which requires index template cloning. Cloning the index template of an integration package involves some risk because any changes made to the original index template as part of package upgrades are not propagated to the cloned version. See <> for details.
-+
-If you want to change the ILM policy, the number of shards, or other settings for the data streams of one or more integrations, but **the changes do not need to be specific to a given namespace**, it's strongly recommended to use a `@custom` component template, as described in <> and <>, so as to avoid the problems mentioned above.
-====
-
-. Navigate to **{stack-manage-app}** > **Index Management** > **Index Templates**.
-. Find the index template you want to clone. The index template will have the `<type>` and `<dataset>` in its name,
-but not the `<namespace>`. In this case, it's `metrics-system.network`.
-. Select **Actions** > **Clone**.
-. Set the name of the new index template to `metrics-system.network-production`.
-. Change the index pattern to include a namespace--in this case, `metrics-system.network-production*`.
-This ensures the previously created component template is only applied to the `production` namespace.
-. Set the priority to `250`. This ensures that the new index template takes precedence over other index templates that match the index pattern.
-. Under **Component templates**, search for and add the component template created in the previous step.
-To ensure your namespace-specific settings are applied over other custom settings,
-the new template should be added below the existing `@custom` template.
-. Create the index template.
-
-[role="screenshot"]
-image::images/create-index-template.png[Create index template]
-
-[discrete]
-[[data-streams-ilm-four]]
-== Step 4: Roll over the data stream (optional)
-
-To confirm that the data stream is now using the new index template and {ilm-init} policy,
-you can either repeat Step 1 or navigate to **{dev-tools-app}** and run the following:
-
-[source,bash]
-----
-GET /_data_stream/metrics-system.network-production <1>
-----
-<1> The name of the data stream we've been working with
-
-The result should include the following:
-
-[source,json]
-----
-{
-  "data_streams" : [
-    {
-    ...
-      "template" : "metrics-system.network-production", <1>
-      "ilm_policy" : "90-days-default", <2>
-    ...
-    }
-  ]
-}
-----
-<1> The name of the custom index template created in step three
-<2> The name of the {ilm-init} policy applied to the new component template in step two
-
-New {ilm-init} policies only take effect when new indices are created,
-so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB),
-or force a rollover using the {ref}/indices-rollover-index.html[{es} rollover API]:
-
-[source,bash]
-----
-POST /metrics-system.network-production/_rollover/
-----
-
-[discrete]
-[[data-streams-pipeline-update-cloned-template-before-8.13]]
-== Update index template cloned before {es} 8.13
-
-If you cloned an index template to customize the data retention policy on an {es} version prior to 8.13, you must update the cloned index template to add the `ecs@mappings` component template on {es} version 8.13 or later.
-
-To update the cloned index template:
-
-. Navigate to **{stack-manage-app}** > **Index Management** > **Index Templates**.
-. Find the index template you cloned. The index template will have the `<type>` and `<dataset>` in its name.
-. Select **Manage** > **Edit**.
-. Select **(2) Component templates**.
-. In the **Search component templates** field, search for `ecs@mappings`.
-. 
Click on the **+ (plus)** icon to add the `ecs@mappings` component template.
-. Move the `ecs@mappings` component template right below the `@package` component template.
-. Save the index template.
-
-Roll over the data stream to apply the changes.
-
-[[data-streams-pipeline-tutorial]]
-== Tutorial: Transform data with custom ingest pipelines
-
-This tutorial explains how to add a custom ingest pipeline to an Elastic Integration.
-Custom pipelines can be used to add custom data processing,
-like adding fields, obfuscating sensitive information, and more.
-
-**Scenario:** You have {agent}s collecting system metrics with the System integration.
-
-**Goal:** Add a custom ingest pipeline that adds a new field to each {es} document before it is indexed.
-
-[discrete]
-[[data-streams-pipeline-one]]
-== Step 1: Create a custom ingest pipeline
-
-Create a custom ingest pipeline that will be called by the default integration pipeline.
-In this tutorial, we'll create a pipeline that adds a new field to our documents.
-
-. In {kib}, navigate to **Stack Management** -> **Ingest Pipelines** -> **Create pipeline** -> **New pipeline**.
-
-. Name your pipeline. We'll call this one `add_field`.
-
-. Select **Add a processor**. Fill out the following information:
-+
-** Processor: "Set"
-** Field: `test`
-** Value: `true`
-+
-The {ref}/set-processor.html[Set processor] sets a document field and associates it with the specified value.
-
-. Click **Add**.
-
-. Click **Create pipeline**.
-
-[discrete]
-[[data-streams-pipeline-two]]
-== Step 2: Apply your ingest pipeline
-
-Add a custom pipeline to an integration by calling it from the default ingest pipeline.
-The custom pipeline will run after the default pipeline but before the final pipeline.
-
-[discrete]
-=== Edit integration
-
-Add a custom pipeline to an integration from the **Edit integration** workflow.
-The integration must already be configured and installed before a custom pipeline can be added.
-
-To enter this workflow, do the following:
-
-. Navigate to **{fleet}**
-. Select the relevant {agent} policy
-. Search for the integration you want to edit
-. Select **Actions** -> **Edit integration**
-
-[discrete]
-=== Select a data stream
-
-Most integrations write to multiple data streams.
-You'll need to add the custom pipeline to each data stream individually.
-
-. Find the first data stream you wish to edit and select **Change defaults**.
-For this tutorial, find the data stream configuration titled **Collect metrics from System instances**.
-
-. Scroll to **System CPU metrics** and under **Advanced options** select **Add custom pipeline**.
-+
-This will take you to the **Create pipeline** workflow in **Stack management**.
-
-[discrete]
-=== Add the pipeline
-
-Add the pipeline you created in step one.
-
-. Select **Add a processor**. Fill out the following information:
-+
-** Processor: "Pipeline"
-** Pipeline name: "add_field"
-
-. Click **Create pipeline** to return to the **Edit integration** page.
-
-[discrete]
-=== Roll over the data stream (optional)
-
-For pipeline changes to take effect immediately, you must roll over the data stream.
-If you do not, the changes will not take effect until the next scheduled rollover.
-Select **Apply now and rollover**.
-
-After the data stream rolls over, note the name of the custom ingest pipeline.
-In this tutorial, it's `metrics-system.cpu@custom`.
-The name follows the pattern `<type>-<dataset>@custom`:
-
-* type: `metrics`
-* dataset: `system.cpu`
-* Custom ingest pipeline designation: `@custom`
-
-[discrete]
-=== Repeat
-
-Add the custom ingest pipeline to any other data streams you wish to update.
-
-[discrete]
-[[data-streams-pipeline-three]]
-== Step 3: Test the ingest pipeline (optional)
-
-Allow time for new data to be ingested before testing your pipeline.
-In a new window, open {kib} and navigate to **{kib} Dev tools**.
-
-Use an {ref}/query-dsl-exists-query.html[exists query] to ensure that the
-new field, "test", is being applied to documents.
-
-[source,console]
-----
-GET metrics-system.cpu-default/_search <1>
-{
-  "query": {
-    "exists": {
-      "field": "test" <2>
-    }
-  }
-}
-----
-<1> The data stream to search. In this tutorial, we've edited the `metrics-system.cpu` type and dataset.
-`default` is the default namespace.
-Combining all three of these gives us a data stream name of `metrics-system.cpu-default`.
-<2> The name of the field set in step one.
-
-If your custom pipeline is working correctly, this query will return at least one document.
-
-[discrete]
-[[data-streams-pipeline-four]]
-== Step 4: Add custom mappings
-
-Now that a new field is being set in your {es} documents, you'll want to assign a new mapping for that field.
-Use the `@custom` component template to apply custom mappings to an integration data stream.
-
-In the **Edit integration** workflow, do the following:
-
-. Under **Advanced options** select the pencil icon to edit the `@custom` component template.
-
-. Define the new field for your indexed documents. Select **Add field** and add the following information:
-+
-* Field name: `test`
-* Field type: `Boolean`
-
-. Click **Add field**.
-
-. Click **Review** to fast-forward to the review step and click **Save component template** to return to the **Edit integration** workflow.
-
-. For changes to take effect immediately, select **Apply now and rollover**.
-
-[discrete]
-[[data-streams-pipeline-five]]
-== Step 5: Test the custom mappings (optional)
-
-Allow time for new data to be ingested before testing your mappings.
-In a new window, open {kib} and navigate to **{kib} Dev tools**.
-
-Use the {ref}/indices-get-field-mapping.html[Get field mapping API] to ensure that the
-custom mapping has been applied.
-
-[source,console]
-----
-GET metrics-system.cpu-default/_mapping/field/test <1>
-----
-<1> The data stream to search.
In this tutorial, we've edited the `metrics-system.cpu` type and dataset.
-`default` is the default namespace.
-Combining all three of these gives us a data stream name of `metrics-system.cpu-default`.
-
-The result should include `type: "boolean"` for the specified field.
-
-[source,json]
-----
-".ds-metrics-system.cpu-default-2022.08.10-000002": {
-  "mappings": {
-    "test": {
-      "full_name": "test",
-      "mapping": {
-        "test": {
-          "type": "boolean"
-        }
-      }
-    }
-  }
-}
-----
-
-[discrete]
-[[data-streams-pipeline-six]]
-== Step 6: Add an ingest pipeline for a data type
-
-The previous steps demonstrated how to create a custom ingest pipeline that adds a new field to each {es} document generated for the System integration CPU metrics (`system.cpu`) dataset.
-
-You can create an ingest pipeline to process data at various levels of customization.
-An ingest pipeline processor can be applied:
-
-* Globally to all events
-* To all events of a certain type (for example `logs` or `metrics`)
-* To all events of a certain type in an integration
-* To all events in a specific dataset
-
-Let's create a new custom ingest pipeline `logs@custom` that processes all log events.
-
-. Open {kib} and navigate to **{kib} Dev tools**.
-
-. Run a {ref}/put-pipeline-api.html[pipeline API] request to add a new field `my-logs-field`:
-+
-[source,console]
-----
-PUT _ingest/pipeline/logs@custom
-{
-  "processors": [
-    {
-      "set": {
-        "description": "Custom field for all log events",
-        "field": "my-logs-field",
-        "value": "true"
-      }
-    }
-  ]
-}
-----
-
-. Allow some time for new data to be ingested, and then use a new {ref}/query-dsl-exists-query.html[exists query] to confirm that the
-new field "my-logs-field" is being applied to log event documents.
-+
-For this example, we'll check the System integration `system.syslog` dataset:
-+
-[source,console]
-----
-GET /logs-system.syslog-default/_search?pretty
-{
-  "query": {
-    "exists": {
-      "field": "my-logs-field"
-    }
-  }
-}
-----
-
-With the new pipeline applied, this query should return at least one document.
-
-You can modify your pipeline API request as needed to apply custom processing at various levels.
-Refer to <> to learn more.
-
-
-[[data-streams-advanced-features]]
-== Enabling and disabling advanced indexing features for {fleet}-managed data streams
-
-++++
-Advanced data stream features
-++++
-
-{fleet} provides support for several advanced features around its data streams, including:
-
-* link:{ref}/tsds.html[Time series data streams (TSDS)]
-* link:{ref}/mapping-source-field.html#synthetic-source[Synthetic `_source`]
-
-These features can be enabled and disabled for {fleet}-managed data streams by using the index template API and a few key settings.
-Note that in versions 8.17.0 and later, Synthetic `_source` requires an Enterprise license.
-
-NOTE: If you are already making use of `@custom` component templates for ingest or retention customization (as shown for example in <>), exercise care to ensure you don't overwrite your customizations when making these requests.
-
-We recommend using link:{kibana-ref}/devtools-kibana.html[{kib} Dev Tools] to run the following requests. Replace `` with the name of a given integration data stream. For example, specifying `metrics-nginx.stubstatus` results in making a PUT request to `_component_template/metrics-nginx.stubstatus@custom`. Use the index management interface to explore what integration data streams are available to you.
-
-Once you've executed a given request below, you also need to execute a data stream rollover to ensure any incoming data is ingested with your new settings immediately.
For example: - -[source,sh] ----- -POST metrics-nginx.stubstatus-default/_rollover ----- - -Refer to the following steps to enable or disable advanced data stream features: - -* <> - -[discrete] -[[data-streams-advanced-tsds-enable]] -== Enable TSDS - -NOTE: TSDS uses synthetic `_source`, so if you want to trial both features you need to enable only TSDS. - -Due to restrictions in the {es} API, TSDS must be enabled at the *index template* level. So, you'll need to make some sequential requests to enable or disable TSDS. - -. Send a GET request to retrieve the index template: -+ -[source,json] ----- -GET _index_template/ ----- -+ -. Use the JSON payload returned from the GET request to populate a PUT request, for example: -+ -[source,json] ----- -PUT _index_template/ -{ - # You can copy & paste this directly from the GET request above - "index_patterns": [ - "" - ], - - # Make sure this is added - "template": { - "settings": { - "index": { - "mode": "time_series" - } - } - }, - - # You can copy & paste this directly from the GET request above - "composed_of": [ - "@package", - "@custom", - ".fleet_globals-1", - ".fleet_agent_id_verification-1" - ], - - # You can copy & paste this directly from the GET request above - "priority": 200, - - # Make sure this is added - "data_stream": { - "allow_custom_routing": false - } -} - ----- - -[discrete] -[[data-streams-advanced-tsds-disable]] -== Disable TSDS - -To disable TSDS, follow the same procedure as to <>, but specify `null` for `index.mode` instead of `time_series`. Follow the steps below or you can copy the <>. - -. Send a GET request to retrieve the index template: -+ -[source,json] ----- -GET _index_template/ ----- -+ -. 
Use the JSON payload returned from the GET request to populate a PUT request, for example: -+ -[source,json] ----- -PUT _index_template/ -{ - # You can copy/paste this directly from the GET request above - "index_patterns": [ - "" - ], - - # Make sure this is added - "template": { - "settings": { - "index": { - "mode": null - } - } - }, - - # You can copy/paste this directly from the GET request above - "composed_of": [ - "@package", - "@custom", - ".fleet_globals-1", - ".fleet_agent_id_verification-1" - ], - - # You can copy/paste this directly from the GET request above - "priority": 200, - - # Make sure this is added - "data_stream": { - "allow_custom_routing": false - } -} ----- -+ -For example, the following payload disables TSDS on `nginx.stubstatus`: -+ -[[data-streams-advanced-tsds-disable-nginx-example]] -[source,json] ----- -{ - "index_patterns": [ - "metrics-nginx.stubstatus-*" - ], - - "template": { - "settings": { - "index": { - "mode": null - } - } - }, - - "composed_of": [ - "metrics-nginx.stubstatus@package", - "metrics-nginx.stubstatus@custom", - ".fleet_globals-1", - ".fleet_agent_id_verification-1" - ], - - "priority": 200, - - "data_stream": { - "allow_custom_routing": false - } -} ----- - -[discrete] -[[data-streams-advanced-synthetic-enable]] -== Enable synthetic `_source` - -[source,json] ----- -PUT _component_template/@custom -{ - "settings": { - "index": { - "mapping": { - "source": { - "mode": "synthetic" - } - } - } - } -} - ----- - -[discrete] -[[data-streams-advanced-synthetic-disable]] -== Disable synthetic `_source` - -[source,json] ----- -PUT _component_template/@custom -{ - "settings": { - "index": { - "mapping": { - "source": {"mode": "stored"} - } - } - } -} ----- diff --git a/docs/en/ingest-management/elastic-agent/advanced-kubernetes-managed-by-fleet.asciidoc b/docs/en/ingest-management/elastic-agent/advanced-kubernetes-managed-by-fleet.asciidoc deleted file mode 100644 index 7ba89744f..000000000 --- 
a/docs/en/ingest-management/elastic-agent/advanced-kubernetes-managed-by-fleet.asciidoc +++ /dev/null @@ -1,110 +0,0 @@
-[[advanced-kubernetes-managed-by-fleet]]
-= Advanced {agent} configuration managed by {fleet}
-
-For basic {agent} managed by {fleet} scenarios, follow the steps in <>.
-
-On managed {agent} installations it can be useful to configure more advanced options, such as the configuration of providers during startup. Refer to <> for more details.
-
-The following steps demonstrate this scenario:
-
-[discrete]
-== Step 1: Download the {agent} manifest
-
-It is advisable to follow the steps of <> with the Kubernetes integration installed in your policy, and download the {agent} manifest from the Kibana UI.
-
-image::images/k8skibanaUI.png[{agent} with K8s Package manifest]
-
-Notes::
-Sample manifests can also be found https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml[here].
-
-[discrete]
-== Step 2: Create a new configmap
-
-[source,yaml]
-.Create a new configmap
-------------------------------------------------
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: agent-node-datastreams
-  namespace: kube-system
-  labels:
-    k8s-app: elastic-agent
-data:
-  agent.yml: |-
-    providers.kubernetes_leaderelection.enabled: false
-    fleet.enabled: true
-    fleet.access_token: ""
----
------------------------------------------------
-
-Notes::
-1. The above example demonstrates disabling the `kubernetes_leaderelection` provider. The same procedure can be followed for alternative scenarios.
-[source,yaml]
-.Example of configmap to configure kubernetes metadata enrichment
------------------------------------------------
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: agent-node-datastreams
-  namespace: kube-system
-  labels:
-    k8s-app: elastic-agent
-data:
-  agent.yml: |-
-    providers.kubernetes:
-      add_resource_metadata:
-        deployment: true
-        cronjob: true
-    fleet.enabled: true
-    fleet.access_token: ""
----
------------------------------------------------
-
-1. Find more information about https://www.elastic.co/guide/en/fleet/current/fleet-enrollment-tokens.html[Enrollment Tokens].
-
-[discrete]
-== Step 3: Configure Daemonset
-
-Inside the downloaded manifest, update the Daemonset resource:
-
-[source,yaml]
-.Update entrypoint
------------------------------------------------
-containers:
-  - name: elastic-agent
-    image: docker.elastic.co/elastic-agent/elastic-agent:
-    args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
------------------------------------------------
-
-Notes::
-The version tag is just a placeholder for the elastic-agent image version that you will download in your manifest, e.g. `image: docker.elastic.co/elastic-agent/elastic-agent:8.11.0`.
-The important thing is to update your manifest with the `args` details.
-
-[source,yaml]
-.Add extra Volume Mount
------------------------------------------------
-volumeMounts:
-  - name: datastreams
-    mountPath: /etc/elastic-agent/agent.yml
-    readOnly: true
-    subPath: agent.yml
------------------------------------------------
-
-[source,yaml]
-.Add new Volume
------------------------------------------------
-volumes:
-  - name: datastreams
-    configMap:
-      defaultMode: 0640
-      name: agent-node-datastreams
------------------------------------------------
-
-[discrete]
-== Important Notes
-
-1. By default the manifests for {agent} managed by {fleet} have `hostNetwork:true`. In order to support multiple installations of {agent}s in the same node you should set `hostNetwork:false`.
See this relevant https://github.com/elastic/elastic-agent/tree/main/docs/manifests/hostnetwork[example] as described in https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md[{agent} Manifests in order to support Kube-State-Metrics Sharding].
-
-2. The volume `/usr/share/elastic-agent/state` must remain mounted in https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml[elastic-agent-managed-kubernetes.yaml], otherwise the custom ConfigMap provided above will be overwritten.
-
diff --git a/docs/en/ingest-management/elastic-agent/configuration/authentication/kerberos-shared-settings.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/authentication/kerberos-shared-settings.asciidoc deleted file mode 100644 index 216e339b5..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/authentication/kerberos-shared-settings.asciidoc +++ /dev/null @@ -1,121 +0,0 @@
-// These settings are shared across some inputs and outputs.
-
-// You can include this whole block, or individual settings
-// tag::kerberos-all-settings[]
-
-
-[cols="2*SSL/TLS
-++++
-
-There are a number of SSL configuration settings available depending on whether
-you are configuring a client, server, or both. See the following tables for
-available settings:
-
-* <>. These settings are valid in both client and
-server configurations.
-
-* <>
-
-* <>
-
-TIP: For more information about using certificates, refer to
-<>.
-
-[[common-ssl-options]]
-.Common configuration options
-[cols="2*> to learn how to retrieve and configure it.
-
-There are two different ways to use autodiscover:
-
-* <>
-
-* <>
-
-
-[discrete]
-== How to configure autodiscover
-
-`Conditions Based Autodiscover` is more suitable for scenarios when users know the different groups of containers they want to monitor in advance.
It is advisable to choose conditions-based configuration when administrators can configure specific conditions that match their needs. Conditions are supported in both managed and standalone {agent}.
-
-`Hints Based Autodiscover` is suitable for more generic scenarios, especially when users don't know the exact configuration of the system to monitor and cannot create conditions in advance. Additionally, a big advantage of hints-based autodiscover is the ability to offer dynamic configuration of inputs based on annotations from Pods/Containers. If dynamic configuration is needed, then hints should be enabled. Hints are supported only in standalone {agent} mode.
-
-*Best practices when you configure autodiscover:*
-
-- Always define alternatives and default values for the variables used in conditions or hint templates. For example, see `auth.basic` set as `auth.basic.user: ${kubernetes.hints.nginx.access.username|kubernetes.hints.nginx.username|''}` in https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-standalone/templates.d/nginx.yml#L8[nginx.yml].
-
-IMPORTANT: When an input uses a variable substitution that is not present in the current key/value mappings being evaluated, the input is removed in the result. (See more information in <>.)
-
-- To debug configurations that include variable substitution and conditions, use the inspect command of {agent}. (See more information in <> in the *Debugging* section.)
-
-- In conditions-based autodiscover, it is advisable to define a generic last condition that acts as your default and is validated when all other conditions fail or don't apply. Such a condition can help you identify how events are being processed and troubleshoot possible problems.
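The debugging tip above can be sketched as follows, assuming a standalone {agent} container whose rendered configuration lives at the path used by the standalone manifest:

```sh
elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml
```

The output is the final policy after variable substitution and condition evaluation, so inputs whose variables could not be resolved will be absent from it.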
diff --git a/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-conditions-autodiscover.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-conditions-autodiscover.asciidoc deleted file mode 100644 index c6c7bdad5..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-conditions-autodiscover.asciidoc +++ /dev/null @@ -1,311 +0,0 @@ -[[conditions-based-autodiscover]] -= Conditions based autodiscover - -You can define autodiscover conditions in each input to allow {agent} to automatically identify Pods and start monitoring them using predefined integrations. Refer to <> to get an idea. - -IMPORTANT: Condition definition is supported both in *{agent} managed by {fleet}* and in *standalone* scenarios. - -For more information about variables and conditions in input configurations, refer to <>. -You can find available variables of autodiscovery in <>. - - -== Example: Target Pods by label - -To automatically identify a Redis Pod and monitor it with the Redis integration, uncomment the following input configuration inside the https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml[{agent} Standalone manifest]: - - -[source,yaml] ------------------------------------------------- -- name: redis - type: redis/metrics - use_output: default - meta: - package: - name: redis - version: 0.3.6 - data_stream: - namespace: default - streams: - - data_stream: - dataset: redis.info - type: metrics - metricsets: - - info - hosts: - - '${kubernetes.pod.ip}:6379' - idle_timeout: 20s - maxconn: 10 - network: tcp - period: 10s - condition: ${kubernetes.labels.app} == 'redis' ------------------------------------------------- - -The condition `${kubernetes.labels.app} == 'redis'` will make the {agent} look for a Pod with the label `app:redis` within the scope defined in its manifest. 
-
-For a list of provider fields that you can use in conditions, refer to <>.
-Some examples of condition usage are:
-
-1. For a pod with the label `app.kubernetes.io/name=ingress-nginx`, the condition should be `condition: ${kubernetes.labels.app.kubernetes.io/name} == "ingress-nginx"`.
-2. For a pod with the annotation `prometheus.io/scrape: "true"`, the condition should be `${kubernetes.annotations.prometheus.io/scrape} == "true"`.
-3. For a pod with the name `kube-scheduler-kind-control-plane`, the condition should be `${kubernetes.pod.name} == "kube-scheduler-kind-control-plane"`.
-
-The `redis` input defined in the {agent} manifest only specifies the `info` metricset. To learn about other available metricsets and their configuration settings, refer to the {metricbeat-ref}/metricbeat-module-redis.html[Redis module page].
-
-To deploy Redis, you can apply the following example manifest:
-
-[source,yaml]
------------------------------------------------
-apiVersion: v1
-kind: Pod
-metadata:
-  name: redis
-  labels:
-    k8s-app: redis
-    app: redis
-spec:
-  containers:
-  - image: redis
-    imagePullPolicy: IfNotPresent
-    name: redis
-    ports:
-    - name: redis
-      containerPort: 6379
-      protocol: TCP
-----------------------------------------------
-
-You should now be able to see Redis data flowing in on index `metrics-redis.info-default`. Make sure the port in your Redis manifest file matches the port used in the Redis input.
-
-NOTE: All assets (dashboards, ingest pipelines, and so on) related to the Redis integration are not installed. You need to explicitly <>.
-
-Conditions can also be used in inputs configuration in order to set the target host dynamically for a targeted Pod based on its labels.
-This is useful for datasets that target specific Pods like `kube-scheduler` or `kube-controller-manager`.
-The following configuration will enable the `kubernetes.scheduler` dataset only for Pods that have the label `component=kube-scheduler` defined.
-
-[source,yaml]
-----
-- data_stream:
-    dataset: kubernetes.scheduler
-    type: metrics
-  metricsets:
-    - scheduler
-  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
-  hosts:
-    - 'https://${kubernetes.pod.ip}:10259'
-  period: 10s
-  ssl.verification_mode: none
-  condition: ${kubernetes.labels.component} == 'kube-scheduler'
-----
-
-NOTE: Pods' labels and annotations can be used in autodiscover conditions. In the case of labels or annotations that include dots (`.`), they can be used in conditions exactly as
-they are defined in the Pods. For example, `condition: ${kubernetes.labels.app.kubernetes.io/name} == 'ingress-nginx'`. This should not be confused with the dedotted (by default) labels and annotations
-stored in Elasticsearch (<>).
-
-WARNING: Before the 8.6 release, labels used in autodiscover conditions were dedotted if the `labels.dedot` parameter was set to `true` in the Kubernetes provider
-configuration (the default). The same did not apply to annotations. This was fixed in the 8.6 release. Refer to the Release Notes section of the version 8.6.0 documentation.
-
-WARNING: In some "as a Service" Kubernetes implementations, like GKE, the control plane nodes, or even the Pods running on them, won't be visible. In these cases, it won't be possible to use the scheduler metricsets necessary for this example. Refer to https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html#_scheduler_and_controllermanager[scheduler and controller manager] for more information.
-
-Following the Redis example, if you deploy another Redis Pod with a different port, it should be detected. To check this, go, for example, to the field `service.address` under `metrics-redis.info-default`. It should be displaying two different services.
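One way to confirm this from **{kib} Dev tools** is a terms aggregation on that field; a sketch, assuming the index and field names from the Redis example above:

```console
GET metrics-redis.info-default/_search
{
  "size": 0,
  "aggs": {
    "services": {
      "terms": { "field": "service.address" }
    }
  }
}
```

With two Redis Pods exposing different ports, the `services` buckets in the response should contain two distinct addresses.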
- -To obtain the policy generated by this configuration, connect to {agent} container: - -["source", "sh", subs="attributes"] ------------------------------------------------- -kubectl exec -n kube-system --stdin --tty elastic-agent-standalone-id -- /bin/bash ------------------------------------------------- - -Do not forget to change the `elastic-agent-standalone-id` to your {agent} Pod's name. Moreover, make sure that your Pod is inside `kube-system`. If not, change `-n kube-system` to the correct namespace. - -Inside the container <> of the configuration file you used for the {agent}: - -["source", "sh", subs="attributes"] ------------------------------------------------- -elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml ------------------------------------------------- - -[%collapsible] -.You should now be able to see the generated policy. If you look for the `scheduler`, it will look similar to this. -==== -[source,yaml] ----- -- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - hosts: - - https://172.19.0.2:10259 - index: metrics-kubernetes.scheduler-default - meta: - package: - name: kubernetes - version: 1.9.0 - metricsets: - - scheduler - module: kubernetes - name: kubernetes-node-metrics - period: 10s - processors: - - add_fields: - fields: - labels: - component: kube-scheduler - tier: control-plane - namespace: kube-system - namespace_labels: - kubernetes_io/metadata_name: kube-system - namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62 - node: - hostname: kind-control-plane - labels: - beta_kubernetes_io/arch: amd64 - beta_kubernetes_io/os: linux - kubernetes_io/arch: amd64 - kubernetes_io/hostname: kind-control-plane - kubernetes_io/os: linux - node-role_kubernetes_io/control-plane: "" - node_kubernetes_io/exclude-from-external-load-balancers: "" - name: kind-control-plane - uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a - pod: - ip: 172.19.0.2 - name: kube-scheduler-kind-control-plane - uid: 
f028ad77-c82a-4f29-ba7e-2504d9b0beef - target: kubernetes - - add_fields: - fields: - cluster: - name: kind - url: kind-control-plane:6443 - target: orchestrator - - add_fields: - fields: - dataset: kubernetes.scheduler - namespace: default - type: metrics - target: data_stream - - add_fields: - fields: - dataset: kubernetes.scheduler - target: event - - add_fields: - fields: - id: "" - snapshot: false - version: 8.3.0 - target: elastic_agent - - add_fields: - fields: - id: "" - target: agent - ssl.verification_mode: none ----- -==== - -== Example: Dynamic logs path - -To set the log path of Pods dynamically in the configuration, use a variable in the -{agent} policy to return path information from the provider: - -[source,yaml] ----- -- name: container-log - id: container-log-${kubernetes.pod.name}-${kubernetes.container.id} - type: filestream - use_output: default - meta: - package: - name: kubernetes - version: 1.9.0 - data_stream: - namespace: default - streams: - - data_stream: - dataset: kubernetes.container_logs - type: logs - prospector.scanner.symlinks: true - parsers: - - container: ~ - paths: - - /var/log/containers/*${kubernetes.container.id}.log ----- - -[%collapsible] -.The policy generated by this configuration will look similar to this for every Pod inside the scope defined in the manifest. 
-==== -[source,yaml] ----- -- id: container-log-etcd-kind-control-plane-af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819 - index: logs-kubernetes.container_logs-default - meta: - package: - name: kubernetes - version: 1.9.0 - name: container-log - parsers: - - container: null - paths: - - /var/log/containers/*af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819.log - processors: - - add_fields: - fields: - id: af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819 - image: - name: registry.k8s.io/etcd:3.5.4-0 - runtime: containerd - target: container - - add_fields: - fields: - container: - name: etcd - labels: - component: etcd - tier: control-plane - namespace: kube-system - namespace_labels: - kubernetes_io/metadata_name: kube-system - namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62 - node: - hostname: kind-control-plane - labels: - beta_kubernetes_io/arch: amd64 - beta_kubernetes_io/os: linux - kubernetes_io/arch: amd64 - kubernetes_io/hostname: kind-control-plane - kubernetes_io/os: linux - node-role_kubernetes_io/control-plane: "" - node_kubernetes_io/exclude-from-external-load-balancers: "" - name: kind-control-plane - uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a - pod: - ip: 172.19.0.2 - name: etcd-kind-control-plane - uid: 08970fcf-bb93-487e-b856-02399d81fb29 - target: kubernetes - - add_fields: - fields: - cluster: - name: kind - url: kind-control-plane:6443 - target: orchestrator - - add_fields: - fields: - dataset: kubernetes.container_logs - namespace: default - type: logs - target: data_stream - - add_fields: - fields: - dataset: kubernetes.container_logs - target: event - - add_fields: - fields: - id: "" - snapshot: false - version: 8.3.0 - target: elastic_agent - - add_fields: - fields: - id: "" - target: agent - prospector.scanner.symlinks: true - type: filestream ----- -==== diff --git a/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-hints-autodiscover.asciidoc 
b/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-hints-autodiscover.asciidoc deleted file mode 100644 index 1adc1ae89..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/autodiscovery/kubernetes-hints-autodiscover.asciidoc +++ /dev/null @@ -1,430 +0,0 @@ -[[hints-annotations-autodiscovery]] -= Hints annotations based autodiscover - -beta[] - -NOTE: Make sure you are using {agent} 8.5+. - -NOTE: Hints autodiscovery only works with {agent} Standalone. - -Standalone {agent} supports autodiscover based on hints from the <>. -The hints mechanism looks for hints in Kubernetes Pod annotations that have the prefix `co.elastic.hints`. -As soon as the container starts, {agent} checks it for hints and launches the proper configuration -for the container. Hints tell {agent} how to monitor the container by using the proper integration. -This is the full list of supported hints: - -[discrete] -== Required hints: - -[float] -=== `co.elastic.hints/package` - -The package to use for monitoring. - -[discrete] -== Optional hints available: - -[float] -=== `co.elastic.hints/host` - -The host to use for metrics retrieval. If not defined, the host will be set as the default one: `:`. - - - -[float] -=== `co.elastic.hints/data_stream` - -The list of data streams to enable. If not specified, the integration's default data streams are used. To find the defaults, refer to the {integrations-docs}[Elastic integrations documentation]. - -If data streams are specified, additional hints can be defined per data stream. For example, `co.elastic.hints/info.period: 5m` if the data stream specified is `info` for the {metricbeat-ref}/metricbeat-module-redis.html[Redis module]. 
- 
-[source,yaml]
-----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: redis
-  annotations:
-    co.elastic.hints/package: redis
-    co.elastic.hints/data_streams: info
-    co.elastic.hints/info.period: 5m
-----
-
-If data stream hints are not specified, the top-level hints are used in their configuration.
-
-[float]
-=== `co.elastic.hints/metrics_path`
-
-The path to retrieve the metrics from.
-
-[float]
-=== `co.elastic.hints/period`
-
-The time interval for metrics retrieval, for example, 10s.
-
-[float]
-=== `co.elastic.hints/timeout`
-
-Metrics retrieval timeout, for example, 3s.
-
-[float]
-=== `co.elastic.hints/username`
-
-The username to use for authentication.
-
-[float]
-=== `co.elastic.hints/password`
-
-The password to use for authentication. It is recommended to retrieve this sensitive information from an environment variable
-and avoid placing passwords in plain text.
-
-[float]
-=== `co.elastic.hints/stream`
-
-The stream to use for logs collection, for example, stdout/stderr.
-
-If the specified package has no logs support, a generic container logs input is used as a fallback. See the `Hints autodiscovery for Kubernetes log collection` example below.
-
-[float]
-=== `co.elastic.hints/processors`
-
-Define a processor to be added to the input configuration. See <> for the list of supported processors.
-
-If the processors configuration uses a list data structure, object fields must be enumerated. For example, the hints for the rename processor configuration below
-
-[source,yaml]
-----
-processors:
-  - rename:
-      fields:
-        - from: "a.g"
-          to: "e.d"
-      fail_on_error: true
-----
-
-will look like:
-
-[source,yaml]
-----
-co.elastic.hints/processors.rename.fields.0.from: "a.g"
-co.elastic.hints/processors.rename.fields.1.to: "e.d"
-co.elastic.hints/processors.rename.fail_on_error: 'true'
-----
-
-If the processors configuration uses a map data structure, enumeration is not needed.
For example, the equivalent to the `add_fields` configuration below - -[source,yaml] ----- -processors: - - add_fields: - target: project - fields: - name: myproject ----- - -is - -[source,yaml] ----- -co.elastic.hints/processors.1.add_fields.target: "project" -co.elastic.hints/processors.1.add_fields.fields.name: "myproject" ----- - -In order to provide ordering of the processor definition, numbers can be provided. If not, the hints builder will do arbitrary ordering: - -[source,yaml] ----- -co.elastic.hints/processors.1.dissect.tokenizer: "%{key1} %{key2}" -co.elastic.hints/processors.dissect.tokenizer: "%{key2} %{key1}" ----- - -In the above sample the processor definition tagged with `1` would be executed first. - -IMPORTANT: Processor configuration is not supported on the datastream level, so annotations like `co.elastic.hints/.processors` are ignored. - -[discrete] -== Multiple containers - -When a pod has multiple containers, the settings are shared unless you put the container name in the hint. For example, these hints configure `processors.decode_json_fields` for all containers in the pod, but set a specific `stream` hint for the container called sidecar. - -[source,yaml] ----- -annotations: - co.elastic.hints/processors.decode_json_fields.fields: "message" - co.elastic.hints/processors.decode_json_fields.add_error_key: true - co.elastic.hints/processors.decode_json_fields.overwrite_keys: true - co.elastic.hints/processors.decode_json_fields.target: "team" - co.elastic.hints.sidecar/stream: "stderr" ----- - -[discrete] -== Available packages that support hints autodiscovery - -The available packages that are supported through hints can be found -https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes/elastic-agent-standalone/templates.d[here]. 
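For instance, assuming the nginx package is among the supported templates, a Pod could be annotated as in the following sketch. The package name, status path, and port here are illustrative assumptions, not values taken from the templates:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    co.elastic.hints/package: nginx                  # hypothetical package choice
    co.elastic.hints/host: '${kubernetes.pod.ip}:80'
    co.elastic.hints/metrics_path: /nginx_status     # assumes stub_status is enabled
    co.elastic.hints/period: 30s
----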
- -[discrete] -== Configure hints autodiscovery - -To enable hints autodiscovery, you must add `hints.enabled: true` to the provider's configuration: - -[source,yaml] ----- -providers: - kubernetes: - hints.enabled: true ----- - -Then ensure that an init container is specified by uncommenting the respective sections in the {agent} manifest. -An init container is required to download the hints templates. - -["source", "yaml", subs="attributes"] ----- -initContainers: -- name: k8s-templates-downloader - image: docker.elastic.co/elastic-agent/elastic-agent:{branch} - command: ['bash'] - args: - - -c - - >- - mkdir -p /usr/share/elastic-agent/state/inputs.d && - curl -sL https://github.com/elastic/elastic-agent/archive/{branch}.tar.gz | tar xz -C /usr/share/elastic-agent/state/inputs.d --strip=5 "elastic-agent-{branch}/deploy/kubernetes/elastic-agent-standalone/templates.d" - securityContext: - runAsUser: 0 - volumeMounts: - - name: elastic-agent-state - mountPath: /usr/share/elastic-agent/state ----- - - -NOTE: The {agent} can load multiple configuration files from `{path.config}/inputs.d` and finally produce a unified one (refer to <>). Users have the ability to manually mount their own templates under `/usr/share/elastic-agent/state/inputs.d` *if they want to skip enabling initContainers section*. - - -[discrete] -== Examples: - -[discrete] -=== Hints autodiscovery for redis - -Enabling hints allows users deploying Pods on the cluster to automatically turn on Elastic -monitoring at Pod deployment time. -For example, to deploy a Redis Pod on the cluster and automatically enable Elastic monitoring, add the proper hints as annotations on the Pod manifest file: - -[source,yaml] ----- -... -apiVersion: v1 -kind: Pod -metadata: - name: redis - annotations: - co.elastic.hints/package: redis - co.elastic.hints/data_streams: info - co.elastic.hints/host: '${kubernetes.pod.ip}:6379' - co.elastic.hints/info.period: 5s - labels: - k8s-app: redis - app: redis -... 
----
-
-After deploying this Pod, the data will start flowing in automatically. You can find it in the `metrics-redis.info-default` index.
-
-NOTE: Assets (dashboards, ingest pipelines, and so on) related to the Redis integration are not installed automatically. You need to explicitly <>.
-
-
-[discrete]
-=== Hints autodiscovery for Kubernetes log collection
-
-Log collection for Kubernetes autodiscovered Pods is supported through the https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes/elastic-agent-standalone/templates.d/container_logs.yml[container_logs.yml template]. Elastic Agent needs to emit a `container_logs` mapping in order to start collecting logs for all the discovered containers *even if no annotations are present on the containers*.
-
-1. Follow the steps described above to enable hints autodiscovery.
-2. Make sure that the relevant `container_logs.yml` template is mounted under the /usr/share/elastic-agent/state/inputs.d/ folder of Elastic Agent.
-3. Deploy the Elastic Agent manifest.
-4. Elastic Agent should be able to discover all containers inside the Kubernetes cluster and collect the available logs.
-
-This default behavior can be disabled with `hints.default_container_logs: false`,
-which turns off automatic log collection from all discovered Pods. Users then need to explicitly annotate their Pods with the following annotations:
-
-[source,yaml]
-----
-annotations:
-  co.elastic.hints/package: "container_logs"
-----
-
-
-[source,yaml]
-----
-providers.kubernetes:
-  node: ${NODE_NAME}
-  scope: node
-  hints:
-    enabled: true
-    default_container_logs: false
-...
----- - -In the following sample nginx manifest, we will additionally provide specific stream annotation, in order to configure the filestream input to read only stderr stream: - -[source,yaml] ----- -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: nginx - name: nginx - namespace: default -spec: - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - annotations: - co.elastic.hints/package: "container_logs" - co.elastic.hints/stream: "stderr" - spec: - containers: - - image: nginx - name: nginx -... ----- - -Users can monitor the final rendered Elastic Agent configuration: - -[source,bash] ----- -kubectl exec -ti -n kube-system elastic-agent-7fkzm -- bash - - -/usr/share/elastic-agent# /elastic-agent inspect -v --variables --variables-wait 2s - -inputs: -- data_stream.namespace: default - id: hints-container-logs-3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c-kubernetes-1780aca0-3741-4c8c-aced-b9776ba3fa81.nginx - name: filestream-generic - original_id: hints-container-logs-3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c - [output truncated ....] 
- 
-  streams:
-  - data_stream:
-      dataset: kubernetes.container_logs
-      type: logs
-    exclude_files: []
-    exclude_lines: []
-    parsers:
-    - container:
-        format: auto
-        stream: stderr
-    paths:
-    - /var/log/containers/*3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c.log
-    prospector:
-      scanner:
-        symlinks: true
-    tags: []
-    type: filestream
-  use_output: default
-outputs:
-  default:
-    hosts:
-    - https://elasticsearch:9200
-    password: changeme
-    type: elasticsearch
-    username: elastic
-providers:
-  kubernetes:
-    hints:
-      default_container_logs: false
-      enabled: true
-    node: control-plane
-    scope: node
-----
-
-[discrete]
-=== Hints autodiscovery for Kubernetes logs with JSON decoding
-
-Building on the previous example, users might want to perform extra processing on specific logs, for example to decode fields that contain JSON strings. You can use <> for this, as follows:
-
-You need to have enabled hints autodiscovery, as described in the previous `Hints autodiscovery for Kubernetes log collection` example.
-
-The Pod that produces JSON logs needs to be annotated with:
-
-[source,yaml]
-----
-  annotations:
-    co.elastic.hints/package: "container_logs"
-    co.elastic.hints/processors.decode_json_fields.fields: "message"
-    co.elastic.hints/processors.decode_json_fields.add_error_key: 'true'
-    co.elastic.hints/processors.decode_json_fields.overwrite_keys: 'true'
-    co.elastic.hints/processors.decode_json_fields.target: "team"
-----
-
-NOTE: These parameters for the `decode_json_fields` processor are just an example.
-
-The following log entry:
-
-[source,json]
-----
-{"myteam": "ole"}
-----
-
-will produce both fields: the original `message` field and the target field `team`.
-
-[source,json]
-----
-
-"team": {
-    "myteam": "ole"
-  },
-
-"message": "{\"myteam\": \"ole\"}",
-----
-
-[discrete]
-== Troubleshooting
-
-When things do not work as expected, you may need to troubleshoot your setup.
Here we provide some directions to speed up your investigation: - -. Exec inside an Agent's Pod and run the `inspect` command to verify how inputs are constructed dynamically: -+ -["source", "sh", subs="attributes"] ------------------------------------------------- -./elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml ------------------------------------------------- -+ -Specifically, examine how the inputs are being populated. - -. View the {agent} logs: -+ -["source", "sh", subs="attributes"] ------------------------------------------------- -tail -f /etc/elastic-agent/data/logs/elastic-agent-*.ndjson ------------------------------------------------- -+ -Verify that the hints feature is enabled in the config and look for hints-related logs like: -"Generated hints mappings are ..." -In these logs, you can find the mappings that are extracted out of the annotations and determine if the values can populate a specific input. - -. View the {metricbeat} logs: -+ -["source", "sh", subs="attributes"] ------------------------------------------------- -tail -f /etc/elastic-agent/data/logs/default/metricbeat-*.ndjson ------------------------------------------------- - -. View the {filebeat} logs: -+ -["source", "sh", subs="attributes"] ------------------------------------------------- -tail -f /etc/elastic-agent/data/logs/default/filebeat-*.ndjson ------------------------------------------------- - -. View the target input template. 
For the Redis example:
-+
-["source", "sh", subs="attributes"]
-------------------------------------------------
-cat /usr/share/elastic-agent/state/inputs.d/redis.yml
-------------------------------------------------
diff --git a/docs/en/ingest-management/elastic-agent/configuration/create-standalone-agent-policy.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/create-standalone-agent-policy.asciidoc
deleted file mode 100644
index 8480c9a3e..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/create-standalone-agent-policy.asciidoc
+++ /dev/null
@@ -1,102 +0,0 @@
-[[create-standalone-agent-policy]]
-= Create a standalone {agent} policy
-
-To get started quickly, use {kib} to add integrations to an agent policy, then
-download the policy to use as a starting point for your standalone {agent}
-policy. This approach saves time, is less error-prone, and populates the
-policy with a lot of details that are tedious to add manually. Also,
-adding integrations in {kib} loads required assets, such as index templates
-and ingest pipelines, before you start your {agent}s.
-
-TIP: If you're a {fleet} user and already have an agent policy you want to
-use in standalone mode, go to *{fleet} > Agents* and click *Add agent*. Follow
-the steps under *Run standalone* to download the policy file.
-
-You don't need {fleet} to perform the following steps, but on self-managed
-clusters, API keys must be enabled in the {es} configuration (set
-`xpack.security.authc.api_key.enabled: true`).
-
-. From the main menu in {kib}, click *Add integrations*, and search for the
-{agent} integration you want to use. Read the description to make sure the
-integration works with {agent}.
-
-. Click the integration to see more details about it, then click
-*Add *.
-+ -[role="screenshot"] -image::images/add-integration-standalone.png[Add Nginx integration screen with agent policy selected] -+ -NOTE: If you're adding your first integration and no {agent}s are installed, -{kib} may display a page that walks you through configuring the integration and -installing {agent}. If you see this page, click **Install {agent}**, then click the -**standalone mode** link. Follow the in-product instructions instead of the -steps described here. - -. Under *Configure integration*, enter a name and description for the integration. - -. Click the down arrow next to enabled streams and make sure the settings are -correct for your host. - -. Under *Apply to agent policy*, select an existing policy, or click -*Create agent policy* and create a new one. - -. When you’re done, save and continue. -+ -A popup window gives you the option to add {agent} to your hosts. -+ -[role="screenshot"] -image::images/add-agent-to-hosts.png[Popup window showing the option to add {agent} to your hosts] - -. (Optional) To add more integrations to the agent policy, click *Add {agent} -later* and go back to the *Integrations* page. Repeat the previous steps for each -integration. - -. When you're done adding integrations, in the popup window, click -*Add {agent} to your hosts* to open the *Add agent* flyout. - -. Click *Run standalone* and follow the in-product instructions to download -{agent} (if you haven't already). - -. Click *Download Policy* to download the policy file. -+ -[role="screenshot"] -image::images/download-agent-policy.png[Add data screen with option to download the default agent policy] - -The downloaded policy already contains a default {es} address and port for your -setup. You may need to change them if you use a proxy or load balancer. 
Modify
-the policy, as required, making sure that you provide credentials for connecting
-to {es}.
-
-If you need to add integrations to the policy _after_ deploying
-it, you'll need to run through these steps again and re-deploy the
-updated policy to the host where {agent} is running.
-
-For detailed information about starting the agent, including the permissions
-needed for the {es} user, refer to <>.
-
-[discrete]
-[[update-standalone-policies]]
-== Upgrade standalone agent policies after upgrading an integration
-
-Because standalone agents are not managed by {fleet}, they are unable to upgrade
-to new integration package versions automatically. When you upgrade an
-integration in {kib} (or it gets upgraded automatically), you'll need to update
-the standalone policy to use new features and capabilities.
-
-You'll also need to update the standalone policy if you want to add new
-integrations.
-
-To update your standalone policy, use the same steps you used to create and
-download the original policy file:
-
-. Follow the steps under <> to create and
-download a new policy, then compare the new policy file to the old one.
-
-. Either use the new policy and apply your customizations to it, or
-update your old policy to include changes, such as field changes, added
-by the upgrade.
-
-IMPORTANT: Make sure you update the standalone agent policy in the directory
-where {agent} is running, not the directory where you downloaded the
-installation package. Refer to <> for the location of
-installed {agent} files.
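When comparing the new policy file to the old one, a plain `diff -u` makes upgrade-driven field changes easy to spot. The snippet below is a self-contained sketch: it fabricates two tiny policy files purely for illustration — substitute the paths of your actual downloaded policies.

```shell
# Stand-ins for the policies downloaded before and after the integration upgrade.
cat > old-policy.yml <<'EOF'
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
EOF
cat > new-policy.yml <<'EOF'
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
    data_stream.namespace: default
EOF

# Unified diff: lines starting with "+" were added by the upgrade.
# diff exits non-zero when the files differ, so "|| true" keeps scripts going.
diff -u old-policy.yml new-policy.yml || true
```

Lines the upgrade added can then be merged into your customized policy by hand.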
diff --git a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-configuration.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-configuration.asciidoc deleted file mode 100644 index 8cbb1f9cb..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-configuration.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -[[elastic-agent-configuration]] -= Configure standalone {agent}s - -TIP: To get started quickly, use {kib} to create and download a standalone -policy file. You'll still need to deploy and manage the file, though. For more -information, refer to <> or try out our example: -<>. - -Standalone {agent}s are manually configured and managed locally on the systems -where they are installed. They are useful when you are not interested in -centrally managing agents in {fleet}, either due to your company's security -requirements, or because you prefer to use another configuration management -system. - -To configure standalone {agent}s, specify settings in the `elastic-agent.yml` -policy file deployed with the agent. Prior to installation, -the file is located in the extracted {agent} package. After installation, the -file is copied to the directory described in <>. To apply -changes after installation, you must modify the installed file. - -For installation details, refer to <>. - -Alternatively, you can put input configurations in YAML files into the -folder `{path.config}/inputs.d` to separate your configuration into -multiple smaller files. -The YAML files in the `inputs.d` folder should contain input configurations only. -Any other configurations are ignored. -The files are reloaded at the same time as the standalone configuration. - -TIP: The first line of the configuration must be `inputs`. Then you can list the -inputs you would like to run. Each input in the policy must have a unique value -for the `id` key. If the `id` key is missing its value defaults to the empty -string `""`. 
- -[source,yaml] ----- -inputs: - - id: unique-logfile-id - type: logfile - data_stream.namespace: default - paths: [/path/to/file] - use_output: default - - - id: unique-system-metrics-id - type: system/metrics - data_stream.namespace: default - use_output: default - streams: - - metricset: cpu - data_stream.dataset: system.cpu ----- - -The following sections describe some settings you might need to configure to -run an {agent} standalone. For a full reference example, refer to the -<> file. - -The settings described here are available for standalone {agent}s. Settings for -{fleet}-managed agents are specified through the UI. You do not set them -explicitly in a policy file. diff --git a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-monitoring.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-monitoring.asciidoc deleted file mode 100644 index 568776f89..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-monitoring.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[elastic-agent-monitoring-configuration]] -= Configure monitoring for standalone {agent}s - -++++ -Monitoring -++++ - -{agent} monitors {beats} by default. To turn off or change monitoring -settings, set options under `agent.monitoring` in the `elastic-agent.yml` file. - -This example configures {agent} monitoring: - -[source,yaml] ----- -agent.monitoring: - # enabled turns on monitoring of running processes - enabled: true - # enables log monitoring - logs: true - # enables metrics monitoring - metrics: true - # exposes /debug/pprof/ endpoints for Elastic Agent and Beats - # enable these endpoints if the monitoring endpoint is set to localhost - pprof.enabled: false - # specifies output to be used - use_output: monitoring - http: - # exposes a /buffer endpoint that holds a history of recent metrics - buffer.enabled: false ----- - -To turn off monitoring, set `agent.monitoring.enabled` to `false`. 
When set to -`false`, {beats} monitoring is turned off, and all other options in this section -are ignored. - -To enable monitoring, set `agent.monitoring.enabled` to `true`. Also set the -`logs` and `metrics` settings to control whether logs, metrics, or both are -collected. If neither setting is specified, monitoring is turned off. Set -`use_output` to specify the output to which monitoring events are sent. - -You can also add the setting `agent.monitoring.http.enabled: true` to expose a `/liveness` endpoint. -By default, the endpoint returns a `200` OK status as long as {agent}'s internal main loop is responsive and can process configuration changes. -It can be configured to also monitor the component states and return an error if anything is degraded or has failed. - -The `agent.monitoring.pprof.enabled` option controls whether the {agent} and {beats} expose the -`/debug/pprof/` endpoints with the monitoring endpoints. It is set to `false` -by default. Data produced by these endpoints can be useful for debugging but present a -security risk. It is recommended that this option remains `false` if the monitoring endpoint -is accessible over a network. - -The `agent.monitoring.http.buffer.enabled` option controls whether the {agent} and {beats} -collect metrics into an in-memory buffer and expose these through a `/buffer` endpoint. -It is set to `false` by default. This data can be useful for debugging or if the {agent} -has issues communicating with {es}. Enabling this option may slightly increase process -memory usage. 
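Putting these options together, a monitoring section that exposes the HTTP endpoints only on the local interface might look like the following sketch. The `host` and `port` values are assumptions for illustration — check the reference configuration for the defaults in your version:

[source,yaml]
----
agent.monitoring:
  enabled: true
  logs: true
  metrics: true
  use_output: default
  # Expose /liveness (and /buffer, if buffer.enabled is true) over HTTP.
  http:
    enabled: true
    host: localhost   # assumption: bind to the local interface only
    port: 6791        # assumption: adjust to your environment
    buffer.enabled: false
  # Keep pprof disabled when the endpoint is reachable over a network.
  pprof.enabled: false
----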
diff --git a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-download.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-download.asciidoc deleted file mode 100644 index f28dd182d..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-download.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[elastic-agent-standalone-download]] -= Configure download settings for standalone {agent} upgrades - -++++ -Agent download -++++ - -The `agent.download` section of the elastic-agent.yml config file contains settings for where to download and store artifacts used for {agent} upgrades. - -[[elastic-agent-standalone-download-settings]] -.{agent} download settings -[cols="2*Feature flags -++++ - -The Feature Flags section of the elastic-agent.yml config file contains settings in {agent} that are disabled by default. These may include experimental features, changes to behaviors within {agent} or its components, or settings that could cause a breaking change. For example a setting that changes information included in events might be inconsistent with the naming pattern expected in your configured {agent} output. - -To enable any of the settings listed on this page, change the associated `enabled` flag from `false` to `true`. - -[source,yaml] ----- -agent.features: - mysetting: - enabled: true ----- - -[discrete] -[[elastic-agent-standalone-feature-flag-settings]] -== Feature flag configuration settings - -You can specify the following settings in the Feature Flag section of the -`elastic-agent.yml` config file. - -Fully qualified domain name (FQDN):: -When enabled, information provided about the current host through the <> key, in events produced by {agent}, is in FQDN format (`somehost.example.com` rather than `somehost`). This helps you to distinguish between hosts on different domains that have similar names. 
With `fqdn` enabled, the fully qualified hostname allows each host to be more easily identified when viewed in {kib}.
-+
-NOTE: FQDN reporting is not currently supported in APM.
-+
-For FQDN reporting to work as expected, the hostname of the current host must either:
-+
---
-* Have a CNAME entry defined in DNS.
-* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
---
-+
-If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host as if the FQDN feature flag were not enabled.
-+
-To enable fully qualified domain names, set `enabled: true` for the `fqdn` setting:
-+
-["source","yaml",subs="attributes"]
-----
-agent.features:
-  fqdn:
-    enabled: true
-----
diff --git a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-logging.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-logging.asciidoc
deleted file mode 100644
index fbf5debd3..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/elastic-agent-standalone-logging.asciidoc
+++ /dev/null
@@ -1,176 +0,0 @@
-[[elastic-agent-standalone-logging-config]]
-= Configure logging for standalone {agent}s
-
-++++
-Logging
-++++
-
-The Logging section of the `elastic-agent.yml` config file contains settings for configuring the logging output.
-The logging system can write logs to `syslog`, `file`, `stderr`, or `eventlog` outputs, and it can rotate log files.
-If you do not explicitly configure logging, the `stderr` output is used.
- -This example configures {agent} logging: - -["source","yaml",subs="attributes"] ----- -agent.logging.level: info -agent.logging.to_files: true -agent.logging.files: - path: /var/log/elastic-agent - name: elastic-agent - keepfiles: 7 - permissions: 0600 ----- - -[discrete] -[[elastic-agent-standalone-logging-settings]] -== Logging configuration settings - -You can specify the following settings in the Logging section of the -`elastic-agent.yml` config file. - -Some outputs log raw events when errors occur, such as indexing errors in the -Elasticsearch output. To prevent logging raw events (which may contain -sensitive information) together with other log messages, a separate -log file is used only for log entries that contain raw events. This logger -uses the same level, selectors, and all other configuration from the -default logger, but it has its own file configuration. - -Having a different log file for raw events also prevents event data -from drowning out the regular log files. Use -`agent.logging.event_data` to configure the events logger. - -The events log file is not collected by {agent} monitoring. -If the events log files are needed, they can be collected with -diagnostics or copied directly from the host running {agent}. - -[cols="2*Environment variables -++++ - -Use environment variables to configure {agent} when running in a containerized environment. -Variables on this page are grouped by action type: - -* <> -* <> prepare the {fleet} plugin in {kib} -* <> bootstrap {fleet-server} on an {agent} -* <> enroll an {agent} - -[discrete] -[[env-common-vars]] -= Common variables - -// forces a unique ID so that settings can be included multiple times on the same page -:type: common - -To limit the number of environment variables that need to be set, -the following common variables are available. -These variables can be used across all {agent} actions, -but have a lower precedence than action-specific environment variables.
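The precedence rule above (action-specific variables win over common ones) can be sketched in Python. This is an illustrative sketch only, not {agent} code: the `resolve` helper is hypothetical, and `FLEET_SERVER_ELASTICSEARCH_HOST` is used here only as an assumed example of an action-specific counterpart to the common `ELASTICSEARCH_HOST` variable.

```python
import os

def resolve(action_var, common_var, default=None):
    """Return the action-specific variable if set, otherwise fall back
    to the common variable, otherwise the default (hypothetical helper)."""
    for name in (action_var, common_var):
        value = os.environ.get(name)
        if value is not None:
            return value
    return default

# Only the common variable is set: the common value is used.
os.environ["ELASTICSEARCH_HOST"] = "http://elasticsearch:9200"
print(resolve("FLEET_SERVER_ELASTICSEARCH_HOST", "ELASTICSEARCH_HOST"))
# http://elasticsearch:9200

# When both are set, the action-specific variable takes precedence.
os.environ["FLEET_SERVER_ELASTICSEARCH_HOST"] = "https://es.internal:9200"
print(resolve("FLEET_SERVER_ELASTICSEARCH_HOST", "ELASTICSEARCH_HOST"))
# https://es.internal:9200
```

This is why you can set shared credentials once as common variables and still override them for a single action, such as bootstrapping {fleet-server}.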
- -These common variables are useful, for example, when using the same {es} and {kib} credentials -to prepare the {fleet} plugin in {kib}, configure {fleet-server}, and enroll an {agent}. - -[cols="2*> is enabled. -If the service token value and service token path are specified the value may be used for setup and the path is passed to the agent in the container. - -*Default:* none - -// end::fleet-server-service-token[] - -// ============================================================================= - -// tag::fleet-server-service-token-path[] -| -[id="env-{type}-fleet-server-service-token-path"] -`FLEET_SERVER_SERVICE_TOKEN_PATH` - -| (string) The path to the service token file to use for communication with {es} and {kib} if <> is enabled. -If the service token value and service token path are specified the value may be used for setup and the path is passed to the agent in the container. - -*Default:* none - -// end::fleet-server-service-token-path[] - -// ============================================================================= - -// tag::fleet-server-policy-id[] -| -[id="env-{type}-fleet-server-policy-id"] -`FLEET_SERVER_POLICY_ID` - -| (string) The policy ID for {fleet-server} to use on itself. - -// end::fleet-server-policy-id[] - -// ============================================================================= - -// tag::fleet-server-host[] -| -[id="env-{type}-fleet-server-host"] -`FLEET_SERVER_HOST` - -| (string) The binding host for {fleet-server} HTTP. -Overrides the host defined in the policy. - -*Default:* none - -// end::fleet-server-host[] - -// ============================================================================= - -// tag::fleet-server-port[] -| -[id="env-{type}-fleet-server-port"] -`FLEET_SERVER_PORT` - -| (string) The binding port for {fleet-server} HTTP. -Overrides the port defined in the policy. 
- -*Default:* none - -// end::fleet-server-port[] - -// ============================================================================= - -// tag::fleet-server-cert[] -| -[id="env-{type}-fleet-server-cert"] -`FLEET_SERVER_CERT` - -| (string) The path to the certificate to use for HTTPS. - -*Default:* none - -// end::fleet-server-cert[] - -// ============================================================================= - -// tag::fleet-server-cert-key[] -| -[id="env-{type}-fleet-server-cert-key"] -`FLEET_SERVER_CERT_KEY` - -| (string) The path to the private key for the certificate used for HTTPS. - -*Default:* none - -// end::fleet-server-cert-key[] - -// ============================================================================= - -// tag::fleet-server-cert-key-passphrase[] -| -[id="env-{type}-fleet-server-cert-key-passphrase"] -`FLEET_SERVER_CERT_KEY_PASSPHRASE` - -| (string) The path to the private key passphrase for an encrypted private key file. - -*Default:* none - -// end::fleet-server-cert-key-passphrase[] - -// ============================================================================= - -// tag::fleet-server-client-auth[] -| -[id="env-{type}-fleet-server-client-auth"] -`FLEET_SERVER_CLIENT_AUTH` - -| (string) One of `none`, `optional`, or `required`. -{fleet-server}'s client authentication option for client mTLS connections. -If `optional` or `required` is specified, client certificates are verified using CAs. - -*Default:* `none` - -// end::fleet-server-client-auth[] - -// ============================================================================= - -// tag::fleet-server-es-ca-trusted-fingerprint[] -| -[id="env-{type}-fleet-server-es-ca-trusted-fingerprint"] -`FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT` - -| (string) The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {es} certificates. 
-This fingerprint is used to verify self-signed certificates presented by {fleet-server} and any inputs started -by {agent} for communication. This flag is required when using self-signed certificates with {es}. - -*Default:* `""` - -// end::fleet-server-es-ca-trusted-fingerprint[] - -// ============================================================================= - -// tag::fleet-server-es-cert[] -| -[id="env-{type}-fleet-server-es-cert"] -`FLEET_SERVER_ES_CERT` - -| (string) The path to the mutual TLS client certificate that {fleet-server} will use to connect to {es}. - -*Default:* `""` - -// end::fleet-server-es-cert[] - -// ============================================================================= - -// tag::fleet-server-es-cert-key[] -| -[id="env-{type}-fleet-server-es-cert-key"] -`FLEET_SERVER_ES_CERT_KEY` - -| (string) The path to the mutual TLS private key that {fleet-server} will use to connect to {es}. - -*Default:* `""` - -// end::fleet-server-es-cert-key[] - -// ============================================================================= - -// tag::fleet-server-insecure-http[] -| -[id="env-{type}-fleet-server-insecure-http"] -`FLEET_SERVER_INSECURE_HTTP` - -| (bool) When `true`, {fleet-server} is exposed over insecure or unverified HTTP. -Setting this to `true` is not recommended. - -*Default:* `false` - -// end::fleet-server-insecure-http[] - -// ============================================================================= - -// tag::fleet-daemon-timeout[] -| -[id="env-{type}-fleet-daemon-timeout"] -`FLEET_DAEMON_TIMEOUT` - -| (duration) Set to indicate how long {fleet-server} will wait during the bootstrap process for {elastic-agent}. 
- -// end::fleet-daemon-timeout[] - -// ============================================================================= - -// tag::fleet-server-timeout[] -| -[id="env-{type}-fleet-server-timeout"] -`FLEET_SERVER_TIMEOUT` - -| (duration) Set to indicate how long {agent} will wait for {fleet-server} to check in as healthy. - -// end::fleet-server-timeout[] - -// ============================================================================= - -// tag::fleet-enroll[] -| -[id="env-{type}-fleet-enroll"] -`FLEET_ENROLL` - -| (bool) Set to `1` to enroll the {agent} into {fleet-server}. - -*Default:* `false` - -// end::fleet-enroll[] - -// ============================================================================= - -// tag::fleet-url[] -| -[id="env-{type}-fleet-url"] -`FLEET_URL` - -| (string) URL to enroll the {fleet-server} into. - -*Default:* `""` - -// end::fleet-url[] - -// ============================================================================= - -// tag::fleet-enrollment-token[] -| -[id="env-{type}-fleet-enrollment-token"] -`FLEET_ENROLLMENT_TOKEN` - -| (string) The token to use for enrollment. - -*Default:* `""` - -// end::fleet-enrollment-token[] - -// ============================================================================= - -// tag::fleet-token-name[] -| -[id="env-{type}-fleet-token-name"] -`FLEET_TOKEN_NAME` - -| (string) The token name to use to fetch the token from {kib}. - -*Default:* `""` - -// end::fleet-token-name[] - -// ============================================================================= - -// tag::fleet-token-policy-name[] -| -[id="env-{type}-fleet-token-policy-name"] -`FLEET_TOKEN_POLICY_NAME` - -| (string) The token policy name to use to fetch the token from {kib}. - -*Default:* `false` - -// end::fleet-token-policy-name[] - -// ============================================================================= - -// tag::fleet-ca[] -| -[id="env-{type}-fleet-ca"] -`FLEET_CA` - -| (string) The path to a certificate authority. 
Overrides `ELASTICSEARCH_CA` when set. - -By default, {agent} uses the list of trusted certificate authorities (CA) from the operating -system where it is running. -If the certificate authority that signed your node certificates is not in the host system's -trusted certificate authorities list, use this config to add the path to the `.pem` file that -contains your CA's certificate. - -*Default:* `false` - -// end::fleet-ca[] - -// ============================================================================= - -// tag::fleet-insecure[] -| -[id="env-{type}-fleet-insecure"] -`FLEET_INSECURE` - -| (bool) When `true`, {agent} communicates with {fleet-server} over insecure or unverified HTTP. -Setting this to `true` is not recommended. - -*Default:* `false` - -// end::fleet-insecure[] - -// ============================================================================= - -// tag::elasticsearch-host[] -| -[id="env-{type}-elasticsearch-host"] -`ELASTICSEARCH_HOST` - -| (string) The {es} host to communicate with. - -*Default:* `http://elasticsearch:9200` - -// end::elasticsearch-host[] - -// ============================================================================= - -// tag::es-host[] -| -[id="env-{type}-es-host"] -`ES_HOST` - -| (string) The {es} host to communicate with. - -*Default:* `http://elasticsearch:9200` - -// end::es-host[] - -// ============================================================================= - -// tag::elasticsearch-username[] -| -[id="env-{type}-elasticsearch-username"] -`ELASTICSEARCH_USERNAME` - -| (string) The basic authentication username used to connect to {kib} and retrieve a `service_token` for {fleet}. - -// To do: link to required privileges - -*Default:* none - -// end::elasticsearch-username[] - -// ============================================================================= - -// tag::es-username[] -| -[id="env-{type}-es-username"] -`ES_USERNAME` - -| (string) The basic authentication username used to connect to {es}. 
-This user needs the privileges required to publish events to {es}. - -// To do: link to required privileges - -*Default:* `elastic` - -// end::es-username[] - -// ============================================================================= - -// tag::elasticsearch-password[] -| -[id="env-{type}-elasticsearch-password"] -`ELASTICSEARCH_PASSWORD` - -| (string) The basic authentication password used to connect to {kib} and retrieve a `service_token` for {fleet}. - -*Default:* none - -// end::elasticsearch-password[] - -// ============================================================================= - -// tag::elasticsearch-api-key[] -| -[id="env-{type}-elasticsearch-api-key"] -`ELASTICSEARCH_API_KEY` - -| (string) API key used for authenticating to Elasticsearch. - -*Default:* none - -// end::elasticsearch-api-key[] - -// ============================================================================= - -// tag::es-password[] -| -[id="env-{type}-es-password"] -`ES_PASSWORD` - -| (string) The basic authentication password used to connect to {es}. - -*Default:* `changeme` - -// end::es-password[] - -// ============================================================================= - -// tag::elasticsearch-ca[] -| -[id="env-{type}-elasticsearch-ca"] -`ELASTICSEARCH_CA` - -| (string) The path to a certificate authority. - -By default, {agent} uses the list of trusted certificate authorities (CA) from the operating -system where it is running. -If the certificate authority that signed your node certificates is not in the host system's -trusted certificate authorities list, use this config to add the path to the `.pem` file that -contains your CA's certificate. - -*Default:* `""` - -// end::elasticsearch-ca[] - -// ============================================================================= - -// tag::kibana-host[] -| -[id="env-{type}-kibana-host"] -`KIBANA_HOST` - -| (string) The {kib} host. 
- -*Default:* `http://kibana:5601` - -// end::kibana-host[] - -// ============================================================================= - -// tag::kibana-username[] -| -[id="env={type}-kibana-username"] -`KIBANA_USERNAME` - -| (string) The basic authentication username used to connect to {kib} to retrieve a -`service_token`. - -*Default:* `elastic` - -// end::kibana-username[] - -// ============================================================================= - -// tag::kibana-password[] -| -[id="env={type}-kibana-password"] -`KIBANA_PASSWORD` - -| (string) The basic authentication password used to connect to {kib} to retrieve a -`service_token`. - -*Default:* `changeme` - -// end::kibana-password[] - -// ============================================================================= - -// tag::kibana-ca[] -| -[id="env-{type}-kibana-ca"] -`KIBANA_CA` - -| (string) The path to a certificate authority. - -By default, {agent} uses the list of trusted certificate authorities (CA) from the operating -system where it is running. -If the certificate authority that signed your node certificates is not in the host system's -trusted certificate authorities list, use this config to add the path to the `.pem` file that -contains your CA's certificate. - -*Default:* `""` - -// end::kibana-ca[] - -// tag::elastic-netinfo[] -| -[id="env-{type}-elastic-netinfo"] -`ELASTIC_NETINFO` - -| (bool) When `false`, disables `netinfo.enabled` parameter of `add_host_metadata` processor. -Setting this to `false` is recommended for large scale setups where the host.ip and host.mac fields index size increases. - -By default, {agent} initializes the `add_host_metadata` processor. The `netinfo.enabled` parameter defines ingestion of IP addresses and MAC addresses as fields `host.ip` and `host.mac`. 
-For more information see <> - - -*Default:* `"false"` - -// end::elastic-netinfo[] - -// ============================================================================= diff --git a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-apache.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-apache.asciidoc deleted file mode 100644 index efa2f82f7..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-apache.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -[[config-file-example-apache]] -= Config file example: Apache HTTP Server - -++++ -Apache HTTP Server -++++ - -Include these sample settings in your standalone {agent} `elastic-agent.yml` configuration file to ingest data from Apache HTTP server. - -* <> -* <> - -[[config-file-example-apache-logs]] -== Apache HTTP Server logs - -["source","yaml"] ----- -outputs: <1> - default: - type: elasticsearch <2> - hosts: - - '{elasticsearch-host-url}' <3> - api_key: "my_api_key" <4> -agent: - download: <5> - sourceURI: 'https://artifacts.elastic.co/downloads/' - monitoring: <6> - enabled: true - use_output: default - namespace: default - logs: true - metrics: true -inputs: <7> - - id: "insert a unique identifier here" <8> - name: apache-1 - type: logfile <9> - use_output: default - data_stream: <10> - namespace: default - streams: - - id: "insert a unique identifier here" <11> - data_stream: - dataset: apache.access <12> - type: logs - paths: <13> - - /var/log/apache2/access.log* - - /var/log/apache2/other_vhosts_access.log* - - /var/log/httpd/access_log* - tags: - - apache-access - exclude_files: - - .gz$ - - id: "insert a unique identifier here" <11> - data_stream: - dataset: apache.error <12> - type: logs - paths: <13> - - /var/log/apache2/error.log* - - /var/log/httpd/error_log* - exclude_files: - - .gz$ - tags: - - apache-error - processors: - - add_locale: null ----- - -<1> For available output settings, 
refer to <>. -<2> For settings specific to the {es} output, refer to <>. -<3> The URL of the Elasticsearch cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`. -<4> An <> used to authenticate with the {es} cluster. -<5> For available download settings, refer to <>. -<6> For available monitoring settings, refer to <>. -<7> For available input settings, refer to <>. -<8> Specify a unique ID for the input. -<9> For available input types, refer to <>. -<10> Learn about <> for time series data. -<11> Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{user-defined-unique-id}-apache.access` or `{user-defined-unique-id}-apache.error`) is a recommended practice, but any unique ID will work. -<12> Refer to {integrations-docs}/apache#logs[Logs] in the Apache HTTP Server integration documentation for the logs available to ingest and exported fields. -<13> Path to the log files to be monitored. - -[[config-file-example-apache-metrics]] -== Apache HTTP Server metrics - -["source","yaml"] ----- -outputs: <1> - default: - type: elasticsearch <2> - hosts: - - '{elasticsearch-host-url}' <3> - api_key: "my_api_key" <4> -agent: - download: <5> - sourceURI: 'https://artifacts.elastic.co/downloads/' - monitoring: <6> - enabled: true - use_output: default - namespace: default - logs: true - metrics: true -inputs: <7> - type: apache/metrics <8> - use_output: default - data_stream: <9> - namespace: default - streams: - - id: "insert a unique identifier here" <10> - data_stream: <8> - dataset: apache.status <11> - type: metrics - metricsets: <12> - - status - hosts: - - 'http://127.0.0.1' - period: 30s - server_status_path: /server-status ----- - -<1> For available output settings, refer to <>. -<2> For settings specific to the {es} output, refer to <>. 
-<3> The URL of the Elasticsearch cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`. -<4> An <> used to authenticate with the {es} cluster. -<5> For available download settings, refer to <>. -<6> For available monitoring settings, refer to <>. -<7> For available input settings, refer to <>. -<8> For available input types, refer to <>. -<9> Learn about <> for time series data. -<10> Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{user-defined-unique-id}-apache.status`) is a recommended practice, but any unique ID will work. -<11> A user-defined dataset. You can specify anything that makes sense to signify the source of the data. -<12> Refer to {integrations-docs}/apache#metrics[Metrics] in the Apache HTTP Server integration documentation for the type of metrics collected and exported fields. diff --git a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-nginx.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-nginx.asciidoc deleted file mode 100644 index 4258ffa15..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-example-nginx.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[[config-file-example-nginx]] -= Config file example: Nginx HTTP Server - -++++ -Nginx HTTP Server -++++ - -Include these sample settings in your standalone {agent} `elastic-agent.yml` configuration file to ingest data from Nginx HTTP Server. 
- -* <> -* <> - -[[config-file-example-nginx-logs]] -== Nginx HTTP Server logs - -["source","yaml"] ----- -outputs: <1> - default: - type: elasticsearch <2> - hosts: - - '{elasticsearch-host-url}' <3> - api_key: "my_api_key" <4> -agent: - download: <5> - sourceURI: 'https://artifacts.elastic.co/downloads/' - monitoring: <6> - enabled: true - use_output: default - namespace: default - logs: true - metrics: true -inputs: <7> - - id: "insert a unique identifier here" <8> - name: nginx-1 - type: logfile <9> - use_output: default - data_stream: <10> - namespace: default - streams: - - id: "insert a unique identifier here" <11> - data_stream: - dataset: nginx.access <12> - type: logs - ignore_older: 72h - paths: <13> - - /var/log/nginx/access.log* - tags: - - nginx-access - exclude_files: - - .gz$ - processors: - - add_locale: null - - id: "insert a unique identifier here" <11> - data_stream: - dataset: nginx.error <12> - type: logs - ignore_older: 72h - paths: <13> - - /var/log/nginx/error.log* - tags: - - nginx-error - exclude_files: - - .gz$ - multiline: - pattern: '^\d{4}\/\d{2}\/\d{2} ' - negate: true - match: after - processors: - - add_locale: null ----- - -<1> For available output settings, refer to <>. -<2> For settings specific to the {es} output, refer to <>. -<3> The URL of the {es} cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`. -<4> An <> used to authenticate with the {es} cluster. -<5> For available download settings, refer to <>. -<6> For available monitoring settings, refer to <>. -<7> For available input settings, refer to <>. -<8> A user-defined ID to uniquely identify the input stream. -<9> For available input types, refer to <>. -<10> Learn about <> for time series data. -<11> Specify a unique ID for each individual input stream. 
Naming the ID by appending the associated `data_stream` dataset (for example `{user-defined-unique-id}-nginx.access` or `{user-defined-unique-id}-nginx.error`) is a recommended practice, but any unique ID will work. -<12> Refer to {integrations-docs}/nginx#logs-reference[Logs reference] in the Nginx HTTP integration documentation for the logs available to ingest and exported fields. -<13> Path to the log files to be monitored. - -[discrete] -[[config-file-example-nginx-metrics]] -== Nginx HTTP Server metrics - -["source","yaml"] ----- -outputs: <1> - default: - type: elasticsearch <2> - hosts: - - '{elasticsearch-host-url}' <3> - api_key: "my_api_key" <4> -agent: - download: <5> - sourceURI: 'https://artifacts.elastic.co/downloads/' - monitoring: <6> - enabled: true - use_output: default - namespace: default - logs: true - metrics: true -inputs: <7> - - id: "insert a unique identifier here" <8> - type: nginx/metrics <9> - use_output: default - data_stream: <10> - namespace: default - streams: - - id: "insert a unique identifier here" <11> - data_stream: <10> - dataset: nginx.stubstatus <12> - type: metrics - metricsets: <13> - - stubstatus - hosts: - - 'http://127.0.0.1:80' - period: 10s - server_status_path: /nginx_status ----- - -<1> For available output settings, refer to <>. -<2> For settings specific to the {es} output, refer to <>. -<3> The URL of the Elasticsearch cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`. -<4> An <> used to authenticate with the {es} cluster. -<5> For available download settings, refer to <>. -<6> For available monitoring settings, refer to <>. -<7> For available input settings, refer to <>. -<8> A user-defined ID to uniquely identify the input stream. -<9> For available input types, refer to <>. -<10> Learn about <> for time series data. -<11> Specify a unique ID for each individual input stream. 
Naming the ID by appending the associated `data_stream` dataset (for example `{user-defined-unique-id}-nginx.stubstatus`) is a recommended practice, but any unique ID will work. -<12> A user-defined dataset. You can specify anything that makes sense to signify the source of the data. -<13> Refer to {integrations-docs}/nginx#metrics-reference[Metrics reference] in the Nginx integration documentation for the type of metrics collected and exported fields. diff --git a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-examples.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-examples.asciidoc deleted file mode 100644 index d3b5f3a79..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/examples/config-file-examples.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[config-file-examples]] -= Config file examples - -These examples show a basic, sample configuration to include in a standalone {agent} `elastic-agent.yml` <> to gather data from various source types. - -* <> -* <> - -include::config-file-example-apache.asciidoc[leveloffset=+1] -include::config-file-example-nginx.asciidoc[leveloffset=+1] diff --git a/docs/en/ingest-management/elastic-agent/configuration/inputs/input-configuration.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/inputs/input-configuration.asciidoc deleted file mode 100644 index 6fd6e5733..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/inputs/input-configuration.asciidoc +++ /dev/null @@ -1,76 +0,0 @@ -[[elastic-agent-input-configuration]] -= Configure inputs for standalone {agent}s - -++++ -Inputs -++++ - -The `inputs` section of the `elastic-agent.yml` file specifies how {agent} locates and processes input data. 
- -* <> -* <> - -[discrete] -[[elastic-agent-input-configuration-sample-metrics]] -== Sample metrics input configuration - -By default {agent} collects system metrics, such as CPU, memory, network, and file system metrics, and sends them to the default output. For example, to define datastreams for `cpu`, `memory`, `network` and `filesystem` metrics, this is the configuration: - -["source","yaml"] ------------------------------------------------------------------------ -- type: system/metrics <1> - id: unique-system-metrics-id <2> - data_stream.namespace: default <3> - use_output: default <4> - streams: - - metricsets: <5> - - cpu - data_stream.dataset: system.cpu <6> - - metricsets: - - memory - data_stream.dataset: system.memory - - metricsets: - - network - data_stream.dataset: system.network - - metricsets: - - filesystem - data_stream.dataset: system.filesystem ------------------------------------------------------------------------ - -<1> The name of the input. Refer to <> for the list of what's available. -<2> A unique ID for the input. -<3> A user-defined namespace. -<4> The name of the `output` to use. If not specified, `default` will be used. -<5> The set of enabled module metricsets. -+ -Refer to the {metricbeat} {metricbeat-ref}/metricbeat-module-system.html[System module] for a list of available options. The metricset fields can be configured. -<6> A user-defined dataset. It can contain anything that makes sense to signify the source of the data. - -[discrete] -[[elastic-agent-input-configuration-sample-logs]] -== Sample log files input configuration - -To enable {agent} to collect log files, you can use a configuration like the following. 
- -["source","yaml"] ------------------------------------------------------------------------ -- type: filestream <1> - id: your-input-id <2> - streams: - - id: your-filestream-stream-id <3> - data_stream: <4> - dataset: generic - paths: - - /var/log/*.log ------------------------------------------------------------------------ - -<1> The name of the input. Refer to <> for the list of what's available. -<2> A unique ID for the input. -<3> A unique ID for the data stream to track the state of the ingested files. -<4> The streams block is required only if multiple streams are used on the same input. Refer to the {filebeat} {filebeat-ref}/filebeat-input-filestream.html[filestream] documentation for a list of available options. Also, specifically for the `filestream` input type, refer to the <> for an example of ingesting a set of logs specified as an array. - -The input in this example harvests all files in the path `/var/log/*.log`, that is, all logs in the directory `/var/log/` that end with `.log`. All patterns supported by https://golang.org/pkg/path/filepath/#Glob[Go Glob] are also supported here. - -To fetch all files from a predefined level of subdirectories, use this pattern: -`/var/log/*/*.log`. This fetches all `.log` files from the subfolders of `/var/log`. It does not fetch log files from the `/var/log` folder itself. -Currently it is not possible to recursively fetch all files in all subdirectories of a directory. diff --git a/docs/en/ingest-management/elastic-agent/configuration/inputs/inputs-list.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/inputs/inputs-list.asciidoc deleted file mode 100644 index b7657e197..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/inputs/inputs-list.asciidoc +++ /dev/null @@ -1,408 +0,0 @@ -[[elastic-agent-inputs-list]] -= {agent} inputs - -When you <> for standalone {agents}, the following values are supported for the input `type` parameter. 
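The single-level glob behavior described above can be sketched with Python's `glob` module, whose `*` wildcard matches a single path segment just like the Go Glob patterns the input uses. The directory tree below is a throwaway stand-in for `/var/log`:

```python
import glob
import os
import tempfile

# Build a small stand-in tree: one top-level log, one log a level down,
# and one log two levels down.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "nginx", "archive"))
for rel in ("syslog.log",
            os.path.join("nginx", "access.log"),
            os.path.join("nginx", "archive", "old.log")):
    open(os.path.join(root, rel), "w").close()

# '/var/log/*.log' style: matches top-level files only.
top = glob.glob(os.path.join(root, "*.log"))

# '/var/log/*/*.log' style: matches exactly one level of subdirectories,
# not the top level and not deeper levels.
one_level = glob.glob(os.path.join(root, "*", "*.log"))

print(sorted(os.path.relpath(p, root) for p in top))
print(sorted(os.path.relpath(p, root) for p in one_level))
```

Note that `old.log`, two levels down, matches neither pattern; each additional directory level needs its own `*` segment, which is why fully recursive collection is not possible with a single pattern.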
- -*Expand any section to view the available inputs:* - -// Auditbeat -[[elastic-agent-inputs-list-auditbeat]] -[%collapsible] -.Audit the activities of users and processes on your systems -==== - -|=== -|Input |Description |Learn more - -|`audit/auditd` -|Receives audit events from the Linux Audit Framework that is a part of the Linux kernel. -|{auditbeat-ref}/auditbeat-module-auditd.html[Auditd Module] ({auditbeat} docs) - -|`audit/file_integrity` -|Sends events when a file is changed (created, updated, or deleted) on disk. The events contain file metadata and hashes. -|{auditbeat-ref}/auditbeat-module-file_integrity.html[File Integrity Module] ({auditbeat} docs) - -|`audit/system` -|beta[] Collects various security related information about a system. All datasets send both periodic state information (e.g. all currently running processes) and real-time changes (e.g. when a new process starts or stops). -|{auditbeat-ref}/auditbeat-module-system.html[System Module] ({auditbeat} docs) - -|=== - -==== - -// Metricbeat -[[elastic-agent-inputs-list-metricbeat]] -[%collapsible] -.Collect metrics from operating systems and services running on your servers -==== - -|=== -|Input |Description |Learn more - -|`activemq/metrics` -|Periodically fetches JMX metrics from Apache ActiveMQ. -|{metricbeat-ref}/metricbeat-module-activemq.html[ActiveMQ module] ({metricbeat} docs) - -|`apache/metrics` -|Periodically fetches metrics from https://httpd.apache.org/[Apache HTTPD] servers. -|{metricbeat-ref}/metricbeat-module-apache.html[Apache module] ({metricbeat} docs) - -|`aws/metrics` -|Periodically fetches monitoring metrics from AWS CloudWatch using https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html[GetMetricData API] for AWS services. -|{metricbeat-ref}/metricbeat-module-aws.html[AWS module] ({metricbeat} docs) - -|`awsfargate/metrics` -|beta[] Retrieves various metadata, network metrics, and Docker stats about tasks and containers. 
|{metricbeat-ref}/metricbeat-module-awsfargate.html[AWS Fargate module] ({metricbeat} docs) - -|`azure/metrics` -|Collects and aggregates Azure logs and metrics from a variety of sources into a common data platform where they can be used for analysis, visualization, and alerting. -|{metricbeat-ref}/metricbeat-module-azure.html[Azure module] ({metricbeat} docs) - -|`beat/metrics` -|Collects metrics about any Beat or other software based on libbeat. -|{metricbeat-ref}/metricbeat-module-beat.html[Beat module] ({metricbeat} docs) - -|`cloudfoundry/metrics` -|Connects to Cloud Foundry loggregator to gather container, counter, and value metrics into a common data platform where they can be used for analysis, visualization, and alerting. -|{metricbeat-ref}/metricbeat-module-cloudfoundry.html[Cloudfoundry module] ({metricbeat} docs) - -|`containerd/metrics` -|beta[] Collects CPU, memory, and blkio statistics about running containers controlled by the containerd runtime. -|{metricbeat-ref}/metricbeat-module-containerd.html[Containerd module] ({metricbeat} docs) - -|`docker/metrics` -|Fetches metrics from https://www.docker.com/[Docker] containers. -|{metricbeat-ref}/metricbeat-module-docker.html[Docker module] ({metricbeat} docs) - -|`elasticsearch/metrics` -|Collects metrics about {es}. -|{metricbeat-ref}/metricbeat-module-elasticsearch.html[Elasticsearch module] ({metricbeat} docs) - -|`enterprisesearch/metrics` -|Periodically fetches metrics and health information from Elastic {ents} instances using HTTP APIs. -|{metricbeat-ref}/metricbeat-module-enterprisesearch.html[{ents} module] ({metricbeat} docs) - -|`etcd/metrics` -|This module targets Etcd V2 and V3. When using V2, metrics are collected using the https://coreos.com/etcd/docs/latest/v2/api.html[Etcd v2 API]. When using V3, metrics are retrieved from the `/metrics` endpoint as intended for https://coreos.com/etcd/docs/latest/metrics.html[Etcd v3].
-|{metricbeat-ref}/metricbeat-module-etcd.html[Etcd module] ({metricbeat} docs)
-
-|`gcp/metrics`
-|Periodically fetches monitoring metrics from Google Cloud Platform using the https://cloud.google.com/monitoring/api/metrics_gcp[Stackdriver Monitoring API] for Google Cloud Platform services.
-|{metricbeat-ref}/metricbeat-module-gcp.html[Google Cloud Platform module] ({metricbeat} docs)
-
-|`haproxy/metrics`
-|Collects stats from http://www.haproxy.org/[HAProxy]. It supports collection from TCP sockets, UNIX sockets, or HTTP with or without basic authentication.
-|{metricbeat-ref}/metricbeat-module-haproxy.html[HAProxy module] ({metricbeat} docs)
-
-|`http/metrics`
-|Used to call arbitrary HTTP endpoints for which a dedicated Metricbeat module is not available.
-|{metricbeat-ref}/metricbeat-module-http.html[HTTP module] ({metricbeat} docs)
-
-|`iis/metrics`
-|Periodically retrieves IIS web server metrics.
-|{metricbeat-ref}/metricbeat-module-iis.html[IIS module] ({metricbeat} docs)
-
-|`jolokia/metrics`
-|Collects metrics from https://jolokia.org/reference/html/agents.html[Jolokia agents] running on a target JMX server or dedicated proxy server.
-|{metricbeat-ref}/metricbeat-module-jolokia.html[Jolokia module] ({metricbeat} docs)
-
-|`kafka/metrics`
-|Collects metrics from the https://kafka.apache.org/intro[Apache Kafka] event streaming platform.
-|{metricbeat-ref}/metricbeat-module-kafka.html[Kafka module] ({metricbeat} docs)
-
-|`kibana/metrics`
-|Collects metrics about {kib}.
-|{metricbeat-ref}/metricbeat-module-kibana.html[{kib} module] ({metricbeat} docs)
-
-|`kubernetes/metrics`
-|As one of the main pieces provided for Kubernetes monitoring, this module is capable of fetching metrics from several components.
-|{metricbeat-ref}/metricbeat-module-kubernetes.html[Kubernetes module] ({metricbeat} docs)
-
-|`linux/metrics`
-|beta[] Reports on metrics exclusive to the Linux kernel and GNU/Linux OS.
-|{metricbeat-ref}/metricbeat-module-linux.html[Linux module] ({metricbeat} docs)
-
-|`logstash/metrics`
-|Collects metrics about {ls}.
-|{metricbeat-ref}/metricbeat-module-logstash.html[{ls} module] ({metricbeat} docs)
-
-|`memcached/metrics`
-|Collects metrics about the https://memcached.org/[memcached] memory object caching system.
-|{metricbeat-ref}/metricbeat-module-memcached.html[Memcached module] ({metricbeat} docs)
-
-|`mongodb/metrics`
-|Periodically fetches metrics from https://www.mongodb.com/[MongoDB] servers.
-|{metricbeat-ref}/metricbeat-module-mongodb.html[MongoDB module] ({metricbeat} docs)
-
-|`mssql/metrics`
-|The https://www.microsoft.com/en-us/sql-server/sql-server-2017[Microsoft SQL 2017] Metricbeat module. It is still under active development to add new metricsets and introduce enhancements.
-|{metricbeat-ref}/metricbeat-module-mssql.html[MSSQL module] ({metricbeat} docs)
-
-|`mysql/metrics`
-|Periodically fetches metrics from https://www.mysql.com/[MySQL] servers.
-|{metricbeat-ref}/metricbeat-module-mysql.html[MySQL module] ({metricbeat} docs)
-
-|`nats/metrics`
-|Uses the https://nats.io/documentation/managing_the_server/monitoring/[NATS monitoring server APIs] to collect metrics.
-|{metricbeat-ref}/metricbeat-module-nats.html[NATS module] ({metricbeat} docs)
-
-|`nginx/metrics`
-|Periodically fetches metrics from https://nginx.org/[Nginx] servers.
-|{metricbeat-ref}/metricbeat-module-nginx.html[Nginx module] ({metricbeat} docs)
-
-|`oracle/metrics`
-|The https://www.oracle.com/[Oracle] module for Metricbeat. It is under active development with feedback from the community. A single metricset for tablespace monitoring is included so the community can start gathering metrics from their nodes and contributing to the module.
-|{metricbeat-ref}/metricbeat-module-oracle.html[Oracle module] ({metricbeat} docs)
-
-|`postgresql/metrics`
-|Periodically fetches metrics from https://www.postgresql.org/[PostgreSQL] servers.
-|{metricbeat-ref}/metricbeat-module-postgresql.html[PostgreSQL module] ({metricbeat} docs)
-
-|`prometheus/metrics`
-|Periodically scrapes metrics from https://prometheus.io/docs/instrumenting/exporters/[Prometheus exporters].
-|{metricbeat-ref}/metricbeat-module-prometheus.html[Prometheus module] ({metricbeat} docs)
-
-|`rabbitmq/metrics`
-|Uses the http://www.rabbitmq.com/management.html[HTTP API] created by the management plugin to collect RabbitMQ metrics.
-|{metricbeat-ref}/metricbeat-module-rabbitmq.html[RabbitMQ module] ({metricbeat} docs)
-
-|`redis/metrics`
-|Periodically fetches metrics from http://redis.io/[Redis] servers.
-|{metricbeat-ref}/metricbeat-module-redis.html[Redis module] ({metricbeat} docs)
-
-|`sql/metrics`
-|Allows you to execute custom queries against an SQL database and store the results in {es}.
-|{metricbeat-ref}/metricbeat-module-sql.html[SQL module] ({metricbeat} docs)
-
-|`stan/metrics`
-|Uses https://github.com/nats-io/nats-streaming-server/blob/master/server/monitor.go[STAN monitoring server APIs] to collect metrics.
-|{metricbeat-ref}/metricbeat-module-stan.html[Stan module] ({metricbeat} docs)
-
-|`statsd/metrics`
-|Spawns a UDP server and listens for metrics in StatsD-compatible format.
-|{metricbeat-ref}/metricbeat-module-statsd.html[Statsd module] ({metricbeat} docs)
-
-|`syncgateway/metrics`
-|beta[] Monitors a Sync Gateway instance by using its REST API.
-|{metricbeat-ref}/metricbeat-module-syncgateway.html[SyncGateway module] ({metricbeat} docs)
-
-|`system/metrics`
-|Allows you to monitor your server metrics, including CPU, load, memory, network, processes, sockets, filesystem, fsstat, uptime, and more.
-|{metricbeat-ref}/metricbeat-module-system.html[System module] ({metricbeat} docs)
-
-|`traefik/metrics`
-|Periodically fetches metrics from a https://traefik.io/[Traefik] instance.
-|{metricbeat-ref}/metricbeat-module-traefik.html[Traefik module] ({metricbeat} docs)
-
-|`uwsgi/metrics`
-|By default, collects the uWSGI stats metricset, using https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.html[StatsServer].
-|{metricbeat-ref}/metricbeat-module-uwsgi.html[uWSGI module] ({metricbeat} docs)
-
-|`vsphere/metrics`
-|Uses the https://github.com/vmware/govmomi[Govmomi] library to collect metrics from any VMware SDK URL (ESXi/vCenter).
-|{metricbeat-ref}/metricbeat-module-vsphere.html[vSphere module] ({metricbeat} docs)
-
-|`windows/metrics`
-|Collects metrics from Windows systems.
-|{metricbeat-ref}/metricbeat-module-windows.html[Windows module] ({metricbeat} docs)
-
-|`zookeeper/metrics`
-|Fetches statistics from the ZooKeeper service.
-|{metricbeat-ref}/metricbeat-module-zookeeper.html[ZooKeeper module] ({metricbeat} docs)
-
-|===
-
-====
-
-// Filebeat
-[[elastic-agent-inputs-list-filebeat]]
-[%collapsible]
-.Forward and centralize log data
-====
-
-|===
-|Input |Description |Learn more
-
-|`aws-cloudwatch`
-|Retrieves logs from Amazon CloudWatch for Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route53, and other sources.
-|{filebeat-ref}/filebeat-input-aws-cloudwatch.html[AWS CloudWatch input] ({filebeat} docs)
-
-|`aws-s3`
-|Retrieves logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling a list of S3 objects in an S3 bucket.
-|{filebeat-ref}/filebeat-input-aws-s3.html[AWS S3 input] ({filebeat} docs)
-
-|`azure-blob-storage`
-|Reads content from files stored in containers which reside on your Azure Cloud.
-|{filebeat-ref}/filebeat-input-azure-blob-storage.html[Azure Blob Storage] ({filebeat} docs)
-
-|`azure-eventhub`
-|Reads messages from an Azure event hub.
-|{filebeat-ref}/filebeat-input-azure-eventhub.html[Azure eventhub input] ({filebeat} docs)
-
-|`cel`
-|Reads messages from a file path or HTTP API with a variety of payloads using the https://opensource.google.com/projects/cel[Common Expression Language (CEL)] and the https://pkg.go.dev/github.com/elastic/mito/lib[mito] CEL extension libraries.
-|{filebeat-ref}/filebeat-input-cel.html[Common Expression Language input] ({filebeat} docs)
-
-|`cloudfoundry`
-|Gets HTTP access logs, container logs, and error logs from Cloud Foundry.
-|{filebeat-ref}/filebeat-input-cloudfoundry.html[Cloud Foundry input] ({filebeat} docs)
-
-|`cometd`
-|Streams the real-time events from a Salesforce generic subscription Push Topic.
-|{filebeat-ref}/filebeat-input-cometd.html[CometD input] ({filebeat} docs)
-
-|`container`
-|Reads container log files.
-|{filebeat-ref}/filebeat-input-container.html[Container input] ({filebeat} docs)
-
-|`docker`
-|Alias for `container`.
-|n/a
-
-|`log/docker`
-|Alias for `container`.
-|n/a
-
-|`entity-analytics`
-|Collects identity assets, such as users, from external identity providers.
-|{filebeat-ref}/filebeat-input-entity-analytics.html[Entity Analytics input] ({filebeat} docs)
-
-|`event/file`
-|Alias for `log`.
-|n/a
-
-|`event/tcp`
-|Alias for `tcp`.
-|n/a
-
-|`filestream`
-|Reads lines from active log files. Replaces and improves on the `log` input.
-|{filebeat-ref}/filebeat-input-filestream.html[filestream input] ({filebeat} docs)
-
-|`gcp-pubsub`
-|Reads messages from a Google Cloud Pub/Sub topic subscription.
-|{filebeat-ref}/filebeat-input-gcp-pubsub.html[GCP Pub/Sub input] ({filebeat} docs)
-
-|`gcs`
-|beta[] Reads content from files stored in buckets which reside on your Google Cloud.
-|{filebeat-ref}/filebeat-input-gcs.html[Google Cloud Storage input] ({filebeat} docs)
-
-|`http_endpoint`
-|beta[] Initializes a listening HTTP server that collects incoming HTTP POST requests containing a JSON body.
-|{filebeat-ref}/filebeat-input-http_endpoint.html[HTTP Endpoint input] ({filebeat} docs)
-
-|`httpjson`
-|Reads messages from an HTTP API with JSON payloads.
-|{filebeat-ref}/filebeat-input-httpjson.html[HTTP JSON input] ({filebeat} docs)
-
-|`journald`
-|beta[] Reads logs from journald, a system service that collects and stores logging data.
-|{filebeat-ref}/filebeat-input-journald.html[Journald input] ({filebeat} docs)
-
-|`kafka`
-|Reads from topics in a Kafka cluster.
-|{filebeat-ref}/filebeat-input-kafka.html[Kafka input] ({filebeat} docs)
-
-|`log`
-|DEPRECATED: Use the `filestream` input instead.
-|n/a
-
-|`logfile`
-|Alias for `log`.
-|n/a
-
-|`log/redis_slowlog`
-|Alias for `redis`.
-|n/a
-
-|`log/syslog`
-|Alias for `syslog`.
-|n/a
-
-|`mqtt`
-|Reads data transmitted using the MQTT lightweight messaging protocol for small and mobile devices, optimized for high-latency or unreliable networks.
-|{filebeat-ref}/filebeat-input-mqtt.html[MQTT input] ({filebeat} docs)
-
-|`netflow`
-|Reads NetFlow and IPFIX exported flows and options records over UDP.
-|{filebeat-ref}/filebeat-input-netflow.html[NetFlow input] ({filebeat} docs)
-
-|`o365audit`
-|beta[] Retrieves audit messages from Office 365 and Azure AD activity logs.
-|{filebeat-ref}/filebeat-input-o365audit.html[Office 365 Management Activity API input] ({filebeat} docs)
-
-|`osquery`
-|Collects and decodes the result logs written by https://osquery.readthedocs.io/en/latest/introduction/using-osqueryd/[osqueryd] in the JSON format.
-|n/a
-
-|`redis`
-|beta[] Reads entries from Redis slowlogs.
-|{filebeat-ref}/filebeat-input-redis.html[Redis input] ({filebeat} docs)
-
-|`syslog`
-|Reads Syslog events as specified by RFC 3164 and RFC 5424, over TCP, UDP, or a Unix stream socket.
-|{filebeat-ref}/filebeat-input-syslog.html[Syslog input] ({filebeat} docs)
-
-|`tcp`
-|Reads events over TCP.
-|{filebeat-ref}/filebeat-input-tcp.html[TCP input] ({filebeat} docs)
-
-|`udp`
-|Reads events over UDP.
-|{filebeat-ref}/filebeat-input-udp.html[UDP input] ({filebeat} docs)
-
-|`unix`
-|beta[] Reads events over a stream-oriented Unix domain socket.
-|{filebeat-ref}/filebeat-input-unix.html[Unix input] ({filebeat} docs)
-
-|`winlog`
-|Reads from one or more event logs using Windows APIs, filters the events based on user-configured criteria, then sends the event data to the configured outputs ({es} or {ls}).
-|{winlogbeat-ref}[Winlogbeat Overview] ({winlogbeat} docs)
-
-|===
-
-====
-
-// Heartbeat
-[[elastic-agent-inputs-list-heartbeat]]
-[%collapsible]
-.Monitor the status of your services
-====
-
-|===
-|Input |Description |Learn more
-
-|`synthetics/http`
-|Connects via HTTP and optionally verifies that the host returns the expected response.
-|{heartbeat-ref}/monitor-http-options.html[HTTP options] ({heartbeat} docs)
-
-|`synthetics/icmp`
-|Uses ICMP (v4 and v6) Echo Requests to check the configured hosts.
-|{heartbeat-ref}/monitor-icmp-options.html[ICMP options] ({heartbeat} docs)
-
-|`synthetics/tcp`
-|Connects via TCP and optionally verifies the endpoint by sending and/or receiving a custom payload.
-|{heartbeat-ref}/monitor-tcp-options.html[TCP options] ({heartbeat} docs)
-
-|===
-
-====
-
-// Packetbeat
-[[elastic-agent-inputs-list-packetbeat]]
-[%collapsible]
-.View network traffic between the servers of your network
-====
-
-|===
-|Input |Description |Learn more
-
-|`packet`
-|Sniffs the traffic between your servers, parses the application-level protocols on the fly, and correlates the messages into transactions.
-|{packetbeat-ref}/packetbeat-overview.html[Packetbeat overview] ({packetbeat} docs) - -|=== - -==== \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/configuration/inputs/simplified-input-configuration.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/inputs/simplified-input-configuration.asciidoc deleted file mode 100644 index 77bc31e99..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/inputs/simplified-input-configuration.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[elastic-agent-simplified-input-configuration]] -= Simplified log ingestion - -There is a simplified option for ingesting log files with {agent}. -The simplest input configuration to ingest the file -`/var/log/my-application/log-file.log` is: - -["source","yaml"] ------------------------------------------------------------------------ -inputs: - - type: filestream <1> - id: unique-id-per-input <2> - paths: <3> - - /var/log/my-application/log-file.log ------------------------------------------------------------------------ - -<1> The input type must be `filestream`. -<2> A unique ID for the input. -<3> An array containing all log file paths. - -For other custom options to configure the input, refer to the -{filebeat-ref}/filebeat-input-filestream.html[filestream input] in the {filebeat} documentation. \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-configuration.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/outputs/output-configuration.asciidoc deleted file mode 100644 index e41a4e621..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-configuration.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[elastic-agent-output-configuration]] -= Configure outputs for standalone {agent}s - -++++ -Outputs -++++ - -The `outputs` section of the `elastic-agent.yml` file specifies where to -send data. 
You can specify multiple outputs to pair specific inputs with -specific outputs. - -This example configures two outputs: `default` and `monitoring`: - -[source,yaml] -------------------------------------------------------------------------------------- -outputs: - default: - type: elasticsearch - hosts: [127.0.0.1:9200] - api_key: "my_api_key" - - monitoring: - type: elasticsearch - api_key: VuaCfGcBCdbkQm-e5aOx:ui2lp2axTNmsyakw9tvNnw - hosts: ["localhost:9200"] - ca_sha256: "7lHLiyp4J8m9kw38SJ7SURJP4bXRZv/BNxyyXkCcE/M=" -------------------------------------------------------------------------------------- - -[NOTE] -============== -A default output configuration is required. -============== - -{agent} currently supports these outputs: - -* <> -* <> -* <> - -include::output-elasticsearch.asciidoc[leveloffset=+1] - -include::output-kafka.asciidoc[leveloffset=+1] - -include::output-logstash.asciidoc[leveloffset=+1] diff --git a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-elasticsearch.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/outputs/output-elasticsearch.asciidoc deleted file mode 100644 index b1967e13d..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-elasticsearch.asciidoc +++ /dev/null @@ -1,711 +0,0 @@ -:type: output-elasticsearch - -[[elasticsearch-output]] -= Configure the {es} output - -++++ -{es} -++++ - -The {es} output sends events directly to {es} by using the {es} HTTP API. - -*Compatibility:* This output works with all compatible versions of {es}. See the -https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support -Matrix]. 
- -This example configures an {es} output called `default` in the -`elastic-agent.yml` file: - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: [127.0.0.1:9200] - username: elastic - password: changeme ----- - -This example is similar to the previous one, except that it uses the recommended -<>: - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: [127.0.0.1:9200] - api_key: "my_api_key" ----- - -NOTE: Token-based authentication is required in an -link:{serverless-docs}[{serverless-full}] environment. - -== {es} output configuration settings - -The `elasticsearch` output type supports the following settings, grouped by -category. Many of these settings have sensible defaults that allow you to run -{agent} with minimal configuration. - -* <> - -* <> - -* <> - -* <> - -* <> - -* <> - -* <> - -[[output-elasticsearch-commonly-used-settings]] -== Commonly used settings - -[cols="2* - protocol: https - path: /elasticsearch ------------------------------------------------------------------------------- -<1> In this example, the {es} nodes are available at -`https://10.45.3.2:9220/elasticsearch` and -`https://10.45.3.1:9230/elasticsearch`. - -Note that Elasticsearch Nodes in the link:{serverless-docs}[{serverless-full}] environment are exposed on port 443. -// end::hosts-setting[] - -// ============================================================================= - -// tag::protocol-setting[] -| -[id="{type}-protocol-setting"] -`protocol` - -| (string) The name of the protocol {es} is reachable on. The options are: -`http` or `https`. The default is `http`. However, if you specify a URL for -`hosts`, the value of `protocol` is overridden by whatever scheme you specify in -the URL. 
-// end::protocol-setting[] - -// ============================================================================= - -// tag::proxy_disable-setting[] -| -[id="{type}-proxy_disable-setting"] -`proxy_disable` - -| (boolean) If set to `true`, all proxy settings, including `HTTP_PROXY` and -`HTTPS_PROXY` variables, are ignored. - -*Default:* `false` - -// end::proxy_disable-setting[] - -// ============================================================================= - -// tag::proxy_headers-setting[] -| -[id="{type}-proxy_headers-setting"] -`proxy_headers` - -| (string) Additional headers to send to proxies during CONNECT requests. - -// end::proxy_headers-setting[] - -// ============================================================================= -// tag::proxy_url-setting[] -| -[id="{type}-proxy_url-setting"] -`proxy_url` - -| (string) The URL of the proxy to use when connecting to the {es} servers. The -value may be either a complete URL or a `host[:port]`, in which case the `http` -scheme is assumed. If a value is not specified through the configuration file -then proxy environment variables are used. See the -https://golang.org/pkg/net/http/#ProxyFromEnvironment[Go documentation] -for more information about the environment variables. -// end::proxy_url-setting[] - -// ============================================================================= - -|=== - -[[output-elasticsearch-authentication-settings]] -== Authentication settings - -When sending data to a secured cluster through the `elasticsearch` -output, {agent} can use any of the following authentication methods: - -* <> -* <> -* <> -* <> - -[[output-elasticsearch-basic-authentication-settings]] -=== Basic authentication credentials - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: ["https://myEShost:9200"] - username: "your-username" - password: "your-password" ----- - -[cols="2*>. 
-// end::username-setting[] - -Note that in an link:{serverless-docs}[{serverless-full}] environment you need to use <>. - -// ============================================================================= - -|=== - -[[output-elasticsearch-apikey-authentication-settings]] -=== Token-based (API key) authentication - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: ["https://myEShost:9200"] - api_key: "KnR6yE41RrSowb0kQ0HWoA" ----- - -[cols="2*>, -specifically the settings under <> and -<>. - -[[output-elasticsearch-kerberos-authentication-settings]] -=== Kerberos - -The following encryption types are supported: - -// lint ignore -* aes128-cts-hmac-sha1-96 -* aes128-cts-hmac-sha256-128 -* aes256-cts-hmac-sha1-96 -* aes256-cts-hmac-sha384-192 -* des3-cbc-sha1-kd -* rc4-hmac - -Example output config with Kerberos password-based authentication: - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: ["http://my-elasticsearch.elastic.co:9200"] - kerberos.auth_type: password - kerberos.username: "elastic" - kerberos.password: "changeme" - kerberos.config_path: "/etc/krb5.conf" - kerberos.realm: "ELASTIC.CO" ----- - -The service principal name for the {es} instance is constructed from these -options. Based on this configuration, the name would be: - -`HTTP/my-elasticsearch.elastic.co@ELASTIC.CO` - -include::../authentication/kerberos-shared-settings.asciidoc[tag=kerberos-all-settings] - -[[output-elasticsearch-compatibility-setting]] -=== Compatibility setting - -[cols="2*> to optimize your {agent} performance when sending data to an {es} output. - -Refer to <> for a table showing the group of values associated with any preset, and another table showing EPS (events per second) results from testing the different preset options. - -Performance tuning preset settings: - -*`balanced`*:: -Configure the default tuning setting values for "out-of-the-box" performance. - -*`throughput`*:: -Optimize the {es} output for throughput. 
-
-*`scale`*::
-Optimize the {es} output for scale.
-
-*`latency`*::
-Optimize the {es} output to reduce latency.
-
-*`custom`*::
-Use the `custom` option to fine-tune the performance tuning settings individually.
-
-*Default:* `balanced`
-// end::preset-setting[]
-
-// =============================================================================
-
-// tag::timeout-setting[]
-|
-[id="{type}-timeout-setting"]
-`timeout`
-
-| (string) The HTTP request timeout in seconds for the {es} request.
-
-*Default:* `90s`
-
-// end::timeout-setting[]
-
-// =============================================================================
-
-include::output-shared-settings.asciidoc[tag=worker-setting]
-
-// =============================================================================
-
-|===
-
-:type!:
diff --git a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc
deleted file mode 100644
index c8c7c605d..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc
+++ /dev/null
@@ -1,530 +0,0 @@
-:type: output-kafka
-
-[[kafka-output]]
-= Kafka output
-
-++++
-Kafka
-++++
-
-The Kafka output sends events to Apache Kafka.
-
-*Compatibility:* This output can connect to Kafka version 0.8.2.0 and later.
-Older versions might work as well, but are not supported.
-
-This example configures a Kafka output called `kafka-output` in the {agent}
-`elastic-agent.yml` file, with the settings described below:
-
-[source,yaml]
-----
-outputs:
-  kafka-output:
-    type: kafka
-    hosts:
-      - 'kafka1:9092'
-      - 'kafka2:9092'
-      - 'kafka3:9092'
-    client_id: Elastic
-    version: 1.0.0
-    compression: gzip
-    compression_level: 4
-    username:
-    password:
-    sasl:
-      mechanism: SCRAM-SHA-256
-    partition:
-      round_robin:
-        group_events: 1
-    topic: 'elastic-agent'
-    headers: []
-    timeout: 30
-    broker_timeout: 30
-    required_acks: 1
-    ssl:
-      verification_mode: full
-----
-
-== Kafka output and using {ls} to index data to {es}
-
-If you are considering using {ls} to ship the data from `kafka` to {es}, please be aware that the
-structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
-We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec in order
-to make sure the input doesn't edit the fields and their contents.
-
-The data streams set up by the integrations expect to receive events with the same structure and
-field names as if they were sent directly from an {agent}.
-
-Refer to <> documentation for more details.
-
-[source,yaml]
-----
-inputs {
-  kafka {
-    ...
-    ecs_compatibility => "disabled"
-    codec => json { ecs_compatibility => "disabled" }
-    ...
-  }
-}
-...
-----
-
-== Kafka output configuration settings
-
-The `kafka` output supports the following settings, grouped by category.
-Many of these settings have sensible defaults that allow you to run {agent} with
-minimal configuration.
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[[output-kafka-commonly-used-settings]]
-== Commonly used settings
-
-[cols="2*>, specifically the settings under
-<> and <>.
-
-|===
-
-[[output-kafka-memory-queue-settings]]
-== Memory queue settings
-
-The memory queue keeps all events in memory.
-
-The memory queue waits for the output to acknowledge or drop events.
If the queue is full, no new -events can be inserted into the memory queue. Only after the signal from the output will the queue -free up space for more events to be accepted. - -The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. -`flush.min_events` gives a limit on the number of events that can be included in a single batch, and -`flush.timeout` specifies how long the queue should wait to completely fill an event request. If the -output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of -`bulk_max_size` and `flush.min_events`. - -`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size -with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with -`flush.min_events` instead of `bulk_max_size`. - -In synchronous mode, an event request is always filled as soon as events are available, even if -there are not enough events to fill the requested batch. This is useful when latency must be -minimized. To use synchronous mode, set `flush.timeout` to 0. - -For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 -or 1. In this case, batch size will be capped at 1/2 the queue capacity. - -In asynchronous mode, an event request will wait up to the specified timeout to try and fill the -requested batch completely. If the timeout expires, the queue returns a partial batch with all -available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example 5s. 
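The interaction between `bulk_max_size` and `flush.min_events` described above can be sketched in a few lines. This is an illustration only, not {agent} source code; the function name and structure are hypothetical, but the rules it encodes are the ones stated in this section:

```python
from typing import Optional

def effective_batch_size(bulk_max_size: Optional[int],
                         flush_min_events: int,
                         queue_capacity: int) -> int:
    """Illustrative sketch of how the memory queue caps batch sizes."""
    if flush_min_events in (0, 1):
        # Legacy synchronous mode: batch size is capped at half the
        # queue capacity.
        return queue_capacity // 2
    if bulk_max_size is None:
        # The output exposes no bulk_max_size, so flush.min_events
        # is the only limit.
        return flush_min_events
    # Otherwise the maximum batch is the smaller of the two settings.
    return min(bulk_max_size, flush_min_events)

# With queue.mem.events: 4096 and flush.min_events: 512, an output
# bulk_max_size of 1600 still yields batches of at most 512 events.
print(effective_batch_size(1600, 512, 4096))  # 512
```

This also shows why `bulk_max_size` is the preferred knob: lowering it always takes effect, whereas `flush.min_events` only matters when it is the smaller of the two.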
-
-This sample configuration forwards events to the output when there are enough events to fill the
-output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by
-`flush.min_events`), or when events have been waiting for 5s:
-
-[source,yaml]
-------------------------------------------------------------------------------
-  queue.mem.events: 4096
-  queue.mem.flush.min_events: 512
-  queue.mem.flush.timeout: 5s
-------------------------------------------------------------------------------
-
-[cols="2*{ls}
-++++
-
-The {ls} output uses an internal protocol to send events directly to {ls} over
-TCP. {ls} provides additional parsing, transformation, and routing of data
-collected by {agent}.
-
-*Compatibility:* This output works with all compatible versions of {ls}. Refer
-to the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic
-Support Matrix].
-
-This example configures a {ls} output called `default` in the
-`elastic-agent.yml` file:
-
-[source,yaml]
-----
-outputs:
-  default:
-    type: logstash
-    hosts: ["127.0.0.1:5044"] <1>
-----
-<1> The {ls} server and the port (`5044`) where {ls} is configured to listen for
-incoming {agent} connections.
-
-To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
-The {ls} configuration pipeline listens for incoming {agent} connections,
-processes received events, and then sends the events to {es}.
-
-Please be aware that the structure of the documents sent from {agent} to {ls} must not be modified by the pipeline.
-We recommend that the pipeline doesn't edit or remove the fields and their contents.
-Editing the structure of the documents coming from {agent} can prevent the {es} ingest pipelines associated with the integrations in use from working correctly.
-We cannot guarantee that the {es} ingest pipelines associated with the integrations using {agent} can work with missing or modified fields.
-
-The following {ls} pipeline definition example configures a pipeline that listens on port `5044` for
-incoming {agent} connections and routes received events to {es}.
-
-
-[source,yaml]
-----
-input {
-  elastic_agent {
-    port => 5044
-    enrich => none # don't modify the events' schema at all
-    ssl => true
-    ssl_certificate_authorities => [""]
-    ssl_certificate => ""
-    ssl_key => ""
-    ssl_verify_mode => "force_peer"
-  }
-}
-
-output {
-  elasticsearch {
-    hosts => ["http://localhost:9200"] <1>
-    # cloud_id => "..."
-    data_stream => true
-    api_key => "" <2>
-    ssl => true
-    # cacert => ""
-  }
-}
-----
-<1> The {es} server and the port (`9200`) where {es} is running.
-<2> The API key used by {ls} to ship data to the destination data streams.
-
-For more information about configuring {ls}, refer to
-{logstash-ref}/configuration.html[Configuring {ls}] and
-{logstash-ref}/plugins-inputs-elastic_agent.html[{agent} input plugin].
-
-== {ls} output configuration settings
-
-The `logstash` output supports the following settings, grouped by category.
-Many of these settings have sensible defaults that allow you to run {agent} with
-minimal configuration.
-
-* <>
-
-* <>
-
-* <>
-
-* <>
-
-[[output-logstash-commonly-used-settings]]
-== Commonly used settings
-
-[cols="2*>, specifically the settings under
-<> and <>.
-
-NOTE: To use SSL/TLS, you must also configure the
-{logstash-ref}/plugins-inputs-beats.html[{agent} input plugin for {ls}] to
-use SSL/TLS.
-
-For more information, refer to <>.
-
-[[output-logstash-memory-queue-settings]]
-== Memory queue settings
-
-The memory queue keeps all events in memory.
-
-The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new
-events can be inserted into the memory queue. Only after the signal from the output will the queue
-free up space for more events to be accepted.
-
-The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`.
-`flush.min_events` gives a limit on the number of events that can be included in a single batch, and -`flush.timeout` specifies how long the queue should wait to completely fill an event request. If the -output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of -`bulk_max_size` and `flush.min_events`. - -`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size -with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with -`flush.min_events` instead of `bulk_max_size`. - -In synchronous mode, an event request is always filled as soon as events are available, even if -there are not enough events to fill the requested batch. This is useful when latency must be -minimized. To use synchronous mode, set `flush.timeout` to 0. - -For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 -or 1. In this case, batch size will be capped at 1/2 the queue capacity. - -In asynchronous mode, an event request will wait up to the specified timeout to try and fill the -requested batch completely. If the timeout expires, the queue returns a partial batch with all -available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example 5s. 
-
-This sample configuration forwards events to the output when there are enough events to fill the
-output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by
-`flush.min_events`), or when events have been waiting for 5s without filling the requested size:
-
-[source,yaml]
------------------------------------------------------------------------------
-  queue.mem.events: 4096
-  queue.mem.flush.min_events: 512
-  queue.mem.flush.timeout: 5s
------------------------------------------------------------------------------
-
-NOTE: The Docker provider ensures that each Docker container event is enriched with the
-container's metadata: the generated inputs are populated with an `add_fields` processor
-that is responsible for adding the container's metadata.
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/elastic-agent-providers.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/elastic-agent-providers.asciidoc
deleted file mode 100644
index 396682c44..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/elastic-agent-providers.asciidoc
+++ /dev/null
@@ -1,126 +0,0 @@
-[[providers]]
-= Configure providers for standalone {agent}s
-
-++++
-Providers
-++++
-
-Providers supply the key-value pairs that are used for variable substitution
-and conditionals. Each provider's keys are automatically prefixed with the name
-of the provider in the context of the {agent}.
-
-For example, if a provider named `foo` provides
-`{"key1": "value1", "key2": "value2"}`, the key-value pairs are placed in
-`{"foo" : {"key1": "value1", "key2": "value2"}}`. To reference the keys, use `${foo.key1}` and `${foo.key2}`.
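For instance, assuming the `foo` provider named above (the input and paths are illustrative), the prefixed keys can be used for variable substitution in a policy:

[source,yaml]
----
inputs:
  - id: logfile-${foo.key1}
    type: logfile
    streams:
      - paths: "/var/log/${foo.key2}/app.log"
----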
-
-[discrete]
-== Provider configuration
-
-The provider configuration is specified under the top-level `providers`
-key in the `elastic-agent.yml` configuration. All registered
-providers are enabled by default. If a provider cannot connect, no mappings are produced.
-
-The following example shows two providers (`local` and `local_dynamic`) that
-supply custom keys:
-
-[source,yaml]
----
-providers:
-  local:
-    vars:
-      foo: bar
-  local_dynamic:
-    vars:
-      - item: key1
-      - item: key2
----
-
-Providers are enabled automatically when they are referenced in an {agent} policy.
-Because each provider's keys are prefixed with the provider name, there are no name collisions.
-The provider name is the key used in the configuration. For example, the following
-configuration explicitly disables the `docker` provider:
-
-[source,yaml]
----
-providers:
-  docker:
-    enabled: false
----
-
-{agent} supports two broad types of providers: <> and
-<>.
-
-[discrete]
-[[context-providers]]
-=== Context providers
-
-Context providers give the current context of the running {agent}, for
-example, agent information (ID, version), host information (hostname, IP
-addresses), and environment information (environment variables).
-
-They can only provide a single key-value mapping. Think of them as singletons;
-an update of a key-value mapping results in a re-evaluation of the entire
-configuration. These providers are normally static, but they are not required
-to be; a value can change, which results in re-evaluation.
-
-Context providers use the Elastic Common Schema (ECS) naming to ensure consistency and understanding throughout documentation and projects.
-
-{agent} supports the following context providers:
-
-// lint ignore env
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[discrete]
-[[dynamic-providers]]
-=== Dynamic providers
-
-Dynamic providers give an array of multiple key-value mappings. Each
-key-value mapping is combined with the previous context provider's key-value
-mapping, which provides a new unique mapping that is used to generate a
-configuration.
-
-{agent} supports the following dynamic providers:
-
-* <>
-* <>
-* <>
-
-[discrete]
-[[disable-providers-by-default]]
-=== Disabling providers by default
-
-All registered providers are disabled by default until they are referenced in a policy.
-
-You can disable all providers even if they are referenced in a policy by setting `agent.providers.initial_default: false`.
-
-The following configuration keeps all providers from running, except for the `docker` provider, which runs if it is referenced in the policy:

-[source,yaml]
----
-agent.providers.initial_default: false
-providers:
-  docker:
-    enabled: true
----
-
-include::local-provider.asciidoc[leveloffset=+1]
-
-include::agent-provider.asciidoc[leveloffset=+1]
-
-include::host-provider.asciidoc[leveloffset=+1]
-
-include::env-provider.asciidoc[leveloffset=+1]
-
-include::kubernetes_secrets-provider.asciidoc[leveloffset=+1]
-
-include::kubernetes_leaderelection-provider.asciidoc[leveloffset=+1]
-
-include::local-dynamic-provider.asciidoc[leveloffset=+1]
-
-include::docker-provider.asciidoc[leveloffset=+1]
-
-include::kubernetes-provider.asciidoc[leveloffset=+1]
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/env-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/env-provider.asciidoc
deleted file mode 100644
index 60fa52089..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/env-provider.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
-[[env-provider]]
-// lint ignore env
-= Env Provider
-
-Provides access to the environment variables as key-value pairs.
-
-For example, set the variable `foo`:
-
-[source,shell]
----
-foo=bar elastic-agent run
----
-
-The environment variable can be referenced as `${env.foo}`.
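Building on the example above, the variable can also gate an input with a condition (a sketch; the input and path are illustrative):

[source,yaml]
----
inputs:
  - id: app-logs
    type: logfile
    streams:
      - paths: "/var/log/app.log"
    condition: ${env.foo} == 'bar'
----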
\ No newline at end of file
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/host-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/host-provider.asciidoc
deleted file mode 100644
index fff8c2cca..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/host-provider.asciidoc
+++ /dev/null
@@ -1,28 +0,0 @@
-[[host-provider]]
-= Host provider
-
-Provides information about the current host. The available keys are:
-
-|===
-|Key |Type |Description
-
-|`host.name`
-|`string`
-|Host name
-
-|`host.platform`
-|`string`
-|Host platform
-
-|`host.architecture`
-|`string`
-|Host architecture
-
-|`host.ip[]`
-|`[]string`
-|Host IP addresses
-
-|`host.mac[]`
-|`[]string`
-|Host MAC addresses
-|===
\ No newline at end of file
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes-provider.asciidoc
deleted file mode 100644
index f406f1361..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes-provider.asciidoc
+++ /dev/null
@@ -1,349 +0,0 @@
-[[kubernetes-provider]]
-= Kubernetes Provider
-
-Provides inventory information from Kubernetes.
-
-
-[discrete]
-= Provider configuration
-
-[source,yaml]
----
-providers.kubernetes:
-  node: ${NODE_NAME}
-  scope: node
-  #kube_config: /Users/elastic-agent/.kube/config
-  #sync_period: 600s
-  #cleanup_timeout: 60s
-  resources:
-    pod:
-      enabled: true
----
-
-`node`:: (Optional) Specify the node to scope {agent} to in case it
-cannot be accurately detected by the default discovery approach:
-1. If {agent} is deployed in a Kubernetes cluster as a Pod, use the Pod's hostname to query the Pod metadata for the node name.
-2. If step 1 fails, or {agent} is deployed outside of the Kubernetes cluster, use the machine ID to match against Kubernetes nodes for the node name.
-3. If the node cannot be discovered with step 1 or 2, fall back to the `NODE_NAME` environment variable as the default value. If it is not set, an error is returned.
-`cleanup_timeout`:: (Optional) Specify the time of inactivity before stopping the
-running configuration for a container. This is `60s` by default.
-`sync_period`:: (Optional) Specify the timeout for listing historical resources.
-`kube_config`:: (Optional) Use the given config file as configuration for the Kubernetes
-client. If `kube_config` is not set, the `KUBECONFIG` environment variable will be
-checked and will fall back to InCluster if not present. InCluster mode means that if
-{agent} runs as a Pod, it will try to initialize the client using the token and certificate
-that are mounted in the Pod by default:
- * `/var/run/secrets/kubernetes.io/serviceaccount/token`
- * `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
-
-as well as using the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT`
-to reach the API Server.
-`kube_client_options`:: (Optional) Configure additional options for the Kubernetes
-client. Currently, client QPS and burst are supported; if not set, the Kubernetes client's
- https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants[default QPS and burst] will be used.
-Example:
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
-  kube_client_options:
-    qps: 5
-    burst: 10
-------------------------------------------------------------------------------------
-`scope`:: (Optional) Specify the level for autodiscover. `scope` can
-either take `node` or `cluster` as values. `node` scope allows discovery of resources in
-the specified node. `cluster` scope allows cluster-wide discovery. Only `pod` and `node` resources
-can be discovered at node scope.
-`resources`:: (Optional) Specify the resources for which to start autodiscovery. One
-of `pod`, `node`, `service`.
By default, `node` and `pod` are enabled. The `service` resource
-requires `scope` to be set to `cluster`.
-`namespace`:: (Optional) Select the namespace from which to collect the
-metadata. If it is not set, the processor collects metadata from all namespaces.
-It is unset by default.
-`include_annotations`:: (Optional) If added to the provider config, then the list of annotations present in the config
-are added to the event.
-`include_labels`:: (Optional) If added to the provider config, then the list of labels present in the config
-will be added to the event.
-`exclude_labels`:: (Optional) If added to the provider config, then the list of labels present in the config
-will be excluded from the event.
-`labels.dedot`:: (Optional) If set to `true` in the provider config, then `.` in labels will be replaced with `_`.
-By default it is `true`.
-`annotations.dedot`:: (Optional) If set to `true` in the provider config, then `.` in annotations will be replaced
-with `_`. By default it is `true`.
-
-`add_resource_metadata`:: (Optional) Specify filters and configuration for the extra metadata that will be added to the event.
-Configuration parameters:
-  * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default
-    all labels are included while annotations are not. To change the default behavior, `include_labels`, `exclude_labels` and `include_annotations`
-    can be defined. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output.
-    The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`.
-    Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`.
-
-  * `deployment`: If the resource is `pod` and it was created from a `deployment`, the deployment name isn't added by default. This can be enabled by setting `deployment: true`.
-  * `cronjob`: If the resource is `pod` and it was created from a `cronjob`, the cronjob name isn't added by default. This can be enabled by setting `cronjob: true`.
-    Example:
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
-  add_resource_metadata:
-    namespace:
-      #use_regex_include: false
-      include_labels: ["namespacelabel1"]
-      #use_regex_exclude: false
-      #exclude_labels: ["namespacelabel2"]
-    node:
-      #use_regex_include: false
-      include_labels: ["nodelabel2"]
-      include_annotations: ["nodeannotation1"]
-      #use_regex_exclude: false
-      #exclude_labels: ["nodelabel3"]
-    #deployment: false
-    #cronjob: false
-------------------------------------------------------------------------------------
-
-
-[discrete]
-= Provider for Pod resources
-
-The available keys are:
-
-|===
-|Key |Type |Description
-
-|`kubernetes.namespace`
-|`string`
-|Namespace of the Pod
-
-|`kubernetes.namespace_uid`
-|`string`
-|UID of the Namespace of the Pod
-
-|`kubernetes.namespace_labels.*`
-|`object`
-|Labels of the Namespace of the Pod
-
-|`kubernetes.namespace_annotations.*`
-|`object`
-|Annotations of the Namespace of the Pod
-
-|`kubernetes.pod.name`
-|`string`
-|Name of the Pod
-
-|`kubernetes.pod.uid`
-|`string`
-|UID of the Pod
-
-|`kubernetes.pod.ip`
-|`string`
-|IP of the Pod
-
-|`kubernetes.labels.*`
-|`object`
-|Object of labels of the Pod
-
-|`kubernetes.annotations.*`
-|`object`
-|Object of annotations of the Pod
-
-|`kubernetes.container.name`
-|`string`
-|Name of the container
-
-|`kubernetes.container.runtime`
-|`string`
-|Runtime of the container
-
-|`kubernetes.container.id`
-|`string`
-|ID of the container
-
-|`kubernetes.container.image`
-|`string`
-|Image of the container
-
-|`kubernetes.container.port`
-|`string`
-|Port of the container (if defined)
-
-|`kubernetes.container.port_name`
-|`string`
-|Name of the container's port (if defined)
-
-|`kubernetes.node.name`
-|`string`
-|Name of the Node
-
-|`kubernetes.node.uid`
-|`string`
-|UID of the Node
-
-|`kubernetes.node.hostname`
-|`string`
-|Hostname of the Node
-
-|`kubernetes.node.labels.*`
-|`string`
-|Labels of the Node
-
-|`kubernetes.node.annotations.*`
-|`string`
-|Annotations of the Node
-
-|`kubernetes.deployment.name.*`
-|`string`
-|Deployment name of the Pod (if exists)
-
-|`kubernetes.statefulset.name.*`
-|`string`
-|StatefulSet name of the Pod (if exists)
-
-|`kubernetes.replicaset.name.*`
-|`string`
-|ReplicaSet name of the Pod (if exists)
-|===
-
-
-These are the fields available within config templating. The `kubernetes.*` fields will be available on each emitted event.
-
-NOTE: `kubernetes.labels.*` and `kubernetes.annotations.*` used in config templating are not dedoted and should not be confused with
-the labels and annotations added in the final Elasticsearch document, which are dedoted by default. For examples, refer to <>.
-
-Note that not all of these fields are available by default; special configuration options
-are needed in order to include them.
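The Pod and container keys above can drive config templating directly. A sketch (the input, paths, and condition are illustrative):

[source,yaml]
----
inputs:
  - id: container-logs-${kubernetes.pod.name}
    type: filestream
    condition: ${kubernetes.namespace} == 'kube-system'
    streams:
      - paths:
          - /var/log/containers/*${kubernetes.container.id}.log
----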
-
-For example, if the Kubernetes provider provides the following inventory:
-
-[source,json]
----
-[
-  {
-    "id": "1",
-    "mapping": {"namespace": "kube-system", "pod": {"name": "kube-controller-manager"}},
-    "processors": {"add_fields": {"kubernetes.namespace": "kube-system", "kubernetes.pod": {"name": "kube-controller-manager"}}}
-  },
-  {
-    "id": "2",
-    "mapping": {"namespace": "kube-system", "pod": {"name": "kube-scheduler"}},
-    "processors": {"add_fields": {"kubernetes.namespace": "kube-system", "kubernetes.pod": {"name": "kube-scheduler"}}}
-  }
-]
----
-
-{agent} automatically prefixes the result with `kubernetes`:
-
-
-[source,json]
----
-[
-  {"kubernetes": {"id": "1", "namespace": {"name": "kube-system"}, "pod": {"name": "kube-controller-manager"}}},
-  {"kubernetes": {"id": "2", "namespace": {"name": "kube-system"}, "pod": {"name": "kube-scheduler"}}}
-]
----
-
-In addition, Kubernetes metadata is added to each event by default.
-
-[discrete]
-= Provider for Node resources
-
-[source,yaml]
----
-providers.kubernetes:
-  node: ${NODE_NAME}
-  scope: node
-  #kube_config: /Users/elastic-agent/.kube/config
-  #sync_period: 600s
-  #cleanup_timeout: 60s
-  resources:
-    node:
-      enabled: true
----
-
-This resource is enabled by default, but in this example we define it explicitly
-for clarity.
-
-The available keys are:
-
-|===
-|Key |Type |Description
-
-|`kubernetes.labels.*`
-|`object`
-|Object of labels of the Node
-
-|`kubernetes.annotations.*`
-|`object`
-|Object of annotations of the Node
-
-|`kubernetes.node.name`
-|`string`
-|Name of the Node
-
-|`kubernetes.node.uid`
-|`string`
-|UID of the Node
-
-|`kubernetes.node.hostname`
-|`string`
-|Hostname of the Node
-|===
-
-[discrete]
-= Provider for Service resources
-
-[source,yaml]
----
-providers.kubernetes:
-  node: ${NODE_NAME}
-  scope: cluster
-  #kube_config: /Users/elastic-agent/.kube/config
-  #sync_period: 600s
-  #cleanup_timeout: 60s
-  resources:
-    service:
-      enabled: true
----
-
-Note that this resource is only available with the `scope: cluster` setting; `node`
-cannot be used as the scope.
-
-The available keys are:
-
-|===
-|Key |Type |Description
-
-|`kubernetes.namespace`
-|`string`
-|Namespace of the Service
-
-|`kubernetes.namespace_uid`
-|`string`
-|UID of the Namespace of the Service
-
-|`kubernetes.namespace_labels.*`
-|`object`
-|Labels of the Namespace of the Service
-
-|`kubernetes.namespace_annotations.*`
-|`object`
-|Annotations of the Namespace of the Service
-
-|`kubernetes.labels.*`
-|`object`
-|Object of labels of the Service
-
-|`kubernetes.annotations.*`
-|`object`
-|Object of annotations of the Service
-
-|`kubernetes.service.name`
-|`string`
-|Name of the Service
-
-|`kubernetes.service.uid`
-|`string`
-|UID of the Service
-
-|`kubernetes.selectors.*`
-|`string`
-|Kubernetes selectors
-|===
-
-Refer to <>
-for more information about shaping dynamic inputs for autodiscovery.
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_leaderelection-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_leaderelection-provider.asciidoc
deleted file mode 100644
index ec89d4ef9..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_leaderelection-provider.asciidoc
+++ /dev/null
@@ -1,90 +0,0 @@
-[[kubernetes_leaderelection-provider]]
-= Kubernetes LeaderElection Provider
-
-Provides the option to enable leader election among a set of {agent}s
-running on Kubernetes. Only one {agent} at a time holds the leader
-lock, and based on this, configurations can be enabled with the condition
-that the {agent} holds the leadership. This is useful when only one {agent} in a set should collect cluster-wide metrics for the Kubernetes cluster, such as from the `kube-state-metrics` endpoint.
-
-The provider needs a `kubeconfig` file to establish a connection to the Kubernetes API.
-It can automatically reach the API if it's running in an InCluster environment ({agent} runs as a Pod).
-
-[source,yaml]
----
-providers.kubernetes_leaderelection:
-  #enabled: true
-  #kube_config: /Users/elastic-agent/.kube/config
-  #kube_client_options:
-  #  qps: 5
-  #  burst: 10
-  #leader_lease: agent-k8s-leader-lock
-  #leader_retryperiod: 2
-  #leader_leaseduration: 15
-  #leader_renewdeadline: 10
----
-
-`enabled`:: (Optional) Defaults to true. To explicitly disable the LeaderElection provider,
-set `enabled: false`.
-`kube_config`:: (Optional) Use the given config file as configuration for the Kubernetes
-client. If `kube_config` is not set, the `KUBECONFIG` environment variable will be
-checked and will fall back to InCluster if it's not present.
-`kube_client_options`:: (Optional) Configure additional options for the Kubernetes client.
-Supported options are `qps` and `burst`. If not set, the Kubernetes client's
-default QPS and burst settings are used.
-`leader_lease`:: (Optional) Specify the name of the leader lease.
-This is set to `elastic-agent-cluster-leader` by default.
-`leader_retryperiod`:: (Optional) Default value 2 (in sec). How long {agent}s wait between attempts to acquire the `leader` role.
-`leader_leaseduration`:: (Optional) Default value 15 (in sec). How long the leader {agent} holds the `leader` state.
-`leader_renewdeadline`:: (Optional) Default value 10 (in sec). How long the leader retries refreshing its leadership before giving up.
-
-The available key is:
-
-|===
-|Key |Type |Description
-
-|`kubernetes_leaderelection.leader`
-|`bool`
-|The value of the leadership flag. This is set to `true` when the {agent} is the current leader, and is set to `false` otherwise.
-
-|===
-
-
-[discrete]
-= Understanding leader timings
-
-As described above, the LeaderElection configuration offers the following parameters: Lease duration (`leader_leaseduration`), Renew deadline (`leader_renewdeadline`), and
-Retry period (`leader_retryperiod`). Based on the config provided, each agent triggers {k8s} API requests to check the status of the lease.
-
-NOTE: The number of leader calls to the {k8s} Control API is proportional to the number of {agent}s installed. This means that requests will come from all {agent}s per `leader_retryperiod`. Setting `leader_retryperiod` to a greater value than the default (2 sec) means that fewer requests are made towards the {k8s} Control API, but it also increases the window during which collection of metrics from the leader {agent} might be lost.
-
-The library applies https://github.com/kubernetes/client-go/blob/master/tools/leaderelection/leaderelection.go#L76[specific checks] for the timing parameters, and if they are not satisfied, {agent} exits with a `panic` error.
-
-In general:
-
-- `leader_leaseduration` must be greater than `leader_renewdeadline`
-- `leader_renewdeadline` must be greater than `leader_retryperiod` * JitterFactor.
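A custom timing configuration that satisfies these checks might look like the following sketch (the values are illustrative; with a jitter factor of 1.2, 10 > 5 * 1.2 and 15 > 10):

[source,yaml]
----
providers.kubernetes_leaderelection:
  leader_retryperiod: 5      # seconds between acquisition attempts
  leader_renewdeadline: 10   # must be > retryperiod * jitter factor
  leader_leaseduration: 15   # must be > renewdeadline
----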
-
-NOTE: Constant JitterFactor=1.2 is defined in https://pkg.go.dev/gopkg.in/kubernetes/client-go.v11/tools/leaderelection[leaderelection lib].
-
-
-[discrete]
-= Enabling configurations only when on leadership
-
-Use conditions based on the `kubernetes_leaderelection.leader` key to leverage the leaderelection provider and enable specific inputs only when the {agent} holds the leadership lock.
-The example below enables the `state_container`
-metricset only when the leadership lock is acquired:
-
-[source,yaml]
----
-- data_stream:
-    dataset: kubernetes.state_container
-    type: metrics
-  metricsets:
-    - state_container
-  add_metadata: true
-  hosts:
-    - 'kube-state-metrics:8080'
-  period: 10s
-  condition: ${kubernetes_leaderelection.leader} == true
----
-
-
diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_secrets-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_secrets-provider.asciidoc
deleted file mode 100644
index c5180a727..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/providers/kubernetes_secrets-provider.asciidoc
+++ /dev/null
@@ -1,45 +0,0 @@
-[[kubernetes_secrets-provider]]
-= Kubernetes Secrets Provider
-
-Provides access to the Kubernetes Secrets API.
-
-Use the format `${kubernetes_secrets.default.somesecret.value}` to reference a Kubernetes Secrets variable, where `default` is the namespace of the Secret, `somesecret` is the name of the Secret, and `value` is the field of the Secret to access.
-
-To obtain the values for the secrets, a request to the API Server is made. To avoid multiple requests for the same secret and to not overwhelm the API Server, a cache to store the values is used by default. The cache can be configured with the `cache_*` variables (see below).
-
-The provider needs a `kubeconfig` file to establish a connection to the Kubernetes API. It can automatically reach the API if it's run in an InCluster environment ({agent} runs as pod).
-
-[source,yaml]
----
-providers.kubernetes_secrets:
-  #kube_config: /Users/elastic-agent/.kube/config
-  #kube_client_options:
-  #  qps: 5
-  #  burst: 10
-  #cache_disable: false
-  #cache_refresh_interval: 60s
-  #cache_ttl: 1h
-  #cache_request_timeout: 5s
----
-
-
-`kube_config`:: (Optional) Use the given config file as configuration for the Kubernetes client. If `kube_config` is not set, the `KUBECONFIG` environment variable will be checked and will fall back to InCluster if it's not present.
-`kube_client_options`:: (Optional) Configure additional options for the Kubernetes client. Supported options are `qps` and `burst`. If not set, the Kubernetes client's default QPS and burst settings are used.
-`cache_disable`:: (Optional) Disables the cache for the secrets. When the cache is disabled, that is, when this is set to `true`, each lookup makes a request to the API Server to obtain the value. To continue using the cache, set the variable to `false`. Default is `false`.
-`cache_refresh_interval`:: (Optional) Defines the period to update all secret values kept in the cache. Defaults to `60s`.
-`cache_ttl`:: (Optional) Defines for how long a secret should be kept in the cache if not being requested. The default is `1h`.
-`cache_request_timeout`:: (Optional) Defines how long the API Server can take to provide the value for a given secret. Defaults to `5s`.
-
-
-
-If you run {agent} on Kubernetes, the following rule is required in the `ClusterRole` to give the {agent} Pod access to the Secrets API:
-
-[source,yaml]
----
-- apiGroups: [""]
-  resources:
-  - secrets
-  verbs: ["get"]
----
-
-CAUTION: The above rule gives the {agent} Pod permission to access the Kubernetes Secrets API. Anyone who has access to the {agent} Pod (for example, through `kubectl exec`) will also have access to the Kubernetes Secrets API. This allows access to any secret, regardless of the namespace it belongs to. Consider this option carefully.
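A secret reference in the format described above can then be used wherever variables are accepted, for example in an output definition (a sketch; the secret name and field are illustrative):

[source,yaml]
----
outputs:
  default:
    type: elasticsearch
    hosts: ["https://elasticsearch:9200"]
    api_key: ${kubernetes_secrets.default.es-credentials.api_key}
----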
\ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/local-dynamic-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/local-dynamic-provider.asciidoc deleted file mode 100644 index 574bfd6b5..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/providers/local-dynamic-provider.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[local-dynamic-provider]] -= Local dynamic provider - -Define multiple key-value pairs to generate multiple configurations. - -For example, the following {agent} policy defines a local dynamic provider that -defines three values for `item`: - -[source,yaml] ----- -inputs: - - id: logfile-${local_dynamic.my_var} - type: logfile - streams: - - paths: "/var/${local_dynamic.my_var}/app.log" - -providers: - local_dynamic: - items: - - vars: - my_var: key1 - - vars: - my_var: key2 - - vars: - my_var: key3 ----- - -The configuration generated by this policy looks like: - -[source,yaml] ----- -inputs: - - id: logfile-key1 - type: logfile - streams: - - paths: "/var/key1/app.log" - - id: logfile-key2 - type: logfile - streams: - - paths: "/var/key2/app.log" - - id: logfile-key3 - type: logfile - streams: - - paths: "/var/key3/app.log" ----- \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/configuration/providers/local-provider.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/providers/local-provider.asciidoc deleted file mode 100644 index 9430c950b..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/providers/local-provider.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[local-provider]] -= Local - -Provides custom keys to use as variables. 
For example:
-
-[source,yaml]
----
-providers:
-  local:
-    vars:
-      foo: bar
----
\ No newline at end of file
diff --git a/docs/en/ingest-management/elastic-agent/configuration/structure-config-file.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/structure-config-file.asciidoc
deleted file mode 100644
index 3d5bc8107..000000000
--- a/docs/en/ingest-management/elastic-agent/configuration/structure-config-file.asciidoc
+++ /dev/null
@@ -1,38 +0,0 @@
-[[structure-config-file]]
-= Structure of a config file
-
-The `elastic-agent.yml` policy file contains all of the settings that determine how {agent} runs. The most important and commonly used settings are described here, including input and output options, providers used for variables and conditional output, security settings, logging options, enabling of special features, and specifications for {agent} upgrades.
-
-An `elastic-agent.yml` file is modular: You can combine input, output, and all other settings to enable the {integrations-docs}[{integrations}] you want to use with {agent}. Refer to <> for the steps to download the settings to use as a starting point, and then refer to the following examples to learn about the available settings:
-
-* <>
-* <>
-
-[discrete]
-[[structure-config-file-components]]
-== Config file components
-
-The following categories include the most common settings used to configure standalone {agent}. Follow each link for more detail and examples.
-
-<>::
-Specify how {agent} locates and processes input data.
-
-<>::
-Specify the key-value pairs used for variable substitution and conditionals in {agent} output.
-
-<>::
-Specify where {agent} sends data.
-
-<>::
-Configure SSL including SSL protocols and settings for certificates and keys.
-
-<>::
-Configure the {agent} logging output.
-
-<>::
-Configure any experimental features in {agent}. These are disabled by default.
-
-<>::
-Specify the location of required artifacts and other settings used for {agent} upgrades.
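Put together, a minimal standalone `elastic-agent.yml` combining an output and an input might look like this sketch (the hosts, key, and paths are illustrative placeholders):

[source,yaml]
----
outputs:
  default:
    type: elasticsearch
    hosts: ["https://localhost:9200"]
    api_key: "example-key"

inputs:
  - id: app-logs
    type: filestream
    use_output: default
    streams:
      - id: app-logs-stream
        paths:
          - /var/log/app/*.log
----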
- - diff --git a/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.asciidoc b/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.asciidoc deleted file mode 100644 index 0b08d36dd..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[elastic-agent-reference-yaml]] -= Reference YAML - -The {agent} installation includes an `elastic-agent.reference.yml` file that -describes all the settings available in a standalone configuration. - -To ensure that you're accessing the latest version, refer to the original link:https://github.com/elastic/elastic-agent/blob/main/elastic-agent.reference.yml[`elastic-agent.reference.yml` file] in the `elastic/elastic-agent` repository. -A copy is included here for your convenience. - -Each section of the file and available settings are also described in <>. - -[source,yaml] ----- -include::elastic-agent-reference-yaml.yml[] ----- \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.yml b/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.yml deleted file mode 100644 index d0a500627..000000000 --- a/docs/en/ingest-management/elastic-agent/configuration/yaml/elastic-agent-reference-yaml.yml +++ /dev/null @@ -1,377 +0,0 @@ -###################### Agent Configuration Example ######################### - -# This file is an example configuration file highlighting only the most common -# options. The elastic-agent.reference.yml file from the same directory contains all the -# supported options with more comments. You can use it as a reference. 
- -###################################### -# Fleet configuration -###################################### -outputs: - default: - type: elasticsearch - hosts: [127.0.0.1:9200] - api_key: "example-key" - # username: "elastic" - # password: "changeme" - - # Performance preset for elasticsearch outputs. One of "balanced", "throughput", - # "scale", "latency" and "custom". - # The default if unspecified is "custom". - preset: balanced - -inputs: - - type: system/metrics - # Each input must have a unique ID. - id: unique-system-metrics-input - # Namespace name must conform to the naming conventions for Elasticsearch indices, cannot contain dashes (-), and cannot exceed 100 bytes - # For index naming restrictions, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params - data_stream.namespace: default - use_output: default - streams: - - metricsets: - - cpu - # Dataset name must conform to the naming conventions for Elasticsearch indices, cannot contain dashes (-), and cannot exceed 100 bytes - data_stream.dataset: system.cpu - - metricsets: - - memory - data_stream.dataset: system.memory - - metricsets: - - network - data_stream.dataset: system.network - - metricsets: - - filesystem - data_stream.dataset: system.filesystem - -# # Collecting log files -# - type: filestream -# # Input ID allowing Elastic Agent to track the state of this input. Must be unique. -# id: your-input-id -# streams: -# # Stream ID for this data stream allowing Filebeat to track the state of the ingested files. Must be unique. -# # Each filestream data stream creates a separate instance of the Filebeat filestream input. -# - id: your-filestream-stream-id -# data_stream: -# dataset: generic -# paths: -# - /var/log/*.log - -# management: -# # Mode of management, the Elastic Agent support two modes of operation: -# # -# # local: The Elastic Agent will expect to find the inputs configuration in the local file. 
-# # -# # Default is local. -# mode: "local" - -# fleet: -# access_api_key: "" -# kibana: -# # kibana minimal configuration -# hosts: ["localhost:5601"] -# ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] - -# # optional values -# #protocol: "https" -# #service_token: "example-token" -# #path: "" -# #ssl.verification_mode: full -# #ssl.supported_protocols: [TLSv1.2, TLSv1.3] -# #ssl.cipher_suites: [] -# #ssl.curve_types: [] -# reporting: -# # Reporting threshold indicates how many events should be kept in-memory before reporting them to fleet. -# #reporting_threshold: 10000 -# # Frequency used to check the queue of events to be sent out to fleet. -# #reporting_check_frequency_sec: 30 - -# agent.download: -# # Source of the artifacts; requires an Elastic-like structure and naming of the binaries -# # e.g /windows-x86.zip -# sourceURI: "https://artifacts.elastic.co/downloads/beats/" -# # path to the directory containing downloaded packages -# target_directory: "${path.data}/downloads" -# # timeout for downloading package -# timeout: 120s -# # install_path describes the location of installed packages/programs. It is also used -# # for reading program specifications. -# install_path: "${path.data}/install" -# # retry_sleep_init_duration is the duration to sleep for before the first retry attempt. This -# # duration will increase for subsequent retry attempts in a randomized exponential backoff manner. -# retry_sleep_init_duration: 30s - -# agent.process: -# # Timeout for creating new processes. If a process is not successfully created by this timeout, -# # the start operation is considered a failure. -# spawn_timeout: 30s -# # Timeout for stopping processes. If a process is not stopped by this timeout, the process -# # is force killed. -# stop_timeout: 30s - -# agent.grpc: -# # listen address for the GRPC server that spawned processes connect back to. -# address: localhost -# # port for the GRPC server that spawned processes connect back to.
-# port: 6789 -# # max_message_size limits the message size in agent internal communication -# # default is 100MB -# max_message_size: 104857600 - -# agent.retry: -# # Enabled determines whether retry is possible. Default is false. -# enabled: true -# # RetriesCount specifies number of retries. Default is 3. -# # Retry count of 1 means it will be retried one time after one failure. -# retriesCount: 3 -# # Delay specifies delay in ms between retries. Default is 30s -# delay: 30s -# # MaxDelay specifies maximum delay in ms between retries. Default is 300s -# maxDelay: 5m -# # Exponential determines whether delay is treated as exponential. -# # With 30s delay and 3 retries: 30, 60, 120s -# # Default is false -# exponential: false - -# agent.limits: -# # limits the number of operating system threads that can execute user-level Go code simultaneously. -# # Translates into the GOMAXPROCS runtime parameter for each Go process started by the agent and the agent itself. -# # By default is set to `0` which means using all available CPUs. -# go_max_procs: 0 - -# agent.monitoring: -# # enabled turns on monitoring of running processes -# enabled: false -# # enables log monitoring -# logs: false -# # enables metrics monitoring -# metrics: false -# # metrics_period defines how frequent we should sample monitoring metrics. Default is 60 seconds. -# metrics_period: 60s -# # exposes /debug/pprof/ endpoints -# # recommended that these endpoints are only enabled if the monitoring endpoint is set to localhost -# pprof.enabled: false -# # The name of the output to use for monitoring data. -# use_output: monitoring -# # Exposes agent metrics using http, by default sockets and named pipes are used. 
-# # -# # `http` Also exposes a /liveness endpoint that will return an HTTP code depending on agent status: -# # 200: Agent is healthy -# # 500: A component or unit is in a failed state -# # 503: The agent coordinator is unresponsive -# # -# # You can pass a `failon` parameter to the /liveness endpoint to determine what component state will result in a 500. -# # For example: `curl 'localhost:6792/liveness?failon=degraded'` will return 500 if a component is in a degraded state. -# # The possible values for `failon` are: -# # `degraded`: return an error if a component is in a degraded state or failed state, or if the agent coordinator is unresponsive. -# # `failed`: return an error if a unit is in a failed state, or if the agent coordinator is unresponsive. -# # `heartbeat`: return an error only if the agent coordinator is unresponsive. -# # If no `failon` parameter is provided, the default behavior is `failon=heartbeat` -# http: -# # enables http endpoint -# enabled: false -# # The HTTP endpoint will bind to this hostname, IP address, unix socket or named pipe. -# # When using IP addresses, it is recommended to only use localhost. -# host: localhost -# # Port on which the HTTP endpoint will bind. Default is 0 meaning feature is disabled. -# port: 6791 -# # Metrics buffer endpoint -# buffer.enabled: false -# # Configuration for the diagnostics action handler -# diagnostics: -# # Rate limit for the action handler. Does not affect diagnostics collected through the CLI. -# limit: -# # Rate limit interval. -# interval: 1m -# # Rate limit burst. -# burst: 1 -# # Configuration for the file-upload client. Client may retry failed requests with an exponential backoff. -# uploader: -# # Max retries allowed when uploading a chunk. -# max_retries: 10 -# # Initial duration of the backoff. -# init_dur: 1s -# # Max duration of the backoff. -# max_dur: 1m - -# # Allow fleet to reload its configuration locally on disk. 
-# # Notes: Only specific process configuration and external input configurations will be reloaded. -# agent.reload: -# # enabled configures whether the Elastic Agent reloads the local configuration. -# # -# # Default is true -# enabled: true - -# # period defines how frequently to look for changes in the configuration. -# period: 10s - -# Feature Flags - -# This section enables or disables feature flags supported by Agent and its components. -#agent.features: -# fqdn: -# enabled: false - -# Logging - -# There are four options for the log output: file, stderr, syslog, eventlog -# The file output is the default. - -# Sets log level. The default log level is info. -# Available log levels are: error, warning, info, debug -#agent.logging.level: info - -# Enable debug output for selected components. To enable all selectors use ["*"] -# Other available selectors are "beat", "publish", "service" -# Multiple selectors can be chained. -#agent.logging.selectors: [ ] - -# Send all logging output to stderr. The default is false. -agent.logging.to_stderr: true - -# Send all logging output to syslog. The default is false. -#agent.logging.to_syslog: false - -# Send all logging output to Windows Event Logs. The default is false. -#agent.logging.to_eventlog: false - -# If enabled, Elastic-Agent periodically logs its internal metrics that have changed -# in the last period. For each metric that changed, the delta from the value at -# the beginning of the period is logged. Also, the total values for -# all non-zero internal metrics are logged on shutdown. This setting is also passed -# to beats running under the agent. The default is true. -#agent.logging.metrics.enabled: true - -# The period after which to log the internal metrics. The default is 30s. -#agent.logging.metrics.period: 30s - -# Logging to rotating files. Set logging.to_files to false to disable logging to -# files.
-#agent.logging.to_files: true -#agent.logging.files: - # Configure the path where the logs are written. The default is the logs directory - # under the home path (the binary location). - #path: /var/log/elastic-agent - - # The name of the files where the logs are written to. - #name: elastic-agent - - # Configure the log file size limit. If the limit is reached, the log file will be - # automatically rotated. - #rotateeverybytes: 20971520 # = 20MB - - # Number of rotated log files to keep. The oldest files will be deleted first. - #keepfiles: 7 - - # The permissions mask to apply when rotating log files. The default value is 0600. - # Must be a valid Unix-style file permissions mask expressed in octal notation. - #permissions: 0600 - - # Enable log file rotation on time intervals in addition to size-based rotation. - # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h - # are boundary-aligned with minutes, hours, days, weeks, months, and years as - # reported by the local system clock. All other intervals are calculated from the - # Unix epoch. Defaults to disabled. - #interval: 0 - - # Rotate existing logs on startup rather than appending to the existing - # file. Defaults to true. - # rotateonstartup: true - -# Set to true to log messages in JSON format. -#agent.logging.json: false - -#=============================== Events Logging =============================== -# Some outputs log raw events on errors, such as indexing errors in the -# Elasticsearch output. To prevent logging raw events (which may contain -# sensitive information) together with other log messages, a separate -# log file is used only for log entries containing raw events. It uses -# the same level, selectors, and all other configuration from the -# default logger, but it has its own file configuration. -# -# Having a different log file for raw events also prevents event data -# from drowning out the regular log files.
-# -# IMPORTANT: No matter the default logger output configuration, raw events -# will **always** be logged to a file configured by `agent.logging.event_data.files`. - -# agent.logging.event_data: -# Logging to rotating files. Set agent.logging.event_data.to_files to false to disable logging to -# files. -#agent.logging.event_data.to_files: true -#agent.logging.event_data: - # Configure the path where the logs are written. The default is the logs directory - # under the home path (the binary location). - #path: /var/log/filebeat - - # The name of the files where the logs are written to. - #name: filebeat-event-data - - # Configure the log file size limit. If the limit is reached, the log file will be - # automatically rotated. - #rotateeverybytes: 5242880 # = 5MB - - # Number of rotated log files to keep. The oldest files will be deleted first. - #keepfiles: 2 - - # The permissions mask to apply when rotating log files. The default value is 0600. - # Must be a valid Unix-style file permissions mask expressed in octal notation. - #permissions: 0600 - - # Enable log file rotation on time intervals in addition to the size-based rotation. - # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h - # are boundary-aligned with minutes, hours, days, weeks, months, and years as - # reported by the local system clock. All other intervals are calculated from the - # Unix epoch. Defaults to disabled. - #interval: 0 - - # Rotate existing logs on startup rather than appending them to the existing - # file. Defaults to false. - # rotateonstartup: false - -# Providers - -# Providers supply the key/value pairs that are used for variable substitution -# and conditionals. Each provider's keys are automatically prefixed with the name -# of the provider. - -# All registered providers are enabled by default. - -# Disable all providers by default and only enable explicitly configured providers.
-# agent.providers.initial_default: false - -#providers: - -# Agent provides information about the running agent. -# agent: -# enabled: true - -# Docker provides inventory information from Docker. -# docker: -# enabled: true -# host: "unix:///var/run/docker.sock" -# cleanup_timeout: 60 - -# Env provides information about the running environment. -# env: -# enabled: true - -# Host provides information about the current host. -# host: -# enabled: true - -# Local provides custom keys to use as variables. -# local: -# enabled: true -# vars: -# foo: bar - -# Local dynamic allows you to define multiple sets of key/value pairs to generate multiple configurations. -# local_dynamic: -# enabled: true -# items: -# - vars: -# my_var: key1 -# - vars: -# my_var: key2 -# - vars: -# my_var: key3 diff --git a/docs/en/ingest-management/elastic-agent/configuring-kubernetes-metadata.asciidoc b/docs/en/ingest-management/elastic-agent/configuring-kubernetes-metadata.asciidoc deleted file mode 100644 index 8aeccdf85..000000000 --- a/docs/en/ingest-management/elastic-agent/configuring-kubernetes-metadata.asciidoc +++ /dev/null @@ -1,119 +0,0 @@ -[[configuring-kubernetes-metadata]] -= Configuring Kubernetes metadata enrichment on {agent} - -Kubernetes https://www.elastic.co/guide/en/observability/current/monitor-kubernetes.html#beats-metadata[metadata] refers to contextual information extracted from Kubernetes resources. This metadata enriches the metrics and logs -collected from a Kubernetes cluster, enabling deeper insights into Kubernetes environments.
- -When the {agent}'s policy includes the https://docs.elastic.co/en/integrations/kubernetes[{k8s} Integration], which configures the collection of Kubernetes-related metrics and container logs, the mechanisms used for the metadata enrichment are: - -* <> for log collection -* Kubernetes metadata enrichers for metrics - -If the {agent}'s policy does not include the Kubernetes integration but {agent} runs inside a Kubernetes -environment, Kubernetes metadata is collected by the <>. The processor is configurable when {agent} is managed by {fleet}. - - -[discrete] -== Kubernetes Logs - -For container log collection, the <> is used. It monitors pod resources -in the cluster and associates each container log file with a corresponding pod's container object. -That way, when a log file is parsed and an event is ready to be published to {es}, the internal mechanism knows which -container the log file belongs to. The linkage is established by the container's ID, which forms an integral part of the log's filename. -The Kubernetes autodiscover provider has already collected all the metadata for that container, leveraging pod, namespace, and node watchers, so the events are enriched with the relevant metadata. - -To configure metadata collection, configure the Kubernetes provider. -All the available configuration options of the **Kubernetes provider** can be found in the https://www.elastic.co/guide/en/fleet/current/kubernetes-provider.html[Kubernetes Provider] documentation.
- -* For **Standalone {agent} configuration:** - -Refer to the `add_resource_metadata` parameter described in <> - -[source,yaml] -.Example of how to configure Kubernetes metadata enrichment ------------------------------------------------- -apiVersion: v1 -kind: ConfigMap -metadata: - name: agent-node-datastreams - namespace: kube-system - labels: - k8s-app: elastic-agent -data: - agent.yml: |- - providers.kubernetes: - add_resource_metadata: - namespace: - #use_regex_include: false - include_labels: ["namespacelabel1"] - #use_regex_exclude: false - #exclude_labels: ["namespacelabel2"] - node: - #use_regex_include: false - include_labels: ["nodelabel2"] - include_annotations: ["nodeannotation1"] - #use_regex_exclude: false - #exclude_labels: ["nodelabel3"] - #deployment: false - #cronjob: false ------------------------------------------------- - -* For **Managed {agent} configuration:** - -The Kubernetes provider can be configured by following the steps in <>. - -[discrete] -== Kubernetes metrics - -The {agent} metrics collection implements metadata enrichment based on watchers, a mechanism used to continuously monitor Kubernetes resources for changes and updates. -Specifically, the different datasets share a set of resource watchers. Those watchers (pod, node, namespace, deployment, daemonset, and so on) are responsible for watching for all resource events (creation, update, and deletion) by subscribing to the Kubernetes watch API. This enables real-time synchronization of application state with the state of the Kubernetes cluster. -They keep an up-to-date shared cache of all the resources' information and metadata. Whenever metrics are collected from the different sources (kubelet, kube-state-metrics), they are enriched with the needed metadata before they are published to {es} as events. - -The metadata enrichment can be configured by editing the Kubernetes integration.
-**Only in metrics collection**, metadata enrichment can be disabled by switching off the `Add Metadata` toggle in every dataset. Extra resource metadata, such as -node and namespace labels and annotations, as well as deployment and cronjob information, can be configured per dataset. - -- **Managed {agent} configuration**: - -image::images/kubernetes_metadata.png[metadata configuration] - -NOTE: The `add_resource_metadata` block needs to be configured in all datasets that are enabled. - - -- For **Standalone {agent} configuration**: - -[source,yaml] -.Elastic Agent Standalone manifest sample ------------------------------------------------- -[output truncated ...] -- data_stream: - dataset: kubernetes.state_pod - type: metrics - metricsets: - - state_pod - add_metadata: true - hosts: - - 'kube-state-metrics:8080' - period: 10s - add_resource_metadata: - namespace: - enabled: true - #use_regex_include: false - include_labels: ["namespacelabel1"] - #use_regex_exclude: false - #exclude_labels: ["namespacelabel2"] - node: - enabled: true - #use_regex_include: false - include_labels: ["nodelabel2"] - include_annotations: ["nodeannotation1"] - #use_regex_exclude: false - #exclude_labels: ["nodelabel3"] - #deployment: false - #cronjob: false ------------------------------------------------- -The `add_resource_metadata` block configures the watcher's enrichment functionality. See <> for a full description of `add_resource_metadata`. The same configuration parameters apply. - -[discrete] -== Note -Although the `add_kubernetes_metadata` processor is enabled by default when using {agent}, it is skipped whenever the Kubernetes integration is detected.
\ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/debug-standalone-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/debug-standalone-elastic-agent.asciidoc deleted file mode 100644 index 4882ddbe1..000000000 --- a/docs/en/ingest-management/elastic-agent/debug-standalone-elastic-agent.asciidoc +++ /dev/null @@ -1,220 +0,0 @@ -[[debug-standalone-agents]] -= Debug standalone {agent}s - -When you run standalone {agent}s, you are responsible for monitoring the status -of your deployed {agent}s. You cannot view the status or logs in {fleet}. - -Use the following tips to help identify potential issues. - -Also refer to <> for guidance on specific problems. - -NOTE: You might need to log in as a root user (or Administrator on Windows) to -run these commands. - -[discrete] -== Check the status of the running {agent} - -To check the status of the running {agent} daemon and other processes managed by -{agent}, run the `status` command. For example: - -[source,shell] ----- -elastic-agent status ----- - -Returns something like: - -[source,yaml] ----- -State: HEALTHY -Message: Running -Fleet State: STOPPED -Fleet Message: (no message) -Components: - * log (HEALTHY) - Healthy: communicating with pid '25423' - * filestream (HEALTHY) - Healthy: communicating with pid '25424' ----- - -By default, this command returns the status in human-readable format. Use the -`--output` flag to change it to `json` or `yaml`. - -For more information about this command, refer to -<>. - -[discrete] -[[inspect-standalone-agent-logs]] -== Inspect {agent} and related logs - -If the {agent} status is unhealthy, or behaving unexpectedly, inspect the logs -of the running {agent}. - -The log location varies by platform. {agent} logs are in the folders described -in <>. {beats} and {fleet-server} logs are in folders named -for the output (for example, `default`). - -Start by investigating any errors you see in the {agent} and related logs. 
Also -look for repeated lines that might indicate problems like connection issues. If -the {agent} and related logs look clean, check the host operating system logs -for out-of-memory (OOM) errors related to the {agent} or any of its processes. - -[discrete] -[[increase-log-level]] -== Increase the log level of the running {agent} - -The log level of the running agent is set to `info` by default. At this level, -{agent} will log informational messages, including the number of events that are -published. It also logs any warnings, errors, or critical errors. - -To increase the log level, set it to `debug` in the `elastic-agent.yml` file. - -The `debug` setting configures {agent} to log debug messages, including a -detailed printout of all flushed events, plus all the information collected at -other log levels. - -Set other options if you want to write logs to a file. For example: - -[source,yaml] ----- -agent.logging.level: debug -agent.logging.to_files: true -agent.logging.files: - path: /var/log/elastic-agent - name: elastic-agent - keepfiles: 7 - permissions: 0600 ----- - -For other log settings, refer to <>. - -[discrete] -[[expose-debug-endpoint]] -// lint ignore pprof -== Expose /debug/pprof/ endpoints with the monitoring endpoint - -Profiling data produced by the `/debug/pprof/` endpoints can be useful for -debugging, but presents a security risk. Do not expose these endpoints if the -monitoring endpoint is accessible over a network. (By default, the monitoring -endpoint is bound to a local Unix socket or Windows npipe and not accessible -over a network.) - -To expose the `/debug/pprof/` endpoints, set `agent.monitoring.pprof: true` in -the `elastic-agent.yml` file. For more information about monitoring settings, -refer to <>. - -After exposing the endpoints, you can access the HTTP handler bound to a socket -for {beats} or the {agent}.
For example: - -[source,shell] ----- -sudo curl --unix-socket /Library/Elastic/Agent/data/tmp/default/filebeat/filebeat.sock http://socket/ | json_pp ----- - -Returns something like: - -[source,json] ----- -{ - "beat" : "filebeat", - "binary_arch" : "amd64", - "build_commit" : "93708bd74e909e57ed5d9bea3cf2065f4cc43af3", - "build_time" : "2022-01-28T09:53:29.000Z", - "elastic_licensed" : true, - "ephemeral_id" : "421e2525-9360-41db-9395-b9e627fbbe6e", - "gid" : "0", - "hostname" : "My-MacBook-Pro.local", - "name" : "My-MacBook-Pro.local", - "uid" : "0", - "username" : "root", - "uuid" : "fc0cc98b-b6d8-4eef-abf5-2d5f26adc7e8", - "version" : "7.17.0" -} ----- - -Likewise, the following request: - -[source,shell] ----- -sudo curl --unix-socket /Library/Elastic/Agent/data/tmp/elastic-agent.sock http://socket/stats | json_pp ----- - -Returns something like: - -[source,shell] ----- -{ - "beat" : { - "cpu" : { - "system" : { - "ticks" : 16272, - "time" : { - "ms" : 16273 - } - }, - "total" : { - "ticks" : 42981, - "time" : { - "ms" : 42982 - }, - "value" : 42981 - }, - "user" : { - "ticks" : 26709, - "time" : { - "ms" : 26709 - } - } - }, - "info" : { - "ephemeral_id" : "ea8fec0d-f7dd-4577-85d7-a2c38583c9c6", - "uptime" : { - "ms" : 5885653 - }, - "version" : "7.17.0" - }, - "memstats" : { - "gc_next" : 13027776, - "memory_alloc" : 7771632, - "memory_sys" : 39666696, - "memory_total" : 757970208, - "rss" : 58990592 - }, - "runtime" : { - "goroutines" : 101 - } - }, - "system" : { - "cpu" : { - "cores" : 12 - }, - "load" : { - "1" : 4.8892, - "15" : 2.6748, - "5" : 3.0537, - "norm" : { - "1" : 0.4074, - "15" : 0.2229, - "5" : 0.2545 - } - } - } -} ----- - -[discrete] -[[inspect-configuration]] -== Inspect the {agent} configuration - -To inspect the running {agent} configuration use the <> command. - -To analyze the current state of the agent, inspect log files, and see the configuration -of {agent} and the sub-processes it starts, run the `diagnostics` command. 
For example: - -[source,shell] ---- -elastic-agent diagnostics ---- - -For more information about this command, refer to -<>. diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-capabilities.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-capabilities.asciidoc deleted file mode 100644 index 206a78965..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-capabilities.asciidoc +++ /dev/null @@ -1,170 +0,0 @@ -[discrete] -[[capabilities]] -== Capabilities - -Capabilities are a set of rules that define the behavior of {agent} on specific machines. -When capabilities are specified, the input configuration is filtered: any parts of the configuration that the rules do not allow are dropped. -When {agent} starts, it locates the `capabilities.yml` file next to its configuration file. Note that {agent} does not reload the definition after it starts. - -This example definition enables the system/metrics input and denies every other input. Rules are applied in the order they are defined, with the first matching rule being applied. The * wildcard is also supported. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - input: system/metrics -- rule: deny - input: "*" ---- - - -Let's use the following configuration and the capabilities defined above. - -[source,yaml] ---- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: ["{a.error,b.access}"] - - id: unique-system-logs-id - type: system/logs - streams: - - paths: ["{a,b}"] - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: cpu - - metricset: memory ---- - -The resulting configuration is:
- -[source,yaml] ---- -inputs: - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: cpu - - metricset: memory ---- - - - -In this example, the stream is applied if the host platform is not Windows: - -[source,yaml] ---- -inputs: - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: load - data_stream.dataset: system.cpu - condition: ${host.platform} != 'windows' ---- - -In this example, the processor is applied if the host platform is not Windows: - -[source,yaml] ---- -inputs: - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: load - data_stream.dataset: system.cpu - processors: - - add_fields: - fields: - platform: ${host.platform} - to: host - condition: ${host.platform} != 'windows' ---- - -[discrete] -[[supported-capabilities]] -=== Supported capabilities - -You can use capabilities to restrict the inputs, outputs, and upgrades of {agent}. - -[discrete] -[[capabilities-inputs]] -==== Inputs - -For inputs, you can use the `input` key containing an expression. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - input: system/* ---- - -[discrete] -[[capabilities-outputs]] -==== Outputs - -For outputs, you can use the `output` key containing an expression. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: deny - output: kafka ---- - -[discrete] -[[capabilities-upgrade]] -==== Upgrade - -For an upgrade, you can use the `upgrade` key containing an expression. This example allows a specific upgrade to version `8.0.0`. `${version}` is the target version. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - upgrade: "${version} == '8.0.0'" ---- - -The version expression supports EQL conditions as well as the `*` wildcard. This allows, for example, restricting upgrades to bug-fix releases.
- -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - upgrade: "${version} == '8.0.*'" ---- - -**Upgrade capability definition** - -This upgrade definition allows upgrades just for minor version changes. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - upgrade: "${version} == '8.*.*'" ---- - -The upgrade action also includes a `source_uri` address that specifies the repository from which artifacts should be -downloaded. - -To restrict downloads to HTTPS, you can use the following. - -[source,yaml] ---- -version: 0.0.1 -capabilities: -- rule: allow - upgrade: "startsWith(${source_uri}, 'https')" ---- - -The upgrade capability is blocking: if an action does not meet the first upgrade definition, evaluation does not proceed. diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-conditions.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-conditions.asciidoc deleted file mode 100644 index cf6dbc832..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-conditions.asciidoc +++ /dev/null @@ -1,108 +0,0 @@ -[discrete] -[[conditions]] -= Conditions - -A condition is a boolean expression that you can specify in your agent policy -to control whether a configuration is applied to the running {agent}. You can -set a condition on inputs, streams, or even processors.
- -In this example, the input is applied if the host platform is Linux: - -[source,yaml] ----- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: - - /var/log/syslog - condition: ${host.platform} == 'linux' ----- - -In this example, the stream is applied if the host platform is not Windows: - -[source,yaml] ----- -inputs: - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: load - data_stream.dataset: system.cpu - condition: ${host.platform} != 'windows' ----- - -In this example, the processor is applied if the host platform is not Windows: - -[source,yaml] ----- -inputs: - - id: unique-system-metrics-id - type: system/metrics - streams: - - metricset: load - data_stream.dataset: system.cpu - processors: - - add_fields: - fields: - platform: ${host.platform} - to: host - condition: ${host.platform} != 'windows' ----- - -[discrete] -[[condition-syntax]] -== Condition syntax - -The conditions supported by {agent} are based on {ref}/eql-syntax.html[EQL]'s -boolean syntax, but add support for variables from providers and functions to -manipulate the values. - -**Supported operators:** - -* Full PEMDAS math support for `+ - * / %`. -* Relational operators `< <= >= > == !=` -* Logical operators `and` and `or` - - -**Functions:** - -// lint ignore startswith-function indexof-function endswith-function concat-function -* Array functions <> -* Dict functions <> (not in EQL) -* Length functions <> -* Math functions <>, <>, -<>, <>, <> -* String functions <>, <>, -<>, <>, <>, -<>, <>, -<> - -**Types:** - -* Booleans `true` and `false` - -[discrete] -[[condition-examples]] -== Condition examples - -Run only when a specific label is included. - -[source,eql] ----- -arrayContains(${docker.labels}, 'monitor') ----- - -Skip on Linux platform or macOS. - -[source,eql] ----- -${host.platform} != "linux" and ${host.platform} != "darwin" ----- - -Run only for specific labels. 
- -[source,eql] ----- -arrayContains(${docker.labels}, 'monitor') or arrayContains(${docker.label}, 'production') ----- diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-container.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-container.asciidoc deleted file mode 100644 index 8d7a45b66..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-container.asciidoc +++ /dev/null @@ -1,425 +0,0 @@ -[[elastic-agent-container]] -= Run {agent} in a container - -You can run {agent} inside a container -- either with {fleet-server} or standalone. Docker images for all versions of {agent} are available from the https://www.docker.elastic.co/r/elastic-agent/elastic-agent[Elastic Docker registry]. If you are running in Kubernetes, refer to {eck-ref}/k8s-elastic-agent.html[run {agent} on ECK]. - -Note that running {elastic-agent} in a container is supported only in Linux environments. For this reason we don't currently provide {agent} container images for Windows. - -Considerations: - -* When {agent} runs inside a container, it cannot be upgraded through {fleet} as it expects that the container itself is upgraded. -* Enrolling and running an {agent} is usually a two-step process. -However, this doesn't work in a container, so a special subcommand, `container`, is called. -This command allows environment variables to configure all properties, and runs the `enroll` and `run` commands as a single command. - -[discrete] -== What you need - -- https://docs.docker.com/get-docker/[Docker installed]. - -- {es} for storing and searching your data, and {kib} for visualizing and managing it. -+ --- -include::{observability-docs-root}/docs/en/shared/spin-up-the-stack/widget.asciidoc[] - --- - -[discrete] -== Step 1: Pull the image - -There are two images for {agent}, *elastic-agent* and *elastic-agent-complete*. 
The *elastic-agent* image contains all the binaries for running {beats}, while the *elastic-agent-complete* image contains these binaries plus additional dependencies to run browser monitors through Elastic Synthetics. Refer to {observability-guide}/uptime-set-up.html[Synthetic monitoring via {agent} and {fleet}] for more information. - -Run the `docker pull` command against the Elastic Docker registry: - -[source,terminal,subs="attributes"] ---- -docker pull docker.elastic.co/elastic-agent/elastic-agent:{version} ---- - -Alternatively, you can use the hardened link:https://wolfi.dev/[Wolfi] image. -Using Wolfi images requires Docker version 20.10.10 or later. -For details about why the Wolfi images have been introduced, refer to our article -link:https://www.elastic.co/blog/reducing-cves-in-elastic-container-images[Reducing CVEs in Elastic container images]. - - -[source,terminal,subs="attributes"] ---- -docker pull docker.elastic.co/elastic-agent/elastic-agent-wolfi:{version} ---- - -If you want to run Synthetics tests, run the `docker pull` command to fetch the *elastic-agent-complete* image: - -[source,terminal,subs="attributes"] ---- -docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{version} ---- - -To run Synthetics tests using the hardened link:https://wolfi.dev/[Wolfi] image, run: - -[source,terminal,subs="attributes"] ---- -docker pull docker.elastic.co/elastic-agent/elastic-agent-complete-wolfi:{version} ---- - -[discrete] -== Step 2 (Optional): Verify the image - -Although it's optional, we highly recommend verifying the signatures included with your downloaded Docker images to ensure that the images are valid. - -Elastic images are signed with https://docs.sigstore.dev/cosign/overview/[Cosign], which is part of the https://www.sigstore.dev/[Sigstore] project. Cosign supports container signing, verification, and storage in an OCI registry.
Install the appropriate https://docs.sigstore.dev/cosign/installation/[Cosign application] -for your operating system. - -Run the following commands to verify the *elastic-agent* container image signature for {agent} v{version}: - -["source","sh",subs="attributes"] -------------------------------------------- -wget https://artifacts.elastic.co/cosign.pub <1> -cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent:{version} <2> -------------------------------------------- -<1> Download the Elastic public key to verify the container signature -<2> Verify the container against the Elastic public key - -If you're using the *elastic-agent-complete* image, run the commands as follows: - -["source","sh",subs="attributes"] -------------------------------------------- -wget https://artifacts.elastic.co/cosign.pub <1> -cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent-complete:{version} <2> -------------------------------------------- - -The command prints the check results and the signature payload in JSON format, for example: - -["source","sh",subs="attributes"] -------------------------------------------- -Verification for docker.elastic.co/elastic-agent/elastic-agent-complete:{version} -- -The following checks were performed on each of these signatures: - - The cosign claims were validated - - Existence of the claims in the transparency log was verified offline - - The signatures were verified against the specified public key -------------------------------------------- - -[discrete] -== Step 3: Get familiar with the {agent} container command - - -The {agent} container command offers a wide variety of options.
- To see the full list, run: - -[source,terminal,subs="attributes"] ---- -docker run --rm docker.elastic.co/elastic-agent/elastic-agent:{version} elastic-agent container -h ---- - -[discrete] -== Step 4: Run the {agent} image - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/run-agent-image/widget.asciidoc[] - -If you need to run {fleet-server} as well, adjust the `docker run` command above by adding these environment variables: - -[source,yaml] ---- - --env FLEET_SERVER_ENABLE=true \ <1> - --env FLEET_SERVER_ELASTICSEARCH_HOST= \ <2> - --env FLEET_SERVER_SERVICE_TOKEN= <3> ---- -<1> Set to `true` to bootstrap {fleet-server} on this {agent}. This automatically forces {fleet} enrollment as well. -<2> The {es} host for {fleet-server} to communicate with, for example `http://elasticsearch:9200`. -<3> Service token to use for communication with {es} and {kib}. - -[TIP] -.Running {agent} on a read-only file system -==== -If you'd like to run {agent} in a Docker container on a read-only file -system, you can do so by specifying the `--read-only` option. -{agent} requires a stateful directory to store application data, so -with the `--read-only` option you also need to use the `--mount` option to -specify a path to where that data can be stored. - -For example: - -[source,terminal,subs="attributes"] ---- -docker run --rm --mount source=$(pwd)/state,destination=/state -e STATE_PATH=/state --read-only docker.elastic.co/elastic-agent/elastic-agent:{version} <1> ---- - -Where `STATE_PATH` is the path to a stateful directory to mount where {agent} application data can be stored. - -You can also add `type=tmpfs` to the mount parameter (`--mount type=tmpfs,destination=/state...`) to specify a temporary file storage location. This should be done with caution as it can cause data duplication, particularly for logs, when the container is restarted, as no state data is persisted.
- ==== - -[discrete] -== Step 5: View your data in {kib} - - -include::run-container-common/kibana-fleet-data.asciidoc[] - -[discrete] -== Docker Compose - -You can run {agent} with Docker Compose. -The example below shows how to enroll an {agent}: - -["source","yaml",subs="attributes"] ---- -version: "3" -services: - elastic-agent: - image: docker.elastic.co/elastic-agent/elastic-agent:{version} <1> - container_name: elastic-agent - restart: always - user: root # note, synthetic browser monitors require this set to `elastic-agent` - environment: - - FLEET_ENROLLMENT_TOKEN= - - FLEET_ENROLL=1 - - FLEET_URL= ---- -//NOTCONSOLE -<1> Switch `elastic-agent` to `elastic-agent-complete` if you intend to use the complete version. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to {observability-guide}/uptime-set-up.html[Synthetics {fleet} Quickstart] for more information. - -If you need to run {fleet-server} as well, adjust the docker-compose file above by adding these environment variables: - -[source,yaml] ---- - - FLEET_SERVER_ENABLE=true - - FLEET_SERVER_ELASTICSEARCH_HOST= - - FLEET_SERVER_SERVICE_TOKEN= ---- - -Refer to <<agent-environment-variables>> for all available options. - -[discrete] -== Logs - - -Since a container supports only a single version of {agent}, -logs and state are stored a bit differently than when running an {agent} outside of a container. The logs can be found under: `/usr/share/elastic-agent/state/data/logs/*`. - -It's important to note that only the logs from the {agent} process itself are logged to `stdout`. Subprocess logs are not. Each subprocess writes its own logs to the `default` directory inside the logs directory: - -[source,terminal] ---- -/usr/share/elastic-agent/state/data/logs/default/* ---- - -TIP: Running into errors with {fleet-server}? -Check the fleet-server subprocess logs for more information.
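If you script against this layout, the subprocess log files can be discovered with a simple glob. A minimal sketch in Python, run here against a throwaway copy of the directory tree; the file names are made up for illustration, since actual names vary by version:

```python
import tempfile
from pathlib import Path

# Recreate the container's log layout in a scratch directory for illustration;
# in a real container the root is /usr/share/elastic-agent.
root = Path(tempfile.mkdtemp())
default = root / "state" / "data" / "logs" / "default"
default.mkdir(parents=True)
(default / "fleet-server-20240101.ndjson").write_text('{"message":"started"}\n')
(default / "filebeat-20240101.ndjson").write_text('{"message":"ok"}\n')

# Subprocess logs live under .../logs/default/*; filter for fleet-server
# files when debugging Fleet Server enrollment issues.
fleet_logs = sorted(p.name for p in default.glob("fleet-server-*"))
print(fleet_logs)
```

The same one-liner works with `docker exec` and a shell glob against the real path inside the container.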
- -[discrete] -== Debugging - - -A monitoring endpoint can be enabled to expose resource usage and event processing data. The endpoint is compatible with {agent}s running in both {fleet} mode and Standalone mode. - -Enable the monitoring endpoint in `elastic-agent.yml` on the host where the {agent} is installed. -A sample configuration looks like this: - -[source,yaml] ----- -agent.monitoring: - enabled: true <1> - logs: true <2> - metrics: true <3> - http: - enabled: true <4> - host: localhost <5> - port: 6791 <6> ----- -<1> Enable monitoring of running processes. -<2> Enable log monitoring. -<3> Enable metrics monitoring. -<4> Expose {agent} metrics over HTTP. By default, sockets and named pipes are used. -<5> The hostname, IP address, Unix socket, or named pipe that the HTTP endpoint will bind to. When using IP addresses, we recommend only using `localhost`. -<6> The port that the HTTP endpoint will bind to. - -The above configuration exposes a monitoring endpoint at `http://localhost:6791/processes`. - -// Begin collapsed section -[%collapsible] -.`http://localhost:6791/processes` output -==== - -[source,json] ----- -{ - "processes":[ - { - "id":"metricbeat-default", - "pid":"36923", - "binary":"metricbeat", - "source":{ - "kind":"configured", - "outputs":[ - "default" - ] - } - }, - { - "id":"filebeat-default-monitoring", - "pid":"36924", - "binary":"filebeat", - "source":{ - "kind":"internal", - "outputs":[ - "default" - ] - } - }, - { - "id":"metricbeat-default-monitoring", - "pid":"36925", - "binary":"metricbeat", - "source":{ - "kind":"internal", - "outputs":[ - "default" - ] - } - } - ] -} ----- - -==== - -Each process ID in the `/processes` output can be accessed for more details. 
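The `/processes` payload is plain JSON, so it is easy to post-process. A minimal sketch (using the sample output above) that separates processes running your configured inputs from the agent's internal self-monitoring processes:

```python
import json

# Sample /processes payload from the monitoring endpoint above (abridged).
payload = json.loads("""
{
  "processes": [
    {"id": "metricbeat-default", "pid": "36923", "binary": "metricbeat",
     "source": {"kind": "configured", "outputs": ["default"]}},
    {"id": "filebeat-default-monitoring", "pid": "36924", "binary": "filebeat",
     "source": {"kind": "internal", "outputs": ["default"]}},
    {"id": "metricbeat-default-monitoring", "pid": "36925", "binary": "metricbeat",
     "source": {"kind": "internal", "outputs": ["default"]}}
  ]
}
""")

# "configured" processes run your inputs; "internal" ones are self-monitoring.
configured = [p["id"] for p in payload["processes"]
              if p["source"]["kind"] == "configured"]
print(configured)  # ['metricbeat-default']
```

Note also that in the per-process metrics, the `system.load.norm` values are simply the load averages divided by `system.cpu.cores` (for example, 2.5957 / 8 ≈ 0.3245), which makes them comparable across hosts.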
- -// Begin collapsed section -[%collapsible] -.`http://localhost:6791/processes/{process-name}` output -==== - -[source,json] ----- -{ - "beat":{ - "cpu":{ - "system":{ - "ticks":537, - "time":{ - "ms":537 - } - }, - "total":{ - "ticks":795, - "time":{ - "ms":796 - }, - "value":795 - }, - "user":{ - "ticks":258, - "time":{ - "ms":259 - } - } - }, - "info":{ - "ephemeral_id":"eb7e8025-7496-403f-9f9a-42b20439c737", - "uptime":{ - "ms":75332 - }, - "version":"7.14.0" - }, - "memstats":{ - "gc_next":23920624, - "memory_alloc":20046048, - "memory_sys":76104712, - "memory_total":60823368, - "rss":83165184 - }, - "runtime":{ - "goroutines":58 - } - }, - "libbeat":{ - "config":{ - "module":{ - "running":4, - "starts":4, - "stops":0 - }, - "reloads":1, - "scans":1 - }, - "output":{ - "events":{ - "acked":0, - "active":0, - "batches":0, - "dropped":0, - "duplicates":0, - "failed":0, - "toomany":0, - "total":0 - }, - "read":{ - "bytes":0, - "errors":0 - }, - "type":"elasticsearch", - "write":{ - "bytes":0, - "errors":0 - } - }, - "pipeline":{ - "clients":4, - "events":{ - "active":231, - "dropped":0, - "failed":0, - "filtered":0, - "published":231, - "retry":112, - "total":231 - }, - "queue":{ - "acked":0, - "max_events":4096 - } - } - }, - "metricbeat":{ - "system":{ - "cpu":{ - "events":8, - "failures":0, - "success":8 - }, - "filesystem":{ - "events":80, - "failures":0, - "success":80 - }, - "memory":{ - "events":8, - "failures":0, - "success":8 - }, - "network":{ - "events":135, - "failures":0, - "success":135 - } - } - }, - "system":{ - "cpu":{ - "cores":8 - }, - "load":{ - "1":2.5957, - "15":5.415, - "5":3.5815, - "norm":{ - "1":0.3245, - "15":0.6769, - "5":0.4477 - } - } - } -} ----- - -==== \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-debug-input-configs.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-debug-input-configs.asciidoc deleted file mode 100644 index f9211d506..000000000 --- 
a/docs/en/ingest-management/elastic-agent/elastic-agent-debug-input-configs.asciidoc +++ /dev/null @@ -1,130 +0,0 @@ -[discrete] -[[debug-configs]] -== Debugging - -To debug configurations that include variable substitution and conditions, use -the `inspect` command. This command shows the configuration that's generated -after variables are replaced and conditions are applied. - -First run the {agent}. For this example, we'll use the following agent policy: - - -[source,yaml] ----- -outputs: - default: - type: elasticsearch - hosts: [127.0.0.1:9200] - apikey: - -providers: - local_dynamic: - items: - - vars: - key: value1 - processors: - - add_fields: - fields: - custom: match1 - target: dynamic - - vars: - key: value2 - processors: - - add_fields: - fields: - custom: match2 - target: dynamic - - vars: - key: value3 - processors: - - add_fields: - fields: - custom: match3 - target: dynamic - -inputs: - - id: unique-logfile-id - type: logfile - enabled: true - streams: - - paths: - - /var/log/${local_dynamic.key} ----- - -Then run `elastic-agent inspect --variables` to see the generated configuration. 
For -example: - -// lint disable elasticsearch changeme -[source,shell] ----- -$ ./elastic-agent inspect --variables -inputs: -- enabled: true - id: unique-logfile-id-local_dynamic-0 - original_id: unique-logfile-id - processors: - - add_fields: - fields: - custom: match1 - target: dynamic - streams: - - paths: - - /var/log/value1 - type: logfile -- enabled: true - id: unique-logfile-id-local_dynamic-1 - original_id: unique-logfile-id - processors: - - add_fields: - fields: - custom: match2 - target: dynamic - streams: - - paths: - - /var/log/value2 - type: logfile -- enabled: true - id: unique-logfile-id-local_dynamic-2 - original_id: unique-logfile-id - processors: - - add_fields: - fields: - custom: match3 - target: dynamic - streams: - - paths: - - /var/log/value3 - type: logfile -outputs: - default: - apikey: - hosts: - - 127.0.0.1:9200 - type: elasticsearch -providers: - local_dynamic: - items: - - processors: - - add_fields: - fields: - custom: match1 - target: dynamic - vars: - key: value1 - - processors: - - add_fields: - fields: - custom: match2 - target: dynamic - vars: - key: value2 - - processors: - - add_fields: - fields: - custom: match3 - target: dynamic - vars: - key: value3 - ---- ----- diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-dynamic-inputs.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-dynamic-inputs.asciidoc deleted file mode 100644 index adcac2bb1..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-dynamic-inputs.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -[[dynamic-input-configuration]] -= Variables and conditions in input configurations - -When running {agent} in some environments, you might not know all the input -configuration details up front. To solve this problem, the input configuration -accepts variables and conditions that get evaluated at runtime using -information from the running environment. 
Similar to autodiscovery, these -capabilities allow you to apply configurations dynamically. - -Let's consider a unique agent policy that is deployed on two machines: a Linux -machine named "linux-app" and a Windows machine named "winapp". Notice that -the configuration has some variable references: `${host.name}` and -`${host.platform}`: - -[source,yaml] ---- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: /var/log/${host.name}/another.log - condition: ${host.platform} == "linux" - - path: c:/service/app.log - condition: ${host.platform} == "windows" ---- - -At runtime, {agent} resolves variables and evaluates the conditions based -on values provided by the environment, generating two possible input -configurations. - -On the Windows machine: - -[source,yaml] ---- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - path: c:/service/app.log ---- - -On the Linux machine: - -[source,yaml] ---- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: /var/log/linux-app/another.log ---- - -Using variable substitution along with conditions allows you to create concise -but flexible input configurations that adapt to their deployed environment. - -include::elastic-agent-variable-substitution.asciidoc[] - -include::elastic-agent-conditions.asciidoc[] - -include::elastic-agent-functions.asciidoc[] - -include::elastic-agent-debug-input-configs.asciidoc[] diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-encryption.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-encryption.asciidoc deleted file mode 100644 index 71de18f99..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-encryption.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[elastic-agent-encryption]] -= {agent} configuration encryption - -It is important to understand the {agent} security model and how it handles sensitive values in integration configurations.
- At a high level, {agent} receives configuration data from {fleet-server} over an encrypted connection and persists the encrypted configuration on disk. -This persistence allows agents to continue to operate even if they are unable to connect to the {fleet-server}. - -The entire Fleet Agent Policy is encrypted at rest, but is recoverable if you have access to both the encrypted configuration data and the associated key. -The key material is stored in an OS-dependent manner as described in the following sections. - -[discrete] -== Darwin (macOS) - -Key material is stored in the system keychain. The value is stored as is, without any additional transformations. - -[discrete] -== Windows - -Configuration data is encrypted with https://learn.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection[DPAPI] `CryptProtectData` with `CRYPTPROTECT_LOCAL_MACHINE`. -Additional entropy is derived from crypto/rand bytes stored in the `.seed` file. -Configuration data is stored as separate files, where the name of the file is a SHA256 hash of the key, and the content of the file is encrypted with DPAPI. -The security of key data relies on file system permissions. Only the Administrator should be able to access the file. - -[discrete] -== Linux - -The encryption key is derived from crypto/rand bytes stored in the `.seed` file after PBKDF2 transformation. -Configuration data is stored as separate files, where the name of the file is a SHA256 hash of the key, and the content of the file is AES256-GCM encrypted. -The security of the key material largely relies on file system permissions.
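The Linux scheme described above can be sketched with standard-library primitives. This is only an illustration of the general shape (PBKDF2 key derivation from seed bytes, SHA-256-derived file names), not the agent's actual implementation: the salt, iteration count, and the `"fleet-policy"` lookup key below are all made up:

```python
import hashlib
import os

# Illustrative only: real salt handling, iteration counts, and key names
# are internal to Elastic Agent.
seed = os.urandom(32)   # stands in for the random bytes in the .seed file
salt = b"example-salt"  # hypothetical salt
key = hashlib.pbkdf2_hmac("sha256", seed, salt, iterations=210_000, dklen=32)

# Each config file's name is a SHA-256 hash of a lookup key (hypothetical
# key name shown); the file content would then be AES-256-GCM encrypted
# with the derived key (AES-GCM itself is outside the standard library).
filename = hashlib.sha256(b"fleet-policy").hexdigest()
print(len(key), len(filename))  # 32-byte AES-256 key, 64 hex characters
```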
\ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-functions.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-functions.asciidoc deleted file mode 100644 index c8ba65c90..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-functions.asciidoc +++ /dev/null @@ -1,237 +0,0 @@ -[discrete] -[[condition-function-reference]] -== Function reference - - -The condition syntax supports the following functions. - -[discrete] -[[add-function]] -=== `add` -`add(Number, Number) Number` - -Usage: - -[source,eql] ---- -add(1, 2) == 3 -add(5, ${foo}) >= 5 ---- - -[discrete] -[[arrayContains-function]] -=== `arrayContains` - -`arrayContains(Array, String) Boolean` - -Usage: - -[source,eql] ---- -arrayContains(${docker.labels}, 'monitor') ---- - -[discrete] -[[concat-function]] -=== `concat` - -`concat(String, String) String` - -NOTE: Parameters are coerced into strings before the concatenation. - -Usage: - -[source,eql] ---- -concat("foo", "bar") == "foobar" -concat(${var1}, ${var2}) != "foobar" ---- - -[discrete] -[[divide-function]] -=== `divide` - -`divide(Number, Number) Number` - -Usage: - -[source,eql] ---- -divide(25, 5) > 0 -divide(${var1}, ${var2}) > 7 ---- - -[discrete] -[[endsWith-function]] -=== `endsWith` - -`endsWith(String, String) Boolean` - - -Usage: - -[source,eql] ---- -endsWith("hello world", "world") == true -endsWith(${var1}, "hello") != true ---- - -[discrete] -[[hasKey-function]] -=== `hasKey` - -`hasKey(Dictionary, String) Boolean` - -Usage: - -[source,eql] ---- -hasKey(${host}, "platform") ---- - -[discrete] -[[indexOf-function]] -=== `indexOf` - -`indexOf(String, String, Number?) Number` - -NOTE: Returns -1 if the string is not found.
- -Usage: - -[source,eql] ---- -indexOf("hello", "llo") == 2 -indexOf(${var1}, "hello") >= 0 ---- - -[discrete] -[[length-function]] -=== `length` - -`length(Array|Dictionary|String) Number` - -Usage: - -[source,eql] ---- -length("foobar") > 2 -length(${docker.labels}) > 0 -length(${host}) > 2 ---- - -[discrete] -[[match-function]] -=== `match` - -`match(String, Regexp) Boolean` - -NOTE: `Regexp` supports Go's regular expression syntax. Conditions that use -regular expressions are more expensive to run. If speed is critical, consider -using `endsWith` or `startsWith`. - -Usage: - -[source,eql] ---- -match("hello world", "^hello") == true -match(${var1}, "world$") == true ---- - -[discrete] -[[modulo-function]] -=== `modulo` - -`modulo(Number, Number) Number` - -Usage: - -[source,eql] ---- -modulo(25, 5) == 0 -modulo(${var1}, ${var2}) == 0 ---- - -[discrete] -[[multiply-function]] -=== `multiply` - -`multiply(Number, Number) Number` - -Usage: - -[source,eql] ---- -multiply(5, 5) == 25 -multiply(${var1}, ${var2}) > 10 ---- - -[discrete] -[[number-function]] -=== `number` - -`number(String) Integer` - -Usage: - -[source,eql] ---- -number("42") == 42 -number(${var1}) == 42 ---- - -[discrete] -[[startsWith-function]] -=== `startsWith` - -`startsWith(String, String) Boolean` - -Usage: - -[source,eql] ---- -startsWith("hello world", "hello") == true -startsWith(${var1}, "hello") != true ---- - -[discrete] -[[string-function]] -=== `string` - -`string(Number) String` - -Usage: - -[source,eql] ---- -string(42) == "42" -string(${var1}) == "42" ---- - -[discrete] -[[stringContains-function]] -=== `stringContains` - -`stringContains(String, String) Boolean` - -Usage: - -[source,eql] ---- -stringContains("hello world", "hello") == true -stringContains(${var1}, "hello") != true ---- - -[discrete] -[[subtract-function]] -=== `subtract` - -`subtract(Number, Number) Number` - -Usage: - -[source,eql] ---- -subtract(5, 1) == 4 -subtract(${foo}, 2) != 2 ---- diff --git
a/docs/en/ingest-management/elastic-agent/elastic-agent-unprivileged-mode.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-unprivileged-mode.asciidoc deleted file mode 100644 index f4ebb351b..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-unprivileged-mode.asciidoc +++ /dev/null @@ -1,268 +0,0 @@ -[[elastic-agent-unprivileged]] -= Run {agent} without administrative privileges - -Beginning with {stack} version 8.15, {agent} is no longer required to be run by a user with superuser privileges. You can now run agents in an `unprivileged` mode that does not require `root` access on Linux or macOS, or `admin` access on Windows. Being able to run agents without full administrative privileges is often a requirement in organizations where this kind of access is tightly limited. - -In general, agents running without full administrative privileges will perform and behave exactly like those run by a superuser. There are certain integrations and data streams that are not available, however. If an integration requires root access, this is indicated on the integration page (refer to <<unprivileged-integrations>>). - -// Add mention of the System integration data streams. - -You can also <<unprivileged-change-mode,change the privilege mode>> of an {agent} after it has been installed. - -Refer to <<unprivileged-running>> and <<unprivileged-command-behaviors>> for the requirements and steps associated with running an agent without full `root` or `admin` superuser privileges. - -* <<unprivileged-running>> -* <<unprivileged-command-behaviors>> -* <<unprivileged-integrations>> -* <<unprivileged-view-mode>> -* <<unprivileged-change-mode>> -* <<unprivileged-preexisting-user>> - -[discrete] -[[unprivileged-running]] -== Run {agent} in `unprivileged` mode - -To run {agent} without administrative privileges you use exactly the same commands that you use for {agent} otherwise, with one exception. When you run the `elastic-agent install` command, add the `--unprivileged` flag.
For example: - -[source,shell] ---- -elastic-agent install \ - --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \ - --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \ - --unprivileged ---- - -[IMPORTANT] -==== -Note the following current restrictions for running {agent} in `unprivileged` mode: - -* On Linux systems, after {agent} has been installed with the `--unprivileged` flag, all {agent} commands can be run without being the root user. -** The `sudo` option is still required for the `elastic-agent install` command. -Only `root` can install new services. -The installed service will not run as the root user. -* Using `sudo` without specifying an alternate non-root user with `sudo -u` in a command may result in errors due to the agent not having the required privileges. -* Using `sudo -u elastic-agent-user` will run commands as the user running the {agent} service and will always work. -* For files that allow users in the `elastic-agent` group access, using an alternate user that has been added to that group will also work. -There are still some commands that are only accessible to the `elastic-agent-user` that runs the service. -** For example, `elastic-agent inspect` requires you to prefix the command with `sudo -u elastic-agent-user`. -+ -[source,shell] ---- -sudo -u elastic-agent-user elastic-agent inspect ---- -==== - -[discrete] -[[unprivileged-command-behaviors]] -== Agent and dashboard behaviors in unprivileged mode - -In addition to the restrictions noted above when {agent} is run in unprivileged mode, certain data streams are also not available. The following tables show, for different operating systems, the impact when the agent does not have full administrative privileges. In most cases the limitations can be mitigated by granting permissions for a user or group to the files indicated.
- -.macOS -[options="header"] -|=== -|Action |Behavior in unprivileged mode |Resolution - -|Run {agent} with the System integration -|Log file error: `Unexpected file opening error: Failed opening /var/log/system.log: open /var/log/system.log: permission denied`. -|Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix this error. - -|Run {agent} with the System integration -|On the `[Logs System] Syslog` dashboard, the `Syslog events by hostname`, `Syslog hostnames and processes` and `Syslog logs` visualizations are missing data. -|Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix the missing visualizations. - -|Run {agent} with the System integration -|On the `[Metrics System] Host overview` dashboard, only the processes run by the `elastic-agent-user` user are shown in the CPU and memory usage lists. -|To fix the missing processes in the visualization lists you can add the `elastic-agent-user` user to the system `admin` group. Note that while this mitigates the issue, it also grants `elastic-agent-user` more permissions than may be desired. - -|Run {agent} and access the {agent} dashboards -|On the `[Elastic Agent] Agents info` dashboard, visualizations including `Most Active Agents` and `Integrations per Agent` are missing data. -|To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the system `admin` group. Note that while this mitigates the issue it also grants `elastic-agent-user` more permissions than may be desired. - -|Run {agent} and access the {agent} dashboards -|On the `[Elastic Agent] Integrations` dashboard, visualizations including `Integration Errors Table`, `Events per integration` and `Integration Errors` are missing data. -|To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the system `admin` group.
Note that while this mitigates the issue it also grants `elastic-agent-user` more permissions than may be desired. - -|=== - -.Linux -[options="header"] -|=== -|Action |Behavior in unprivileged mode |Resolution - -|Run {agent} with the System integration -|Log file error: `[elastic_agent.filebeat][error] Harvester could not be started on new file: /var/log/auth.log.1, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/auth.log.1: open /var/log/auth.log.1: permission denied` -|To avoid the error you can add the `elastic-agent-user` user to the `adm` group. Note that while this mitigates the issue it also grants `elastic-agent-user` more permissions than may be desired. - -|Run {agent} with the System integration -|Log file error: `[elastic_agent.metricbeat][error] error getting filesystem usage for /run/user/1000/gvfs: error in Statfs syscall: permission denied` -|To avoid the error you can add the `elastic-agent-user` user to the `adm` group. Note that while this mitigates the issue it also grants `elastic-agent-user` more permissions than may be desired. - -|Run {agent} with the System integration -|On the `[Logs System] Syslog` dashboard, the `Syslog events by hostname`, `Syslog hostnames and processes` and `Syslog logs` visualizations are missing data. -|To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the `adm` group. Note that while this mitigates the issue it also grants `elastic-agent-user` more permissions than may be desired. - -|Run {agent} and access the {agent} dashboards -|On the `[Elastic Agent] Agents info` dashboard, visualizations including `Most Active Agents` and `Integrations per Agent` are missing data.
- -|Giving read permission to the `elastic-agent` group for the `/var/log/system.log` file will partially fix the visualizations, but errors may still occur because the `elastic-agent-user` does not have read access to files in the `/run/user/1000/` directory. -// It'd be nice if we can expand on this, even if just to say why that read access can't be given. - -|Run {agent} and access the {agent} dashboards -|On the `[Elastic Agent] Integrations` dashboard, visualizations including `Integration Errors Table`, `Events per integration` and `Integration Errors` are missing data. -|Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix the missing visualizations. - -|=== - -.Windows -[options="header"] -|=== -|Action |Behavior in unprivileged mode |Resolution - -|Run {agent} with the System integration -|Log file error: `failed to open Windows Event Log channel "Security": Access is denied` -|Add the `elastic-agent-user` user to the `Event Log Users` group to fix this error. - -|Run {agent} with the System integration -|Log file error: `cannot open new key in the registry in order to enable the performance counters: Access is denied` -|Update the permissions for the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PartMgr` registry key to fix this error. - -|Run {agent} with the System integration -|Most of the System and {agent} dashboard visualizations are missing all data. -|Add the `elastic-agent-user` user to the `Event Log Users` group and update the permissions for the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PartMgr` registry key to fix the missing visualizations. - -Note that the `elastic-agent-user` user may still not have access to all processes, so the lists in the `Top processes by CPU usage` and `Top processes by memory usage` visualizations may be incomplete. - -|Run {agent} with the System integration -|On the `[Metrics System] Host overview` dashboard, the `Disk usage` visualizations are missing data.
-|This occurs because direct access to the disk or a volume is restricted and not available to users without administrative privileges. Refer to link:https://learn.microsoft.com/en-us/windows/win32/secbp/running-with-special-privileges[Running with Special Privileges] in the Microsoft documentation for details. - -|=== - -[discrete] -[[unprivileged-integrations]] -== Using Elastic integrations - -// Add mention of the System integration data streams. - -Most Elastic integrations support running {agent} in unprivileged mode. For the exceptions, any integration that requires {agent} to have root privileges has the requirement indicated at the top of the integration page in {kib}: - -[role="screenshot"] -image::images/integration-root-requirement.png[Elastic Defend integration page showing root requirement] - -As well, a warning is displayed in {kib} if you try to add an integration that requires root privileges to an {agent} policy that has agents enrolled in unprivileged mode. - -[role="screenshot"] -image::images/unprivileged-agent-warning.png[Warning indicating that root privileged agent is required for an integration] - -Examples of integrations that require {agent} to have administrative privileges are: - -* link:https://docs.elastic.co/en/integrations/endpoint[{elastic-defend}] -* link:https://docs.elastic.co/integrations/auditd_manager[Auditd Manager] -* link:https://docs.elastic.co/integrations/fim[File Integrity Monitoring] -* link:https://docs.elastic.co/integrations/network_traffic[Network Packet Capture] -* link:https://docs.elastic.co/integrations/system_audit[System Audit] -* link:https://docs.elastic.co/integrations/profiler_agent[Universal Profiling Agent] - -[discrete] -[[unprivileged-view-mode]] -== Viewing an {agent} privilege mode - -The **Agent details** page shows you the privilege mode for any running {agent}. - -To view the status of an {agent}: - -. In {fleet}, open the **Agents** tab. -. 
Select an agent and click **View agent** in the actions menu. -. The **Agent details** tab shows whether the agent is running in `privileged` or `unprivileged` mode. -+ -[role="screenshot"] -image::images/agent-privilege-mode.png[Agent details tab showing the agent is running as non-root] - -In addition, for any {agent} policy you can view the number of agents that are currently running in privileged or unprivileged mode: - -. In {fleet}, open the **Agent policies** tab. - -. Click the agent policy to view the policy details. - -The number of agents enrolled with the policy is shown. Hover over the link to view the number of privileged and unprivileged agents. - -[role="screenshot"] -image::images/privileged-and-unprivileged-agents.png[Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents] - -If the {agent} policy includes integrations that require root privileges but some enrolled agents are running without root privileges, this is shown in the tooltip. - -[role="screenshot"] -image::images/root-integration-and-unprivileged-agents.png[Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents] - -[discrete] -[[unprivileged-change-mode]] -== Changing an {agent}'s privilege mode - -For any installed {agent} you can change the mode that it's running in by running the `privileged` or `unprivileged` subcommand. - -Change mode from privileged to unprivileged: - -[source,shell] ----- -sudo elastic-agent unprivileged ----- - -Note that changing to `unprivileged` mode is prevented if the agent is currently enrolled in a policy that includes an integration that requires administrative access, such as the {elastic-defend} integration.
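The guard described in the preceding note can be sketched as a simple policy check. The following is hypothetical Python pseudologic for illustration only, not the agent's actual implementation; the integration names are taken from the list of root-requiring integrations earlier on this page:

```python
# Hypothetical sketch of the guard described above: switching an enrolled
# agent to unprivileged mode is refused while its policy contains an
# integration that requires administrative access. Illustration only --
# NOT the agent's real implementation.

# Integrations documented on this page as requiring root privileges.
ROOT_ONLY_INTEGRATIONS = {
    "endpoint",          # Elastic Defend
    "auditd_manager",    # Auditd Manager
    "fim",               # File Integrity Monitoring
    "network_traffic",   # Network Packet Capture
    "system_audit",      # System Audit
    "profiler_agent",    # Universal Profiling Agent
}

def can_switch_to_unprivileged(policy_integrations):
    """Return (allowed, blockers), where blockers lists root-only integrations."""
    blockers = sorted(set(policy_integrations) & ROOT_ONLY_INTEGRATIONS)
    return (not blockers, blockers)

print(can_switch_to_unprivileged(["system", "nginx"]))     # (True, [])
print(can_switch_to_unprivileged(["system", "endpoint"]))  # (False, ['endpoint'])
```

In this sketch, any overlap between the policy's integrations and the root-only set blocks the switch, mirroring the rule described above.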
- -Change mode from unprivileged to privileged: - -[source,shell] ----- -sudo elastic-agent privileged ----- - -If an agent running in `unprivileged` mode doesn't have sufficient privileges to read a data source, you can also grant access by adding the `elastic-agent-user` user to the group that has read privileges for that data source. - -As background, when you run {agent} in `unprivileged` mode, one user and one group are created on the host. The same names are used for all operating systems: - -* `elastic-agent-user`: The user that is created and that the {agent} service runs as. -* `elastic-agent`: The group that is created. Any user in this group has access to control and communicate over the control protocol to the {agent} daemon. - -For example: - -. When you install {agent} with the `--unprivileged` setting, the `elastic-agent-user` user and the `elastic-agent` group are created automatically. -. If you then want your user `myuser` to be able to run an {agent} command such as `elastic-agent status`, add the `myuser` user to the `elastic-agent` group. -. Once `myuser` has been added to the group, the `elastic-agent status` command works. Before that, running the command as `myuser` results in a permission error that indicates a problem communicating with the control socket. - -[discrete] -[[unprivileged-preexisting-user]] -== Using `unprivileged` mode with a pre-existing user and group - -preview::[] - -In certain cases you may want to install {agent} in `unprivileged` mode, with the agent running as a pre-existing user or as part of a pre-existing group. -For example, on a Windows system you may have a service account in Active Directory and you'd like {agent} to run under that account.
- -To install {agent} in `unprivileged` mode as a specific user, add the `--user` and `--password` parameters to the install command: - -[source,shell] ----- -elastic-agent install --unprivileged --user="my.path\username" --password="mypassword" ----- - -To install {agent} in `unprivileged` mode as part of a specific group, add the `--group` and `--password` parameters to the install command: - -[source,shell] ----- -elastic-agent install --unprivileged --group="my.path\groupname" --password="mypassword" ----- - -Alternatively, if you have {agent} already installed with administrative privileges, you can change the agent to use `unprivileged` mode and to run as a specific user or in a specific group. -For example: - -[source,shell] ----- -elastic-agent unprivileged --user="my.path\username" --password="mypassword" ----- - -[source,shell] ----- -elastic-agent unprivileged --group="my.path\groupname" --password="mypassword" ----- - - diff --git a/docs/en/ingest-management/elastic-agent/elastic-agent-variable-substitution.asciidoc b/docs/en/ingest-management/elastic-agent/elastic-agent-variable-substitution.asciidoc deleted file mode 100644 index 0f0b6ee27..000000000 --- a/docs/en/ingest-management/elastic-agent/elastic-agent-variable-substitution.asciidoc +++ /dev/null @@ -1,139 +0,0 @@ -[discrete] -[[variable-substitution]] -= Variable substitution - -The syntax for variable substitution is `${var}`, where `var` is the name of a -variable defined by a provider. A _provider_ defines key/value pairs that are -used for variable substitution and conditions. - -{agent} supports a variety of providers, such as `host` and `local`, that -supply variables to {agent}. For example, earlier you saw `${host.name}` used to -resolve the path to the host's log file based on the `${host.platform}` value. Both of these values -were provided by the `host` provider. - -All providers are enabled by default when {agent} starts.
If a provider cannot -be configured, its variables are ignored. - -Refer to <> for more detail. - -The following agent policy uses a custom key named `foo` to resolve a value -defined by a local provider: - -[source,yaml] ----- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: /var/log/${foo}/another.log - -providers: - local: - vars: - foo: bar - ----- - -The policy generated by this configuration looks like this: - -[source,yaml] ----- -inputs: - - id: unique-logfile-id - type: logfile - streams: - - paths: /var/log/bar/another.log ----- - -When an input uses a variable substitution that is not present in the current -key/value mappings being evaluated, the input is removed from the result. - -For example, this agent policy uses an unknown key: - -[source,yaml] ----- -inputs: - - id: logfile-foo - type: logfile - path: "/var/log/foo" - - id: logfile-unknown - type: logfile - path: "${ unknown.key }" ----- - - -The policy generated by this configuration looks like this: - -[source,yaml] ----- -inputs: - - id: logfile-foo - type: logfile - path: "/var/log/foo" ----- - -[discrete] -= Alternative variables and constants - -Variable substitution can also define alternative variables or a constant. - -To define a constant, use either `'` or `"`. When a constant is reached during -variable evaluation, any remaining variables are ignored, so a constant should -be the last entry in the substitution. - -To define alternatives, use `|` followed by the next variable or constant. -The power comes from allowing the input to define the preference order of the -substitution by chaining multiple variables together. - -For example, the following agent policy chains together multiple variables to -set the log path based on information provided by the running container -environment.
The constant `/var/log/other` ends the chain and provides a fallback path that is -common to both providers: - -[source,yaml] ----- -inputs: - - id: logfile-foo - type: logfile - path: "/var/log/foo" - - id: logfile-container - type: logfile - path: "${docker.paths.log|kubernetes.container.paths.log|'/var/log/other'}" ----- - -[discrete] -= Escaping variables - -In some cases you need the literal text `${var}` in a value, rather than having it -substituted. To escape the variable, use a double `$$`. - -The double `$$` causes the variable to be left unevaluated, and the extra `$` is removed from the beginning. - -For example, the following agent policy uses an escaped variable so the literal value is used instead. - -[source,yaml] ----- -inputs: - - id: logfile-foo - type: logfile - path: "/var/log/foo" - processors: - - add_tags: - tags: [$${development}] - target: "environment" ----- - - -The policy generated by this configuration looks like this: - -[source,yaml] ----- -inputs: - - id: logfile-foo - type: logfile - path: "/var/log/foo" - processors: - - add_tags: - tags: [${development}] - target: "environment" ----- diff --git a/docs/en/ingest-management/elastic-agent/example-kubernetes-fleet-managed-agent-helm.asciidoc b/docs/en/ingest-management/elastic-agent/example-kubernetes-fleet-managed-agent-helm.asciidoc deleted file mode 100644 index f7f78b9b3..000000000 --- a/docs/en/ingest-management/elastic-agent/example-kubernetes-fleet-managed-agent-helm.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[[example-kubernetes-fleet-managed-agent-helm]] -= Example: Install {fleet}-managed {agent} on {k8s} using Helm - -preview::[] - -This example demonstrates how to install {fleet}-managed {agent} on a {k8s} system using a Helm chart, gather {k8s} metrics and send them to an {es} cluster in {ecloud}, and then view visualizations of those metrics in {kib}. - -For an overview of the {agent} Helm chart and its benefits, refer to <>.
- -This guide takes you through these steps: - -* <> -* <> -* <> - - -[discrete] -[[agent-fleet-managed-helm-example-prereqs]] -=== Prerequisites - -To get started, you need: - -* A local install of the link:https://helm.sh/[Helm] {k8s} package manager. -* An link:{ess-trial}[{ecloud}] hosted {es} cluster on version 8.16 or higher. -* An active {k8s} cluster. -* A local clone of the link:https://github.com/elastic/elastic-agent/tree/8.16[elastic/elastic-agent] GitHub repository. Make sure to use the `8.16` branch to ensure that {agent} has full compatibility with the Helm chart. - -[discrete] -[[agent-fleet-managed-helm-example-install-agent]] -=== Install {agent} - -. Open your {ecloud} deployment, and from the navigation menu select **Fleet**. -. From the **Agents** tab, select **Add agent**. -. In the **Add agent** UI, specify a policy name and select **Create policy**. Leave the **Collect system logs and metrics** option selected. -. Scroll down in the **Add agent** flyout to the **Install Elastic Agent on your host** section. -. Select the **Linux TAR** tab and copy the values for `url` and `enrollment-token`. You'll use these when you run the `helm install` command. -. Open a terminal shell and change into a directory in your local clone of the `elastic-agent` repo. -. Copy this command. -+ -[source,sh] ----- -helm install demo ./deploy/helm/elastic-agent \ ---set agent.fleet.enabled=true \ ---set agent.fleet.url= \ ---set agent.fleet.token= \ ---set agent.fleet.preset=perNode ----- -+ -Note that the command has these properties: - -* `helm install` runs the Helm CLI install tool. -* `demo` gives a name to the installed chart. You can choose any name. -* `./deploy/helm/elastic-agent` is a local path to the Helm chart to install (in time it's planned to have a public URL for the chart). -* `--set agent.fleet.enabled=true` enables {fleet}-managed {agent}. 
The CLI parameter overrides the default `false` value for `agent.fleet.enabled` in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file. -* `--set agent.fleet.url=` sets the address where {agent} will connect to {fleet} in your {ecloud} deployment, over port 443 (again, overriding the value set by default in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file). -* `--set agent.fleet.token=` sets the enrollment token that {agent} uses to authenticate with {fleet}. -* `--set agent.fleet.preset=perNode` enables {k8s} metrics on a per-node basis. You can alternatively collect cluster-wide metrics (`clusterWide`) or kube-state-metrics (`ksmSharded`). -+ --- -TIP: For a full list of all available YAML settings and descriptions, refer to the link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent[{agent} Helm Chart Readme]. --- -. Update the command to replace: -.. `` with the URL that you copied earlier. -.. `` with the enrollment token that you copied earlier. -+ -After your updates, the command should look something like this: -+ -[source,sh] ----- -helm install demo ./deploy/helm/elastic-agent \ ---set agent.fleet.enabled=true \ ---set agent.fleet.url=https://256575858845283fxxxxxxxd5265d2b4.fleet.us-central1.gcp.foundit.no:443 \ ---set agent.fleet.token=eSVvFDUvSUNPFldFdhhZNFwvS5xxxxxxxxxxxxFEWB1eFF1YedUQ1NWFXwr== \ ---set agent.fleet.preset=perNode ----- - -. Run the command. -+ -The command output should confirm that {agent} has been installed: -+ -[source,sh] ----- -... -Installed agent: - - perNode [daemonset - managed mode] -... ----- - -. Run the `kubectl get pods -n default` command to confirm that the {agent} pod is running: -+ -[source,sh] ----- -NAME READY STATUS RESTARTS AGE -agent-pernode-demo-86mst 1/1 Running 0 12s ----- - -. 
In the **Add agent** flyout, wait a minute or so for confirmation that {agent} has successfully enrolled with {fleet} and that data is flowing: -+ -[role="screenshot"] -image::images/helm-example-nodes-enrollment-confirmation.png[Screen capture of Add Agent UI showing that the agent has enrolled in Fleet] - -. In {fleet}, open the **Agents** tab and see that an **Agent-pernode-demo-#####** agent is running. - -. Select the agent to view its details. - -. On the **Agent details** tab, on the **Integrations** pane, expand `system-1` to confirm that logs and metrics are incoming. You can click either the `Logs` or `Metrics` link to view details. -+ -[role="screenshot"] -image::images/helm-example-nodes-logs-and-metrics.png[Screen capture of the Logs and Metrics view on the Integrations pane] - - -[discrete] -[[agent-fleet-managed-helm-example-install-integration]] -=== Install the Kubernetes integration - -Now that you've installed {agent} and data is flowing, you can set up the {k8s} integration. - -. In your {ecloud} deployment, from the {kib} menu open the **Integrations** page. -. Run a search for `Kubernetes` and then select the {k8s} integration card. -. On the {k8s} integration page, click **Add Kubernetes** to add the integration to your {agent} policy. -. Scroll to the bottom of the **Add Kubernetes integration** page. Under **Where to add this integration?** select the **Existing hosts** tab. On the **Agent policies** menu, select the agent policy that you created previously in the <> steps. -+ -You can leave all of the other integration settings at their default values. -. Click **Save and continue**. When prompted, select **Add Elastic Agent later** since you've already added it using Helm. -. On the {k8s} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Pods** dashboard. -+ -On the dashboard, you can view the status of your {k8s} pods, including metrics on memory usage, CPU usage, and network throughput.
-+ -[role="screenshot"] -image::images/helm-example-fleet-metrics-dashboard.png[Screen capture of the Metrics Kubernetes pods dashboard] - -You've successfully installed {agent} using Helm, and your {k8s} metrics data is available for viewing in {kib}. - -[discrete] -[[agent-fleet-managed-helm-example-tidy-up]] -=== Tidy up - -After you've run through this example, run the `helm uninstall` command to uninstall {agent}. - -[source,sh] ----- -helm uninstall demo ----- - -The uninstall should be confirmed as shown: - -[source,sh] ----- -release "demo" uninstalled ----- - -As a reminder, for full details about using the {agent} Helm chart refer to the link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent[{agent} Helm Chart Readme]. diff --git a/docs/en/ingest-management/elastic-agent/example-kubernetes-standalone-agent-helm.asciidoc b/docs/en/ingest-management/elastic-agent/example-kubernetes-standalone-agent-helm.asciidoc deleted file mode 100644 index 3dba3f63a..000000000 --- a/docs/en/ingest-management/elastic-agent/example-kubernetes-standalone-agent-helm.asciidoc +++ /dev/null @@ -1,290 +0,0 @@ -[[example-kubernetes-standalone-agent-helm]] -= Example: Install standalone {agent} on Kubernetes using Helm - -preview::[] - -This example demonstrates how to install standalone {agent} on a Kubernetes system using a Helm chart, gather Kubernetes metrics and send them to an {es} cluster in {ecloud}, and then view visualizations of those metrics in {kib}. - -For an overview of the {agent} Helm chart and its benefits, refer to <>. - -This guide takes you through these steps: - -* <> -* <> -* <> -* <> - -[discrete] -[[agent-standalone-helm-example-prereqs]] -=== Prerequisites - -To get started, you need: - -* A local install of the link:https://helm.sh/[Helm] {k8s} package manager. -* An link:{ess-trial}[{ecloud}] hosted {es} cluster on version 8.16 or higher. -* An <>. -* An active {k8s} cluster. 
-* A local clone of the link:https://github.com/elastic/elastic-agent/tree/8.16[elastic/elastic-agent] GitHub repository. Make sure to use the `8.16` branch to ensure that {agent} has full compatibility with the Helm chart. - -[discrete] -[[agent-standalone-helm-example-install]] -=== Install {agent} - -. Open your {ecloud} deployment, and from the navigation menu select **Manage this deployment**. -. In the **Applications** section, copy the {es} endpoint and make a note of the endpoint value. -. Open a terminal shell and change into a directory in your local clone of the `elastic-agent` repo. -. Copy this command. -+ -[source,sh] ----- -helm install demo ./deploy/helm/elastic-agent \ ---set kubernetes.enabled=true \ ---set outputs.default.type=ESPlainAuthAPI \ ---set outputs.default.url=:443 \ ---set outputs.default.api_key="API_KEY" ----- -+ -Note that the command has these properties: - -* `helm install` runs the Helm CLI install tool. -* `demo` gives a name to the installed chart. You can choose any name. -* `./deploy/helm/elastic-agent` is a local path to the Helm chart to install (in time it's planned to have a public URL for the chart). -* `--set kubernetes.enabled=true` enables the {k8s} integration. The CLI parameter overrides the default `false` value for `kubernetes.enabled` in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file. -* `--set outputs.default.type=ESPlainAuthAPI` sets the authentication method for the {es} output to require an API key (again, overriding the value set by default in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file). -* `--set outputs.default.url=:443` sets the address of your {ecloud} deployment, where {agent} will send its output over port 443. -* `--set outputs.default.api_key="API_KEY"` sets the API key that {agent} will use to authenticate with your {es} cluster. 
-+ --- -TIP: For a full list of all available YAML settings and descriptions, refer to the link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent[{agent} Helm Chart Readme]. --- -. Update the command to replace: -.. `` with the {es} endpoint value that you copied earlier. -.. `` with your API key name. -+ -After your updates, the command should look something like this: -+ -[source,sh] ----- -helm install demo ./deploy/helm/elastic-agent \ ---set kubernetes.enabled=true \ ---set outputs.default.type=ESPlainAuthAPI \ ---set outputs.default.url=https://demo.es.us-central1.gcp.foundit.no:443 \ ---set outputs.default.api_key="A6ecaHNTJUFFcJI6esf4:5HJPxxxxxxxPS4KwSBeVEs" ----- - -. Run the command. -+ -The command output should confirm that three {agents} have been installed as well as the {k8s} integration: -+ -[source,sh] ----- -... -Installed agents: - - clusterWide [deployment - standalone mode] - - ksmSharded [statefulset - standalone mode] - - perNode [daemonset - standalone mode] - -Installed integrations: - - kubernetes [built-in chart integration] -... ----- - -. Run the `kubectl get pods -n default` command to confirm that the {agent} pods are running: -+ -[source,sh] ----- -NAME READY STATUS RESTARTS AGE -agent-clusterwide-demo-77c65f6c7b-trdms 1/1 Running 0 5m18s -agent-ksmsharded-demo-0 2/2 Running 0 5m18s -agent-pernode-demo-c7d75 1/1 Running 0 5m18s ----- - -. In your {ecloud} deployment, from the {kib} menu open the **Integrations** page. -. Run a search for `Kubernetes` and then select the {k8s} integration card. -. On the {k8s} integration page, select **Install Kubernetes assets**. This installs the dashboards, {es} indexes, and other assets used to monitor your {k8s} cluster. -. On the {k8s} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Nodes** dashboard. 
- -+ -On the dashboard, you can view the status of your {k8s} nodes, including metrics on memory, CPU, and filesystem usage, network throughput, and more. -+ -[role="screenshot"] -image::images/helm-example-nodes-metrics-dashboard.png[Screen capture of the Metrics Kubernetes nodes dashboard] - -. On the {k8s} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Pods** dashboard. As with the nodes dashboard, on this dashboard you can view the status of your {k8s} pods, including various metrics on memory, CPU, and network throughput. -+ -[role="screenshot"] -image::images/helm-example-pods-metrics-dashboard.png[Screen capture of the Metrics Kubernetes pods dashboard] - -[discrete] -[[agent-standalone-helm-example-upgrade]] -=== Upgrade your {agent} configuration - -Now that {agent} is installed and is successfully collecting and sending data, let's try changing the agent configuration settings. - -In the previous install example, three {agents} were installed. One of these, `agent-ksmsharded-demo-0`, is installed to enable the link:https://github.com/kubernetes/kube-state-metrics[kube-state-metrics] service. Let's suppose that you don't need those metrics and would like to upgrade your configuration accordingly. - -. Copy the command that you used earlier to install {agent}: -+ -[source,sh] ----- -helm install demo ./deploy/helm/elastic-agent \ ---set kubernetes.enabled=true \ ---set outputs.default.type=ESPlainAuthAPI \ ---set outputs.default.url=:443 \ ---set outputs.default.api_key="API_KEY" ----- - -. Update the command as follows: -.. Change `install` to `upgrade`. -.. Add a parameter `--set kubernetes.state.enabled=false`. This will override the default `true` value for the setting `kubernetes.state` in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file.
- -+ -[source,sh] ----- -helm upgrade demo ./deploy/helm/elastic-agent \ ---set kubernetes.enabled=true \ ---set kubernetes.state.enabled=false \ ---set outputs.default.type=ESPlainAuthAPI \ ---set outputs.default.url=:443 \ ---set outputs.default.api_key="API_KEY" ----- - -. Run the command. -+ -The command output should confirm that now only two {agents} are installed together with the {k8s} integration: -+ -[source,sh] ----- -... -Installed agents: - - clusterWide [deployment - standalone mode] - - perNode [daemonset - standalone mode] - -Installed integrations: - - kubernetes [built-in chart integration] -... ----- - -You've upgraded your configuration to run only two {agents}, without the kube-state-metrics service. You can similarly upgrade your agent to change other settings defined in the {agent} link:https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml[values.yaml] file. - -[discrete] -[[agent-standalone-helm-example-change-mode]] -=== Change {agent}'s running mode - -By default {agent} runs under the `elastic` user account. For some use cases you may want to temporarily change an agent to run with higher privileges. - -. Run the `kubectl get pods -n default` command to view the running {agent} pods: -+ -[source,sh] ----- -NAME READY STATUS RESTARTS AGE -agent-clusterwide-demo-77c65f6c7b-trdms 1/1 Running 0 5m18s -agent-pernode-demo-c7d75 1/1 Running 0 5m18s ----- - -. Now, run the `kubectl exec` command to enter one of the running {agents}, substituting the correct pod name returned from the previous command. For example: -+ -[source,sh] ----- -kubectl exec -it pods/agent-pernode-demo-c7d75 -- bash ----- - -. From inside the pod, run the Linux `ps aux` command to view the running processes. -+ -[source,sh] ----- -ps aux ----- -+ -The results should be similar to the following: -+ -[source,sh] ----- -USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND -elastic+ 1 0.0 0.0 1936 416 ?
Ss 21:04 0:00 /usr/bin/tini -- /usr/local/bin/docker-entrypoint -c /etc/elastic-agent/agent.yml -e -elastic+ 10 0.2 1.3 2555252 132804 ? Sl 21:04 0:13 elastic-agent container -c /etc/elastic-agent/agent.yml -e -elastic+ 37 0.6 2.0 2330112 208468 ? Sl 21:04 0:37 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -elastic+ 38 0.2 1.7 2190072 177780 ? Sl 21:04 0:13 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se -elastic+ 56 0.1 1.7 2190136 175896 ? Sl 21:04 0:11 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -elastic+ 68 0.1 1.8 2190392 184140 ? Sl 21:04 0:12 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -elastic+ 78 0.7 2.0 2330496 204964 ? Sl 21:04 0:48 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se -elastic+ 535 0.0 0.0 3884 3012 pts/0 Ss 22:47 0:00 bash -elastic+ 543 0.0 0.0 5480 2360 pts/0 R+ 22:47 0:00 ps aux ----- - -. In the command output, note that {agent} is currently running as the `elastic` user: -+ -[source,sh] ----- -elastic+ 10 0.2 1.3 2555252 132804 ? Sl 21:04 0:13 elastic-agent container -c /etc/elastic-agent/agent.yml -e ----- - -. Run `exit` to leave the {agent} pod. - -. Run the `helm upgrade` command again, this time adding the parameter `--set agent.unprivileged=false` to override the default `true` value for that setting. -+ -[source,sh] ----- -helm upgrade demo ./deploy/helm/elastic-agent \ ---set kubernetes.enabled=true \ ---set kubernetes.state.enabled=false \ ---set outputs.default.type=ESPlainAuthAPI \ ---set outputs.default.url=:443 \ ---set outputs.default.api_key="API_KEY" \ ---set agent.unprivileged=false ----- - -. 
Run the `kubectl get pods -n default` command to view the running {agent} pods: -+ -[source,sh] ----- -NAME READY STATUS RESTARTS AGE -agent-clusterwide-demo-77c65f6c7b-trdms 1/1 Running 0 5m18s -agent-pernode-demo-s6s7z 1/1 Running 0 5m18s ----- - -. Re-run the `kubectl exec` command to enter one of the running {agents}, substituting the correct pod name. For example: -+ -[source,sh] ----- -kubectl exec -it pods/agent-pernode-demo-s6s7z -- bash ----- - -. From inside the pod, run the Linux `ps aux` command to view the running processes. -+ -[source,sh] ----- -USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND -root 1 0.0 0.0 1936 452 ? Ss 23:10 0:00 /usr/bin/tini -- /usr/local/bin/docker-entrypoint -c /etc/elastic-agent/agent.yml -e -root 9 0.9 1.3 2488368 135920 ? Sl 23:10 0:01 elastic-agent container -c /etc/elastic-agent/agent.yml -e -root 27 0.9 1.9 2255804 203128 ? Sl 23:10 0:01 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -root 44 0.3 1.8 2116148 187432 ? Sl 23:10 0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -root 64 0.3 1.8 2263868 188892 ? Sl 23:10 0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E -root 76 0.4 1.8 2190136 190972 ? Sl 23:10 0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se -root 100 1.2 2.0 2256316 207692 ? Sl 23:10 0:01 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se -root 142 0.0 0.0 3752 3068 pts/0 Ss 23:12 0:00 bash -root 149 0.0 0.0 5480 2376 pts/0 R+ 23:13 0:00 ps aux ----- - -. Run `exit` to leave the {agent} pod. - -You've upgraded the {agent} privileges to run as `root`. To change the settings back, you can re-run the `helm upgrade` command with `--set agent.unprivileged=true` to return to the default `unprivileged` mode. 
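The manual `ps aux` check used in the steps above can also be scripted. The following is an illustrative sketch only (it is not part of the Helm chart or {agent} tooling) that pulls the owning user of the main `elastic-agent` process out of `ps aux`-style output:

```python
# Illustrative helper (hypothetical, not Elastic tooling): extract the user
# that owns the main elastic-agent process from `ps aux`-style output, as
# checked manually in the steps above.

def agent_process_user(ps_output):
    """Return the owner of the elastic-agent process, or None if not found.

    `ps aux` prints USER in column 1 and COMMAND starting in column 11; it
    also truncates long user names (e.g. elastic-agent-user shows as
    `elastic+`), so the returned value may be truncated too.
    """
    for line in ps_output.splitlines():
        fields = line.split()
        if len(fields) >= 11 and fields[10] == "elastic-agent":
            return fields[0]
    return None

sample = (
    "USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n"
    "root 9 0.9 1.3 2488368 135920 ? Sl 23:10 0:01 "
    "elastic-agent container -c /etc/elastic-agent/agent.yml -e\n"
)
print(agent_process_user(sample))  # root
```

Feeding it the unprivileged output shown earlier would return the truncated `elastic+` user instead of `root`.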
- - -[discrete] -[[agent-standalone-helm-example-tidy-up]] -=== Tidy up - -After you've run through this example, run the `helm uninstall` command to uninstall {agent}. - -[source,sh] ----- -helm uninstall demo ----- - -The uninstall should be confirmed as shown: - -[source,sh] ----- -release "demo" uninstalled ----- - -As a reminder, for full details about using the {agent} Helm chart refer to the link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent[{agent} Helm Chart Readme]. diff --git a/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-ess.asciidoc b/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-ess.asciidoc deleted file mode 100644 index 0176598d2..000000000 --- a/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-ess.asciidoc +++ /dev/null @@ -1,280 +0,0 @@ -[[example-standalone-monitor-nginx]] -= Example: Use standalone {agent} with {ess} to monitor nginx - -This guide walks you through a simple monitoring scenario so you can learn the basics of setting up standalone {agent}, using it to work with {ess} and an Elastic integration. - -Following these steps, you'll deploy the {stack}, install a standalone {agent} on a host to monitor an nginx web server instance, and access visualizations based on the collected logs. - -. <>. -. <>. -. <> -. <>. -. <>. -. <>. -. <>. -. <>. -. <>. - -[discrete] -[[nginx-guide-prereqs-ess]] -=== Prerequisites - -To get started, you need: - -. An internet connection and an email address for your {ecloud} trial. -. A Linux host machine on which you'll install an nginx web server. The commands in this guide use an Ubuntu image but any Linux distribution should be fine. - -[discrete] -[[nginx-guide-install-nginx-ess]] -=== Step 1: Install nginx - -To start, we'll set up a basic link:https://docs.nginx.com/nginx/admin-guide/web-server/[nginx web server]. - -. 
Run the following command on an Ubuntu Linux host, or refer to the link:https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/[nginx install documentation] for the command appropriate to your operating system. -+ -[source,sh] ----- -sudo apt install nginx ----- -+ -. Open a web browser and visit your host machine's external URL, for example `http://192.168.64.17/`. You should see the nginx welcome message. -+ -[role="screenshot"] -image::images/guide-nginx-welcome.png["Browser window showing Welcome to nginx!"] - -[discrete] -[[nginx-guide-sign-up-ess]] -=== Step 2: Create an {ecloud} deployment - -NOTE: If you've already signed up for a trial deployment you can skip this step. - -Now that your web server is running, let's get set up to monitor it in {ecloud}. An {ecloud} {ess} deployment offers you all of the features of the {stack} as a hosted service. To test drive your first deployment, sign up for a free {ecloud} trial: - -. Go to our link:https://cloud.elastic.co/registration?elektra=guide-welcome-cta[{ecloud} Trial] page. - -. Enter your email address and a password. -+ -[role="screenshot"] -image::images/guide-sign-up-trial.png["Start your free Elastic Cloud trial",width="50%"] - -. After you've link:https://cloud.elastic.co/login[logged in], select *Create deployment* and give your deployment a name. You can leave the default options or select a different cloud provider, region, hardware profile, or version. - -. Select *Create deployment*. - -. While the deployment sets up, make a note of your `elastic` superuser password and keep it in a safe place. - -. Once the deployment is ready, select *Continue*. At this point, you access {kib} and a selection of setup guides. - -[discrete] -[[nginx-guide-create-api-key-ess]] -=== Step 3: Create an {es} API key - -. From the {kib} menu, go to *Stack Management* -> *API keys*. - -. Select *Create API key*. - -. Give the key a name, for example `nginx example API key`. - -. 
Leave the other default options and select *Create API key*. - -. In the *Create API key* confirmation dialog, change the dropdown menu setting from `Encoded` to `Beats`. This sets the API key format for communication between {agent} (which is based on {beats}) and {es}. - -. Copy the generated API key and store it in a safe place. You'll use it in a later step. - -[discrete] -[[nginx-guide-create-policy-ess]] -=== Step 4: Create an {agent} policy - -{agent} is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, and more. A single agent makes it easy and fast to deploy monitoring across your infrastructure. Each agent has a single policy (a collection of input settings) that you can update to add integrations for new data sources, security protections, and more. - -. When your {ecloud} deployment is ready, open the {kib} menu and go to **{fleet} -> Agent policies**. -+ -image::images/guide-agent-policies.png["Agent policies tab in Fleet"] -. Click *Create agent policy*. -. Give your policy a name. For this example we'll call it `nginx-policy`. -. Leave *Collect system logs and metrics* selected. -. Click *Create agent policy*. -+ -image::images/guide-create-agent-policy.png["Create agent policy UI"] - -[discrete] -[[nginx-guide-add-integration-ess]] -=== Step 5: Add the Nginx Integration - -Elastic integrations are a streamlined way to connect your data from popular services and platforms to the {stack}, including nginx. - -. From the **{fleet} -> Agent policies** tab, click the link for your new `nginx-policy`. -+ -image::images/guide-nginx-policy.png["The nginx-policy UI with integrations tab selected"] -. Note that the System integration (`system-1`) is included because you opted earlier to collect system logs and metrics. -. Click **Add integration**. -. On the Integrations page search for "nginx". 
-+ -image::images/guide-integrations-page.png["Integrations page with nginx in the search bar"] -. Select the **Nginx** card. -. Click **Add Nginx**. -. Click the link to **Add integration only (skip agent installation)**. You'll install standalone {agent} in a later step. -. Here, you can select options such as the paths to where your nginx logs are stored, whether or not to collect metrics data, and various other settings. -+ -For now, leave all of the default settings and click **Save and continue** to add the Nginx integration to your `nginx-policy` policy. -+ -image::images/guide-add-nginx-integration.png["Add Nginx Integration UI"] -. In the confirmation dialog, select to **Add {agent} later**. -+ -image::images/guide-nginx-integration-added.png["Nginx Integration added confirmation UI with Add {agent} later selected."] - -[discrete] -[[nginx-guide-configure-standalone-agent-ess]] -=== Step 6: Configure standalone {agent} - -Rather than opt for {fleet} to centrally manage {agent}, you'll configure an agent to run in standalone mode, so it will be managed by hand. - -. In {fleet}, open the **Agents** tab and click **Add agent**. -. For the **What type of host are you adding?** step, select `nginx-policy` from the drop-down menu if it's not already selected. -. For the **Enroll in {fleet}?** step, select **Run standalone**. -+ -image::images/guide-add-agent-standalone01.png["Add agent UI with nginx-policy and Run-standalone selected."] -. For the **Configure the agent** step, choose **Download Policy**. Save the `elastic-agent.yml` file to a directory on the host where you'll install nginx for monitoring. -+ -Have a look inside the policy file and notice that it contains all of the input, output, and other settings for the Nginx and System integrations. If you already have a standalone agent installed on a host with an existing {agent} policy, you can use the method described here to add a new integration. 
Just add the settings from the **Configure the agent** step to your existing `elastic-agent.yml` file.
-. For the **Install {agent} on your host** step, select the tab for your host operating system and run the commands on your host.
-+
-image::images/guide-install-agent-on-host.png["Install {agent} on your host step, showing tabs with the commands for different operating systems."]
-+
-[NOTE]
-====
-{agent} commands should be run as `root`. You can prefix each agent command with `sudo` or you can start a new shell as `root` by running `sudo su`. If you need to run {agent} commands without `root` access, refer to <>.
-====
-+
-If you're prompted with `Elastic Agent will be installed at {installation location} and will run as a service. Do you want to continue?` answer `Yes`.
-+
-If you're prompted with `Do you want to enroll this Agent into Fleet?` answer `no`.
-+
-. You can run the `status` command to confirm that {agent} is running.
-+
-[source,cmd]
-----
-elastic-agent status
-
-┌─ fleet
-│  └─ status: (STOPPED) Not enrolled into Fleet
-└─ elastic-agent
-   └─ status: (HEALTHY) Running
-----
-+
-Since you're running the agent in standalone mode, the `Not enrolled into Fleet` message is expected.
-. Open the `elastic-agent.yml` policy file that you saved.
-. Near the top of the file, replace:
-+
-[source,yaml]
-----
-  username: '${ES_USERNAME}'
-  password: '${ES_PASSWORD}'
-----
-+
-with:
-+
-[source,yaml]
-----
-  api_key: '<your-api-key>'
-----
-+
-where `<your-api-key>` is the API key that you generated in <>.
-
-. Find the location of the default `elastic-agent.yml` policy file that is included in your {agent} install. Install directories for each platform are described in <>. In our example Ubuntu image the default policy file can be found in `/etc/elastic-agent/elastic-agent.yml`.
-. Replace the default policy file with the version that you downloaded and updated.
For example: -+ -[source,sh] ----- -cp /home/ubuntu/homedir/downloads/elastic-agent.yml /etc/elastic-agent/elastic-agent.yml ----- -+ -NOTE: You may need to prefix the `cp` command with `sudo` for the permission required to replace the default file. -+ -By default, {agent} monitors the configuration file and reloads the configuration automatically when `elastic-agent.yml` is updated. - -. Run the `status` command again, this time with the `--output yaml` option which provides structured and much more detailed output. See the <> command documentation for more details. -+ -[source,shell] ----- -elastic-agent status --output yaml ----- -+ -The results show you the agent status together with details about the running components, which correspond to the inputs and outputs defined for the integrations that have been added to the {agent} policy, in this case the System and Nginx Integrations. -. At the top of the command output, the `info` section contains details about the agent instance. Make a note of the agent ID. In this example the ID is `4779b439-1130-4841-a878-e3d7d1a457d0`. You'll use that ID in the next section. -+ -[source,yaml] ----- -elastic-agent status --output yaml - -info: - id: 4779b439-1130-4841-a878-e3d7d1a457d0 - version: 8.9.1 - commit: 5640f50143410fe33b292c9f8b584117c7c8f188 - build_time: 2023-08-10 17:04:04 +0000 UTC - snapshot: false -state: 2 -message: Running ----- - -[discrete] -[[nginx-guide-confirm-agent-data-ess]] -=== Step 7: Confirm that your {agent} data is flowing - -Now that {agent} is running, it's time to confirm that the agent data is flowing into {es}. - -. Check that {agent} logs are flowing. -.. Open the {kib} menu and go to **Analytics -> Discover**. -.. In the KQL query bar, enter the query `agent.id : "{agent-id}"` where `{agent-id}` is the ID you retrieved from the `elastic-agent status --output yaml` command. For example: `agent.id : "4779b439-1130-4841-a878-e3d7d1a457d0"`. 
-+
-If {agent} has connected successfully with your {ecloud} deployment, the agent logs should be flowing into {es} and visible in {kib} Discover.
-+
-image::images/guide-agent-logs-flowing.png["Kibana Discover shows agent logs are flowing into Elasticsearch."]
-. Check that {agent} metrics are flowing.
-.. Open the {kib} menu and go to **Analytics -> Dashboard**.
-.. In the search field, search for `Elastic Agent` and select `[Elastic Agent] Agent metrics` in the results.
-+
-Like the agent logs, the agent metrics should be flowing into {es} and visible in {kib} Dashboard. You can view metrics on CPU usage, memory usage, open handles, events rate, and more.
-+
-image::images/guide-agent-metrics-flowing.png["Kibana Dashboard shows agent metrics are flowing into Elasticsearch."]
-
-[discrete]
-[[nginx-guide-view-system-data-ess]]
-=== Step 8: View your system data
-
-In the step to <> you chose to collect system logs and metrics, so you can access those now.
-
-. View your system logs.
-.. Open the {kib} menu and go to **Management -> Integrations -> Installed integrations**.
-.. Select the **System** card and open the **Assets** tab. This is a quick way to access all of the dashboards, saved searches, and visualizations that come with each integration.
-.. Select `[Logs System] Syslog dashboard`.
-.. Select the calendar icon and change the time setting to `Today`. The {kib} Dashboard shows visualizations of Syslog events, hostnames and processes, and more.
-. View your system metrics.
-
-.. Return to **Management -> Integrations -> Installed integrations**.
-.. Select the **System** card and open the **Assets** tab.
-.. This time, select `[Metrics System] Host overview`.
-.. Select the calendar icon and change the time setting to `Today`. The {kib} Dashboard shows visualizations of host metrics including CPU usage, memory usage, running processes, and others.
-+ -image::images/guide-system-metrics-dashboard.png["The System metrics host overview showing CPU usage, memory usage, and other visualizations"] - -[discrete] -[[nginx-guide-view-nginx-data-ess]] -=== Step 9: View your nginx logging data - -Now let's view your nginx logging data. - -. Open the {kib} menu and go to **Management -> Integrations -> Installed integrations**. -. Select the **Nginx** card and open the **Assets** tab. -. Select `[Logs Nginx] Overview`. The {kib} Dashboard opens with geographical log details, response codes and errors over time, top pages, and more. -. Refresh your nginx web page several times to update the logging data. You can also try accessing the nginx page from different web browsers. After a minute or so, the `Browsers breakdown` visualization shows the respective volume of requests from the different browser types. -+ -image::images/guide-nginx-browser-breakdown.png["Kibana Dashboard shows agent metrics are flowing into Elasticsearch."] - -Congratulations! You have successfully set up monitoring for nginx using standalone {agent} and an {ecloud} deployment. - -[discrete] -=== What's next? - -* Learn more about <>. -* Learn more about {integrations-docs}[{integrations}]. \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-serverless.asciidoc b/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-serverless.asciidoc deleted file mode 100644 index de11186ea..000000000 --- a/docs/en/ingest-management/elastic-agent/example-standalone-monitor-nginx-serverless.asciidoc +++ /dev/null @@ -1,285 +0,0 @@ -[[example-standalone-monitor-nginx-serverless]] -= Example: Use standalone {agent} with {serverless-full} to monitor nginx - -This guide walks you through a simple monitoring scenario so you can learn the basics of setting up standalone {agent}, using it to work with {serverless-full} and an Elastic integration. 
- -Following these steps, you'll deploy the {stack}, install a standalone {agent} on a host to monitor an nginx web server instance, and access visualizations based on the collected logs. - -. <>. -. <>. -. <>. -. <>. -. <>. -. <>. -. <>. -. <>. -. <>. - -[discrete] -[[nginx-guide-prereqs-serverless]] -=== Prerequisites - -To get started, you need: - -. An internet connection and an email address for your {ecloud} trial. -. A Linux host machine on which you'll install an nginx web server. The commands in this guide use an Ubuntu image but any Linux distribution should be fine. - -[discrete] -[[nginx-guide-install-nginx-serverless]] -=== Step 1: Install nginx - -To start, we'll set up a basic link:https://docs.nginx.com/nginx/admin-guide/web-server/[nginx web server]. - -. Run the following command on an Ubuntu Linux host, or refer to the link:https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/[nginx install documentation] for the command appropriate to your operating system. -+ -[source,sh] ----- -sudo apt install nginx ----- -+ -. Open a web browser and visit your host machine's external URL, for example `http://192.168.64.17/`. You should see the nginx welcome message. -+ -[role="screenshot"] -image::images/guide-nginx-welcome.png["Browser window showing Welcome to nginx!"] - -[discrete] -[[nginx-guide-sign-up-serverless]] -=== Step 2: Create an {serverless-full} project - -NOTE: If you've already signed up for a trial deployment you can skip this step. - -Now that your web server is running, let's get set up to monitor it in {ecloud}. An {ecloud} {serverless-short} project offers you all of the features of the {stack} as a hosted service. To test drive your first deployment, sign up for a free {ecloud} trial: - -. Go to our link:https://cloud.elastic.co/registration?elektra=guide-welcome-cta[{ecloud} Trial] page. - -. Enter your email address and a password. 
-+ -[role="screenshot"] -image::images/guide-sign-up-trial.png["Start your free Elastic Cloud trial",width="50%"] - -. After you've link:https://cloud.elastic.co/login[logged in], select *Create project*. - -. On the *Observability* tab, select *Next*. The *Observability* and *Security* projects both include {fleet}, which you can use to create a policy for the {agent} that will monitor your nginx installation. - -. Give your project a name. You can leave the default options or select a different cloud provider and region. - -. Select *Create project*, and then wait a few minutes for the new project to set up. - -. Once the project is ready, select *Continue*. At this point, you access {kib} and a selection of setup guides. - - - -[discrete] -[[nginx-guide-create-api-key-serverless]] -=== Step 3: Create an {es} API key - -. When your {serverless-short} project is ready, open the {kib} menu and go to **Project settings** -> **Management -> API keys**. - -. Select *Create API key*. - -. Give the key a name, for example `nginx example API key`. - -. Leave the other default options and select *Create API key*. - -. In the *Create API key* confirmation dialog, change the dropdown menu setting from `Encoded` to `Beats`. This sets the API key to the format used for communication between {agent} and {es}. - -. Copy the generated API key and store it in a safe place. You'll use it in a later step. - -[discrete] -[[nginx-guide-create-policy-serverless]] -=== Step 4: Create an {agent} policy - -{agent} is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, and more. A single agent makes it easy and fast to deploy monitoring across your infrastructure. Each agent has a single policy (a collection of input settings) that you can update to add integrations for new data sources, security protections, and more. - -. 
Open the {kib} menu and go to **Project settings** -> **{fleet} -> Agent policies**. -+ -image::images/guide-agent-policies.png["Agent policies tab in Fleet"] -. Click *Create agent policy*. -. Give your policy a name. For this example we'll call it `nginx-policy`. -. Leave *Collect system logs and metrics* selected. -. Click *Create agent policy*. -+ -image::images/guide-create-agent-policy.png["Create agent policy UI"] - -[discrete] -[[nginx-guide-add-integration-serverless]] -=== Step 5: Add the Nginx Integration - -Elastic integrations are a streamlined way to connect your data from popular services and platforms to the {stack}, including nginx. - -. From the **{fleet} -> Agent policies** tab, click the link for your new `nginx-policy`. -+ -image::images/guide-nginx-policy.png["The nginx-policy UI with integrations tab selected"] -. Note that the System integration (`system-1`) is included because you opted earlier to collect system logs and metrics. -. Click **Add integration**. -. On the Integrations page search for "nginx". -+ -image::images/guide-integrations-page.png["Integrations page with nginx in the search bar"] -. Select the **Nginx** card. -. Click **Add Nginx**. -. Click the link to **Add integration only (skip agent installation)**. You'll install standalone {agent} in a later step. -. Here, you can select options such as the paths to where your nginx logs are stored, whether or not to collect metrics data, and various other settings. -+ -For now, leave all of the default settings and click **Save and continue** to add the Nginx integration to your `nginx-policy` policy. -+ -image::images/guide-add-nginx-integration.png["Add Nginx Integration UI"] -. In the confirmation dialog, select to **Add {agent} later**. 
-+ -image::images/guide-nginx-integration-added.png["Nginx Integration added confirmation UI with Add {agent} later selected."] - -[discrete] -[[nginx-guide-configure-standalone-agent-serverless]] -=== Step 6: Configure standalone {agent} - -Rather than opt for {fleet} to centrally manage {agent}, you'll configure an agent to run in standalone mode, so it will be managed by hand. - -. Open the {kib} menu and go to **{fleet} -> Agents** and click **Add agent**. -. For the **What type of host are you adding?** step, select `nginx-policy` from the drop-down menu if it's not already selected. -. For the **Enroll in {fleet}?** step, select **Run standalone**. -+ -image::images/guide-add-agent-standalone01.png["Add agent UI with nginx-policy and Run-standalone selected."] -. For the **Configure the agent** step, choose **Download Policy**. Save the `elastic-agent.yml` file to a directory on the host where you'll install nginx for monitoring. -+ -Have a look inside the policy file and notice that it contains all of the input, output, and other settings for the Nginx and System integrations. If you already have a standalone agent installed on a host with an existing {agent} policy, you can use the method described here to add a new integration. Just add the settings from the **Configure the agent** step to your existing `elastic-agent.yml` file. -. For the **Install {agent} on your host** step, select the tab for your host operating system and run the commands on your host. -+ -image::images/guide-install-agent-on-host.png["Install {agent} on your host step, showing tabs with the commands for different operating systems."] -+ -[NOTE] -==== -{agent} commands should be run as `root`. You can prefix each agent command with `sudo` or you can start a new shell as `root` by running `sudo su`. If you need to run {agent} commands without `root` access, refer to <>. 
-====
-+
-If you're prompted with `Elastic Agent will be installed at {installation location} and will run as a service. Do you want to continue?` answer `Yes`.
-+
-If you're prompted with `Do you want to enroll this Agent into Fleet?` answer `no`.
-+
-. You can run the `status` command to confirm that {agent} is running.
-+
-[source,cmd]
-----
-elastic-agent status
-
-┌─ fleet
-│  └─ status: (STOPPED) Not enrolled into Fleet
-└─ elastic-agent
-   └─ status: (HEALTHY) Running
-----
-+
-Since you're running the agent in standalone mode, the `Not enrolled into Fleet` message is expected.
-. Open the `elastic-agent.yml` policy file that you saved.
-
-. Near the top of the file, replace:
-+
-[source,yaml]
-----
-  username: '${ES_USERNAME}'
-  password: '${ES_PASSWORD}'
-----
-+
-with:
-+
-[source,yaml]
-----
-  api_key: '<your-api-key>'
-----
-+
-where `<your-api-key>` is the API key that you generated in <>.
-
-. Find the location of the default `elastic-agent.yml` policy file that is included in your {agent} install. Install directories for each platform are described in <>. In our example Ubuntu image the default policy file can be found in `/etc/elastic-agent/elastic-agent.yml`.
-. Replace the default policy file with the version that you downloaded and updated. For example:
-+
-[source,sh]
-----
-cp /home/ubuntu/homedir/downloads/elastic-agent.yml /etc/elastic-agent/elastic-agent.yml
-----
-+
-NOTE: You may need to prefix the `cp` command with `sudo` for the permission required to replace the default file.
-+
-By default, {agent} monitors the configuration file and reloads the configuration automatically when `elastic-agent.yml` is updated.
-
-. Run the `status` command again, this time with the `--output yaml` option which provides structured and much more detailed output. See the <> command documentation for more details.
-+
-[source,shell]
-----
-elastic-agent status --output yaml
-----
-+
-The results show you the agent status together with details about the running components, which correspond to the inputs and outputs defined for the integrations that have been added to the {agent} policy, in this case the System and Nginx Integrations.
-. At the top of the command output, the `info` section contains details about the agent instance. Make a note of the agent ID. In this example the ID is `4779b439-1130-4841-a878-e3d7d1a457d0`. You'll use that ID in the next section.
-+
-[source,yaml]
-----
-elastic-agent status --output yaml
-
-info:
-  id: 4779b439-1130-4841-a878-e3d7d1a457d0
-  version: 8.9.1
-  commit: 5640f50143410fe33b292c9f8b584117c7c8f188
-  build_time: 2023-08-10 17:04:04 +0000 UTC
-  snapshot: false
-state: 2
-message: Running
-----
-
-[discrete]
-[[nginx-guide-confirm-agent-data-serverless]]
-=== Step 7: Confirm that your {agent} data is flowing
-
-Now that {agent} is running, it's time to confirm that the agent data is flowing into {es}.
-
-. Check that {agent} logs are flowing.
-.. Open the {kib} menu and go to **Observability -> Discover**.
-.. In the KQL query bar, enter the query `agent.id : "{agent-id}"` where `{agent-id}` is the ID you retrieved from the `elastic-agent status --output yaml` command. For example: `agent.id : "4779b439-1130-4841-a878-e3d7d1a457d0"`.
-+
-If {agent} has connected successfully with your {ecloud} deployment, the agent logs should be flowing into {es} and visible in {kib} Discover.
-+
-image::images/guide-agent-logs-flowing.png["Kibana Discover shows agent logs are flowing into Elasticsearch."]
-. Check that {agent} metrics are flowing.
-.. Open the {kib} menu and go to **Observability -> Dashboards**.
-.. In the search field, search for `Elastic Agent` and select `[Elastic Agent] Agent metrics` in the results.
-+
-Like the agent logs, the agent metrics should be flowing into {es} and visible in {kib} Dashboard.
You can view metrics on CPU usage, memory usage, open handles, events rate, and more.
-+
-image::images/guide-agent-metrics-flowing.png["Kibana Dashboard shows agent metrics are flowing into Elasticsearch."]
-
-[discrete]
-[[nginx-guide-view-system-data-serverless]]
-=== Step 8: View your system data
-
-In the step to <> you chose to collect system logs and metrics, so you can access those now.
-
-. View your system logs.
-.. Open the {kib} menu and go to **Project settings -> Integrations -> Installed integrations**.
-.. Select the **System** card and open the **Assets** tab. This is a quick way to access all of the dashboards, saved searches, and visualizations that come with each integration.
-.. Select `[Logs System] Syslog dashboard`.
-.. Select the calendar icon and change the time setting to `Today`. The {kib} Dashboard shows visualizations of Syslog events, hostnames and processes, and more.
-. View your system metrics.
-
-.. Return to **Project settings -> Integrations -> Installed integrations**.
-.. Select the **System** card and open the **Assets** tab.
-.. This time, select `[Metrics System] Host overview`.
-.. Select the calendar icon and change the time setting to `Today`. The {kib} Dashboard shows visualizations of host metrics including CPU usage, memory usage, running processes, and others.
-+
-image::images/guide-system-metrics-dashboard.png["The System metrics host overview showing CPU usage, memory usage, and other visualizations"]
-
-[discrete]
-[[nginx-guide-view-nginx-data-serverless]]
-=== Step 9: View your nginx logging data
-
-Now let's view your nginx logging data.
-
-. Open the {kib} menu and go to **Project settings -> Integrations -> Installed integrations**.
-. Select the **Nginx** card and open the **Assets** tab.
-. Select `[Logs Nginx] Overview`. The {kib} Dashboard opens with geographical log details, response codes and errors over time, top pages, and more.
-. Refresh your nginx web page several times to update the logging data.
You can also try accessing the nginx page from different web browsers. After a minute or so, the `Browsers breakdown` visualization shows the respective volume of requests from the different browser types. -+ -image::images/guide-nginx-browser-breakdown.png["Kibana Dashboard shows agent metrics are flowing into Elasticsearch."] - -Congratulations! You have successfully set up monitoring for nginx using standalone {agent} and an {serverless-full} project. - -[discrete] -=== What's next? - -* Learn more about <>. -* Learn more about {integrations-docs}[{integrations}]. \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/grant-access-to-elasticsearch.asciidoc b/docs/en/ingest-management/elastic-agent/grant-access-to-elasticsearch.asciidoc deleted file mode 100644 index ac8d259f6..000000000 --- a/docs/en/ingest-management/elastic-agent/grant-access-to-elasticsearch.asciidoc +++ /dev/null @@ -1,158 +0,0 @@ -[[grant-access-to-elasticsearch]] -= Grant standalone {agent}s access to {es} - -You can use either API keys or user credentials to grant standalone {agent}s -access to {es} resources. The following minimal permissions are required to send -logs, metrics, traces, and synthetics to {es}: - -* `monitor` cluster privilege -* `auto_configure` and `create_doc` index privileges on `logs-*-*`, `metrics-*-*`, -`traces-*-*`, and `synthetics-*-*`. - -It's recommended that you use API keys to avoid exposing usernames and passwords -in configuration files. - -If you're using {fleet}, refer to -{fleet-guide}/fleet-enrollment-tokens.html[{fleet} enrollment tokens]. - -[discrete] -[[create-api-key-standalone-agent]] -== Create API keys for standalone agents - -NOTE: API keys are sent as plain-text, so they only provide security when used -in combination with Transport Layer Security (TLS). Our -{ess-product}[hosted {ess}] on {ecloud} provides secure, encrypted connections -out of the box! For self-managed {es} clusters, refer to -<>. 
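
For orientation, the `Beats` API key format that {agent} expects is simply `id:api_key` — the decoded form of the default Base64-encoded (`Encoded`) key that {kib} displays. A minimal sketch of the relationship, using made-up placeholder values rather than a real key:

```shell
# Sketch: how the "Encoded" and "Beats" API key formats relate.
# The values below are made-up placeholders, not a real key.
BEATS_KEY='exampleKeyId:exampleKeySecret'            # id:api_key — the form elastic-agent expects
ENCODED_KEY=$(printf '%s' "$BEATS_KEY" | base64)     # the Base64 form Kibana shows by default

# Decoding the Base64 form recovers the Beats form:
DECODED=$(printf '%s' "$ENCODED_KEY" | base64 -d)
test "$DECODED" = "$BEATS_KEY" && echo "formats match"
```

In the steps that follow, choosing `Beats` from the format dropdown gives you the `id:api_key` form directly, so no decoding is needed.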
- -You can set API keys to expire at a certain time, and you can explicitly -invalidate them. Any user with the `manage_api_key` or `manage_own_api_key` -cluster privilege can create API keys. - -For security reasons, we recommend using a unique API key per {agent}. You -can create as many API keys per user as necessary. - -If you are using link:{serverless-docs}[{serverless-full}], API key authentication is required. - -To create an API key for {agent}: - -. In an {ecloud} or on premises environment, in {kib} navigate to *{stack-manage-app} > API keys* and click *Create API key*. -+ -In a {serverless-short} environment, in {kib} navigate to *Project settings* > *Management* > *API keys* and click *Create API key*. - -. Enter a name for your API key and select *Control security privileges*. - -. In the role descriptors box, copy and paste the following JSON. This example creates an API key with privileges for ingesting logs, metrics, traces, and synthetics: -+ -[source,json] ----- -{ - "standalone_agent": { - "cluster": [ - "monitor" - ], - "indices": [ - { - "names": [ - "logs-*-*", "metrics-*-*", "traces-*-*", "synthetics-*-*" <1> - ], - "privileges": [ - "auto_configure", "create_doc" - ] - } - ] - } -} ----- -<1> Adjust this list to match the data you want to collect. For example, if -you aren't using APM or synthetics, remove `"traces-*-*"` and `"synthetics-*-*"` -from this list. - -. To set an expiration date for the API key, select *Expire after time* and input -the lifetime of the API key in days. - -. Click *Create API key*. -+ -You'll see a message indicating that the key was created, along with the -encoded key. By default, the API key is Base64 encoded, but that won't work for -{agent}. - -// lint ignore beats -. Click the down arrow next to Base64 and select *Beats*. -+ -[role="screenshot"] -image::images/copy-api-key.png[Message with field for copying API key] - -. Copy the API key. 
You will need this for the next step, and you will not be
-able to view it again.
-
-. To use the API key, specify the `api_key` setting in the `elastic-agent.yml`
-file. For example:
-+
-[source,yaml]
-----
-[...]
-outputs:
-  default:
-    type: elasticsearch
-    hosts:
-      - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443'
-    api_key: _Nj4oH0aWZVGqM7MGop8:349p_U1ERHyIc4Nm8_AYkw <1>
-[...]
-----
-<1> The format of this key is `<id>:<api_key>`. Base64 encoded API keys are not
-currently supported in this configuration.
-
-For more information about creating API keys in {kib}, see
-{kibana-ref}/api-keys.html[API Keys].
-
-[discrete]
-[[create-role-standalone-agent]]
-== Create a standalone agent role
-
-Although it's recommended that you use an API key instead of a username and
-password to access {es} (and an API key is required in a {serverless-short} environment), you can create a role with the required privileges,
-assign it to a user, and specify the user's credentials in the
-`elastic-agent.yml` file.
-
-. In {kib}, go to *{stack-manage-app} > Roles*.
-
-. Click *Create role* and enter a name for the role.
-
-. In *Cluster privileges*, enter `monitor`.
-
-. In *Index privileges*, enter:
-
-.. `logs-*-*`, `metrics-*-*`, `traces-*-*` and `synthetics-*-*` in the *Indices*
-field.
-+
-NOTE: Adjust this list to match the data you want to collect. For example, if
-you aren't using APM or synthetics, remove `traces-*-*` and `synthetics-*-*`
-from this list.
-
-.. `auto_configure` and `create_doc` in the *Privileges* field.
-+
-[role="screenshot"]
-image::create-standalone-agent-role.png[Create role settings for a standalone agent role]
-
-. Create the role and assign it to a user. For more information about creating
-roles, refer to
-{kibana-ref}/kibana-role-management.html[{kib} role management].
-
-. To use these credentials, set the username and password in the
-`elastic-agent.yml` file:
-+
-[source,yaml]
-----
-[...]
-outputs: - default: - type: elasticsearch - hosts: - - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443' - username: ES_USERNAME <1> - password: ES_PASSWORD -[...] ----- -<1> For security reasons, specify a user with the minimal privileges described -here. It's recommended that you do not use the `elastic` superuser. diff --git a/docs/en/ingest-management/elastic-agent/images/helm-example-fleet-metrics-dashboard.png b/docs/en/ingest-management/elastic-agent/images/helm-example-fleet-metrics-dashboard.png deleted file mode 100644 index f0f3ae7fa..000000000 Binary files a/docs/en/ingest-management/elastic-agent/images/helm-example-fleet-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-enrollment-confirmation.png b/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-enrollment-confirmation.png deleted file mode 100644 index c55a50bd6..000000000 Binary files a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-enrollment-confirmation.png and /dev/null differ diff --git a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-logs-and-metrics.png b/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-logs-and-metrics.png deleted file mode 100644 index 4d57e979d..000000000 Binary files a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-logs-and-metrics.png and /dev/null differ diff --git a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-metrics-dashboard.png b/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-metrics-dashboard.png deleted file mode 100644 index 9322eb818..000000000 Binary files a/docs/en/ingest-management/elastic-agent/images/helm-example-nodes-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/elastic-agent/images/helm-example-pods-metrics-dashboard.png 
b/docs/en/ingest-management/elastic-agent/images/helm-example-pods-metrics-dashboard.png deleted file mode 100644 index fa894a1de..000000000 Binary files a/docs/en/ingest-management/elastic-agent/images/helm-example-pods-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/elastic-agent/ingest-pipeline-kubernetes.asciidoc b/docs/en/ingest-management/elastic-agent/ingest-pipeline-kubernetes.asciidoc deleted file mode 100644 index 2885b9f8b..000000000 --- a/docs/en/ingest-management/elastic-agent/ingest-pipeline-kubernetes.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[ingest-pipeline-kubernetes]] -= Using a custom ingest pipeline with the {k8s} Integration - -This tutorial explains how to add a custom ingest pipeline to a {k8s} Integration to add specific metadata fields for the deployments and cronjobs of pods. - -Custom pipelines can be used to add custom data processing, like adding fields, obfuscating sensitive information, and more. Find more information in our tutorial about <>. - -== Metadata enrichment for Kubernetes - -The https://docs.elastic.co/en/integrations/kubernetes[{k8s} Integration] is used to collect logs and metrics from Kubernetes clusters with {agent}. During collection, the integration enriches the collected data with additional useful information that users can correlate with different Kubernetes assets. This additional information added on top of the collected data, such as labels, annotations, ancestor names of Kubernetes assets, and others, is called metadata. - -The https://www.elastic.co/guide/en/fleet/current/kubernetes-provider.html[{k8s} Provider] offers the `add_resource_metadata` option for configuring metadata enrichment. - -For {agent} versions later than 8.10.4, the default configuration for metadata enrichment is `add_resource_metadata.deployment=false` and `add_resource_metadata.cronjob=false`.
This means that pods created from replicasets that belong to specific deployments are not enriched with `kubernetes.deployment.name`. Additionally, pods created from jobs that belong to specific cronjobs are not enriched with `kubernetes.cronjob.name`. - -**Kubernetes Integration Policy > Collect Kubernetes metrics from Kube-state-metrics > Kubernetes Pod Metrics** - --- -[role="screenshot"] -image::images/add_resource_metadata.png[Configure add_resource_metadata] --- - -Example: Enabling the enrichment through `add_resource_metadata` in a Managed {agent} Policy - -> **Note:** Enabling deployment and cronjob metadata enrichment increases {agent}'s memory consumption, because {agent} uses a local cache to keep records of the discovered {k8s} assets. - -== Add deployment and cronjob for {k8s} pods through ingest pipelines - -As an alternative to keeping the feature enabled and using more memory for {agent}, users can use ingest pipelines to add the missing `kubernetes.deployment.name` and `kubernetes.cronjob.name` fields. - -Following the <> tutorial, navigate to the `state_pod` datastream under: **Kubernetes Integration Policy > Collect Kubernetes metrics from Kube-state-metrics > Kubernetes Pod Metrics**.
- -Create the following custom ingest pipeline with two processors: --- -[role="screenshot"] -image::images/ingest_pipeline_custom_k8s.png[Custom ingest pipeline] --- - -=== Processor for deployment - --- -[role="screenshot"] -image::images/gsub_deployment.png[Gsub Processor for deployment] --- - -=== Processor for cronjob - --- -[role="screenshot"] -image::images/gsub_cronjob.png[Gsub Processor for cronjob] --- - -The final `metrics-kubernetes.state_pod@custom` ingest pipeline: - -[source,json] ----- -[ - { - "gsub": { - "field": "kubernetes.replicaset.name", - "pattern": "(?:.(?!-))+$", - "replacement": "", - "target_field": "kubernetes.deployment.name", - "ignore_missing": true, - "ignore_failure": true - } - }, - { - "gsub": { - "field": "kubernetes.job.name", - "pattern": "(?:.(?!-))+$", - "replacement": "", - "target_field": "kubernetes.cronjob.name", - "ignore_missing": true, - "ignore_failure": true - } - } -] ----- - - -> **Note**: The ingest pipeline does not check for the actual existence of a deployment and cronjob ancestor; it only adds the specific values. - diff --git a/docs/en/ingest-management/elastic-agent/install-agent-msi.asciidoc deleted file mode 100644 index c67a1e18a..000000000 --- a/docs/en/ingest-management/elastic-agent/install-agent-msi.asciidoc +++ /dev/null @@ -1,72 +0,0 @@ -[[install-agent-msi]] -= Install {agent} from an MSI package - -MSI is the file format and command line utility for the link:https://en.wikipedia.org/wiki/Windows_Installer[Windows Installer]. Windows Installer (previously known as Microsoft Installer) is an interface for Microsoft Windows that’s used to install and manage software on Windows systems. This section covers installing Elastic Agent using an MSI package. - -The MSI package installer must be run by an administrator account. The installer won't start without Windows admin permissions. - -[discrete] -== Install {agent} - -. 
Download the latest Elastic Agent MSI binary from the link:https://www.elastic.co/downloads/elastic-agent[{agent} download page]. - -. Run the installer. The command varies slightly depending on whether you're using the default Windows command prompt or PowerShell. -+ -==== -** Using the default command prompt: -+ -[source,shell] ----- -elastic-agent--windows-x86_64.msi INSTALLARGS="--url= --enrollment-token=" ----- -+ -** Using PowerShell: -+ -[source,shell] ----- -./elastic-agent--windows-x86_64.msi --% INSTALLARGS="--url= --enrollment-token=" ----- -==== -+ -Where: - -* `VERSION` is the {stack} version you're installing, indicated in the MSI package name. For example, `8.13.2`. -* `URL` is the {fleet-server} URL used to enroll the {agent} into {fleet}. You can find this on the {fleet} *Settings* tab in {kib}. -* `TOKEN` is the authentication token used to enroll the {agent} into {fleet}. You can find this on the {fleet} *Enrollment tokens* tab. - -+ -When you run the command, the value set for `INSTALLARGS` will be passed to the <> command verbatim. - -. If you need to troubleshoot, you can install using `msiexec` with the `-L*V "log.txt"` option to create installation logs: -+ -[source,shell] ----- -msiexec -i elastic-agent--windows-x86_64.msi INSTALLARGS="--url= --enrollment-token=" -L*V "log.txt" ----- - -[discrete] -== Installation notes - -Installing using an MSI package has the following behaviors: - -* If `INSTALLARGS` are not provided, the MSI will copy the files to a temporary folder and finish. -* If `INSTALLARGS` are provided, the MSI will copy the files to a temporary folder and then run the <> command with the provided parameters. If the install flow is successful, the temporary folder is deleted. -* If `INSTALLARGS` are provided but the `elastic-agent install` command fails, the top-level folder is NOT deleted, in order to allow for further troubleshooting. 
-* If the `elastic-agent install` command fails for any reason, the MSI will roll back all changes. -* If the {agent} enrollment fails, the install will fail as well. To avoid this behavior, you can add the <> option to the install command. - -[discrete] -== Upgrading - -The {agent} version can be upgraded via {fleet}, but the registered MSI version will display the initially installed version (this shortcoming will be addressed in future releases). Attempts to upgrade outside of {fleet} via the MSI require an uninstall and reinstall procedure. Also note that this MSI implementation relies on the tar {agent} binary to upgrade the installation. Therefore, if the {agent} is installed in an air-gapped environment, you must ensure that the tar image is available before an upgrade request is issued. - -[discrete] -== Installing in a custom location - -Starting in version 8.13, it's also possible to override the default installation folder by running the MSI from the command line, as shown: - -[source,shell] ----- -elastic-agent--windows-x86_64.msi INSTALLARGS="--url= --enrollment-token=" INSTALLDIR="" ----- - diff --git a/docs/en/ingest-management/elastic-agent/install-elastic-agent-in-container.asciidoc deleted file mode 100644 index 7be52ecb1..000000000 --- a/docs/en/ingest-management/elastic-agent/install-elastic-agent-in-container.asciidoc +++ /dev/null @@ -1,32 +0,0 @@ -[[install-elastic-agents-in-containers]] -= Install {agent}s in a containerized environment - -You can run {agent} inside of a container -- either with {fleet-server} or -standalone. Docker images for all versions of {agent} are available from the -Elastic Docker registry, and we provide deployment manifests for running on -Kubernetes.
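As a quick orientation for the container-based options covered here, a {fleet}-managed agent can be started with a single `docker run` command. This is a minimal sketch only; the version tag, {fleet-server} URL, and enrollment token are placeholders you must replace with your own values:

```shell
# Placeholders: <version>, <fleet-server-url>, <enrollment-token>
docker run --rm \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL=<fleet-server-url> \
  --env FLEET_ENROLLMENT_TOKEN=<enrollment-token> \
  docker.elastic.co/elastic-agent/elastic-agent:<version>
```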
- -To learn how to run {agent}s in a containerized environment, see: - -* <> - -* <> - -** <> - -** <> - -** <> - -** <> - -** <> - -* <> - -* <> - -* <> - -* {eck-ref}/k8s-elastic-agent.html[Run {agent} on ECK] -- for {eck} users - diff --git a/docs/en/ingest-management/elastic-agent/install-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/install-elastic-agent.asciidoc deleted file mode 100644 index 51f5c0ec1..000000000 --- a/docs/en/ingest-management/elastic-agent/install-elastic-agent.asciidoc +++ /dev/null @@ -1,110 +0,0 @@ -[[elastic-agent-installation]] -= Install {agent}s - -[IMPORTANT] -.Restrictions -==== -Note the following restrictions when installing {agent} on your system: - -* You can install only a single {agent} per host. Because the {agent} may read data sources that are only accessible by a superuser, {agent} must also be executed with superuser permissions. -* You might need to log in as a root user (or Administrator on Windows) to -run the commands described here. After the {agent} service is installed and running, -make sure you run these commands without prepending them with `./` to avoid -invoking the wrong binary. -* Running {agent} commands using the Windows PowerShell ISE is not supported. -* See also the <> described on this page. -==== - -You have a few options for installing and managing an {agent}: - -* **Install a {fleet}-managed {agent} (recommended)** -+ -With this approach, you install {agent} and use {fleet} in {kib} to define, -configure, and manage your agents in a central location. -+ -We recommend using {fleet} management because it makes the management and -upgrade of your agents considerably easier. -+ -Refer to <>. - -* **Install {agent} in standalone mode (advanced users)** -+ -With this approach, you install {agent} and manually configure the agent locally -on the system where it’s installed. You are responsible for managing and -upgrading the agents.
This approach is reserved for advanced users only. -+ -Refer to <>. - -* **Install {agent} in a containerized environment** -+ -You can run {agent} inside of a container -- either with {fleet-server} or -standalone. Docker images for all versions of {agent} are available from the -Elastic Docker registry, and we provide deployment manifests for running on -Kubernetes. -+ -Refer to: -+ --- -* <> -* <> -** <> -** <> -** <> -** <> -** <> -* <> -* <> -* {eck-ref}/k8s-elastic-agent.html[Run {agent} on ECK] -- for {eck} users --- - -[IMPORTANT] -.Restrictions in {serverless-short} -==== -If you are using {agent} with link:{serverless-docs}[{serverless-full}], note these differences from use with {ess} and self-managed {es}: - -* The number of {agents} that may be connected to an {serverless-full} project is limited to 10,000. -* The minimum version of {agent} supported for use with {serverless-full} is 8.11.0. -==== - -[discrete] -[[elastic-agent-installation-resource-requirements]] -== Resource requirements - -{agent} resource consumption is influenced by the number of integrations and the environment it runs in. - -Using our lab environment as an example, we can observe the following resource consumption: - -// lint ignore mem -[discrete] -=== CPU and RSS memory size - -// lint ignore 2 vCPU 8.0 GiB -We tested using an AWS `m7i.large` instance type with 2 vCPUs, 8.0 GB of memory, and up to 12.5 Gbps of bandwidth. The tests ingested a single log file using both the <> with self-monitoring enabled. -These tests are representative of use cases that attempt to ingest data as fast as possible. This does not represent the resource overhead when using {integrations-docs}/endpoint[{elastic-defend}].
-[options="header"] -|=== -| **Resource** | **Throughput** | **Scale** -| **CPU*** | ~67% | ~20% -| **RSS memory size*** | ~280 MB | ~220 MB -| **Write network throughput** | ~3.5 MB/s | 480 KB/s -|=== - -^*^ including all monitoring processes - -Adding integrations will increase the memory used by the agent and its processes. - -[discrete] -=== Size on disk - -The disk requirements for {agent} vary by operating system and {stack} version. - -[options="header"] -|=== -|Operating system |8.13 | 8.14 | 8.15 | 8.18 | 9.0 - -| **Linux** | 1800 MB | 1018 MB | 1060 MB | 1.5 GB | 1.5 GB -| **macOS** | 1100 MB | 619 MB | 680 MB | 775 MB | 7755 MB -| **Windows** | 891 MB | 504 MB | 500 MB | 678 MB | 705 MB -|=== - -During upgrades, double the disk space is required to store the new {agent} binary. After the upgrade completes, the original {agent} is removed from disk to free up the space. diff --git a/docs/en/ingest-management/elastic-agent/install-fleet-managed-agent.asciidoc deleted file mode 100644 index 2ac9a95b0..000000000 --- a/docs/en/ingest-management/elastic-agent/install-fleet-managed-agent.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -["appendix",role="exclude",id="install-fleet-managed-agent"] -= Install a {fleet}-managed {agent} - -This guide describes how to: - -* Install an {agent} that will be managed with {fleet} -* Enroll the {agent} in {fleet} - -These steps assume that {stack} is running and an Elastic Integration -has been added in {kib}. If this is not true -- don't worry, check -out one of the -link:https://www.elastic.co/training/free#quick-starts[Quick Starts] -and get started with {ecloud}. - -NOTE: You can install only a single {agent} per host. - -[discrete] -[[minimal-elastic-agent-prereqs]] -== Prerequisites - -* The {stack} is running. - -* You have added an integration in {kib} and are now ready to download -and enroll {agent} on your system.
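Before moving on, you can quickly verify the first prerequisite from the command line. This is a sketch, not part of the official procedure; the host and API key are placeholders:

```shell
# Returns the overall Kibana status; "available" indicates a healthy instance.
curl -s -H "Authorization: ApiKey <api-key>" \
  "https://<kibana-host>:5601/api/status"
```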
- -[discrete] -[[minimal-add-agent-to-fleet]] -== Step 1: Add an {agent} to {fleet} - -{agent} is a single, unified agent that you can deploy to hosts or containers to -collect data and send it to the {stack}. Behind the scenes, {agent} runs the -{beats} shippers or {elastic-endpoint} required for your configuration. - -Choose the operating system and run the provided commands to download and extract -{agent} on your system. - -include::../tab-widgets/download-widget.asciidoc[] - -[discrete] -[[enroll-agent]] -== Step 2: Enroll and start the {agent} - -// lint ignore elastic-agent -Return to {kib} and follow the **Enroll and start the Elastic Agent** instructions. The command provided in {kib} includes the information that {agent} needs to connect and authenticate with the {stack}. When the {agent} enrolls, it is configured to send the data associated with the policy that you created. - diff --git a/docs/en/ingest-management/elastic-agent/install-fleet-managed-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/install-fleet-managed-elastic-agent.asciidoc deleted file mode 100644 index 14e4a7c28..000000000 --- a/docs/en/ingest-management/elastic-agent/install-fleet-managed-elastic-agent.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -[[install-fleet-managed-elastic-agent]] -= Install {fleet}-managed {agent}s - -**** -{fleet} is a web-based UI in {kib} for -<>. To use {fleet}, you -install {agent}, then enroll the agent in a policy defined in {kib}. The policy -includes integrations that specify how to collect observability data from -specific services and protect endpoints. The {agent} connects to a trusted -<> instance to retrieve the policy and report agent -events. -**** - -[discrete] -[[get-started]] -== Where to start - -To get up and running quickly, read one of our end-to-end guides: - -* New to Elastic? Read our solution -{estc-welcome}/getting-started-guides.html[Getting started guides]. -* Want to add data to an existing cluster or deployment? 
Read our -<>. - -Looking for upgrade info? Refer to <>. - -Just want to learn how to install {agent}? Continue reading this page. - -[discrete] -[[elastic-agent-prereqs]] -== Prerequisites - -You will always need: - -* *A {kib} user with `All` privileges on {fleet} and {integrations}.* Since many -Integrations assets are shared across spaces, users need the {kib} privileges in -all spaces. - -* *<> running in a location accessible to {agent}.* -{agent} must have a direct network connection to -{fleet-server} and {es}. If you're using our hosted {ess} on {ecloud}, -{fleet-server} is already available as part of the {integrations-server}. For -self-managed deployments, refer to <>. - -* *Internet connection for {kib} to download integration packages from the {package-registry}.* -Make sure the {kib} server can connect to -`https://epr.elastic.co` on port `443`. If your environment has network traffic -restrictions, there are ways to work around this requirement. See -{fleet-guide}/air-gapped.html[Air-gapped environments] for more information. - -If you are using a {fleet-server} that uses your organization's certificate, -you will also need: - -* *A Certificate Authority (CA) certificate to configure Transport Layer Security (TLS) -to encrypt traffic.* If your organization already uses the {stack}, you may already have a -CA certificate. If you do not have a CA certificate, you can read more -about generating one in <>. - -If you're running {agent} 7.9 or earlier, stop the agent and manually remove -it from your host. - -[discrete] -[[elastic-agent-installation-steps]] -== Installation steps - -NOTE: You can install only a single {agent} per host. - -{agent} can monitor the host where it's deployed, and it can collect and forward -data from remote services and hardware where direct deployment is not possible. - -To install an {agent} and enroll it in {fleet}: - -// tag::agent-enroll[] - -// lint disable fleet -. 
In {fleet}, open the **Agents** tab and click **Add agent**. - -. In the *Add agent* flyout, select an existing agent policy or create a new -one. If you create a new policy, {fleet} generates a new -{fleet-guide}/fleet-enrollment-tokens.html[{fleet} enrollment token]. -+ -[NOTE] -==== -For on-premises deployments, you can dedicate a policy to all the -agents in the network boundary and configure that policy to include a -specific {fleet-server} (or a cluster of {fleet-server}s). - -Read more in {fleet-guide}/agent-policy.html#add-fleet-server-to-policy[Add a {fleet-server} to a policy]. -==== - -. Make sure **Enroll in Fleet** is selected. - -. Download, install, and enroll the {agent} on your host by selecting -your host operating system and following the **Install {agent} on your host** -step. Note that the commands shown are for AMD platforms, but ARM packages are also available. -Refer to the {agent} https://www.elastic.co/downloads/elastic-agent[downloads page] for the full list of available packages. -.. If you are enrolling the agent in a {fleet-server} that uses your -organization's certificate, you _must_ add the `--certificate-authorities` -option to the command provided in the in-product instructions. -If you do not include the certificate, you will see the following error: -"x509: certificate signed by unknown authority". -+ --- -[role="screenshot"] -image::images/kibana-agent-flyout.png[Add agent flyout in {kib}] --- -// lint enable fleet - -After about a minute, the agent will enroll in {fleet}, download the -configuration specified in the agent policy, and start collecting data. - -**Notes:** - -* If you encounter an "x509: certificate signed by unknown authority" error, you -might be trying to enroll in a {fleet-server} that uses self-signed certs. To -fix this problem in a non-production environment, pass the `--insecure` flag.
-For more information, refer to the -{fleet-guide}/fleet-troubleshooting.html#agent-enrollment-certs[troubleshooting guide]. - -* Optionally, you can use the `--tag` flag to specify a comma-separated list of -tags to apply to the enrolled {agent}. For more information, refer to -{fleet-guide}/filter-agent-list-by-tags.html[Filter list of Agents by tags]. - -* Refer to {fleet-guide}/installation-layout.html[Installation layout] for the -location of installed {agent} files. - -* Because {agent} is installed as an auto-starting service, it will restart -automatically if the system is rebooted. - - -To confirm that {agent} is installed and running, open the **Agents** tab in {fleet}. - -[role="screenshot"] -image::images/kibana-fleet-agents.png[{fleet} showing enrolled agents] - -TIP: If the status hangs at Enrolling, make sure the `elastic-agent` process -is running. - -If you run into problems: - -* Check the {agent} logs. If you use the default policy, agent logs and metrics -are collected automatically unless you change the default settings. For more -information, refer to {fleet-guide}/monitor-elastic-agent.html[Monitor {agent} in {fleet}]. - -* Refer to the {fleet-guide}/fleet-troubleshooting.html[troubleshooting guide]. - -For information about managing {agent} in {fleet}, -refer to {fleet-guide}/manage-agents-in-fleet.html[Centrally manage {agent}s in {fleet}]. - -// end::agent-enroll[] diff --git a/docs/en/ingest-management/elastic-agent/install-on-kubernetes-using-helm.asciidoc b/docs/en/ingest-management/elastic-agent/install-on-kubernetes-using-helm.asciidoc deleted file mode 100644 index af4a63e94..000000000 --- a/docs/en/ingest-management/elastic-agent/install-on-kubernetes-using-helm.asciidoc +++ /dev/null @@ -1,24 +0,0 @@ -[[install-on-kubernetes-using-helm]] -= Install {agent} on Kubernetes using Helm - -preview::[] - -Starting with {stack} version 8.16, a Helm chart is available for installing {agent} in a Kubernetes environment. 
A Helm-based install offers several advantages, including simplified deployment, availability in marketplaces, streamlined upgrades, and quick rollbacks whenever they're needed. - -Features of the Helm-based {agent} install include: - -* Support for both standalone and {fleet}-managed {agent}. -* For standalone agents, a built-in Kubernetes policy similar to that available in {fleet} for {fleet}-managed agents. -* Support for custom integrations. -* Support for {es} outputs with authentication through username and password, an API key, or a stored secret. -* Easy switching between privileged (`root`) and unprivileged {agent} deployments. -* Support for {stack} deployments on {eck}. - -For detailed install steps, try one of our walk-through examples: - -* <> -* <> - -NOTE: The {agent} Helm chart is currently available from inside the link:https://github.com/elastic/elastic-agent[elastic/elastic-agent] GitHub repo. The chart is planned to be made available from the Elastic Helm repository soon. - -You can also find details about the Helm chart, including all available YAML settings and descriptions, in the link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent[{agent} Helm Chart Readme]. Several link:https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent/examples[examples] are available if you'd like to explore other use cases. \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/install-standalone-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/install-standalone-elastic-agent.asciidoc deleted file mode 100644 index 721288f8f..000000000 --- a/docs/en/ingest-management/elastic-agent/install-standalone-elastic-agent.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ -[[install-standalone-elastic-agent]] -= Install standalone {agent}s - -To run an {agent} in standalone mode, install the agent and manually configure -the agent locally on the system where it’s installed.
You are responsible for -managing and upgrading the agents. This approach is recommended for advanced -users only. - -We recommend using <>, -when possible, because it makes the management and -upgrade of your agents considerably easier. - -IMPORTANT: Standalone agents are unable to upgrade to new integration package -versions automatically. When you upgrade the integration in {kib}, you'll -need to update the standalone policy manually. - -NOTE: You can install only a single {agent} per host. - -{agent} can monitor the host where it's deployed, and it can collect and forward -data from remote services and hardware where direct deployment is not possible. - -To install and run {agent} standalone: - -. On your host, download and extract the installation package. -+ --- -// tag::install-elastic-agent[] - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/download-widget.asciidoc[] - -// end::install-elastic-agent[] --- -+ -The commands shown are for AMD platforms, but ARM packages are also available. -Refer to the {agent} https://www.elastic.co/downloads/elastic-agent[downloads page] -for the full list of available packages. - -. Modify settings in the `elastic-agent.yml` file as required. -+ -To get started quickly and avoid errors, use {kib} to create and download a -standalone configuration file rather than trying to build it by hand. For more -information, refer to <>. -+ -For additional configuration options, refer to <>. - -. In the `elastic-agent.yml` policy file, under `outputs`, specify an API key -or user credentials for the {agent} to access {es}. For example: -+ -[source,yaml] ----- -[...] -outputs: - default: - type: elasticsearch - hosts: - - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443' - api_key: _Nj4oH0aWZVGqM7MGop8:349p_U1ERHyIc4Nm8_AYkw <1> -[...] ----- -+ -For more information about required privileges and creating API keys, see -<>.
Make sure the assets you need, such as dashboards and ingest pipelines, are -set up in {kib} and {es}. If you used {kib} to generate the standalone -configuration, the assets are set up automatically. Otherwise, you need to -install them. For more information, refer to <> and -<>. - -. From the agent directory, run the following commands to install {agent} -and start it as a service. -+ -NOTE: On macOS, Linux (tar package), and Windows, run the `install` command to -install {agent} as a managed service and start the service. The DEB and RPM -packages include a service unit for Linux systems with -systemd, so just enable and then start the service. -+ --- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/run-standalone-widget.asciidoc[] --- - -Refer to <> for the location of installed {agent} files. - -Because {agent} is installed as an auto-starting service, it will restart -automatically if the system is rebooted. - -If you run into problems, refer to <>. diff --git a/docs/en/ingest-management/elastic-agent/installation-layout.asciidoc deleted file mode 100644 index 447794474..000000000 --- a/docs/en/ingest-management/elastic-agent/installation-layout.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[installation-layout]] -= Installation layout - -{agent} files are installed in the following locations.
- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/install-layout-widget.asciidoc[] diff --git a/docs/en/ingest-management/elastic-agent/otel-agent-transform.asciidoc deleted file mode 100644 index fb1c90017..000000000 --- a/docs/en/ingest-management/elastic-agent/otel-agent-transform.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -[[otel-agent-transform]] -== Transform an installed {agent} to run as an OTel Collector - -preview::[] - -If you have a currently installed standalone {agent}, it can be configured to run as an <>. This allows you to run {agent} both as a service and in OTel Collector mode. - -To configure an installed standalone {agent} to run as an OTel Collector, include a valid <> configuration in the `elastic-agent.yml` file, as shown in the following example. - -=== Example: configure {agent} to ingest host logs and metrics into Elasticsearch using the OTel Collector - -**Prerequisites** - -You'll need the following: - -. A suitable <> for authenticating with Elasticsearch -. An installed standalone {agent} -. A valid OTel Collector configuration. In this example we'll use the OTel sample configuration included in the {agent} repository: `otel_samples/platformlogs_hostmetrics.yml`. -** link:https://github.com/elastic/elastic-agent/blob/main/internal/pkg/otel/samples/linux/platformlogs_hostmetrics.yml[Linux version] -** link:https://github.com/elastic/elastic-agent/blob/main/internal/pkg/otel/samples/darwin/platformlogs_hostmetrics.yml[macOS version] - -**Steps** - -To change a running standalone {agent} to run as an OTel Collector: - -. Create a directory where the OTel Collector can save its state. In this example we use `<{agent} install directory>/data/otelcol`. -. Open the `<{agent} install directory>/otel_samples/platformlogs_hostmetrics.yml` file for editing. -. 
Set environment details to be used by OTel Collector: -* **Option 1:** Define environment variables for the {agent} service: -** `ELASTIC_ENDPOINT`: The URL of the {es} instance where data will be sent -** `ELASTIC_API_KEY`: The API Key to use to authenticate with {es} -** `STORAGE_DIR`: The directory where the OTel Collector can persist its state -* **Option 2:** Replace the environment variable references in the sample configuration with the corresponding values: -** `${env:ELASTIC_ENDPOINT}`: The URL of the {es} instance where data will be sent -** `${env:ELASTIC_API_KEY}`: The API Key to use to authenticate with {es} -** `${env:STORAGE_DIR}`: The directory where the OTel Collector can persist its state -. Save the opened OTel configuration as `elastic-agent.yml`, overwriting the default configuration of the installed agent. -. Run the `elastic-agent status` command to verify that the new configuration has been correctly applied: -+ -[source,shell] ----- -elastic-agent status ----- -The OTel Collector running configuration should appear under the `elastic-agent` key (note the `extensions` and `pipeline` keys): -+ -[source,shell] ----- -┌─ fleet -│ └─ status: (STOPPED) Not enrolled into Fleet -└─ elastic-agent - ├─ status: (HEALTHY) Running - ├─ extensions - │ ├─ status: StatusOK - │ └─ extension:file_storage - │ └─ status: StatusOK - ├─ pipeline:logs/platformlogs - │ ├─ status: StatusOK - │ ├─ exporter:elasticsearch/otel - │ │ └─ status: StatusOK - │ ├─ processor:resourcedetection - │ │ └─ status: StatusOK - │ └─ receiver:filelog/platformlogs - │ └─ status: StatusOK - └─ pipeline:metrics/hostmetrics - ├─ status: StatusOK - ├─ exporter:elasticsearch/ecs - │ └─ status: StatusOK - ├─ processor:attributes/dataset - │ └─ status: StatusOK - ├─ processor:elasticinframetrics - │ └─ status: StatusOK - ├─ processor:resource/process - │ └─ status: StatusOK - ├─ processor:resourcedetection - │ └─ status: StatusOK - └─ receiver:hostmetrics/system - └─ status: StatusOK ----- -+ -. 
Congratulations! Host logs and metrics are now being collected and ingested by the {agent} service running an OTel Collector instance. -For further details about OpenTelemetry collector components supported by {agent}, refer to the link:https://github.com/elastic/elastic-agent/tree/main/internal/pkg/otel#components[Elastic Distribution for OpenTelemetry Collector README]. \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/otel-agent.asciidoc b/docs/en/ingest-management/elastic-agent/otel-agent.asciidoc deleted file mode 100644 index 177c73dc1..000000000 --- a/docs/en/ingest-management/elastic-agent/otel-agent.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[otel-agent]] -= Run {agent} as an OTel Collector - -preview::[] - -The link:https://opentelemetry.io/docs/collector/[OpenTelemetry Collector] is a vendor-neutral way to receive, process, and export telemetry data. {agent} includes an embedded OTel Collector, enabling you to instrument your applications and infrastructure once, and send data to multiple vendors and backends. - -When you run {agent} in `otel` mode it supports the standard OTel Collector configuration format that defines a set of receivers, processors, exporters, and connectors. Logs, metrics, and traces can be ingested using OpenTelemetry data formats. - -For a full overview and steps to configure {agent} in `otel` mode, including a guided onboarding, refer to the link:https://github.com/elastic/opentelemetry/tree/main[Elastic Distributions for OpenTelemetry] repository in GitHub. You can also check the <> in the {fleet} and {agent} Command reference. - -If you have a currently running {agent} you can <>. 
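With a collector configuration in place, the embedded collector is started with the `otel` subcommand. A minimal sketch, assuming an OTel Collector configuration saved as `otel.yml` in the current directory (the `--config` flag follows the standard OTel Collector convention):

```shell
sudo elastic-agent otel --config otel.yml
```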
diff --git a/docs/en/ingest-management/elastic-agent/run-container-common/agent-tolerations.asciidoc b/docs/en/ingest-management/elastic-agent/run-container-common/agent-tolerations.asciidoc
deleted file mode 100644
index b2e5f65e9..000000000
--- a/docs/en/ingest-management/elastic-agent/run-container-common/agent-tolerations.asciidoc
+++ /dev/null
@@ -1,15 +0,0 @@
-Kubernetes control plane nodes can use https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[taints] to limit the workloads that can run on them. The manifest for standalone {agent} defines tolerations so that it can run on these nodes. Agents running on control plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes. To prevent {agent} from running on control plane nodes, remove the following part of the DaemonSet spec:
-
-[source,yaml]
------------------------------------------------
-spec:
-  # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
-  # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
-  tolerations:
-    - key: node-role.kubernetes.io/control-plane
-      effect: NoSchedule
-    - key: node-role.kubernetes.io/master
-      effect: NoSchedule
-----------------------------------------------
-
-Both tolerations have the same effect, but `node-role.kubernetes.io/master` is https://kubernetes.io/docs/reference/labels-annotations-taints/#node-role-kubernetes-io-master-taint[deprecated as of Kubernetes version v1.25].
\ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/run-container-common/deploy-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/run-container-common/deploy-elastic-agent.asciidoc deleted file mode 100644 index bd29d19b6..000000000 --- a/docs/en/ingest-management/elastic-agent/run-container-common/deploy-elastic-agent.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -To deploy {agent} to Kubernetes, run: - -["source", "sh", subs="attributes"] ------------------------------------------------- -kubectl create -f {manifest} ------------------------------------------------- - -To check the status, run: - -["source", "sh", subs="attributes"] ------------------------------------------------- -$ kubectl -n kube-system get pods -l app=elastic-agent -NAME READY STATUS RESTARTS AGE -elastic-agent-4665d 1/1 Running 0 81m -elastic-agent-9f466c4b5-l8cm8 1/1 Running 0 81m -elastic-agent-fj2z9 1/1 Running 0 81m -elastic-agent-hs4pb 1/1 Running 0 81m ------------------------------------------------- - -[TIP] -.Running {agent} on a read-only file system -==== -If you'd like to run {agent} on Kubernetes on a read-only file -system, you can do so by specifying the `readOnlyRootFilesystem` option. -==== diff --git a/docs/en/ingest-management/elastic-agent/run-container-common/deploy-kube-state-metrics.asciidoc b/docs/en/ingest-management/elastic-agent/run-container-common/deploy-kube-state-metrics.asciidoc deleted file mode 100644 index f3644c776..000000000 --- a/docs/en/ingest-management/elastic-agent/run-container-common/deploy-kube-state-metrics.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -You need to deploy `kube-state-metrics` to get the metrics about the state of the objects on the cluster (see the https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment[Kubernetes deployment] docs). 
You can do that by first downloading the project:
-+
-["source", "sh", subs="attributes"]
------------------------------------------------
-gh repo clone kubernetes/kube-state-metrics
------------------------------------------------
-+
-And then deploying it:
-+
-["source", "sh", subs="attributes"]
------------------------------------------------
-kubectl apply -k kube-state-metrics
------------------------------------------------
-+
-WARNING: On managed Kubernetes solutions, such as AKS, GKE or EKS, {agent} does not have the required permissions to collect metrics from https://kubernetes.io/docs/concepts/overview/components/#control-plane-components[Kubernetes control plane] components, like `kube-scheduler` and `kube-controller-manager`. Audit logs are only available on Kubernetes control plane nodes as well, and hence cannot be collected by {agent}. Refer to the https://docs.elastic.co/en/integrations/kubernetes#scheduler-and-controllermanager[Kubernetes integration documentation] for more information. For more information about specific cloud providers, refer to <>, <> and <>.
diff --git a/docs/en/ingest-management/elastic-agent/run-container-common/download-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/run-container-common/download-elastic-agent.asciidoc
deleted file mode 100644
index ddc77ef2b..000000000
--- a/docs/en/ingest-management/elastic-agent/run-container-common/download-elastic-agent.asciidoc
+++ /dev/null
@@ -1,39 +0,0 @@
-NOTE: You can find {agent} Docker images https://www.docker.elastic.co/r/elastic-agent/elastic-agent[here].
-
-Download the manifest file:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-curl -L -O {manifest}
------------------------------------------------
-
-NOTE: You might need to adjust https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[resource limits] of the {agent} container in the manifest.
Container resource usage depends on the number of data streams and the environment size. - -This manifest includes the Kubernetes integration to collect Kubernetes metrics and System integration to collect system level metrics and logs from nodes. - -The {agent} is deployed as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet] -to ensure that there is a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes. These metrics are accessed through the deployed `kube-state-metrics`. Notice that everything is deployed under the `kube-system` namespace by default. To change the namespace, modify the manifest file. - -Moreover, one of the Pods in the DaemonSet will constantly hold a _leader lock_ which makes it responsible for -handling cluster-wide monitoring. You can find more information about leader election configuration options at <>. The leader pod will retrieve metrics that are unique for the whole cluster, such as Kubernetes events or https://github.com/kubernetes/kube-state-metrics[kube-state-metrics]. -ifeval::["{show-condition}"=="enabled"] -We make sure that these metrics are retrieved from the leader pod by applying the following <> in the manifest, before declaring the data streams with these metricsets: - -[source,yaml] ------------------------------------------------- -... -inputs: - - id: kubernetes-cluster-metrics - condition: ${kubernetes_leaderelection.leader} == true - type: kubernetes/metrics - # metricsets with the state_ prefix and the metricset event -... ------------------------------------------------- -endif::[] - -For Kubernetes Security Posture Management (KSPM) purposes, the {agent} requires read access to various types of Kubernetes resources, node processes, and files. 
-To achieve this, read permissions are granted to the {agent} for the necessary resources, and volumes from the hosting node's file system are mounted to allow accessibility to the {agent} pods.
-
-TIP: In a large Kubernetes cluster, the Pod that collects cluster-level metrics might require more runtime resources than you would like to dedicate to every Pod in the DaemonSet. If it is under-resourced, the leader that collects the cluster-wide metrics may run into performance issues. In this case, consider avoiding a single DaemonSet with the leader election strategy, and instead run a dedicated standalone {agent} instance for collecting cluster-wide metrics using a Deployment, in addition to the DaemonSet that collects metrics for each node. Both the Deployment and the DaemonSet can then be resourced independently and appropriately. For more information check the <> page.
-
-
diff --git a/docs/en/ingest-management/elastic-agent/run-container-common/kibana-fleet-data.asciidoc b/docs/en/ingest-management/elastic-agent/run-container-common/kibana-fleet-data.asciidoc
deleted file mode 100644
index 2b5dbb49d..000000000
--- a/docs/en/ingest-management/elastic-agent/run-container-common/kibana-fleet-data.asciidoc
+++ /dev/null
@@ -1,15 +0,0 @@
-. Launch {kib}:
-+
---
-include::{observability-docs-root}/docs/en/shared/open-kibana/widget.asciidoc[]
---
-
-. To check if your {agent} is enrolled in {fleet}, go to *Management -> {fleet} -> Agents*.
-+
-[role="screenshot"]
-image::images/kibana-fleet-agents.png[{agent}s {fleet} page]
-
-. To view data flowing in, go to *Analytics -> Discover* and select the index `metrics-*`, or the more specific `metrics-kubernetes.*`. If you can't see these indexes, {kibana-ref}/data-views.html[create a data view] for them.
-
-. To view predefined dashboards, either select *Analytics->Dashboard* or <>.
-
diff --git a/docs/en/ingest-management/elastic-agent/running-on-aks-managed-by-fleet.asciidoc b/docs/en/ingest-management/elastic-agent/running-on-aks-managed-by-fleet.asciidoc
deleted file mode 100644
index 7959b3101..000000000
--- a/docs/en/ingest-management/elastic-agent/running-on-aks-managed-by-fleet.asciidoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[[running-on-aks-managed-by-fleet]]
-= Run {agent} on Azure AKS managed by {fleet}
-
-Follow the steps on the <> page to run {agent}.
-
-[discrete]
-== Important notes:
-
-On managed Kubernetes solutions like AKS, {agent} has no access to several data sources. The following data is not available:
-
-1. Metrics from https://kubernetes.io/docs/concepts/overview/components/#control-plane-components[Kubernetes control plane]
-components. Consequently, metrics are not available for the `kube-scheduler` and `kube-controller-manager` components,
-and the respective **dashboards** are not populated with data.
-2. **Audit logs**, which are available only on Kubernetes control plane nodes and therefore cannot be collected by {agent}.
-3. The fields `orchestrator.cluster.name` and `orchestrator.cluster.url`, which are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for the default Kubernetes dashboards shipped with the https://docs.elastic.co/en/integrations/kubernetes[Kubernetes integration].
-+
-In this regard, you can use the https://www.elastic.co/guide/en/beats/filebeat/current/add-fields.html[`add_fields` processor] to add the `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each component of the https://docs.elastic.co/en/integrations/kubernetes[Kubernetes integration]:
-+
-[source,yaml]
-.Processors configuration
-----------------------------------------------
-- add_fields:
-    target: orchestrator.cluster
-    fields:
-      name: clusterName
-      url: clusterURL
-----------------------------------------------
diff --git a/docs/en/ingest-management/elastic-agent/running-on-eks-managed-by-fleet.asciidoc b/docs/en/ingest-management/elastic-agent/running-on-eks-managed-by-fleet.asciidoc
deleted file mode 100644
index bbc8afaf8..000000000
--- a/docs/en/ingest-management/elastic-agent/running-on-eks-managed-by-fleet.asciidoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[[running-on-eks-managed-by-fleet]]
-= Run {agent} on Amazon EKS managed by {fleet}
-
-Follow the steps on the <> page to run {agent}.
-
-[discrete]
-== Important notes:
-
-On managed Kubernetes solutions like EKS, {agent} has no access to several data sources. The following data is not available:
-
-1. Metrics from https://kubernetes.io/docs/concepts/overview/components/#control-plane-components[Kubernetes control plane]
-components. Consequently, metrics are not available for the `kube-scheduler` and `kube-controller-manager` components,
-and the respective **dashboards** are not populated with data.
-2. **Audit logs**, which are available only on Kubernetes control plane nodes and therefore cannot be collected by {agent}.
-3. The fields `orchestrator.cluster.name` and `orchestrator.cluster.url`, which are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for the default Kubernetes dashboards shipped with the https://docs.elastic.co/en/integrations/kubernetes[Kubernetes integration].
-+
-In this regard, you can use the https://www.elastic.co/guide/en/beats/filebeat/current/add-fields.html[`add_fields` processor] to add the `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each component of the https://docs.elastic.co/en/integrations/kubernetes[Kubernetes integration]:
-+
-[source,yaml]
-.Processors configuration
-----------------------------------------------
-- add_fields:
-    target: orchestrator.cluster
-    fields:
-      name: clusterName
-      url: clusterURL
-----------------------------------------------
diff --git a/docs/en/ingest-management/elastic-agent/running-on-gke-managed-by-fleet.asciidoc b/docs/en/ingest-management/elastic-agent/running-on-gke-managed-by-fleet.asciidoc
deleted file mode 100644
index 1e68817a0..000000000
--- a/docs/en/ingest-management/elastic-agent/running-on-gke-managed-by-fleet.asciidoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[[running-on-gke-managed-by-fleet]]
-= Run {agent} on GKE managed by {fleet}
-
-Follow the steps on the <> page to run {agent}.
-
-[discrete]
-== Important notes:
-
-On managed Kubernetes solutions like GKE, {agent} has no access to several data sources. The following data is not available:
-
-1. Metrics from https://kubernetes.io/docs/concepts/overview/components/#control-plane-components[Kubernetes control plane] components. Consequently, metrics are not available for the `kube-scheduler` and `kube-controller-manager`
-components, and the respective **dashboards** are not populated with data.
-2. **Audit logs**, which are available only on Kubernetes control plane nodes and therefore cannot be collected by {agent}.
-
-== Autopilot GKE
-
-Although Autopilot removes many administration challenges for Kubernetes clusters (such as workload management and deployment automation), it also restricts access to specific namespaces (for example, `kube-system`) and host paths, which is why the default {agent} manifests do not work.
-
-Specific manifests are provided to cover **https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-gke-autopilot.md[Autopilot environments]**.
-
-> `kube-state-metrics` must also be installed in a namespace other than the default, as access to `kube-system` is not allowed.
-
-== Additional Resources:
-
-- Blog https://www.elastic.co/blog/elastic-observe-gke-autopilot-clusters[Using Elastic to observe GKE Autopilot clusters]
-- Elastic speakers webinar: https://www.elastic.co/virtual-events/get-full-kubernetes-visibility-into-gke-autopilot-with-elastic-observability["Get full Kubernetes visibility into GKE Autopilot with Elastic Observability"]
-
diff --git a/docs/en/ingest-management/elastic-agent/running-on-kubernetes-managed-by-fleet.asciidoc b/docs/en/ingest-management/elastic-agent/running-on-kubernetes-managed-by-fleet.asciidoc
deleted file mode 100644
index a578c0438..000000000
--- a/docs/en/ingest-management/elastic-agent/running-on-kubernetes-managed-by-fleet.asciidoc
+++ /dev/null
@@ -1,103 +0,0 @@
-[[running-on-kubernetes-managed-by-fleet]]
-= Run {agent} on Kubernetes managed by {fleet}
-
-:manifest: https://raw.githubusercontent.com/elastic/elastic-agent/{branch}/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
-
-
-== What you need
-
-- https://kubernetes.io/docs/tasks/tools/[kubectl installed].
-
-- {es} for storing and searching your data, and {kib} for visualizing and managing it.
-+
---
-include::{observability-docs-root}/docs/en/shared/spin-up-the-stack/widget.asciidoc[]
---
-
-- `kube-state-metrics`.
-+
-include::run-container-common/deploy-kube-state-metrics.asciidoc[]
-
-
-[discrete]
-== Step 1: Download the {agent} manifest
-
-include::run-container-common/download-elastic-agent.asciidoc[]
-
-
-[discrete]
-== Step 2: Configure {agent} policy
-
-The {agent} needs to be assigned to a policy to enable the proper inputs. To achieve Kubernetes observability, the policy needs to include the Kubernetes integration.
-Refer to <> and <> to learn how to configure the https://docs.elastic.co/en/integrations/kubernetes[Kubernetes integration].
-
-[discrete]
-== Step 3: Enroll {agent} to the policy
-
-Enrolling an {agent} means registering that specific agent with a running {fleet-server}.
-
-{agent} is enrolled to a running {fleet-server} by using the `FLEET_URL` parameter. Additionally, the `FLEET_ENROLLMENT_TOKEN` parameter is used to connect {agent} to a specific {agent} policy.
-
-A new `FLEET_ENROLLMENT_TOKEN` is created when a new policy is created, and is inserted into the {agent} manifest during the guided installation.
-
-For more information, refer to https://www.elastic.co/guide/en/fleet/current/fleet-enrollment-tokens.html[Enrollment Tokens].
-
-To specify a different destination or credentials, change the following parameters in the manifest file:
-
-[source,yaml]
-----------------------------------------------
-- name: FLEET_URL
-  value: "https://fleet-server_url:port" <1>
-- name: FLEET_ENROLLMENT_TOKEN
-  value: "token" <2>
-- name: FLEET_SERVER_POLICY_ID
-  value: "fleet-server-policy" <3>
-- name: KIBANA_HOST
-  value: "" <4>
-- name: KIBANA_FLEET_USERNAME
-  value: "" <5>
-- name: KIBANA_FLEET_PASSWORD
-  value: "" <6>
-----------------------------------------------
-<1> URL to enroll the {fleet-server} into. You can find it in {kib}. Select *Management -> {fleet} -> Fleet Settings*, and copy the {fleet-server} host URL.
-<2> The token to use for enrollment. Close the flyout panel and select *Enrollment tokens*. Find the Agent policy you created before to enroll {agent} into, and display and copy the secret token.
-<3> The policy ID for {fleet-server} to use on itself.
-<4> The {kib} host.
-<5> The basic authentication username used to connect to {kib} and retrieve a `service_token` to enable {fleet}.
-<6> The basic authentication password used to connect to {kib} and retrieve a `service_token` to enable {fleet}.
-
-If you need to run {fleet-server} as well, adjust the manifest above by adding these environment variables:
-
-[source,yaml]
-----------------------------------------------
-- name: FLEET_SERVER_ENABLE
-  value: "true" <1>
-- name: FLEET_SERVER_ELASTICSEARCH_HOST
-  value: "" <2>
-- name: FLEET_SERVER_SERVICE_TOKEN
-  value: "" <3>
-----------------------------------------------
-<1> Set to `true` to bootstrap {fleet-server} on this {agent}. This automatically forces {fleet} enrollment as well.
-<2> The Elasticsearch host for Fleet Server to communicate with, for example `http://elasticsearch:9200`.
-<3> Service token to use for communication with {es} and {kib}.
-
-Refer to <> for all available options.
-
-[discrete]
-== Step 4: Configure tolerations
-include::run-container-common/agent-tolerations.asciidoc[]
-
-:manifest: elastic-agent-managed-kubernetes.yaml
-[discrete]
-== Step 5: Deploy the {agent}
-include::run-container-common/deploy-elastic-agent.asciidoc[]
-
-[discrete]
-== Step 6: View your data in {kib}
-
-include::run-container-common/kibana-fleet-data.asciidoc[]
-
-
-
-:manifest:
diff --git a/docs/en/ingest-management/elastic-agent/running-on-kubernetes-standalone.asciidoc b/docs/en/ingest-management/elastic-agent/running-on-kubernetes-standalone.asciidoc
deleted file mode 100644
index 012afbe7d..000000000
--- a/docs/en/ingest-management/elastic-agent/running-on-kubernetes-standalone.asciidoc
+++ /dev/null
@@ -1,175 +0,0 @@
-[[running-on-kubernetes-standalone]]
-= Run {agent} Standalone on Kubernetes
-
-:manifest: https://raw.githubusercontent.com/elastic/elastic-agent/v{bare_version}/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml
-:show-condition: enabled
-
-== What you need
-
-- https://kubernetes.io/docs/tasks/tools/[kubectl installed].
-
-- {es} for storing and searching your data, and {kib} for visualizing and managing it.
-+
---
-include::{observability-docs-root}/docs/en/shared/spin-up-the-stack/widget.asciidoc[]
---
-
-- `kube-state-metrics`.
-+
-include::run-container-common/deploy-kube-state-metrics.asciidoc[]
-
-[discrete]
-== Step 1: Download the {agent} manifest
-
-
-include::run-container-common/download-elastic-agent.asciidoc[]
-
-
-[discrete]
-== Step 2: Connect to the {stack}
-
-Set the {es} settings before deploying the manifest:
-
-[source,yaml]
-----------------------------------------------
-- name: ES_USERNAME
-  value: "elastic" <1>
-- name: ES_PASSWORD
-  value: "passpassMyStr0ngP@ss" <2>
-- name: ES_HOST
-  value: "https://somesuperhostiduuid.europe-west1.gcp.cloud.es.io:9243" <3>
-----------------------------------------------
-
-<1> The basic authentication username used to connect to {es}.
-<2> The basic authentication password used to connect to {es}.
-<3> The {es} host to communicate with.
-
-Refer to <> for all available options.
-
-
-[discrete]
-== Step 3: Configure tolerations
-include::run-container-common/agent-tolerations.asciidoc[]
-
-:manifest: elastic-agent-standalone-kubernetes.yaml
-[discrete]
-== Step 4: Deploy the {agent}
-include::run-container-common/deploy-elastic-agent.asciidoc[]
-
-[discrete]
-== Step 5: View your data in {kib}
-
-
-. Launch {kib}:
-+
---
-include::{observability-docs-root}/docs/en/shared/open-kibana/widget.asciidoc[]
---
-
-. You can see data flowing in by going to *Analytics -> Discover* and selecting the index `metrics-*`, or the more specific `metrics-kubernetes.*`. If you can't see these indexes, {kibana-ref}/data-views.html[create a data view] for them.
-
-. You can see predefined dashboards by selecting *Analytics->Dashboard*, or by <>.
-
-
-[discrete]
-= Red Hat OpenShift configuration
-
-If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged.
-
-. 
In the manifest file, modify the `agent-node-datastreams` ConfigMap and adjust inputs: -+ --- -* `kubernetes-cluster-metrics` input: -** If `https` is used to access `kube-state-metrics`, add the following settings to all `kubernetes.state_*` datasets: -+ -[source,yaml] ------------------------------------------------- - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - ssl.certificate_authorities: - - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt ------------------------------------------------- -* `kubernetes-node-metrics` input: -** Change the `kubernetes.controllermanager` data stream condition to: -+ -[source,yaml] ------------------------------------------------- -condition: ${kubernetes.labels.app} == 'kube-controller-manager' ------------------------------------------------- -** Change the `kubernetes.scheduler` data stream condition to: -+ -[source,yaml] ------------------------------------------------- -condition: ${kubernetes.labels.app} == 'openshift-kube-scheduler' ------------------------------------------------- -** The `kubernetes.proxy` data stream configuration should look like: -+ -[source,yaml] ------------------------------------------------- -- data_stream: - dataset: kubernetes.proxy - type: metrics - metricsets: - - proxy - hosts: - - 'localhost:29101' - period: 10s ------------------------------------------------- -** Add the following settings to all data streams that connect to `https://${env.NODE_NAME}:10250`: -+ -[source,yaml] ------------------------------------------------- - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - ssl.certificate_authorities: - - /path/to/ca-bundle.crt ------------------------------------------------- -NOTE: `ca-bundle.crt` can be any CA bundle that contains the issuer of the certificate used in the Kubelet API. -According to each specific installation of OpenShift this can be found either in `secrets` or in `configmaps`. 
-In some installations it can be available as part of the service account secret, in -`/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt`. -When using the https://github.com/openshift/installer/blob/master/docs/user/gcp/install.md[OpenShift installer] -for GCP, mount the following `configmap` in the elastic-agent pod and use `ca-bundle.crt` -in `ssl.certificate_authorities`: -+ -[source,shell] ------ -Name: kubelet-serving-ca -Namespace: openshift-kube-apiserver -Labels: -Annotations: - -Data -==== -ca-bundle.crt: ------ --- -. Grant the `elastic-agent` service account access to the privileged SCC: -+ -[source,shell] ------ -oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:elastic-agent ------ -+ -This command enables the container to be privileged as an administrator for -OpenShift. - -. If the namespace where elastic-agent is running has the `"openshift.io/node-selector"` annotation set, elastic-agent -might not run on all nodes. In this case consider overriding the node selector for the namespace to allow scheduling -on any node: -+ -[source,shell] ----- -oc patch namespace kube-system -p \ -'{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' ----- -+ -This command sets the node selector for the project to an empty string. - - -[discrete] -== Autodiscover targeted Pods - -Refer to <> for more information. - - -:manifest: -:show-condition: \ No newline at end of file diff --git a/docs/en/ingest-management/elastic-agent/scaling-on-kubernetes.asciidoc b/docs/en/ingest-management/elastic-agent/scaling-on-kubernetes.asciidoc deleted file mode 100644 index 143319654..000000000 --- a/docs/en/ingest-management/elastic-agent/scaling-on-kubernetes.asciidoc +++ /dev/null @@ -1,298 +0,0 @@ -[[scaling-on-kubernetes]] -= Scaling {agent} on {k8s} - -For more information on how to deploy {agent} on {k8s}, please review these pages: - -- <>. -- <>. 
-
-[discrete]
-== Observability at scale
-
-This document summarizes some key factors and best practices for using {estc-welcome-current}/getting-started-kubernetes.html[Elastic {observability}] to monitor {k8s} infrastructure at scale. Users need to consider different parameters and adjust the {stack} accordingly. These elements are affected as the size of the {k8s} cluster increases:
-
-- The amount of metrics collected from several {k8s} endpoints
-- The {agent} resources needed to cope with the high CPU and memory requirements of the internal processing
-- The {es} resources needed due to the higher rate of metric ingestion
-- Dashboard visualization response times, as more data is requested for a given time window
-
-The document is divided into two main sections:
-
-- <>
-- <>
-
-[discrete]
-[[configuration-practices]]
-== Configuration Best Practices
-
-[discrete]
-=== Configure Agent Resources
-
-The {k8s} {observability} is based on the https://docs.elastic.co/en/integrations/kubernetes[Elastic {k8s} integration], which collects metrics from several components:
-
-* **Per node:**
-** kubelet
-** controller-manager
-** scheduler
-** proxy
-* **Cluster wide (such as unique metrics for the whole cluster):**
-** kube-state-metrics
-** apiserver
-
-The controller manager and scheduler data streams are enabled, based on autodiscovery rules, only on the nodes where those components actually run.
-
-The default manifest deploys {agent} as a DaemonSet, which results in an {agent} being deployed on every node of the {k8s} cluster.
-
-Additionally, by default one agent is elected as **leader** (for more information visit <>). The {agent} Pod which holds the leadership lock is responsible for collecting the cluster-wide metrics in addition to its node's metrics.
-
---
-[role="screenshot"]
-image::images/k8sscaling.png[{agent} as daemonset]
---
-
-The above schema explains how {agent} collects and sends metrics to {es}.
Because the leader agent is also responsible for collecting cluster-level metrics, it requires additional resources.
-
-The DaemonSet deployment approach with leader election simplifies the installation of {agent} because it requires fewer {k8s} resources in the manifest and only a single Agent policy for all Agents. Hence it is the default supported method for <>
-
-
-[discrete]
-=== Specifying resources and limits in Agent manifests
-
-The resourcing of your Pods and their scheduling priority (check section <>) are two topics that might be affected as the {k8s} cluster size increases.
-The increasing demand for resources might result in under-resourced Elastic Agents in your cluster.
-
-Based on our tests, we advise configuring only the `limits` section of the `resources` section in the manifest. In this way, the `requests` settings fall back to the specified `limits`. The `limits` value is the upper bound for your microservice process: the process can operate with fewer resources, and {k8s} is protected from excessive resource assignment and possible resource exhaustion.
-
-[source,yaml]
-----------------------------------------------
-resources:
-  limits:
-    cpu: "1500m"
-    memory: "800Mi"
-----------------------------------------------
-
-
-Based on our https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-scaling-tests.md[{agent} Scaling tests], the following table provides guidelines to adjust {agent} limits on different {k8s} sizes:
-
-Sample Elastic Agent Configurations:
-|===
-| No of Pods in K8s Cluster | Leader Agent Resources | Rest of Agents
-| 1000 | cpu: "1500m", memory: "800Mi" | cpu: "300m", memory: "600Mi"
-| 3000 | cpu: "2000m", memory: "1500Mi" | cpu: "400m", memory: "800Mi"
-| 5000 | cpu: "3000m", memory: "2500Mi" | cpu: "500m", memory: "900Mi"
-| 10000 | cpu: "3000m", memory: "3600Mi" | cpu: "700m", memory: "1000Mi"
-|===
-
-> The above tests were performed with {agent} version 8.7 and a scraping period of `10sec` (the period setting for the {k8s} integration). These numbers are only indicators and should be validated for each specific {k8s} environment and workload.
-
-[discrete]
-=== Proposed Agent Installations for large scale
-
-Although the DaemonSet installation is simple, it cannot accommodate the varying agent resource requirements that depend on the metrics being collected. The need for appropriate resource assignment at large scale requires more granular installation methods.
-
-{agent} deployment is broken into groups as follows:
-
-- A dedicated {agent} deployment of a single Agent for collecting cluster-wide metrics from the apiserver
-
-- Node-level {agent}s (no leader Agent) in a DaemonSet
-
-- kube-state-metrics shards and {agent}s in the StatefulSet defined in the kube-state-metrics autosharding manifest
-
-Each of these groups of {agent}s has its own policy specific to its function and can be resourced independently in the appropriate manifest to accommodate its specific resource requirements.
-
-These resource-assignment needs lead to alternative installation methods.
-
-IMPORTANT: The main suggestion for large scale clusters *is to install {agent} as a side container along with a `kube-state-metrics` shard*. The installation is explained in detail in https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes#kube-state-metrics-ksm-in-autosharding-configuration[{agent} with Kustomize in Autosharding]
-
-The following **alternative configuration methods** have been verified:
-
-1. With `hostNetwork:false`
-  - {agent} as a side container within the KSM shard pod
-  - For non-leader {agent} deployments that collect per KSM shard
-2. With `taint/tolerations` to isolate the {agent} DaemonSet pods from the rest of the deployments
-
-You can find more information in the document https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md[{agent} Manifests in order to support Kube-State-Metrics Sharding].
-
-Based on our https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-scaling-tests.md[{agent} scaling tests], the following table helps you configure KSM sharding as the {k8s} cluster scales:
-|===
-| No of Pods in K8s Cluster | No of KSM Shards | Agent Resources
-| 1000 | No Sharding can be handled with default KSM config | limits: memory: 700Mi , cpu:500m
-| 3000 | 4 Shards | limits: memory: 1400Mi , cpu:1500m
-| 5000 | 6 Shards | limits: memory: 1400Mi , cpu:1500m
-| 10000 | 8 Shards | limits: memory: 1400Mi , cpu:1500m
-|===
-
-> The tests above were performed with {agent} version 8.8 + TSDB enabled and a scraping period of `10sec` (for the {k8s} integration). These numbers are only indicators and should be validated per {k8s} policy configuration, along with the applications that the {k8s} cluster might include
-
-NOTE: Tests have been run with up to 10,000 pods per cluster. Scaling to a larger number of pods might require additional configuration on the {k8s} side and from Cloud Providers, but the basic idea of installing {agent} while horizontally scaling KSM remains the same.
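For example, following the table above, a resource-limits sketch for the {agent}s in a 3000-pod cluster running 4 KSM shards might look like this (the values are indicative only and should be validated in your environment):

[source,yaml]
------------------------------------------------
resources:
  limits:
    cpu: "1500m"
    memory: "1400Mi"
------------------------------------------------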
-
-[discrete]
-[[agent-scheduling]]
-=== Agent Scheduling
-
-Assigning {agent} a lower priority than other pods might leave {agent} pods in a Pending state, because the scheduler preempts (evicts) lower priority Pods to make scheduling of higher priority pending Pods possible.
-
-To prioritize the agent installation ahead of the rest of the application microservices, use the https://github.com/elastic/elastic-agent/blob/main/docs/manifests/elastic-agent-managed-gke-autopilot.yaml#L8-L16[suggested PriorityClasses].
-
-[discrete]
-=== {k8s} Package Configuration
-
-The policy configuration of the {k8s} package can heavily affect the amount of metrics collected and ultimately ingested. Consider the following factors to make your collection and ingestion lighter:
-
-- Scraping period of {k8s} endpoints
-- Disabling log collection
-  - Keep audit logs disabled
-- Disable the events dataset
-- Disable {k8s} control plane datasets in Cloud-managed {k8s} instances (for more information see the <>, <>, <> pages)
-
-
-[discrete]
-=== Dashboards and Visualisations
-
-The https://github.com/elastic/integrations/blob/main/docs/dashboard_guidelines.md[Dashboard Guidelines] document provides guidance on how to implement your dashboards and is constantly updated to track the needs of Observability at scale.
-
-User experience with dashboard response times is also affected by the size of the data being requested. Because dashboards can contain multiple visualisations, the general recommendation is to split visualisations and group them according to how frequently they are accessed. Fewer visualisations per dashboard tend to improve the user experience.
-
-[discrete]
-=== Disabling indexing host.ip and host.mac fields
-
-A new environment variable, `ELASTIC_NETINFO: false`, has been introduced to globally disable the indexing of `host.ip` and `host.mac` fields in your {k8s} integration. For more information see <>.
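For example, the variable can be set in the {agent} container spec of your manifest. Only the relevant fragment is shown here; the rest of the container definition is elided:

```yaml
# Fragment of the elastic-agent container spec (sketch):
# ELASTIC_NETINFO: "false" disables indexing of host.ip and host.mac.
containers:
  - name: elastic-agent
    env:
      - name: ELASTIC_NETINFO
        value: "false"
```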
-
-Setting this to `false` is recommended for large scale setups, where the index size of the `host.ip` and `host.mac` fields grows. The number of IPs and MAC addresses reported increases significantly as a Kubernetes cluster grows. This leads to considerably increased indexing time, as well as extra storage and additional overhead for visualization rendering.
-
-
-[discrete]
-=== Elastic Stack Configuration
-
-The configuration of the Elastic Stack needs to be taken into consideration in large scale deployments. For Elastic Cloud deployments, the choice of the https://www.elastic.co/guide/en/cloud/current/ec-getting-started-profiles.html[{ecloud} hardware profile] is important.
-
-For heavy processing and high ingestion rates, the `CPU-optimised` profile is recommended.
-
-[discrete]
-[[validation-and-troubleshooting-practices]]
-== Validation and Troubleshooting practices
-
-[discrete]
-=== Verify that agents are collecting as expected
-
-After deploying {agent}, verify that the agent services are healthy and not restarting (stability), and that the collection of metrics continues at the expected rate (latency).
-
-**For stability:**
-
-If {agent} is configured as managed, in {kib} you can observe the agent status under **Fleet > Agents**.
-
---
-[role="screenshot"]
-image::images/kibana-fleet-agents.png[{agent} Status]
---
-
-Additionally, you can verify the process status with the following commands:
-
-[source,bash]
------------------------------------------------
-kubectl get pods -A | grep elastic
-kube-system   elastic-agent-ltzkf   1/1   Running   0   25h
-kube-system   elastic-agent-qw6f4   1/1   Running   0   25h
-kube-system   elastic-agent-wvmpj   1/1   Running   0   25h
------------------------------------------------
-
-Find the leader agent:
-
-[source,bash]
------------------------------------------------
-kubectl get leases -n kube-system | grep elastic
-NAME                           HOLDER                                     AGE
-elastic-agent-cluster-leader   elastic-agent-leader-elastic-agent-qw6f4   25h
------------------------------------------------
-
-Exec into the leader agent and verify the process status:
-
-[source,bash]
------------------------------------------------
-kubectl exec -ti -n kube-system elastic-agent-qw6f4 -- bash
-root@gke-gke-scaling-gizas-te-default-pool-6689889a-sz02:/usr/share/elastic-agent# ./elastic-agent status
-State: HEALTHY
-Message: Running
-Fleet State: HEALTHY
-Fleet Message: (no message)
-Components:
-  * kubernetes/metrics  (HEALTHY)
-                        Healthy: communicating with pid '42423'
-  * filestream          (HEALTHY)
-                        Healthy: communicating with pid '42431'
-  * filestream          (HEALTHY)
-                        Healthy: communicating with pid '42443'
-  * beat/metrics        (HEALTHY)
-                        Healthy: communicating with pid '42453'
-  * http/metrics        (HEALTHY)
-                        Healthy: communicating with pid '42462'
------------------------------------------------
-
-A common problem as the {k8s} cluster grows is that the agent process restarts due to a lack of CPU/memory resources. In the agent logs you might see messages like:
-
-[source,json]
------------------------------------------------
-kubectl logs -n kube-system elastic-agent-qw6f4 | grep "kubernetes/metrics"
-[output truncated ...]
- -(HEALTHY->STOPPED): Suppressing FAILED state due to restart for '46554' exited with code '-1'","log":{"source":"elastic-agent"},"component":{"id":"kubernetes/metrics-default","state":"STOPPED"},"unit":{"id":"kubernetes/metrics-default-kubernetes/metrics-kube-state-metrics-c6180794-70ce-4c0d-b775-b251571b6d78","type":"input","state":"STOPPED","old_state":"HEALTHY"},"ecs.version":"1.6.0"} -{"log.level":"info","@timestamp":"2023-04-03T09:33:38.919Z","log.origin":{"file.name":"coordinator/coordinator.go","file.line":861},"message":"Unit state changed kubernetes/metrics-default-kubernetes/metrics-kube-apiserver-c6180794-70ce-4c0d-b775-b251571b6d78 (HEALTHY->STOPPED): Suppressing FAILED state due to restart for '46554' exited with code '-1'","log":{"source":"elastic-agent"} - ------------------------------------------------- - -You can verify the instant resource consumption by running `top pod` command and identify if agents are close to the limits you have specified in your manifest. - -[source,bash] ------------------------------------------------- -kubectl top pod -n kube-system | grep elastic -NAME CPU(cores) MEMORY(bytes) -elastic-agent-ltzkf 30m 354Mi -elastic-agent-qw6f4 67m 467Mi -elastic-agent-wvmpj 27m 357Mi ------------------------------------------------- - -[discrete] -=== Verify Ingestion Latency - -{kib} Discovery can be used to identify frequency of your metrics being ingested. - -Filter for Pod dataset: --- -[role="screenshot"] -image::images/pod-latency.png[{k8s} Pod Metricset] --- - -Filter for State_Pod dataset --- -[role="screenshot"] -image::images/state-pod.png[{k8s} State Pod Metricset] --- - -Identify how many events have been sent to {es}: - -[source,bash] ------------------------------------------------- -kubectl logs -n kube-system elastic-agent-h24hh -f | grep -i state_pod -[ouptut truncated ...] 
- -"state_pod":{"events":2936,"success":2936} ------------------------------------------------- - -The number of events denotes the number of documents that should be depicted inside {kib} Discovery page. - -> For eg, in a cluster with 798 pods, then 798 docs should be depicted in block of ingestion inside {kib} - - -[discrete] -=== Define if {es} is the bottleneck of ingestion - -In some cases maybe the {es} can not cope with the rate of data that are trying to be ingested. In order to verify the resource utilisation, installation of an {ref}/monitoring-overview.html[{stack} monitoring cluster] is advised. - -Additionally, in {ecloud} deployments you can navigate to *Manage Deployment > Deployments > Monitoring > Performance*. -Corresponding dashboards for `CPU Usage`, `Index Response Times` and `Memory Pressure` can reveal possible problems and suggest vertical scaling of {stack} resources. - -== Relevant links - -- {estc-welcome-current}/getting-started-kubernetes.html[Monitor {k8s} Infrastructure] -- https://www.elastic.co/blog/kubernetes-cluster-metrics-logs-monitoring[Blog: Managing your {k8s} cluster with Elastic {observability}] diff --git a/docs/en/ingest-management/elastic-agent/start-stop-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/start-stop-elastic-agent.asciidoc deleted file mode 100644 index 0a6270505..000000000 --- a/docs/en/ingest-management/elastic-agent/start-stop-elastic-agent.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[start-stop-elastic-agent]] -= Start and stop {agent}s on edge hosts - -You can start and stop the {agent} service on the host where it's running, and -it will no longer send data to {es}. 
- -[discrete] -[[start-elastic-agent-service]] -== Start {agent} -If you've stopped the {agent} service and want to restart it, use the commands -that work with your system: - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/start-widget.asciidoc[] - -[discrete] -[[stop-elastic-agent-service]] -== Stop {agent} - -To stop {agent} and its related executables, stop the {agent} service. Use the -commands that work with your system: - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/stop-widget.asciidoc[] diff --git a/docs/en/ingest-management/elastic-agent/uninstall-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/uninstall-elastic-agent.asciidoc deleted file mode 100644 index 38510dc16..000000000 --- a/docs/en/ingest-management/elastic-agent/uninstall-elastic-agent.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -[[uninstall-elastic-agent]] -= Uninstall {agent}s from edge hosts - -[discrete] -== Uninstall on macOS, Linux, and Windows - -To uninstall {agent}, run the `uninstall` command from the directory where -{agent} is running. - -[IMPORTANT] -==== -Be sure to run the `uninstall` command from a directory outside of where {agent} is installed. - -For example, on a Windows system the install location is `C:\Program Files\Elastic\Agent`. Run the uninstall command from `C:\Program Files\Elastic` or `\tmp`, or even your default home directory: - -[source,shell] ----- -C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall ----- - -==== - --- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/uninstall-widget.asciidoc[] - --- - -Follow the prompts to confirm that you want to uninstall {agent}. The command -stops and uninstalls any managed programs, such as {beats} and -{elastic-endpoint}, before it stops and uninstalls {agent}. - -If you run into problems, refer to <>. - -If you are using DEB or RPM, you can use the package manager to remove the -installed package. 
- -NOTE: For hosts enrolled in the {elastic-defend} integration with Agent tamper protection enabled, you'll need to include the uninstall token in the command, using the `--uninstall-token` flag. Refer to the {security-guide}/agent-tamper-protection.html[Agent tamper protection docs] for more information. - -[discrete] -== Remove {agent} files manually - -You might need to remove {agent} files manually if there's a failure during -installation. - -To remove {agent} manually from your system: - -. <> if it's managed by {fleet}. - -. For standalone agents, back up any configuration files you want to preserve. - -. On your host, <>. If any {agent}-related -processes are still running, stop them too. -+ -TIP: Search for these processes and stop them if they're still running: -`filebeat`, `metricbeat`, `fleet-server`, and `elastic-endpoint`. - -. Manually remove the {agent} files from your system. For example, if you're -running {agent} on macOS, delete `/Library/Elastic/Agent/*`. Not sure where the -files are installed? Refer to <>. - -. If you've configured the {elastic-defend} integration, also remove the files -installed for endpoint protection. The directory structure is similar to {agent}, -for example, `/Library/Elastic/Endpoint/*`. -+ -NOTE: When you remove the {elastic-defend} integration from a macOS host -(10.13, 10.14, or 10.15), the Endpoint System Extension is left on disk -intentionally. If you want to remove the extension, refer to the documentation -for your operating system. diff --git a/docs/en/ingest-management/elastic-agent/upgrade-standalone-elastic-agent.asciidoc b/docs/en/ingest-management/elastic-agent/upgrade-standalone-elastic-agent.asciidoc deleted file mode 100644 index 4fe6e2402..000000000 --- a/docs/en/ingest-management/elastic-agent/upgrade-standalone-elastic-agent.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -[[upgrade-standalone]] -= Upgrade standalone {agent}s - -To upgrade a standalone agent running on an edge node: - -. 
Make sure the `elastic-agent` service is running.
-. From the directory where {agent} is installed, run the `upgrade` command to
-upgrade to a new version. Not sure where the agent is
-installed? Refer to <>.
-+
-For example, on macOS, to upgrade the agent from version 8.8.0 to 8.8.1, you
-would run:
-+
-[source,shell]
-----
-cd /Library/Elastic/Agent/
-sudo elastic-agent upgrade 8.8.1
-----
-
-This command upgrades the binary. Your agent policy should continue to work,
-but you might need to upgrade it to use new features and capabilities.
-
-For more command-line options, see the help for the
-<> command.
-
-
-[[upgrade-standalone-air-gapped]]
-== Upgrading standalone {agent} in an air-gapped environment
-
-The basic upgrade scenario should work for most use cases. However, in an air-gapped environment {agent} is not able to access the {artifact-registry} at `artifacts.elastic.co` directly.
-
-As an alternative, you can do one of the following:
-
-* <> for standalone {agent} to access the {artifact-registry}.
-* <> for standalone {agent} to access binary downloads.
-
-In addition, starting from version 8.9.0, during the upgrade process {agent} needs to download a PGP/GPG key. Refer to <> for the steps to configure the key download location in an air-gapped environment.
-
-Refer to <> for more details.
-
-[[upgrade-standalone-verify-package]]
-== Verifying {agent} package signatures
-
-Standalone {agent} verifies each package that it downloads using publicly available SHA hash and .asc PGP signature files. The SHA file is used to verify that the package has not been modified, and the .asc file is used to verify that the package was released by Elastic. For this purpose, the Elastic public GPG key is embedded in {agent} itself.
-
-At times, the Elastic private GPG key may need to be rotated, either due to key expiry or because the private key has been exposed. In this case, standalone {agent} upgrades can fail because the embedded public key no longer works.
-
-In the event of a private GPG key rotation, you can use the following options with the <> command to either skip the verification process (not recommended) or force {agent} to use a new public key for verification:
-
-`--skip-verify`::
-Skip the package verification process. This option is not recommended as it is insecure.
-+
-Example:
-+
-[source,shell,subs="attributes"]
-----
-./elastic-agent upgrade 8.8.0 --skip-verify
-----
-
-`--pgp-path <path>`::
-Use a locally stored copy of the PGP key at `<path>` to verify the upgrade package.
-+
-Example:
-+
-[source,shell,subs="attributes"]
-----
-./elastic-agent upgrade 8.8.0 --pgp-path /home/elastic-agent/GPG-KEY-elasticsearch
-----
-
-`--pgp-uri <uri>`::
-Use the online PGP key at the specified URI to verify the upgrade package.
-+
-Example:
-+
-[source,shell,subs="attributes"]
-----
-./elastic-agent upgrade 8.7.0-SNAPSHOT --pgp-uri "https://artifacts.elastic.co/GPG-KEY-elasticsearch"
-----
-
-Under the basic upgrade scenario, standalone {agent} automatically fetches the most current public key. However, in an air-gapped environment, or in the event that the {artifact-registry} is otherwise inaccessible, these commands can be used instead.
-
-
diff --git a/docs/en/ingest-management/fleet-agent-proxy-host-variables.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-host-variables.asciidoc
deleted file mode 100644
index 233444302..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-host-variables.asciidoc
+++ /dev/null
@@ -1,92 +0,0 @@
-[[host-proxy-env-vars]]
-= Proxy Server connectivity using default host variables
-
-Set environment variables on the host to configure default proxy settings.
-The {agent} uses host environment settings by default if no proxy settings are
-specified elsewhere. You can override host proxy settings later when you
-configure the {agent} and {fleet} settings. 
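For example, on a Linux host you might export defaults like the following before the service starts. The proxy hosts, ports, and the `NO_PROXY` list are illustrative values, not required settings:

```shell
# Illustrative default proxy settings; replace hosts and ports with your own.
export HTTP_PROXY="http://my.proxy:8080"
export HTTPS_PROXY="https://my.proxy:8443"
# Bypass the proxy for local and internal addresses (patterns are supported).
export NO_PROXY="localhost,127.0.0.1,.internal.example.com"
```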
The following environment variables
-are available on the host:
-
-|===
-|Variable |Description
-
-|`HTTP_PROXY`
-|URL of the proxy server for HTTP traffic.
-
-|`HTTPS_PROXY`
-|URL of the proxy server for HTTPS traffic.
-
-|`NO_PROXY`
-|IP addresses or domain names that should not use the proxy. Supports patterns.
-|===
-
-The proxy URL can be either a complete URL or `host[:port]`, in which case the `http`
-scheme is assumed. An error is returned if the value is in any other format.
-
-[discrete]
-[[where-to-set-proxy-env-vars]]
-== Where to set proxy environment variables
-
-The location where you set these environment variables is platform-specific and
-based on the system manager you're using. Here are some examples to get you
-started. For more information about setting environment variables, refer to the
-documentation for your operating system.
-
-// lint ignore agent
-* For Windows services, set environment variables for the service in
-the Windows registry.
-+
-This PowerShell command sets the
-`HKLM\SYSTEM\CurrentControlSet\Services\Elastic Agent\Environment` registry
-key, then restarts {agent}:
-+
-[source,powershell]
-----
-$environment = [string[]]@(
-  "HTTPS_PROXY=https://proxy-hostname:proxy-port",
-  "HTTP_PROXY=http://proxy-hostname:proxy-port"
-  )
-
-Set-ItemProperty "HKLM:SYSTEM\CurrentControlSet\Services\Elastic Agent" -Name Environment -Value $environment
-
-Restart-Service "Elastic Agent"
-----
-
-* For Linux services, the location depends on the distribution you're using.
-For example, you can set environment variables in:
-
-** `/etc/systemd/system/elastic-agent.service` for systems that use `systemd` to
-manage the service.
To edit the file, run:
-+
-[source,shell]
-----
-sudo systemctl edit --full elastic-agent.service
-----
-+
-Then add the environment variables under the `[Service]` section:
-+
-[source,shell]
-----
-[Service]
-
-Environment="HTTPS_PROXY=https://my.proxy:8443"
-Environment="HTTP_PROXY=http://my.proxy:8080"
-----
-
-** `/etc/sysconfig/elastic-agent` for Red Hat-like distributions that don't use
-`systemd`.
-
-** `/etc/default/elastic-agent` for Debian and Ubuntu distributions that don't
-use `systemd`.
-+
-For example:
-+
-[source,shell]
-----
-HTTPS_PROXY=https://my.proxy:8443
-HTTP_PROXY=http://my.proxy:8080
-----
-
-After adding the environment variables, restart the service.
-
-NOTE: If you use a proxy server to download new agent versions from `artifacts.elastic.co` for upgrading, configure <>.
diff --git a/docs/en/ingest-management/fleet-agent-proxy-managed.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-managed.asciidoc
deleted file mode 100644
index aaee58bf8..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-managed.asciidoc
+++ /dev/null
@@ -1,184 +0,0 @@
-[[fleet-agent-proxy-managed]]
-= Fleet managed {agent} connectivity using a proxy server
-
-Proxy settings in the {agent} policy override proxy settings specified by environment variables. This means you can specify proxy settings for {agent} that are different from host or system-level environment settings.
-
-This page describes where a proxy server is allowed in your deployment and how to configure proxy settings for {agent} and {fleet}. The steps for deploying the proxy server itself are beyond the scope of this article.
-
-{agents} generally egress two sets of connections: control plane traffic to the {fleet-server}, and data plane traffic to an output such as {es}. Operators can place {agent} behind a proxy server and have both the control plane and data plane traffic proxied to their final destinations. 
-
-{fleet} central management enables you to define your proxy servers and then configure an output or the {fleet-server} to be reachable through any of these proxies. This also enables you to modify the proxy server details if needed without having to re-install {agents}.
-
-image::images/agent-proxy-server-managed-deployment.png[Image showing connections between {fleet} managed {agent}, {fleet-server}, and {es}]
-
-In this scenario, {fleet-server} and {es} are deployed in {ecloud} and reachable on port 443.
-
-
-[[fleet-agent-proxy-server-managed-agents]]
-== Configuring proxy servers in {fleet} for managed agents
-
-These steps describe how to set up {fleet} components to use a proxy.
-
-. **Globally add proxy server details to {fleet}.**
-
-.. In {fleet}, open the **Settings** tab.
-.. Select **Add proxy**. The **Add proxy** or **Edit proxy** flyout opens.
-+
-image::images/elastic-agent-proxy-edit-proxy.png[Screen capture of the Edit Proxy UI in Fleet]
-+
-.. Add a name for the proxy (in this example `Proxy-A`) and specify the Proxy URL.
-.. Add any other optional settings.
-.. Select **Save and apply settings**. The proxy information is saved, and that proxy is ready for other components in {fleet} to reference.
-
-. **Attach the proxy to a {fleet-server}.**
-+
-If the control plane traffic to and from the {fleet-server} also needs to go through the proxy server, the proxy you created needs to be added to the definition of that {fleet-server}.
-
-.. In {fleet}, open the **Settings** tab.
-.. In the list of **Fleet Server Hosts**, choose a host and select the edit button to configure it.
-.. In the **Proxy** section dropdown list, select the proxy that you configured.
-+
-image::images/elastic-agent-proxy-edit-fleet-server.png[Screen capture of the Edit Fleet Server UI]
-+
-In this example, all the {agents} in a policy that uses this {fleet-server} will now connect to the {fleet-server} through the proxy server defined in `Proxy-A`. 
-
-+
-====
-[WARNING]
-Any invalid changes to the {fleet-server} definition that cause connectivity issues between the {agents} and the {fleet-server} will cause the agents to disconnect. The only remedy is to re-install the affected agents. This is because connectivity to the {fleet-server} ensures that policy updates reach the agents. If a policy with an invalid host address reaches an agent, the agent will no longer be able to connect and therefore won't receive any further updates from the {fleet-server} (including the corrected setting). In this regard, adding a proxy server that is not reachable by the agents will break connectivity to the {fleet-server}.
-====
-+
-. **Attach the proxy to the output**
-+
-Similarly, if the data plane traffic to an output is to traverse a proxy, that proxy definition needs to be added to the output defined in {fleet}.
-
-.. In {fleet}, open the **Settings** tab.
-.. In the list of **Outputs**, choose an output and select the edit button to configure it.
-.. In the **Proxy** section dropdown list, select the proxy that you configured.
-+
-image::images/elastic-agent-proxy-edit-output.png[Screen capture of the Edit output UI in Fleet]
-+
-In this example, all the {agents} in a policy that is configured to write to the chosen output will now write to that output through the proxy server defined in `Proxy-A`.
-
-+
-====
-[WARNING]
-If agents are unable to reach the configured proxy server, they will not be able to write data to the output that has the proxy server configured. When changing the proxy of an output, ensure that the affected agents all have connectivity to the proxy itself.
-====
-+
-. **Attach the proxy to the agent download source**
-+
-Likewise, if the download traffic to or from the artifact registry needs to go through the proxy server, that proxy definition also needs to be added to the agent binary source defined in {fleet}.
-
-.. In {fleet}, open the **Settings** tab.
-.. 
In the **Agent Binary Download** list, choose an agent binary source and select the edit button to configure it. -.. In the **Proxy** section dropdown list, select the proxy that you configured. -+ -image::images/elastic-agent-proxy-edit-agent-binary-source.png[Screen capture of the Edit agent binary source UI in Fleet] -+ -In this example, all of the {agents} enrolled in a policy that is configured to download from the chosen agent download source will now download from that agent download source through the proxy server defined in `Proxy-A`. - -+ -==== -[WARNING] -If agents are unable to reach the configured proxy server, they will not be able to download binaries from the agent download source that has the proxy server configured. When changing the proxy of an agent binary source, please ensure that the affected agents all have connectivity to the proxy itself. -==== -+ -. **Configure the {agent} policy** -+ -You can now configure the {agent} policy to use the {fleet-server} and the outputs that are reachable through a proxy server. - -** If the policy is configured with a {fleet-server} that has a proxy attached to it, all the control plane traffic from the agents in that policy will reach the {fleet-server} through that proxy. -** Similarly, if the output definition has a proxy attached to it, all the agents in that policy will write (data plane) to the output through the proxy. -+ -. **Enroll the {agents}** -+ -Now that {fleet} is configured, all policy downloads will update the agent with the latest configured proxies. When the agent is first installed it needs to communicate with {fleet} (through {fleet-server}) in order to download its first policy configuration. - -[discrete] -[[cli-proxy-settings]] -== Set the proxy for retrieving agent policies from {fleet} - -If there is a proxy between {agent} and {fleet}, specify proxy settings on the -command line when you install {agent} and enroll in {fleet}. 
The settings you
-specify at the command line are added to the `fleet.yml` file installed on the
-system where the {agent} is running.
-
-NOTE: If the initial agent communication with {fleet} (that is, the control plane) needs to traverse the proxy server, the agent needs to be configured to do so using the `--proxy-url` command line flag, which is applied during the agent installation. Once connectivity to {fleet} is established, proxy server details can be managed through the UI.
-
-NOTE: If {kib} is behind a proxy server, you'll still need to
-<> to access the package registry.
-
-The `enroll` and `install` commands accept the following flags:
-
-|===
-| CLI flag | Description
-
-|`--proxy-url <url>`
-|URL of the proxy server. The value may be either a complete URL or a
-`host[:port]`, in which case the `http` scheme is assumed. The URL accepts optional
-username and password settings for authenticating with the proxy. For example:
-`http://<username>:<password>@<proxy host>/`.
-
-|`--proxy-disabled`
-|If specified, all proxy settings, including the `HTTP_PROXY` and `HTTPS_PROXY`
-environment variables, are ignored.
-
-|`--proxy-header <name>=<value>`
-|Additional header to send to the proxy during CONNECT requests. Use the
-`--proxy-header` flag multiple times to add additional headers. You can use
-this setting to pass keys/tokens required for authenticating with the proxy.
-
-|===
-
-For example:
-
-[source,sh]
-----
-elastic-agent install --url="https://10.0.1.6:8220" --enrollment-token=TOKEN --proxy-url="http://10.0.1.7:3128" --fleet-server-es-ca="/usr/local/share/ca-certificates/es-ca.crt" --certificate-authorities="/usr/local/share/ca-certificates/fleet-ca.crt"
-----
-
-The command in the previous example adds the following settings to the
-`fleet.yml` policy on the host where {agent} is installed:
-
-[source,yaml]
-----
-fleet:
-  enabled: true
-  access_api_key: API-KEY
-  hosts:
-  - https://10.0.1.6:8220
-  ssl:
-    verification_mode: ""
-    certificate_authorities:
-    - /usr/local/share/ca-certificates/es-ca.crt
-    renegotiation: never
-  timeout: 10m0s
-  proxy_url: http://10.0.1.7:3128
-  reporting:
-    threshold: 10000
-    check_frequency_sec: 30
-agent:
-  id: ""
-----
-
-NOTE: When {agent} runs, the `fleet.yml` file gets encrypted and renamed to `fleet.enc`.
-
-[[fleet-agent-proxy-server-secure-gateway]]
-== {agent} connectivity using a secure proxy gateway
-
-Many secure proxy gateways are configured to perform mutual TLS (mTLS) and expect all connections to present their certificate. In these instances the client (in this case {agent}) needs to present a certificate and key to the server (the secure proxy). In return, the client expects to see a certificate authority chain from the server to ensure it is also communicating with a trusted entity.
-
-image::images/elastic-agent-proxy-gateway-secure.png[Image showing data flow between the proxy server and the Certificate Authority]
-
-If mTLS is a requirement when connecting to your proxy server, then you have the option to add the client certificate and client certificate key to the proxy. 
Once configured, all the Elastic Agents in a policy that connect to this secure proxy (via an output or {fleet-server}) will use the nominated certificates to establish connections to the proxy server.
-
-Note that you can define a local path to the certificate and key, because in many common scenarios the certificate and key are unique for each Elastic Agent.
-
-Equally important is the certificate authority that the agents need to use to validate the certificate they receive from the secure proxy server. This can also be added when creating the proxy definition in the {fleet} settings.
-
-image::images/elastic-agent-edit-proxy-secure-settings.png[Screen capture of the Edit Proxy UI, highlighting the Certificate and Certificate key settings]
-
-NOTE: Currently {agents} will not present a certificate for control plane traffic to the {fleet-server}. Some proxy servers are set up to mandate that a client setting up a connection presents a certificate to them before allowing that client to connect. This issue will be resolved by link:https://github.com/elastic/elastic-agent/issues/2248[issue #2248]. Until then, our recommendation is to avoid adding such a secure proxy in the {fleet-server} configuration flyout.
-
-NOTE: In case {kib} is behind a proxy server or is otherwise unable to access the {package-registry} to download package metadata and content, refer to <>.
\ No newline at end of file
diff --git a/docs/en/ingest-management/fleet-agent-proxy-package-registry.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-package-registry.asciidoc
deleted file mode 100644
index adc07fd0f..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-package-registry.asciidoc
+++ /dev/null
@@ -1,26 +0,0 @@
-[[epr-proxy-setting]]
-= Set the proxy URL of the {package-registry}
-
-{fleet} might be unable to access the {package-registry} because {kib} is
-behind a proxy server. 
-
-Your organization might also have network traffic restrictions that prevent {kib}
-from reaching the public {package-registry} (EPR) endpoints, like
-https://epr.elastic.co/[epr.elastic.co], to download package metadata and
-content. You can route traffic to the public endpoint of EPR through a network
-gateway, then configure proxy settings in the
-{kibana-ref}/fleet-settings-kb.html[{kib} configuration file], `kibana.yml`. For
-example:
-
-[source,yaml]
-----
-xpack.fleet.registryProxyUrl: your-nat-gateway.corp.net
-----
-
-If your HTTP proxy requires authentication, you can include the
-credentials in the URI, such as `https://username:password@your-nat-gateway.corp.net`.
-Note that credentials are supported only when using HTTPS.
-
-== What information is sent to the {package-registry}?
-
-In production environments, {kib}, through the {fleet} plugin, is the only service interacting with the {package-registry}. Communication happens when interacting with the Integrations UI, and when upgrading {kib}. The shared information is about discovery of Elastic packages and their available versions. In general, the only deployment-specific data that is shared is the {kib} version.
diff --git a/docs/en/ingest-management/fleet-agent-proxy-standalone.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-standalone.asciidoc
deleted file mode 100644
index 01bf2fc6b..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-standalone.asciidoc
+++ /dev/null
@@ -1,53 +0,0 @@
-[[fleet-agent-proxy-standalone]]
-= Standalone {agent} connectivity using a proxy server
-
-Proxy settings in the {agent} policy override proxy settings specified by
-environment variables. This means you can specify proxy settings for {agent}
-that are different from host or system-level environment settings.
-
-The following proxy settings are valid in the agent policy:
-
-|===
-|Setting | Description
-
-|`proxy_url`
-| (string) URL of the proxy server. 
If set, the configured URL is used as a
proxy for all connection attempts by the component. The value may be either a
complete URL or a `host[:port]`, in which case the `http` scheme is assumed. If
a value is not specified through the configuration, then proxy environment
variables are used. The URL accepts optional `username` and `password` settings
for authenticating with the proxy. For example:
`http://<username>:<password>@<proxy-host>/`.

|`proxy_headers`
| (string) Additional headers to send to the proxy during CONNECT requests. You
can use this setting to pass keys/tokens required for authenticating with the
proxy.

|`proxy_disable`
| (boolean) If set to `true`, all proxy settings, including the `HTTP_PROXY` and
`HTTPS_PROXY` environment variables, are ignored.

|===

[discrete]
=== Set the proxy for communicating with {es}

//To set the proxy for communicating with {es}, specify proxy settings under
//`outputs` in the agent policy. The procedure for doing this depends on
//whether you're running {fleet}-managed or standalone agents:

For standalone agents, to set the proxy for communicating with {es}, specify proxy settings in the `elastic-agent.yml` file. For example:

[source,yaml]
----
outputs:
  default:
    api_key: API-KEY
    hosts:
      - https://10.0.1.2:9200
    proxy_url: http://10.0.1.7:3128
    type: elasticsearch
----

For more information, refer to <>.
diff --git a/docs/en/ingest-management/fleet-agent-proxy-support.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-support.asciidoc
deleted file mode 100644
index 8f23e2aac..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-support.asciidoc
+++ /dev/null
@@ -1,30 +0,0 @@
[[fleet-agent-proxy-support]]
= Using a proxy server with {agent} and {fleet}

Many enterprises secure their assets by placing a proxy server between them and
the internet.
The main role of the proxy server is to filter content and provide
a single gateway through which all traffic traverses in and out of a data center.
These proxy servers provide varying degrees of functionality, security, and
privacy.

Your organization's security strategy and other considerations may require you
to use a proxy server between some components in your deployment. For example,
you may have a firewall rule that prevents endpoints from connecting directly to
{es}. In this scenario, you can set up the {agent} to connect to a proxy, then
the proxy can connect to {es} through the firewall.

Support is available in {agent} and {fleet} for connections through HTTP Connect
(HTTP 1 only) and SOCKS5 proxy servers.

NOTE: Some environments require users to authenticate with the proxy. There are
no explicit settings for proxy authentication in {agent} or {fleet}, except the
ability to pass credentials in the URL or as keys/tokens in headers, as
described later.

Refer to <> for more
detail, or jump into one of the following guides:

* <>
* <>
* <>

diff --git a/docs/en/ingest-management/fleet-agent-proxy-when-to-configure.asciidoc b/docs/en/ingest-management/fleet-agent-proxy-when-to-configure.asciidoc
deleted file mode 100644
index 9d24d40a4..000000000
--- a/docs/en/ingest-management/fleet-agent-proxy-when-to-configure.asciidoc
+++ /dev/null
@@ -1,19 +0,0 @@
[[elastic-agent-proxy-config]]
= When to configure proxy settings

Configure proxy settings for {agent} when it must connect through a proxy server
to:

* Download artifacts from `artifacts.elastic.co` for subprocesses or binary
upgrades (use <>)
* Send data to {es}
* Retrieve agent policies from {fleet-server}
* Retrieve agent policies from {es} (only needed for agents running {fleet-server})

image::images/agent-proxy-server.png[Image showing connections between {agent}, {fleet-server}, and {es}]

If {fleet} is unable to access the {package-registry} because {kib} is
-behind a proxy server, you may also need to set the registry proxy URL -in the {kib} configuration. - -image::images/fleet-epr-proxy.png[Image showing connections between {fleet} and the {package-registry}] diff --git a/docs/en/ingest-management/fleet/add-fleet-server-cloud.asciidoc b/docs/en/ingest-management/fleet/add-fleet-server-cloud.asciidoc deleted file mode 100644 index 05c233ec2..000000000 --- a/docs/en/ingest-management/fleet/add-fleet-server-cloud.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[[add-fleet-server-cloud]] -= Deploy on {ecloud} - -To use {fleet} for central management, a <> must -be running and accessible to your hosts. - -{fleet-server} can be provisioned and hosted on {ecloud}. When the Cloud deployment is created, -a highly available set of {fleet-server}s is provisioned automatically. - -This approach might be right for you if you want to reduce on-prem compute resources -and you'd like Elastic to take care of provisioning and life cycle management of -your deployment. - -With this approach, multiple {fleet-server}s are automatically provisioned to satisfy -the chosen instance size (instance sizes are modified to satisfy the scale requirement). -You can also choose the resources allocated to each {fleet-server} and whether you want -each {fleet-server} to be deployed in multiple availability zones. -If you choose multiple availability zones to address your fault-tolerance -requirements, those instances are also utilized to balance the load. - -This approach might _not_ be right for you if you have restrictions on connectivity -to the internet. - -image::images/fleet-server-cloud-deployment.png[{fleet-server} Cloud deployment model] - -[discrete] -[[fleet-server-compatibility]] -= Compatibility and prerequisites - -{fleet-server} is compatible with the following Elastic products: - -* {stack} 7.13 or later. 
-** For version compatibility, {es} must be at the same or a later version than {fleet-server}, and {fleet-server} needs to be at the same or a later version than {agent} (not including patch releases). -** {kib} should be on the same minor version as {es}. - -* {ece} 2.10 or later -+ --- -** Requires additional wildcard domains and certificates (which normally only -cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for -{fleet-server} of `https://.fleet.`. -** The deployment template must contain an {integrations-server} node. --- -+ -For more information about hosting {fleet-server} on {ece}, refer to -{ece-ref}/ece-manage-integrations-server.html[Manage your {integrations-server}]. - -NOTE: The TLS certificates used to secure connections between {agent} and -{fleet-server} are managed by {ecloud}. You do not need to create a private key -or generate certificates. - -When {es} or {fleet-server} are deployed, components communicate over well-defined, pre-allocated ports. -You may need to allow access to these ports. See the following table for default port assignments: - -|=== -| Component communication | Default port - -| Elastic Agent → {fleet-server} | 443 -| Elastic Agent → {es} | 443 -| Elastic Agent → Logstash | 5044 -| Elastic Agent → {kib} ({fleet}) | 443 -| {fleet-server} → {kib} ({fleet}) | 443 -| {fleet-server} → {es} | 443 -|=== - -NOTE: If you do not specify the port for {es} as 443, the {agent} defaults to 9200. - -[discrete] -[[add-fleet-server-cloud-set-up]] -= Setup - -To confirm that an {integrations-server} is available in your deployment: - -. Open {fleet}. -. On the **Agent policies** tab, look for the **{ecloud} agent policy**. This policy is -managed by {ecloud}, and contains a {fleet-server} integration and an Elastic -APM integration. You cannot modify the policy. Confirm that the agent status is -**Healthy**. - -[TIP] -==== -Don't see the agent? Make sure your deployment includes an -{integrations-server} instance. 
This instance is required to use {fleet}. - -[role="screenshot"] -image::images/integrations-server-hosted-container.png[Hosted {integrations-server}] -==== - -[discrete] -[[add-fleet-server-cloud-next]] -= Next steps - -Now you're ready to add {agent}s to your host systems. To learn how, see -<>. diff --git a/docs/en/ingest-management/fleet/add-fleet-server-kubernetes-content.asciidoc b/docs/en/ingest-management/fleet/add-fleet-server-kubernetes-content.asciidoc deleted file mode 100644 index b3358ceac..000000000 --- a/docs/en/ingest-management/fleet/add-fleet-server-kubernetes-content.asciidoc +++ /dev/null @@ -1,209 +0,0 @@ -// tag::quickstart-secret[] -The following command assumes you have the {es} CA available as a local file. -+ -[source, shell] ------------------------------------------------------------- -kubectl create secret generic fleet-server-ssl \ - --from-file=es-ca.crt= ------------------------------------------------------------- -+ --- -When running the command, substitute the following values: - -* `` with your local file containing the {es} CA(s). --- -+ -If you prefer to obtain a *yaml manifest* of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file. -// end::quickstart-secret[] - -// *************************************************** -// *************************************************** - -// tag::production-secret[] -The following command assumes you have the {es} CA and the {fleet-server} certificate, key and CA available as local files. 
-+ -[source, shell] ------------------------------------------------------------- -kubectl create secret generic fleet-server-ssl \ - --from-file=es-ca.crt= \ - --from-file=fleet-ca.crt= \ - --from-file=fleet-server.crt= \ - --from-file=fleet-server.key= \ - --from-literal=fleet_url='' ------------------------------------------------------------- -+ --- -When running the command, substitute the following values: - -* `` with your local file containing the {es} CA(s). -* `` with your local file containing the {fleet-server} CA. -* `` with your local file containing the server TLS certificate for the {fleet-server}. -* `` with your local file containing the server TLS key for the {fleet-server}. -* `` with the URL that points to the {fleet-server}, for example `https://fleet-svc`. This URL will be used by the {fleet-server} during its bootstrap, and its hostname must be included in the server certificate’s x509 Subject Alternative Name (SAN) list. --- -+ -If you prefer to obtain a *yaml manifest* of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file. 
-// end::production-secret[] - -// *************************************************** -// *************************************************** - -// tag::quickstart-deployment[] -["source","yaml",subs="attributes"] ------------------------------------------------------------- -apiVersion: v1 -kind: Service -metadata: - name: fleet-svc -spec: - type: ClusterIP - selector: - app: fleet-server - ports: - - port: 443 - protocol: TCP - targetPort: 8220 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: fleet-server -spec: - replicas: 1 - selector: - matchLabels: - app: fleet-server - template: - metadata: - labels: - app: fleet-server - spec: - automountServiceAccountToken: false - containers: - - name: elastic-agent - image: docker.elastic.co/beats/elastic-agent:{version} - env: - - name: FLEET_SERVER_ENABLE - value: "true" - - name: FLEET_SERVER_ELASTICSEARCH_HOST - valueFrom: - secretKeyRef: - name: fleet-server-config - key: elastic_endpoint - - name: FLEET_SERVER_SERVICE_TOKEN - valueFrom: - secretKeyRef: - name: fleet-server-config - key: elastic_service_token - - name: FLEET_SERVER_POLICY_ID - valueFrom: - secretKeyRef: - name: fleet-server-config - key: fleet_policy_id - - name: ELASTICSEARCH_CA - value: /mnt/certs/es-ca.crt - ports: - - containerPort: 8220 - protocol: TCP - resources: {} - volumeMounts: - - name: certs - mountPath: /mnt/certs - readOnly: true - volumes: - - name: certs - secret: - defaultMode: 420 - optional: false - secretName: fleet-server-ssl ------------------------------------------------------------- -// end::quickstart-deployment[] - -// *************************************************** -// *************************************************** - -// tag::production-deployment[] -["source","yaml",subs="attributes"] ------------------------------------------------------------- -apiVersion: v1 -kind: Service -metadata: - name: fleet-svc -spec: - type: ClusterIP - selector: - app: fleet-server - ports: - - port: 443 - protocol: 
TCP - targetPort: 8220 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: fleet-server -spec: - replicas: 1 - selector: - matchLabels: - app: fleet-server - template: - metadata: - labels: - app: fleet-server - spec: - automountServiceAccountToken: false - containers: - - name: elastic-agent - image: docker.elastic.co/beats/elastic-agent:{version} - env: - - name: FLEET_SERVER_ENABLE - value: "true" - - name: FLEET_SERVER_ELASTICSEARCH_HOST - valueFrom: - secretKeyRef: - name: fleet-server-config - key: elastic_endpoint - - name: FLEET_SERVER_SERVICE_TOKEN - valueFrom: - secretKeyRef: - name: fleet-server-config - key: elastic_service_token - - name: FLEET_SERVER_POLICY_ID - valueFrom: - secretKeyRef: - name: fleet-server-config - key: fleet_policy_id - - name: ELASTICSEARCH_CA - value: /mnt/certs/es-ca.crt - - name: FLEET_SERVER_CERT - value: /mnt/certs/fleet-server.crt - - name: FLEET_SERVER_CERT_KEY - value: /mnt/certs/fleet-server.key - - name: FLEET_CA - value: /mnt/certs/fleet-ca.crt - - name: FLEET_URL - valueFrom: - secretKeyRef: - name: fleet-server-ssl - key: fleet_url - - name: FLEET_SERVER_TIMEOUT - value: '60s' - - name: FLEET_SERVER_PORT - value: '8220' - ports: - - containerPort: 8220 - protocol: TCP - resources: {} - volumeMounts: - - name: certs - mountPath: /mnt/certs - readOnly: true - volumes: - - name: certs - secret: - defaultMode: 420 - optional: false - secretName: fleet-server-ssl ------------------------------------------------------------- -// end::production-deployment[] diff --git a/docs/en/ingest-management/fleet/add-fleet-server-kubernetes.asciidoc b/docs/en/ingest-management/fleet/add-fleet-server-kubernetes.asciidoc deleted file mode 100644 index 08d095bfc..000000000 --- a/docs/en/ingest-management/fleet/add-fleet-server-kubernetes.asciidoc +++ /dev/null @@ -1,449 +0,0 @@ -[[add-fleet-server-kubernetes]] -= Deploy {fleet-server} on Kubernetes - -[NOTE] -==== -If your {stack} is orchestrated by {eck-ref}[ECK], we 
recommend deploying {fleet-server} through the operator. That simplifies the process, as the operator automatically handles most of the resource configuration and setup steps.

Refer to {eck-ref}/k8s-elastic-agent-fleet.html[Run Fleet-managed {agent} on ECK] for more information.
====

[IMPORTANT]
====
This guide assumes familiarity with Kubernetes concepts and resources, such as `Deployments`, `Pods`, `Secrets`, or `Services`, as well as configuring applications in Kubernetes environments.
====

To use {fleet} for central management, a <> must
be running and accessible to your hosts.

You can deploy {fleet-server} on Kubernetes and manage it yourself.
In this deployment model, you are responsible for the high availability,
fault tolerance, and lifecycle management of the {fleet-server}.

To deploy a {fleet-server} on Kubernetes and register it into {fleet} you will need the following details:

* The *Policy ID* of a {fleet} policy configured with the {fleet-server} integration.
* A *Service token*, used to authenticate {fleet-server} with Elasticsearch.
* For outgoing traffic:
** The *{es} endpoint URL* that the {fleet-server} should connect to, also configured in the {es} output associated with the policy.
** When a private or intermediate Certificate Authority (CA) is used to sign the {es} certificate, the *{es} CA file* or the *CA fingerprint*, also configured in the {es} output associated with the policy.
* For incoming connections:
** A *TLS/SSL certificate and key* for the {fleet-server} HTTPS endpoint, used to encrypt the traffic from the {agent}s. This certificate has to be valid for the *{fleet-server} Host URL* that {agent}s use when connecting to the {fleet-server}.
* Extra TLS/SSL certificates and configuration parameters if you require <> (not covered in this document).
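The details in this list ultimately reach the {fleet-server} container as environment variables. As a quick orientation, here is a sketch of that mapping (the variable names are taken from the deployment manifests later in this guide; all values are placeholders):

[source,yaml]
------------------------------------------------------------
# Sketch: how each required detail maps to the Fleet Server
# container environment (placeholder values).
env:
  - name: FLEET_SERVER_ENABLE                # run this agent as a Fleet Server
    value: "true"
  - name: FLEET_SERVER_ELASTICSEARCH_HOST    # Elasticsearch endpoint URL
    value: "https://elasticsearch.example.internal:9200"
  - name: FLEET_SERVER_SERVICE_TOKEN         # service token for authentication
    value: "REPLACE_WITH_SERVICE_TOKEN"
  - name: FLEET_SERVER_POLICY_ID             # policy with the Fleet Server integration
    value: "REPLACE_WITH_POLICY_ID"
  - name: ELASTICSEARCH_CA                   # only for private/intermediate CAs
    value: /mnt/certs/es-ca.crt
  - name: FLEET_SERVER_CERT                  # TLS certificate and key for the HTTPS endpoint
    value: /mnt/certs/fleet-server.crt
  - name: FLEET_SERVER_CERT_KEY
    value: /mnt/certs/fleet-server.key
------------------------------------------------------------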
- -This document walks you through the complete setup process, organized into the following sections: - -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[add-fleet-server-kubernetes-compatibility]] -== Compatibility - -{fleet-server} is compatible with the following Elastic products: - -* {stack} 7.13 or later. -** For version compatibility, {es} must be at the same or a later version than {fleet-server}, and {fleet-server} needs to be at the same or a later version than {agent} (not including patch releases). -** {kib} should be on the same minor version as {es}. - -[discrete] -[[add-fleet-server-kubernetes-prereq]] -== Prerequisites - -Before deploying {fleet-server}, you need to: - -* Prepare the SSL/TLS configuration, server certificate, <>, and needed Certificate Authorities (CAs). -* Ensure components have access to the ports needed for communication. - -[discrete] -[[add-fleet-server-kubernetes-cert-prereq]] -=== {fleet-server} and SSL/TLS certificates considerations - -This section shows the minimum requirements in terms of Transport Layer Security (TLS) certificates for the {fleet-server}, assuming no mutual TLS (mTLS) is needed. Refer to <> and <> for more information about the configuration needs of both approaches. - -There are two main traffic flows for {fleet-server}, each with different TLS requirements: - -[discrete] -[[add-fleet-server-kubernetes-cert-inbound]] -==== [{agent} → {fleet-server}] inbound traffic flow - -In this flow {fleet-server} acts as the server and {agent} acts as the client. Therefore, {fleet-server} requires a TLS certificate and key, and {agent} will need to trust the CA certificate used to sign the {fleet-server} certificate. - -[NOTE] -==== -A {fleet-server} certificate is not required when installing the server using the *Quick start* mode, but should always be used for *production* deployments. 
In *Quick start* mode, the {fleet-server} uses a self-signed certificate and the {agent}s have to be enrolled with the `--insecure` option.
====

If your organization already uses the {stack}, you may have a CA certificate that could be used to generate the new cert for the {fleet-server}. If you do not have a CA certificate, refer to <> for an example of generating a CA and a server certificate using the `elasticsearch-certutil` tool.

[IMPORTANT]
====
Before creating the certificate, you need to know and plan in advance the <> that the {agent} clients will use to access the {fleet-server}. This is important because the *hostname* part of the URL needs to be included in the server certificate as an `x.509 Subject Alternative Name (SAN)`. If you plan to make your {fleet-server} accessible through *multiple hostnames* or *FQDNs*, add all of them to the server certificate, and keep in mind that the *{fleet-server} also needs to access the {fleet} URL during its bootstrap process*.
====

[discrete]
[[add-fleet-server-kubernetes-cert-outbound]]
==== [{fleet-server} → {es} output] outbound traffic flow

In this flow, {fleet-server} acts as the client and {es} acts as the HTTPS server. For the communication to succeed, {fleet-server} needs to trust the CA certificate used to sign the {es} certificate. If your {es} cluster uses certificates signed by a corporate CA or multiple intermediate CAs, you will need to use them during the {fleet-server} setup.

[NOTE]
====
If your {es} cluster is on Elastic Cloud or if it uses a certificate signed by a public and known CA, you won't need the {es} CA during the setup.
====

In summary, you need:

* A *server certificate and key*, valid for the {fleet-server} URL. The CA used to sign this certificate will be needed by the {agent} clients and the {fleet-server} itself.

* The *CA certificate* (or certificates) associated with your {es} cluster, unless you are sure your {es} certificate is publicly trusted.

[discrete]
[[default-port-assignments-kubernetes]]
=== Default port assignments

When {es} or {fleet-server} are deployed, components communicate over well-defined, pre-allocated ports.
You may need to allow access to these ports. Refer to the following table for default port assignments:

|===
| Component communication | Default port
| {agent} → {fleet-server} | 8220
| {fleet-server} → {es} | 9200
| {fleet-server} → {kib} (optional, for {fleet} setup) | 5601
| {agent} → {es} | 9200
| {agent} → Logstash | 5044
| {agent} → {kib} (optional, for {fleet} setup) | 5601
|===

In Kubernetes environments, you can adapt these ports without modifying the listening ports of the {fleet-server} or other applications, as traffic is managed by Kubernetes `Services`. This guide includes an example where {agent}s connect to the {fleet-server} through port `443` instead of the default `8220`.

[discrete]
[[add-fleet-server-kubernetes-add-server]]
== Add {fleet-server}

A {fleet-server} is an {agent} that is enrolled in a {fleet-server} policy. The policy configures the agent to operate in a special mode to serve as a {fleet-server} in your deployment.

[discrete]
[[add-fleet-server-kubernetes-preparations]]
=== {fleet} preparations

[TIP]
====
If you already have a {fleet} policy with the {fleet-server} integration, you know its ID, and you know how to generate an {ref}/service-tokens-command.html[{es} service token] for the {fleet-server}, skip directly to <>.

Also note that the `service token` required by the {fleet-server} is different from the `enrollment tokens` used by {agent}s to enroll in {fleet}.
====

.
In {kib}, open *{fleet} → Settings* and ensure the *Elasticsearch output* that will be used by the {fleet-server} policy is correctly configured, paying special attention to the following:
+
** The *hosts* field includes a valid URL that will be reachable by the {fleet-server} Pod(s).
** If your {es} cluster uses certificates signed by private or intermediate CAs not publicly trusted, you have added the trust information in the *Elasticsearch CA trusted fingerprint* field or in the *advanced configuration* section through the `ssl.certificate_authorities` setting. For an example, refer to the https://elastic.co/guide/en/fleet/current/secure-connections.html#_encrypt_traffic_between_elastic_agents_fleet_server_and_elasticsearch[Secure Connections] documentation.
+
[IMPORTANT]
====
This validation step is critical. The {es} host URL and CA information must be added *in both the {es} output and the environment variables* provided to the {fleet-server}. It's a common mistake to ignore the output settings, believing that the environment variables will prevail, when the environment variables are only used during the bootstrap of the {fleet-server}.

If the URL that {fleet-server} will use to access {es} is different from the {es} URL used by other clients, you may want to create a dedicated *{es} output* for {fleet-server}.
====

. Go to *{fleet} → Agent Policies* and select *Create agent policy* to create a policy for the {fleet-server}:
+
** Set a *name* for the policy, for example `Fleet Server Policy Kubernetes`.
** Do *not* select the option *Collect system logs and metrics*. This option adds the System integration to the {agent} policy. Because {fleet-server} will run as a Kubernetes Pod without any visibility into the Kubernetes node, there won't be a system to monitor.
** Select the **output** that the {fleet-server} needs to use to contact {es}. This should be the output that you verified in the previous step.
-** Optionally, you can set the **inactivity timeout** and **inactive agent unenrollment timeout** parameters to automatically unenroll and invalidate API keys after the {fleet-server} agents become inactive. This is especially useful in Kubernetes environments, where {fleet-server} Pods are ephemeral, and new {agent}s appear in {fleet} UI after Pod recreations. - -. Open the created policy, and from the *Integrations* tab select *Add integration*: -+ -** Search for and select the {fleet-server} integration. -** Select *Add {fleet-server}* to add the integration to the {agent} policy. -+ -At this point you can configure the integration settings per <>. -** When done, select *Save and continue*. Do not add an {agent} at this stage. - -. Open the configured policy, which now includes the {fleet-server} integration, and select *Actions* → *Add {fleet-server}*. In the next dialog: -+ -* Confirm that the *policy for {fleet-server}* is properly selected. -* *Choose a deployment mode for security*: -** If you select *Quick start*, the {fleet-server} generates a self-signed TLS certificate, and subsequent agents should be enrolled using the `--insecure` flag. -** If you select *Production*, you provide a TLS certificate, key and CA to the {fleet-server} during the deployment, and subsequent agents will need to trust the certificate's CA. -* Add your *{fleet-server} Host* information. This is the URL that clients ({agent}s) will use to connect to the {fleet-server}: -** In *Production* mode, the {fleet-server} certificate must include the hostname part of the URL as an `x509 SAN`, and the {fleet-server} itself will need to access that URL during its bootstrap process. -** On Kubernetes environments this could be the name of the `Kubernetes service` or reverse proxy that exposes the {fleet-server} Pods. -** In the provided example we use `https://fleet-svc.` as the URL, which corresponds to the Kubernetes service DNS resolution. 
* Select **generate service token** to create a token for the {fleet-server}.
* From *Install {fleet-server} to a centralized host → Linux*, take note of the values of the following settings that will be needed for the {fleet-server} installation:
** Service token (specified by the `--fleet-server-service-token` parameter).
** {fleet} policy ID (specified by the `--fleet-server-policy` parameter).
** {es} URL (specified by the `--fleet-server-es` parameter).

. Keep the {kib} browser window open and continue with the <>.
+
When the {fleet-server} installation has succeeded, the *Confirm Connection* UI will show a *Connected* status.

[discrete]
[[add-fleet-server-kubernetes-install]]
=== {fleet-server} installation

[discrete]
[[add-fleet-server-kubernetes-install-overview]]
==== Installation overview

To deploy {fleet-server} on Kubernetes and enroll it into {fleet} you need the following details:

* *Policy ID* of the {fleet} policy configured with the {fleet-server} integration.
* *Service token*, which you can generate following the <> or manually using the {ref}/service-tokens-command.html[{es}-service-tokens command].
* *{es} endpoint URL*, configured in both the {es} output associated with the policy and in the Fleet Server as an environment variable.
* *{es} CA certificate file*, configured in both the {es} output associated with the policy and in the Fleet Server.
* {fleet-server} *certificate and key* (for *Production* deployment mode only).
* {fleet-server} *CA certificate file* (for *Production* deployment mode only).
* {fleet-server} URL (for *Production* deployment mode only).

If you followed the <> and <>, you should have everything ready to proceed with the {fleet-server} installation.
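The installation steps below create the configuration Secret imperatively with `kubectl create secret`. If you prefer a declarative manifest, the same Secret can be rendered with plain shell. This is a sketch only, with placeholder values (using `stringData` lets Kubernetes handle the base64-encoding for you):

```shell
# Sketch: render the fleet-server-config Secret as a YAML manifest.
# All three values are placeholders (substitute your own).
ES_URL='https://monitoring-es-http.default.svc:9200'
SERVICE_TOKEN='REPLACE_WITH_SERVICE_TOKEN'
POLICY_ID='dee949ac-403c-4c83-a489-0122281e4253'

# Assemble the manifest; stringData avoids manual base64-encoding.
MANIFEST=$(cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: fleet-server-config
type: Opaque
stringData:
  elastic_endpoint: "${ES_URL}"
  elastic_service_token: "${SERVICE_TOKEN}"
  fleet_policy_id: "${POLICY_ID}"
EOF
)
printf '%s\n' "$MANIFEST"
```

Save the output to a file and apply it with `kubectl apply -f`, in the same namespace as the rest of the {fleet-server} resources.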
- -The suggested deployment method for the {fleet-server} consists of: - -* A Kubernetes Deployment manifest that relies on two Secrets for its configuration: -** A Secret named `fleet-server-config` with the main configuration parameters, such as the service token, the {es} URL and the policy ID. -** A Secret named `fleet-server-ssl` with all needed certificate files and the {fleet-server} URL. -* A Kubernetes ClusterIP Service named `fleet-svc` that exposes the {fleet-server} on port 443, making it available at URLs like `https://fleet-svc`, `https://fleet-svc.` and `https://fleet-svc..svc`. - -Adapt and change the suggested manifests and deployment strategy to your needs, ensuring you feed the {fleet-server} with the needed configuration and certificates. For example, you can customize: - -* CPU and memory `requests` and `limits`. Refer to <> for more information about {fleet-server} resources utilization. -* Scheduling configuration, such as `affinity rules` or `tolerations`, if needed in your environment. -* Number of replicas, to scale the Fleet Server horizontally. -* Use an {es} CA fingerprint instead of a CA file. -* Configure other <>. - -[discrete] -[[add-fleet-server-kubernetes-install-steps]] -==== Installation Steps - -. Create the Secret for the {fleet-server} configuration. -+ -[source, shell] ------ -kubectl create secret generic fleet-server-config \ ---from-literal=elastic_endpoint='' \ ---from-literal=elastic_service_token='' \ ---from-literal=fleet_policy_id='' ------ -+ -When running the command, substitute the following values: -+ --- -* ``: Replace this with the URL of your {es} host, for example `'https://monitoring-es-http.default.svc:9200'`. -* ``: Use the service token provided by {kib} in the {fleet} UI. -* ``: Replace this with the ID of the created policy, for example `'dee949ac-403c-4c83-a489-0122281e4253'`. 
--- -+ -If you prefer to obtain a *yaml manifest* of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file. - -. Create the Secret for the TLS/SSL configuration: -+ -++++ -
-
- - -
-
-++++ -+ -include::add-fleet-server-kubernetes-content.asciidoc[tag=quickstart-secret] -+ -++++ -
- -
-++++ -+ -If your {es} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `es-ca.crt` key from the proposed secret. - -. Save the proposed Deployment manifest locally, for example as `fleet-server-dep.yaml`, and adapt it to your needs: -+ -++++ -
-
- - -
-
-++++ -+ -include::add-fleet-server-kubernetes-content.asciidoc[tag=quickstart-deployment] -+ -++++ -
- -
++++
+
Manifest considerations:
+
* If your {es} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `ELASTICSEARCH_CA` environment variable from the manifest.
* Check the `image` version to ensure it's aligned with the rest of your {stack}.
* Keep `automountServiceAccountToken` set to `false` to disable the <>.
* As a best practice, always configure requests and limits. Refer to <> for more information about resource utilization of the {fleet-server}.
* You can change the listening `port` of the service to any port of your choice, but do not change the `targetPort`, as the {fleet-server} Pods will listen on port 8220.
* If you want to expose the {fleet-server} externally, consider changing the service type to `LoadBalancer`.

If the issue persists, consider using `https://localhost:8220` as the `FLEET_URL` for the {fleet-server} configuration, and ensure that `localhost` is included in the certificate's SAN.
====

[discrete]
[[add-fleet-server-kubernetes-expose]]
== Expose the {fleet-server} to {agent}s

There are multiple ways to expose applications in Kubernetes. This may include creating a Kubernetes `service`, an `ingress` resource, and/or DNS records for FQDN resolution.

Considerations when exposing {fleet-server}:

* If your environment requires the {fleet-server} to be reachable through multiple hostnames or URLs, you can create multiple *{fleet-server} Hosts* in *{fleet} → Settings*, and create different policies for different groups of agents.
* Remember that in *Production* mode, the *hostnames* used to access the {fleet-server} must be part of the {fleet-server} certificate as `x.509 Subject Alternative Names`.
* *Always align the service listening port with the URL*. If you configure the service to listen on port 8220, use a URL like `https://service-name:8220`, and if it listens on `443`, use a URL like `https://service-name`.

Below is an end-to-end example of how to expose the server to external and internal clients using a LoadBalancer service. For this example we assume the following:

--
* The {fleet-server} runs in a namespace called `elastic`.
* External clients will access {fleet-server} using a URL like `https://fleet.example.com`, which will be resolved in DNS to the external IP of the Load Balancer.
* Internal clients will access {fleet-server} using the Kubernetes service directly, at `https://fleet-svc-lb.elastic`.
* The server certificate has both hostnames (`fleet.example.com` and `fleet-svc-lb.elastic`) in its SAN list.
--

.
Create the `LoadBalancer` Service
-+
-[source, shell]
-------------------------------------------------------------
-kubectl expose deployment fleet-server --name fleet-svc-lb --type LoadBalancer --port 443 --target-port 8220
-------------------------------------------------------------
-+
-That command creates a service named `fleet-svc-lb`, listening on port `443` and forwarding the traffic to the `fleet-server` deployment's Pods on port `8220`. The listening `--port` (and the corresponding URL) of the service can be customized, but the `--target-port` must remain on the default port (`8220`), because it's the port used by the {fleet-server} application.
-
-. Add `https://fleet.example.com` and `https://fleet-svc-lb.elastic` as new *{fleet-server} Hosts* in *{fleet} → Settings*. Adjust the port in the URLs if you configured something other than `443` on the Load Balancer.
-
-. Create a {fleet} policy for external clients using the `https://fleet.example.com` {fleet-server} URL.
-
-. Create a {fleet} policy for internal clients using the `https://fleet-svc-lb.elastic` {fleet-server} URL.
-
-. You are now ready to enroll external and internal agents into the relevant policies. Refer to <> for more details.
-
-[discrete]
-[[add-fleet-server-kubernetes-troubleshoot]]
-== Troubleshoot {fleet-server}
-
-[discrete]
-[[add-fleet-server-kubernetes-troubleshoot-common]]
-=== Common Problems
-
-The following issues may occur when {fleet-server} settings are missing or configured incorrectly:
-
-* {fleet-server} is trying to access {es} at `localhost:9200` even though the `FLEET_SERVER_ELASTICSEARCH_HOST` environment variable is properly set.
-+
-This problem occurs when the `output` of the policy associated with the {fleet-server} is not correctly configured.
-
-* TLS certificate trust issues occur even when the `ELASTICSEARCH_CA` environment variable is properly set during deployment.
-
-+
-This problem occurs when the `output` of the policy associated with the {fleet-server} is not correctly configured. Add the *CA certificate* or *CA trusted fingerprint* to the {es} output associated with the {fleet-server} policy.
-
-* In *Production mode*, {fleet-server} enrollment fails due to `FLEET_URL` not being accessible, showing something similar to:
-+
-[source, sh]
-------------------------------------------------------------
-Starting enrollment to URL: https://fleet-svc/
-1st enrollment attempt failed, retrying enrolling to URL: https://fleet-svc/ with exponential backoff (init 1s, max 10s)
-Error: fail to enroll: fail to execute request to fleet-server: dial tcp 34.118.226.212:443: connect: connection refused
-Error: enrollment failed: exit status 1
-------------------------------------------------------------
-+
-If the service and URL are correctly configured, this is usually a temporary issue caused by the Kubernetes Service not forwarding traffic to the Pod, and it should clear after a couple of restarts.
-+
-As a workaround, consider using `https://localhost:8220` as the `FLEET_URL` for the {fleet-server} configuration, and ensure that `localhost` is included in the certificate's SAN.
-
-[discrete]
-[[add-fleet-server-kubernetes-next]]
-== Next steps
-
-Now you're ready to add {agent}s to your host systems.
-To learn how, refer to <>, or <> if your {agent}s will also run on Kubernetes.
-
-When you connect {agent}s to {fleet-server}, remember to use the `--insecure` flag if the *quick start* mode was used, or to provide the {agent}s with the CA certificate associated with the {fleet-server} certificate if *production* mode was used.
\ No newline at end of file diff --git a/docs/en/ingest-management/fleet/add-fleet-server-mixed.asciidoc b/docs/en/ingest-management/fleet/add-fleet-server-mixed.asciidoc deleted file mode 100644 index aaacc777f..000000000 --- a/docs/en/ingest-management/fleet/add-fleet-server-mixed.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[add-fleet-server-mixed]] -= Deploy {fleet-server} on-premises and {es} on Cloud - -To use {fleet} for central management, a <> must -be running and accessible to your hosts. - -Another approach is to deploy a cluster of {fleet-server}s on-premises and -connect them back to {ecloud} with access to {es} and {kib}. -In this <>, you are responsible for high-availability, -fault-tolerance, and lifecycle management of {fleet-server}. - -This approach might be right for you if you would like to limit the control plane traffic -out of your data center. For example, you might take this approach if you are a -managed service provider or a larger enterprise that segregates its networks. - -This approach might _not_ be right for you if you don't want to manage the life cycle -of an extra compute resource in your environment for {fleet-server} to reside on. - -image::images/fleet-server-on-prem-es-cloud.png[{fleet-server} on-premise and {es} on Cloud deployment model] - -To deploy a self-managed {fleet-server} on-premises to work with a hosted {ess}, -you need to: - -* Satisfy all <> and <> -//* Add <> -* Create a <> -* <> by installing an {agent} and enrolling it in an agent policy containing the {fleet-server} integration - -[discrete] -[[add-fleet-server-mixed-compatibility]] -= Compatibility - -{fleet-server} is compatible with the following Elastic products: - -* {stack} 7.13 or later -** For version compatibility, {es} must be at the same or a later version than {fleet-server}, and {fleet-server} needs to be at the same or a later version than {agent} (not including patch releases). 
-** {kib} should be on the same minor version as {es}.
-
-* {ece} 2.9 or later, which allows you to use a hosted {fleet-server} on {ecloud}.
-+
---
-** Requires additional wildcard domains and certificates (which normally only
-cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for
-{fleet-server} of `https://.fleet.`.
-** The deployment template must contain an {integrations-server} node.
---
-+
-For more information about hosting {fleet-server} on {ece}, refer to
-{ece-ref}/ece-manage-integrations-server.html[Manage your {integrations-server}].
-
-[discrete]
-[[add-fleet-server-mixed-prereq]]
-= Prerequisites
-
-Before deploying, you need to:
-
-* Obtain or generate a Certificate Authority (CA) certificate.
-* Ensure components have access to the default ports needed for communication.
-
-[discrete]
-[[add-fleet-server-mixed-cert-prereq]]
-== CA certificate
-
-include::add-fleet-server-on-prem.asciidoc[tag=cert-prereq]
-
-[discrete]
-[[default-port-assignments-mixed]]
-== Default port assignments
-
-When {es} or {fleet-server} are deployed, components communicate over well-defined, pre-allocated ports.
-You may need to allow access to these ports. See the following table for default port assignments:
-
-|===
-| Component communication | Default port
-
-| Elastic Agent → {fleet-server} | 8220
-| Elastic Agent → {es} | 443
-| Elastic Agent → Logstash | 5044
-| Elastic Agent → {kib} ({fleet}) | 443
-| {fleet-server} → {kib} ({fleet}) | 443
-| {fleet-server} → {es} | 443
-|===
-
-NOTE: If you do not specify the port for {es} as 443, the {agent} defaults to 9200.
-
-//[discrete]
-//[[fleet-server-add-hosts]]
-//= Add {fleet-server} hosts
-
-//include::add-fleet-server-on-prem.asciidoc[tag=fleet-server-host-prereq]
-
-//include::add-fleet-server-on-prem.asciidoc[tag=add-fleet-server-host]
-
-//. Save and apply the settings.
-
-[discrete]
-[[fleet-server-create-policy]]
-= Create a {fleet-server} policy
-
-First, create a {fleet-server} policy. 
The {fleet-server} policy manages -and configures the {agent} running on the {fleet-server} host to launch a -{fleet-server} process. - -To create a {fleet-server} policy: - -. In {fleet}, open the **Agent policies** tab. - -. Click on the **Create agent policy** button, then: -.. Provide a meaningful name for the policy that will help you identify this {fleet-server} (or cluster) in the future. -.. Ensure you select _Collect system logs and metrics_ so the compute system hosting this {fleet-server} can be monitored. (This is not required, but is highly recommended.) - -. After creating the {fleet-server} policy, navigate to the policy itself and click **Add integration**. - -. Search for and select the **{fleet-server}** integration. - -. Then click **Add {fleet-server}**. - -. Configure the {fleet-server}: -.. Expand **Change default**. Because you are deploying this {fleet-server} on-premises, -you need to enter the _Host_ address and _Port_ number, `8220`. -(In our example the {fleet-server} will be installed on the host `10.128.0.46`.) -.. It's recommended that you also enter the _Max agents_ you intend to support with this {fleet-server}. -This can also be modified at a later stage. -This will allow the {fleet-server} to handle the load and frequency of updates being sent to the agent -and ensure a smooth operation in a bursty environment. - -[discrete] -[[fleet-server-add-server]] -= Add {fleet-server}s - -Now that the policy exists, you can add {fleet-server}s. - -A {fleet-server} is an {agent} that is enrolled in a {fleet-server} policy. -The policy configures the agent to operate in a special mode to serve as a {fleet-server} in your deployment. - -To add a {fleet-server}: - -. In {fleet}, open the **Agents** tab. -. Click *Add {fleet-server}*. - -. This will open in-product instructions for adding a {fleet-server} using -one of two options. Choose *Advanced*. 
-+ -[role="screenshot"] -image::images/add-fleet-server-advanced.png[In-product instructions for adding a {fleet-server} in advanced mode] - -. Follow the in-product instructions to add a {fleet-server}. -.. Select the agent policy that you created for this deployment. -.. Choose **Production** as your deployment mode. -+ -Production mode is the fully secured mode where TLS certificates ensure a secure communication between {fleet-server} and {es}. -.. Open the *{fleet-server} Hosts* dropdown and select *Add new {fleet-server} Hosts*. -Specify one or more host URLs your {agent}s will use to connect to {fleet-server}. -For example, `https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will install {fleet-server}. -.. A **Service Token** is required so the {fleet-server} can write data to the connected {es} instance. -Click **Generate service token** and copy the generated token. -.. Copy the installation instructions provided in {kib}, which include some of the known deployment parameters. -.. Replace the value of the `--certificate-authorities` parameter with your <>. -. If installation is successful, a confirmation indicates that {fleet-server} -is set up and connected. - -After {fleet-server} is installed and enrolled in {fleet}, the newly created -{fleet-server} policy is applied. You can see this on the {fleet-server} policy page. - -The {fleet-server} agent will also show up on the main {fleet} page as another agent -whose life-cycle can be managed (like other agents in the deployment). - -You can update your {fleet-server} configuration in {kib} at any time -by going to: *Management* -> *{fleet}* -> *Settings*. From there you can: - -** Update the {fleet-server} host URL. -** Configure additional outputs where agents will send data. -** Specify the location from where agents will download binaries. -** Specify proxy URLs to use for {fleet-server} or {agent} outputs. 
- -[discrete] -[[fleet-server-install-agents]] -= Next steps - -Now you're ready to add {agent}s to your host systems. -To learn how, see <>. - -[NOTE] -==== -For on-premises deployments, you can dedicate a policy to all the -agents in the network boundary and configure that policy to include a -specific {fleet-server} (or a cluster of {fleet-server}s). - -Read more in <>. -==== diff --git a/docs/en/ingest-management/fleet/add-fleet-server-on-prem.asciidoc b/docs/en/ingest-management/fleet/add-fleet-server-on-prem.asciidoc deleted file mode 100644 index 07240376a..000000000 --- a/docs/en/ingest-management/fleet/add-fleet-server-on-prem.asciidoc +++ /dev/null @@ -1,263 +0,0 @@ -[[add-fleet-server-on-prem]] -= Deploy on-premises and self-managed - -To use {fleet} for central management, a <> must -be running and accessible to your hosts. - -You can deploy {fleet-server} on-premises and manage it yourself. -In this <>, you are responsible for high-availability, -fault-tolerance, and lifecycle management of {fleet-server}. - -This approach might be right for you if you would like to limit the control plane traffic -out of your data center or have requirements for fully air-gapped operations. -For example, you might take this approach if you need to satisfy data governance requirements -or you want agents to only have access to a private segmented network. - -This approach might _not_ be right for you if you don't want to manage the life cycle -of your Elastic environment and instead would like that to be handled by Elastic. - -When using this approach, it's recommended that you provision multiple instances of -the {fleet-server} and use a load balancer to better scale the deployment. -You also have the option to use your organization's certificate to establish a -secure connection from {fleet-server} to {es}. 
-
-image::images/fleet-server-on-prem-deployment.png[{fleet-server} on-premises deployment model]
-
-To deploy a self-managed {fleet-server}, you need to:
-
-* Satisfy all <> and <>.
-* <> by installing an {agent} and enrolling it in an agent policy containing the {fleet-server} integration.
-
-NOTE: You can install only a single {agent} per host, which means you cannot run
-{fleet-server} and another {agent} on the same host unless you deploy a
-containerized {fleet-server}.
-
-[discrete]
-[[add-fleet-server-on-prem-compatibility]]
-= Compatibility
-
-{fleet-server} is compatible with the following Elastic products:
-
-* {stack} 7.13 or later.
-** For version compatibility, {es} must be at the same or a later version than {fleet-server}, and {fleet-server} needs to be at the same or a later version than {agent} (not including patch releases).
-** {kib} should be on the same minor version as {es}.
-
-* {ece} 2.9 or later
-+
---
-** Requires additional wildcard domains and certificates (which normally only
-cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for
-{fleet-server} of `https://.fleet.`.
-** The deployment template must contain an {integrations-server} node.
---
-+
-For more information about hosting {fleet-server} on {ece}, refer to
-{ece-ref}/ece-manage-integrations-server.html[Manage your {integrations-server}].
-
-[discrete]
-[[add-fleet-server-on-prem-prereq]]
-= Prerequisites
-
-Before deploying, you need to:
-
-* Obtain or generate a Certificate Authority (CA) certificate.
-* Ensure components have access to the ports needed for communication.
-
-[discrete]
-[[add-fleet-server-on-prem-cert-prereq]]
-== CA certificate
-
-// tag::cert-prereq[]
-
-Before setting up {fleet-server} using this approach, you will need a
-CA certificate to configure Transport Layer Security (TLS)
-to encrypt traffic between the {fleet-server}s and the {stack}.
-
-If your organization already uses the {stack}, you may already have a CA certificate.
If you do not have a CA certificate, you can read more -about generating one in <>. - -NOTE: This is not required when testing and iterating using the *Quick start* option, but should always be used for production deployments. - -// end::cert-prereq[] - -[discrete] -[[default-port-assignments-on-prem]] -== Default port assignments - -When {es} or {fleet-server} are deployed, components communicate over well-defined, pre-allocated ports. -You may need to allow access to these ports. Refer to the following table for default port assignments: - -|=== -| Component communication | Default port - -| Elastic Agent → {fleet-server} | 8220 -| Elastic Agent → {es} | 9200 -| Elastic Agent → Logstash | 5044 -| Elastic Agent → {kib} ({fleet}) | 5601 -| {fleet-server} → {kib} ({fleet}) | 5601 -| {fleet-server} → {es} | 9200 -|=== - -NOTE: Connectivity to {kib} on port 5601 is optional and not required at all times. {agent} and {fleet-server} may need to connect to {kib} if deployed in a -container environment where an enrollment token can not be provided during deployment. - -//[discrete] -//[[add-fleet-server-on-prem-hosts]] -//= Add {fleet-server} hosts - -////// - -// tag::fleet-server-host-prereq[] -Start by adding one or more {fleet-server} hosts. -A {fleet-server} host is a URL your {agent}s will use to connect to a {fleet-server}. - -{fleet-server} hosts should meet the following requirements: - -* All agents can connect to the host. -* The host also has a route to the {es} you plan to use. -* The host meets the <> based on the maximum number -of agents you plan to support in your deployment. -// end::fleet-server-host-prereq[] - -// tag::add-fleet-server-host[] -To add a {fleet-server} host: - -. In {fleet}, open the *Settings* tab. -For more information about these settings, see -{fleet-guide}/fleet-settings.html[{fleet} settings]. - -. 
Under *{fleet-server} hosts*, click *Edit hosts* and specify one or more host -URLs your {agent}s will use to connect to {fleet-server}. For example, -`https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will -install {fleet-server}. Save and apply your settings. -+ -TIP: If the **Edit hosts** option is grayed out, {fleet-server} hosts -are configured outside of {fleet}. For more information, refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}]. - -// end::add-fleet-server-host[] - -To update {es} hosts: - -// Update up Elasticsearch host (not used in the third deployment model -. In the **Outputs** table: -.. Find the _default_ row where the _Type_ is set to _Elasticsearch_. -.. Click the pencil icon in the _Actions_ column. -.. Update the _Hosts_ field to specify one or more {es} URLs where {agent}s -will send data. For example, `https://192.0.2.0:9200`. -+ -NOTE: Skip this step if you've started the {stack} with security enabled -(you cannot change this setting because it's managed outside of {fleet}). - -. Save and apply the settings. - -////// - -[discrete] -[[add-fleet-server-on-prem-add-server]] -= Add {fleet-server} - -A {fleet-server} is an {agent} that is enrolled in a {fleet-server} policy. -The policy configures the agent to operate in a special mode to serve as a {fleet-server} in your deployment. - -To add a {fleet-server}: - -. In {fleet}, open the **Agents** tab. -. Click *Add {fleet-server}*. -. This opens in-product instructions to add a {fleet-server} using -one of two options: *Quick Start* or *Advanced*. -* Use *Quick Start* if you want {fleet} to generate a -{fleet-server} policy and enrollment token for you. The {fleet-server} policy -will include a {fleet-server} integration plus a system integration for -monitoring {agent}. This option generates self-signed certificates and is -*not* recommended for production use cases. 
-+ -[role="screenshot"] -image::images/add-fleet-server.png[In-product instructions for adding a {fleet-server} in quick start mode] - -* Use *Advanced* if you want to either: -** *Use your own {fleet-server} policy.* {fleet-server} policies manage -and configure the {agent} running on {fleet-server} hosts to launch a -{fleet-server} process. You can create a new {fleet-server} policy or -select an existing one. Alternatively you can -{fleet-guide}/create-a-policy-no-ui.html[create a {fleet-server} policy without using the UI], -and then select the policy here. -** *Use your own TLS certificates.* TLS certificates encrypt traffic between -{agent}s and {fleet-server}. To learn how to generate certs, refer to -{fleet-guide}/secure-connections.html[Configure SSL/TLS for self-managed {fleet-server}s]. -+ -[NOTE] -==== -If you are providing your own certificates: - -* Before running the `install` command, make sure you replace the values in -angle brackets. -* Note that the URL specified by `--url` must match the DNS name used to -generate the certificate specified by `--fleet-server-cert`. -==== -+ -[role="screenshot"] -image::images/add-fleet-server-advanced.png[In-product instructions for adding a {fleet-server} in advanced mode] - -. Step through the in-product instructions to configure and install {fleet-server}. -+ -[NOTE] -==== -* The fields to configure {fleet-server} hosts are not available if the hosts -are already configured outside of {fleet}. For more information, refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}]. -* When using the *Advanced* option, it's recommended to generate a unique service -token for each {fleet-server}. For other ways to generate service tokens, refer to -{ref}/service-tokens-command.html[`elasticsearch-service-tokens`]. 
-* If you've configured a non-default port for {fleet-server} in the -{fleet-server} integration, you need to include the `--fleet-server-host` and -`--fleet-server-port` options in the `elastic-agent install` command. Refer to the -{fleet-guide}/elastic-agent-cmd-options.html#elastic-agent-install-command[install command documentation] -for details. -==== -+ -At the *Install Fleet Server to a centralized host* step, -the `elastic-agent install` command installs an {agent} as a managed service -and enrolls it in a {fleet-server} policy. For more {fleet-server} commands, refer -to the {fleet-guide}/elastic-agent-cmd-options.html[{agent} command reference]. -+ -. If installation is successful, a confirmation indicates that {fleet-server} -is set up and connected. - -After {fleet-server} is installed and enrolled in {fleet}, the newly created -{fleet-server} policy is applied. You can see this on the {fleet-server} policy page. - -The {fleet-server} agent also shows up on the main {fleet} page as another agent -whose life-cycle can be managed (like other agents in the deployment). - -You can update your {fleet-server} configuration in {kib} at any time -by going to: *Management* -> *{fleet}* -> *Settings*. From there you can: - -** Update the {fleet-server} host URL. -** Configure additional outputs where agents should send data. -** Specify the location from where agents should download binaries. -** Specify proxy URLs to use for {fleet-server} or {agent} outputs. - -[discrete] -[[add-fleet-server-on-prem-troubleshoot]] -= Troubleshooting - -If you're unable to add a {fleet}-managed agent, click the **Agents** tab -and confirm that the agent running {fleet-server} is healthy. - -[discrete] -[[add-fleet-server-on-prem-next]] -= Next steps - -Now you're ready to add {agent}s to your host systems. -To learn how, see <>. 
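-
-For reference, enrolling an {agent} from the command line typically uses the `elastic-agent install` command with an enrollment token, as in the following sketch. The URL, token, and CA path are placeholder values; replace them with your own:
-
-[source, shell]
-------------------------------------------------------------
-sudo ./elastic-agent install \
-  --url=https://192.0.2.1:8220 \
-  --enrollment-token=<enrollment-token> \
-  --certificate-authorities=<path-to-ca.crt>
-------------------------------------------------------------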
-
-[NOTE]
-====
-For on-premises deployments, you can dedicate a policy to all the
-agents in the network boundary and configure that policy to include a
-specific {fleet-server} (or a cluster of {fleet-server}s).
-
-Read more in <>.
-====
-
diff --git a/docs/en/ingest-management/fleet/agent-health-status.asciidoc b/docs/en/ingest-management/fleet/agent-health-status.asciidoc
deleted file mode 100644
index 37971afc9..000000000
--- a/docs/en/ingest-management/fleet/agent-health-status.asciidoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[[agent-health-status]]
-= {agent} health status
-
-The {agent} <> describes the features available through the {fleet} UI for you to view {agent} status and activity, access metrics and diagnostics, enable alerts, and more.
-
-For details about how the {agent} status is monitored by {fleet}, including connectivity, check-in frequency, and similar, see the following:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[discrete]
-[[agent-health-status-connect-to-fleet]]
-== How does {agent} connect to {fleet} to report its availability and health, and receive policy updates?
-
-After enrollment, {agent} regularly initiates a check-in to {fleet-server} using HTTP long-polling ({fleet-server} is either deployed on-premises or deployed as part of {es} in {ecloud}).
-
-The HTTP long-polling request is kept open until there's a configuration change that {agent} needs to consume, an action that is sent to the agent, or a 5-minute timeout has elapsed. After the timeout, the agent sends another check-in and the process starts over.
-
-The check-in frequency can be configured, but note that changing it may affect the maximum number of agents that can connect to {fleet}. Our regular scale testing of the solution doesn't modify this parameter.
-
-[role="screenshot"]
-image::images/agent-health-status.png[Diagram of connectivity between agents, Fleet Server, Elasticsearch and Fleet UI]
-
-[discrete]
-[[agent-health-status-stack-monitoring]]
-== We use stack monitoring to monitor the status of our cluster. Is monitoring of {agent} and the status shown in {fleet} using stack monitoring as well?
-
-No. The health monitoring of {agent} and its inputs, as reported in {fleet}, is done completely outside of what stack monitoring provides.
-
-[discrete]
-[[agent-health-status-other-components]]
-== There are many components that make up {agent}. How does {agent} ensure that these components/processes are up and running, and healthy?
-
-{agent} is essentially a supervisor that (at a minimum) will deploy a {filebeat} instance for log collection and a {metricbeat} instance for metrics collection from the system and applications running on that system. As a supervisor, it also ensures that these spawned processes are running and healthy. Using gRPC, {agent} communicates with the underlying processes once every 30 seconds, ensuring their health. If there's no response, the agent transitions to `Unhealthy`, with the result and details reported to {fleet}.
-
-[discrete]
-[[agent-health-status-outage]]
-== If {agent} goes down, is an alert generated by {fleet}?
-
-No. Alerts would have to be created in {kib} on the indices that show the total count of agents in each specific state. Refer to <> in the {agent} monitoring documentation for the steps to configure alerting. Generating alerts on status changes for individual agents is currently planned for a future release.
-
-[discrete]
-[[agent-health-status-report-timing]]
-== How long does it take for {agent} to report a status change?
-
-Some {agent} states are reported immediately, such as when the agent has become `Unhealthy`. Other states are derived after certain criteria are met. 
Refer to <> in the {agent} monitoring documentation for details about monitoring agent status.
-
-The transition from an `Offline` state to an `Inactive` state is configurable by the user, and can be fine-tuned by <>.
diff --git a/docs/en/ingest-management/fleet/air-gapped.asciidoc b/docs/en/ingest-management/fleet/air-gapped.asciidoc
deleted file mode 100644
index e56fed30a..000000000
--- a/docs/en/ingest-management/fleet/air-gapped.asciidoc
+++ /dev/null
@@ -1,346 +0,0 @@
-[[air-gapped]]
-= Air-gapped environments
-
-When running {agent}s in a restricted or closed network, you need to take extra
-steps to make sure:
-
-* {kib} is able to reach the {package-registry} to download package metadata and
-content.
-* {agent}s are able to download binaries during upgrades from the {artifact-registry}.
-
-The {package-registry} must therefore be accessible from {kib}, either through an HTTP proxy or by self-hosting it.
-
-The {artifact-registry} must likewise be accessible from the {agent}s, either through an HTTP proxy or by self-hosting it.
-
-[TIP]
-====
-See the {elastic-sec} Solution documentation for air-gapped {security-guide}/offline-endpoint.html[offline endpoints].
-====
-
-When upgrading all the components in an air-gapped environment, it is recommended that you upgrade in the following order:
-
-. Upgrade the {package-registry}.
-. Upgrade the {stack} including {kib}.
-. Upgrade the {artifact-registry} and ensure the latest {agent} binaries are available.
-. Upgrade the on-premises {fleet-server}.
-. In {fleet}, issue an upgrade for all the {agent}s.
-
-[discrete]
-[[air-gapped-mode-flag]]
-== Enable air-gapped mode for {fleet}
-
-Set the following property in {kib} to enable air-gapped mode in {fleet}. This allows {fleet} to intelligently skip certain requests or operations that shouldn't be attempted in air-gapped environments.
-
-[source,yaml]
-----
-xpack.fleet.isAirGapped: true
-----
-
-[discrete]
-[[air-gapped-pgp-fleet]]
-== Configure {agents} to download a PGP/GPG key from {fleet-server}
-
-Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent. This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.
-
-In an air-gapped environment, an {agent} that doesn't have access to a PGP/GPG key from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` would fail to upgrade.
-For versions 8.9.0 to 8.10.3, you can resolve this problem by following the steps described in the associated link:https://www.elastic.co/guide/en/fleet/8.9/release-notes-8.9.0.html#known-issues-8.9.0[known issue] documentation.
-
-Starting in version 8.10.4, you can resolve this problem by configuring {agents} to download the PGP/GPG key from {fleet-server}.
-
-Starting in version 8.10.4, {agent} will:
-
-. Verify the binary signature with the key bundled in the agent.
-. If the verification doesn't pass, the agent will download the PGP/GPG key from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` and verify it.
-. If that verification doesn't pass, the agent will download the PGP/GPG key from {fleet-server} and verify it.
-. If that verification doesn't pass, the upgrade is blocked.
-
-By default, {fleet-server} serves {agents} with the key located in `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`.
-The key is served through the {fleet-server} endpoint `GET /api/agents/upgrades/{major}.{minor}.{patch}/pgp-public-key`.
-
-If there isn't a `default.pgp` key at `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`, {fleet-server} will instead attempt to retrieve a PGP/GPG key from the URL that you can specify with the `server.pgp.upstream_url` setting.
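-
-For example, assuming a {fleet-server} reachable at the hypothetical host `fleet-server.example.internal`, you could check which key is served for a given {agent} version with a request like:
-
-[source, shell]
-----
-curl --cacert ca.crt "https://fleet-server.example.internal:8220/api/agents/upgrades/8.10.4/pgp-public-key"
-----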
-
-You can prevent {fleet-server} from downloading the PGP/GPG key from `server.pgp.upstream_url` by manually downloading it from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` and storing it at `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`.
-
-To set a custom URL for {fleet-server} to access a PGP/GPG key and make it available to {agents}:
-
-. In {kib}, go to *Management > {fleet} > Agent policies*.
-. Select a policy for the agents that you want to upgrade.
-. On the policy page, in the **Actions** menu for the {fleet-server} integration, select **Edit integration**.
-. In the {fleet-server} settings section, expand **Change defaults** and **Advanced options**.
-. In the **Custom fleet-server configurations** field, add the setting `server.pgp.upstream_url` with the full URL where the PGP/GPG key can be accessed. For example:
-
-[source,yaml]
-----
-server.pgp.upstream_url: 
-----
-
-The setting `server.pgp.upstream_url` must point to a web server hosting the PGP/GPG key, which must be reachable by the host where {fleet-server} is installed.
-
-Note that:
-
- * `server.pgp.upstream_url` may be specified as an `http` endpoint (instead of `https`).
 * For an `https` endpoint, the CA for {fleet-server} to connect to `server.pgp.upstream_url` must be trusted by {fleet-server} using the `--certificate-authorities` setting that is used globally for {agent}.
-
-[discrete]
-[[air-gapped-proxy-server]]
-== Use a proxy server to access the {package-registry}
-
-By default, {kib} downloads package metadata and content from the public
-{package-registry} at https://epr.elastic.co/[epr.elastic.co].
-
-If you can route traffic to the public endpoint of the {package-registry}
-through a network gateway, set the following property in {kib} to use a proxy
-server:
-
-[source,yaml]
-----
-xpack.fleet.registryProxyUrl: your-nat-gateway.corp.net
-----
-
-For more information, refer to <>.
-
-[discrete]
-[[air-gapped-diy-epr]]
-== Host your own {package-registry}
-
-NOTE: The {package-registry} packages include signatures used in
-<>. By default, {fleet} uses the Elastic
-public GPG key to verify package signatures. If you ever need to change this GPG
-key, use the `xpack.fleet.packageVerification.gpgKeyPath` setting in
-`kibana.yml`. For more information, refer to
-{kibana-ref}/fleet-settings-kb.html[{fleet} settings].
-
-If routing traffic through a proxy server is not an option, you can host your
-own {package-registry}.
-
-The {package-registry} can be deployed and hosted on-site using one of the
-available Docker images. These Docker images include the {package-registry} and
-a selection of packages.
-
-There are different distributions available:
-
-* {version} (recommended): +docker.elastic.co/package-registry/distribution:{version}+
- Selection of packages from the production repository released with {stack} {version}.
-* lite-{version}: +docker.elastic.co/package-registry/distribution:lite-{version}+
- Subset of the most commonly used packages from the production repository released with {stack} {version}. This image is a good starting point for using {fleet} in air-gapped environments.
-* production: `docker.elastic.co/package-registry/distribution:production`
- Packages available in the production registry (https://epr.elastic.co).
-Note that this image is updated every time a new version of a package is published.
-* lite: `docker.elastic.co/package-registry/distribution:lite`
- Subset of the most commonly used packages available in the production registry (https://epr.elastic.co).
-Note that this image is updated every time a new version of a package is published.
-
-ifeval::["{release-state}"=="unreleased"]
-[WARNING]
-====
-Version {version} of the {package-registry} distribution has not yet been released.
-====
-endif::[]
-
-To update the distribution image, re-pull the image and then restart the Docker container.
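The update amounts to a pull followed by a container restart. A sketch, where both the image tag and the container name are assumptions to adjust for your deployment:

```shell
#!/bin/sh
# Sketch of updating a self-hosted package registry.
# The image tag and container name are illustrative; adjust to your deployment.
IMAGE="docker.elastic.co/package-registry/distribution:lite-8.10.4"
CONTAINER="package-registry"

if ! command -v docker >/dev/null 2>&1; then
  echo "docker not found; run this on the registry host" >&2
else
  docker pull "$IMAGE"        # re-pull the refreshed distribution image
  docker restart "$CONTAINER" # restart so the registry serves the new packages
fi
```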
-
-Every distribution contains packages that can be used by different versions of
-the {stack}. The {package-registry} API exposes a {kib} version constraint that
-allows for filtering packages that are compatible with a particular version.
-
-// lint ignore runtimes
-NOTE: These steps use the standard Docker CLI, but you can create a Kubernetes manifest
-based on this information.
-These images can also be used with other container runtimes compatible with Docker images.
-
-1. Pull the Docker image from the public Docker registry:
-+
-["source", "sh", subs="attributes"]
----
-docker pull docker.elastic.co/package-registry/distribution:{version}
----
-
-2. Save the Docker image locally:
-+
-["source", "sh", subs="attributes"]
----
-docker save -o package-registry-{version}.tar docker.elastic.co/package-registry/distribution:{version}
----
-+
-TIP: Check the image size to ensure that you have enough disk space.
-
-3. Transfer the image to the air-gapped environment and load it:
-+
-["source", "sh", subs="attributes"]
----
-docker load -i package-registry-{version}.tar
----
-
-4. Run the {package-registry}:
-+
-["source", "sh", subs="attributes"]
----
-docker run -it -p 8080:8080 docker.elastic.co/package-registry/distribution:{version}
----
-
-5. (Optional) You can monitor the health of your {package-registry} with
-requests to its `/health` endpoint:
-+
-["source", "sh", subs="attributes"]
----
-docker run -it -p 8080:8080 \
-    --health-cmd "curl -f -L http://127.0.0.1:8080/health" \
-    docker.elastic.co/package-registry/distribution:{version}
----
-
-[discrete]
-[[air-gapped-diy-epr-kibana]]
-=== Connect {kib} to your hosted {package-registry}
-
-Use the `xpack.fleet.registryUrl` property in the {kib} config to set the URL of
-your hosted package registry. For example:
-
-[source,yaml]
----
-xpack.fleet.registryUrl: "http://package-registry.corp.net:8080"
----
-
-[discrete]
-[[air-gapped-tls]]
-=== TLS configuration of the {package-registry}
-
-You can configure the {package-registry} to listen on a secure HTTPS port using TLS.
-
-For example, given a key and a certificate pair available in `/etc/ssl`, you
-can start the {package-registry} listening on port 443 using the following command:
-
-["source", "sh", subs="attributes"]
----
-docker run -it -p 443:443 \
-  -v /etc/ssl/package-registry.key:/etc/ssl/package-registry.key:ro \
-  -v /etc/ssl/package-registry.crt:/etc/ssl/package-registry.crt:ro \
-  -e EPR_ADDRESS=0.0.0.0:443 \
-  -e EPR_TLS_KEY=/etc/ssl/package-registry.key \
-  -e EPR_TLS_CERT=/etc/ssl/package-registry.crt \
-  docker.elastic.co/package-registry/distribution:{version}
----
-
-The {package-registry} supports TLS versions 1.0 through 1.3. The minimum accepted version can be configured with `EPR_TLS_MIN_VERSION`; it defaults to 1.0. To restrict the supported versions to 1.2 and 1.3, use `EPR_TLS_MIN_VERSION=1.2`.
-
-[discrete]
-=== Using custom CA certificates
-
-If you are using self-signed certificates or certificates issued by a custom Certificate Authority (CA), you need to set the file path to your CA in the `NODE_EXTRA_CA_CERTS` environment
-variable in the {kib} startup files.
-
-[source,text]
----
-NODE_EXTRA_CA_CERTS="/etc/kibana/certs/ca-cert.pem"
----
-
-[discrete]
-[[host-artifact-registry]]
-== Host your own artifact registry for binary downloads
-
-{agent}s must be able to access the {artifact-registry} to download
-binaries during upgrades. By default, {agent}s download artifacts from
-`https://artifacts.elastic.co/downloads/`.
-
-To make binaries available in an air-gapped environment, you can host your own
-custom artifact registry, and then configure {agent}s to download binaries
-from it.
-
-. Create a custom artifact registry in a location accessible to your {agent}s:
-.. Download the latest release artifacts from the public {artifact-registry} at
-`https://artifacts.elastic.co/downloads/`. For example, the
-following cURL commands download all the artifacts that may be needed to upgrade
-{agent}s running on Linux x86_64. You may replace `x86_64` with `arm64` for the ARM64 version.
-The exact list depends on which integrations you're using. Make sure to also download the corresponding sha512 checksum and PGP signature (.asc) files for each binary. These are used for file integrity validation during installations and upgrades.
-+
-["source","shell",subs="attributes"]
----
-curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-{version}-linux-x86_64.tar.gz.asc
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-{version}-linux-x86_64.tar.gz
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-{version}-linux-x86_64.tar.gz.sha512
-curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-{version}-linux-x86_64.tar.gz.asc
----
-.. On your HTTP file server, group the artifacts into directories and
-sub-directories that follow the same convention used by the {artifact-registry}:
-+
-[source,shell]
----
-<artifact-registry-URL>/<parent-directory>/<package-name>-<version>-<arch-package-type>
----
-+
-Where:
-+
-* `<parent-directory>` is in the format `beats/elastic-agent`, `fleet-server`, `endpoint-dev`, and so on.
-* `<package-name>` is in the format `elastic-agent`, `endpoint-security`, `fleet-server`, and so on.
-* `<arch-package-type>` is in the format `linux-x86_64`, `linux-arm64`, `windows_x86_64`, `darwin_x86_64`, or
-`darwin_aarch64`.
-* If you're using the DEB package manager:
-+
-** The 64-bit variant has the format `<package-name>-<version>-amd64.deb`.
-** The aarch64 variant has the format `<package-name>-<version>-arm64.deb`.
-
-* If you're using the RPM package manager:
-+
-** The 64-bit variant has the format `<package-name>-<version>-x86_64.rpm`.
-** The aarch64 variant has the format `<package-name>-<version>-aarch64.rpm`.
-
-+
-[TIP]
-====
-* If you're ever in doubt, visit the link:https://www.elastic.co/downloads/elastic-agent[{agent} download page] to see what URL the various binaries are downloaded from.
-* Make sure you have a plan or automation in place to update your artifact
-registry when new versions of {agent} are available.
-====
-. Add the agent binary download location to {fleet} settings:
-.. Open **{fleet} -> Settings**.
-.. Under **Agent Binary Download**, click **Add agent binary source** to add
-the location of your artifact registry. For more detail about these settings,
-refer to <>. If you want all {agent}s
-to download binaries from this location, set it as the default.
-. If your artifact registry is not the default, edit your agent policies to
-override the default:
-.. Go to **{fleet} -> Agent policies** and click the policy name to edit it.
-.. Click **Settings**.
-.. Under **Agent Binary Download**, select your artifact registry.
-+
-When you trigger an upgrade for any {agent}s enrolled in the policy, the
-binaries are downloaded from your artifact registry instead of the
-public repository.
-
-**Not using {fleet}?** For standalone {agent}s, you can set the binary download
-location under `agent.download.sourceURI` in the
-<> file, or run the
-<> command
-with the `--source-uri` flag specified.
diff --git a/docs/en/ingest-management/fleet/api-generated/README.md b/docs/en/ingest-management/fleet/api-generated/README.md
deleted file mode 100644
index 718d4c58f..000000000
--- a/docs/en/ingest-management/fleet/api-generated/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# OpenAPI (Experimental)
-
-The process described here is a Fleet-specific instance of the process described for generating Kibana OpenAPI documentation in general.
Refer to the Kibana OpenAPI [readme file](https://github.com/elastic/kibana/tree/main/docs/api-generated) for details.
-
-OpenAPI specifications (OAS) exist in JSON for Fleet, though they are experimental and may be incomplete or change later.
-
-A preview of the API specifications can be added to the Fleet and Elastic Agent Guide by using the following process:
-
-. Make sure that your system has Perl installed.
-
-. Create a local clone of the [elastic/kibana](https://github.com/elastic/kibana) and [elastic/ingest-docs](https://github.com/elastic/ingest-docs) repositories in your `$GIT_HOME` directory.
-
-. Install [OpenAPI Generator](https://openapi-generator.tech/docs/installation),
-or a similar tool that can generate HTML output from OAS.
-
-. Optionally validate the specifications by using the commands listed in the appropriate readmes.
-
-. Generate HTML output. For example:
-
-    ```
-    openapi-generator generate -g html -i $GIT_HOME/kibana/x-pack/plugins/fleet/common/openapi/bundled.json -o $GIT_HOME/ingest-docs/docs/en/ingest-management/fleet/api-generated/rules -t $GIT_HOME/ingest-docs/docs/en/ingest-management/fleet/api-generated/template
-    ```
-
-. Rename the output files. For example:
-    ```
-    mv $GIT_HOME/ingest-docs/docs/en/ingest-management/fleet/api-generated/rules/index.html $GIT_HOME/ingest-docs/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis-passthru.asciidoc
-    ```
-
-. Run a Perl search-and-replace command to fix the header text for each section of the API page:
-    ```
-    perl -i -pe'while (s/">[A-Z][^ ]*[a-z]\K([A-Z])/ $1/g) {}' $GIT_HOME/ingest-docs/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis-passthru.asciidoc
-    ```
-
-. If you're creating a new set of API output, you will need a page that incorporates the output by using passthrough blocks. For more information, refer to the [Asciidoctor docs](https://docs.asciidoctor.org/asciidoc/latest/pass/pass-block/).
-
-.
Verify the output by building the Kibana documentation. At this time, the output is added as a technical preview in the appendix. - -## Known issues - -- Some OAS 3.0 features such as `anyOf`, `oneOf`, and `allOf` might not display properly in the preview. These are on the [Short-term roadmap](https://openapi-generator.tech/docs/roadmap/) at this time. - diff --git a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator-ignore b/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator-ignore deleted file mode 100644 index 7484ee590..000000000 --- a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator-ignore +++ /dev/null @@ -1,23 +0,0 @@ -# OpenAPI Generator Ignore -# Generated by openapi-generator https://github.com/openapitools/openapi-generator - -# Use this file to prevent files from being overwritten by the generator. -# The patterns follow closely to .gitignore or .dockerignore. - -# As an example, the C# client generator defines ApiClient.cs. -# You can make changes and tell OpenAPI Generator to ignore just this file by uncommenting the following line: -#ApiClient.cs - -# You can match any string of characters against a directory, file or extension with a single asterisk (*): -#foo/*/qux -# The above matches foo/bar/qux and foo/baz/qux, but not foo/bar/baz/qux - -# You can recursively match patterns against a directory, file or extension with a double asterisk (**): -#foo/**/qux -# This matches foo/bar/qux, foo/baz/qux, and foo/bar/baz/qux - -# You can also negate patterns with an exclamation (!). 
-# For example, you can ignore all files in a docs folder with the file extension .md: -#docs/*.md -# Then explicitly reverse the ignore rule for a single file: -#!docs/README.md diff --git a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/FILES b/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/FILES deleted file mode 100644 index dcaf71693..000000000 --- a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/FILES +++ /dev/null @@ -1 +0,0 @@ -index.html diff --git a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/VERSION b/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/VERSION deleted file mode 100644 index 4be2c727a..000000000 --- a/docs/en/ingest-management/fleet/api-generated/rules/.openapi-generator/VERSION +++ /dev/null @@ -1 +0,0 @@ -6.5.0 \ No newline at end of file diff --git a/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis-passthru.asciidoc b/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis-passthru.asciidoc deleted file mode 100644 index 12b8e18f8..000000000 --- a/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis-passthru.asciidoc +++ /dev/null @@ -1,9221 +0,0 @@ -//// -This content is generated from the open API specification. -Any modifications made to this file will be overwritten. -//// - -++++ -
- - - -

Methods

- [ Jump to Models ] - -

Table of Contents

-
-

Agent Actions

- -

Agent Binary Download Sources

- -

Agent Policies

- -

Agent Status

- -

Agents

- -

Data Streams

- -

Elastic Package Manager EPM

- -

Enrollment APIKeys

- -

Fleet Internals

- -

Fleet Server Hosts

- -

Kubernetes

- -

Outputs

- -

Package Policies

- -

Proxies

- -

Service Tokens

- - -

Agent Actions

-
-
- Up -
post /agents/{agentId}/actions/{actionId}/cancel
-
Cancel agent action (agentActionCancel)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
actionId (required)
- -
Path Parameter — default: null
-
- - - -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{ }
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - agent_action_cancel_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /agents/action_status
-
Get agent action status (agentsActionStatus)
-
- - - - - -

Query parameters

-
-
perPage (optional)
- -
Query Parameter — The number of items to return default: 20
page (optional)
- -
Query Parameter — default: 1
errorSize (optional)
- -
Query Parameter — default: 5
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "items" : [ {
-    "nbAgentsActioned" : 0.8008281904610115,
-    "creationTime" : "creationTime",
-    "cancellationTime" : "cancellationTime",
-    "latestErrors" : [ {
-      "agentId" : "agentId",
-      "error" : "error",
-      "timestamp" : "timestamp"
-    }, {
-      "agentId" : "agentId",
-      "error" : "error",
-      "timestamp" : "timestamp"
-    } ],
-    "type" : "POLICY_REASSIGN",
-    "newPolicyId" : "newPolicyId",
-    "version" : "version",
-    "revision" : "revision",
-    "completionTime" : "completionTime",
-    "policyId" : "policyId",
-    "actionId" : "actionId",
-    "nbAgentsAck" : 1.4658129805029452,
-    "nbAgentsFailed" : 5.962133916683182,
-    "startTime" : "startTime",
-    "expiration" : "expiration",
-    "nbAgentsActionCreated" : 6.027456183070403,
-    "status" : "COMPLETE"
-  }, {
-    "nbAgentsActioned" : 0.8008281904610115,
-    "creationTime" : "creationTime",
-    "cancellationTime" : "cancellationTime",
-    "latestErrors" : [ {
-      "agentId" : "agentId",
-      "error" : "error",
-      "timestamp" : "timestamp"
-    }, {
-      "agentId" : "agentId",
-      "error" : "error",
-      "timestamp" : "timestamp"
-    } ],
-    "type" : "POLICY_REASSIGN",
-    "newPolicyId" : "newPolicyId",
-    "version" : "version",
-    "revision" : "revision",
-    "completionTime" : "completionTime",
-    "policyId" : "policyId",
-    "actionId" : "actionId",
-    "nbAgentsAck" : 1.4658129805029452,
-    "nbAgentsFailed" : 5.962133916683182,
-    "startTime" : "startTime",
-    "expiration" : "expiration",
-    "nbAgentsActionCreated" : 6.027456183070403,
-    "status" : "COMPLETE"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - agents_action_status_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /agents/{agentId}/actions
-
Create agent action (newAgentAction)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
new_agent_action_request new_agent_action_request (required)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "headers" : "headers",
-  "body" : [ 0.8008281904610115, 0.8008281904610115 ],
-  "statusCode" : 6.027456183070403
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - new_agent_action_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-

Agent Binary Download Sources

-
-
- Up -
delete /agent_download_sources/{sourceId}
-
Delete agent binary download source by ID (deleteDownloadSource)
-
- -

Path parameters

-
-
sourceId (required)
- -
Path Parameter — default: null
-
- - - -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "id" : "id"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - delete_package_policy_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /agent_download_sources
-
List agent binary download sources (getDownloadSources)
-
- - - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "total" : 0,
-  "perPage" : 1,
-  "page" : 6,
-  "items" : [ {
-    "name" : "name",
-    "host" : "host",
-    "id" : "id",
-    "is_default" : true
-  }, {
-    "name" : "name",
-    "host" : "host",
-    "id" : "id",
-    "is_default" : true
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_download_sources_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /agent_download_sources/{sourceId}
-
Get agent binary download source by ID (getOneDownloadSource)
-
- -

Path parameters

-
-
sourceId (required)
- -
Path Parameter — default: null
-
- - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "name" : "name",
-    "host" : "host",
-    "id" : "id",
-    "is_default" : true
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_one_download_source_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /agent_download_sources
-
Create agent binary download source (postDownloadSources)
-
- - -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
post_download_sources_request post_download_sources_request (optional)
- -
Body Parameter
- -
- - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "name" : "name",
-    "host" : "host",
-    "id" : "id",
-    "is_default" : true
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - post_download_sources_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
put /agent_download_sources/{sourceId}
-
Update agent binary download source by ID (updateDownloadSource)
-
- -

Path parameters

-
-
sourceId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
update_download_source_request update_download_source_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "name" : "name",
-    "host" : "host",
-    "id" : "id",
-    "is_default" : true
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_one_download_source_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-

Agent Policies

-
-
- Up -
post /agent_policies/{agentPolicyId}/copy
-
Copy agent policy by ID (agentPolicyCopy)
-
- -

Path parameters

-
-
agentPolicyId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
agent_policy_copy_request agent_policy_copy_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "updated_on" : "2000-01-23T04:56:07.000+00:00",
-    "package_policies" : [ null, null ],
-    "agent_features" : [ {
-      "name" : "name",
-      "enabled" : true
-    }, {
-      "name" : "name",
-      "enabled" : true
-    } ],
-    "description" : "description",
-    "fleet_server_host_id" : "fleet_server_host_id",
-    "monitoring_output_id" : "monitoring_output_id",
-    "inactivity_timeout" : 6.027456183070403,
-    "overrides" : "{}",
-    "download_source_id" : "download_source_id",
-    "is_protected" : true,
-    "revision" : 1.4658129805029452,
-    "agents" : 5.962133916683182,
-    "monitoring_enabled" : [ "metrics", "metrics" ],
-    "name" : "name",
-    "namespace" : "namespace",
-    "updated_by" : "updated_by",
-    "data_output_id" : "data_output_id",
-    "id" : "id",
-    "unenroll_timeout" : 0.8008281904610115
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - agent_policy_info_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /agent_policies/{agentPolicyId}/download
-
Download agent policy by ID (agentPolicyDownload)
-
- -

Path parameters

-
-
agentPolicyId (required)
- -
Path Parameter — default: null
-
- - - - -

Query parameters

-
-
download (optional)
- -
Query Parameter — default: null
standalone (optional)
- -
Query Parameter — default: null
kubernetes (optional)
- -
Query Parameter — default: null
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : "item"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - agent_policy_download_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /agent_policies/{agentPolicyId}/full
-
Get full agent policy by ID (agentPolicyFull)
-
- -

Path parameters

-
-
agentPolicyId (required)
- -
Path Parameter — default: null
-
- - - - -

Query parameters

-
-
download (optional)
- -
Query Parameter — default: null
standalone (optional)
- -
Query Parameter — default: null
kubernetes (optional)
- -
Query Parameter — default: null
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{ }
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - agent_policy_full_200_response -

400

- Generic Error - fleet_server_health_check_400_response -

get /agent_policies/{agentPolicyId}

Get agent policy by ID (agentPolicyInfo)

Get one agent policy.

Path parameters

agentPolicyId (required)
Path Parameter — default: null

Return type

agent_policy_info_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (agent_policy_info_200_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agent_policies

List agent policies (agentPolicyList)

Query parameters

perPage (optional)
Query Parameter — The number of items to return. default: 20
page (optional)
Query Parameter — default: 1
kuery (optional)
Query Parameter — default: null
full (optional)
Query Parameter — When set to true, retrieve the related package policies for each agent policy. default: null
noAgentCount (optional)
Query Parameter — When set to true, do not count how many agents are in each agent policy; this can improve performance if you are searching over a large number of agent policies. The "agents" property will always be 0 if set to true. default: null

Return type

agent_policy_list_200_response

Example data

Content-Type: application/json

{
  "total" : 5.637376656633329,
  "perPage" : 7.061401241503109,
  "page" : 2.3021358869347655,
  "items" : [ {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  }, {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (agent_policy_list_200_response)

400

Generic Error (fleet_server_health_check_400_response)
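A minimal sketch of assembling the documented query parameters for this list endpoint in Python. The base URL and the `/api/fleet` prefix are assumptions; the parameter names come from the table above.

```python
import urllib.parse
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def list_agent_policies_request(per_page=20, page=1, kuery=None, full=False):
    """Build (but do not send) GET /agent_policies with its query parameters."""
    params = {"perPage": per_page, "page": page}
    if kuery is not None:
        params["kuery"] = kuery  # KQL filter, e.g. a name match
    if full:
        params["full"] = "true"  # also retrieve related package policies
    url = f"{KIBANA}/api/fleet/agent_policies?" + urllib.parse.urlencode(params)
    return urllib.request.Request(url, headers={"Accept": "application/json"})


req = list_agent_policies_request(per_page=50, kuery="name:prod")
```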

post /agent_policies/_bulk_get

Bulk get agent policies (bulkGetAgentPolicies)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_get_agent_policies_request (optional)
Body Parameter

Return type

bulk_get_agent_policies_200_response

Example data

Content-Type: application/json

{
  "items" : [ {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  }, {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_get_agent_policies_200_response)

400

Generic Error (fleet_server_health_check_400_response)
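A hedged sketch of building the bulk-get request in Python. The `bulk_get_agent_policies_request` body schema is not reproduced in this reference, so a payload carrying a list of policy `ids` is an assumption, as is the `/api/fleet` prefix and the `kbn-xsrf` header on this POST.

```python
import json
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def bulk_get_agent_policies_request(ids):
    """Build (but do not send) POST /agent_policies/_bulk_get.

    The body field name "ids" is assumed; check the
    bulk_get_agent_policies_request schema for your stack version.
    """
    body = json.dumps({"ids": list(ids)}).encode("utf-8")
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agent_policies/_bulk_get",
        data=body,
        headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
        method="POST",
    )


req = bulk_get_agent_policies_request(["policy-1", "policy-2"])
```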

post /agent_policies

Create agent policy (createAgentPolicy)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

agent_policy_create_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

create_agent_policy_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (create_agent_policy_200_response)

400

Generic Error (fleet_server_health_check_400_response)
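The create call is a POST that must carry the `kbn-xsrf` header described above. A minimal Python sketch follows; the `name`/`namespace` body fields mirror the example response, but the full `agent_policy_create_request` schema has more fields, and the base URL and `/api/fleet` prefix are assumptions.

```python
import json
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def create_agent_policy_request(name, namespace="default"):
    """Build (but do not send) POST /agent_policies with the mandatory kbn-xsrf header."""
    body = json.dumps({"name": name, "namespace": namespace}).encode("utf-8")
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agent_policies",
        data=body,
        headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
        method="POST",
    )


req = create_agent_policy_request("prod-policy")
```

Note that `kbn-xsrf` can be any string; it only needs to be present.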

post /agent_policies/delete

Delete agent policy by ID (deleteAgentPolicy)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

delete_agent_policy_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

delete_agent_policy_200_response

Example data

Content-Type: application/json

{
  "success" : true,
  "id" : "id"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (delete_agent_policy_200_response)

400

Generic Error (fleet_server_health_check_400_response)
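Deletion is a POST to a fixed path with the policy identified in the body rather than in the URL. A hedged Python sketch; the `agentPolicyId` body field name is an assumption (the `delete_agent_policy_request` schema is not reproduced here), as are the base URL and `/api/fleet` prefix.

```python
import json
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def delete_agent_policy_request(policy_id):
    """Build (but do not send) POST /agent_policies/delete.

    The body field name "agentPolicyId" is assumed; verify it against the
    delete_agent_policy_request schema for your stack version.
    """
    body = json.dumps({"agentPolicyId": policy_id}).encode("utf-8")
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agent_policies/delete",
        data=body,
        headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
        method="POST",
    )


req = delete_agent_policy_request("policy-1")
```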

put /agent_policies/{agentPolicyId}

Update agent policy by ID (updateAgentPolicy)

Path parameters

agentPolicyId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

agent_policy_update_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

agent_policy_info_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "updated_on" : "2000-01-23T04:56:07.000+00:00",
    "package_policies" : [ null, null ],
    "agent_features" : [ {
      "name" : "name",
      "enabled" : true
    }, {
      "name" : "name",
      "enabled" : true
    } ],
    "description" : "description",
    "fleet_server_host_id" : "fleet_server_host_id",
    "monitoring_output_id" : "monitoring_output_id",
    "inactivity_timeout" : 6.027456183070403,
    "overrides" : "{}",
    "download_source_id" : "download_source_id",
    "is_protected" : true,
    "revision" : 1.4658129805029452,
    "agents" : 5.962133916683182,
    "monitoring_enabled" : [ "metrics", "metrics" ],
    "name" : "name",
    "namespace" : "namespace",
    "updated_by" : "updated_by",
    "data_output_id" : "data_output_id",
    "id" : "id",
    "unenroll_timeout" : 0.8008281904610115
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (agent_policy_info_200_response)

400

Generic Error (fleet_server_health_check_400_response)

Agent Status


get /agent_status/data

Get incoming agent data (getAgentData)

Query parameters

agentsIds (required)
Query Parameter — default: null

Return type

get_agent_data_200_response

Example data

Content-Type: application/json

{
  "items" : [ {
    "key" : {
      "data" : true
    }
  }, {
    "key" : {
      "data" : true
    }
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_data_200_response)

400

Generic Error (fleet_server_health_check_400_response)
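Since `agentsIds` accepts multiple agents, one common convention is to repeat the query parameter once per ID. A minimal Python sketch under that assumption (plus the assumed `/api/fleet` prefix and placeholder base URL):

```python
import urllib.parse
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def agent_data_request(agent_ids):
    """Build (but do not send) GET /agent_status/data.

    Repeating agentsIds per agent is an assumed convention; verify the
    expected encoding against your stack version.
    """
    query = urllib.parse.urlencode([("agentsIds", a) for a in agent_ids])
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agent_status/data?{query}",
        headers={"Accept": "application/json"},
    )


req = agent_data_request(["agent-1", "agent-2"])
```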

get /agent_status

Get agent status summary (getAgentStatus)

Query parameters

policyId (optional)
Query Parameter — default: null
kuery (optional)
Query Parameter — default: null

Return type

get_agent_status_200_response

Example data

Content-Type: application/json

{
  "all" : 2,
  "offline" : 5,
  "other" : 7,
  "total" : 9,
  "inactive" : 1,
  "updating" : 3,
  "online" : 2,
  "active" : 4,
  "error" : 0,
  "unenrolled" : 5,
  "events" : 6
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_status_200_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agent-status

Get agent status summary (getAgentStatusDeprecated)

Query parameters

policyId (optional)
Query Parameter — default: null

Return type

get_agent_status_deprecated_200_response

Example data

Content-Type: application/json

{
  "offline" : 5,
  "other" : 2,
  "total" : 7,
  "inactive" : 1,
  "updating" : 9,
  "online" : 5,
  "error" : 0,
  "events" : 6
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_status_deprecated_200_response)

400

Generic Error (fleet_server_health_check_400_response)

Agents


post /agents/bulk_reassign

Bulk reassign agents (bulkReassignAgents)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_reassign_agents_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

bulk_upgrade_agents_200_response

Example data

Content-Type: application/json

{
  "actionId" : "actionId"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_upgrade_agents_200_response)

400

Generic Error (fleet_server_health_check_400_response)

post /agents/bulk_request_diagnostics

Bulk request diagnostics from agents (bulkRequestDiagnostics)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_request_diagnostics_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

bulk_upgrade_agents_200_response

Example data

Content-Type: application/json

{
  "actionId" : "actionId"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_upgrade_agents_200_response)

400

Generic Error (fleet_server_health_check_400_response)

post /agents/bulk_unenroll

Bulk unenroll agents (bulkUnenrollAgents)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_unenroll_agents_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

bulk_upgrade_agents_200_response

Example data

Content-Type: application/json

{
  "actionId" : "actionId"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_upgrade_agents_200_response)

400

Generic Error (fleet_server_health_check_400_response)

post /agents/bulk_update_agent_tags

Bulk update agent tags (bulkUpdateAgentTags)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_update_agent_tags_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

bulk_upgrade_agents_200_response

Example data

Content-Type: application/json

{
  "actionId" : "actionId"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_upgrade_agents_200_response)

400

Generic Error (fleet_server_health_check_400_response)

post /agents/bulk_upgrade

Bulk upgrade agents (bulkUpgradeAgents)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_upgrade_agents (required)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

bulk_upgrade_agents_200_response

Example data

Content-Type: application/json

{
  "actionId" : "actionId"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (bulk_upgrade_agents_200_response)

400

Generic Error (fleet_server_health_check_400_response)
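Unlike the other bulk endpoints, the body here is required. A hedged Python sketch; the `agents` and `version` body field names are assumptions (the `bulk_upgrade_agents` schema is not reproduced here), as are the base URL and `/api/fleet` prefix.

```python
import json
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def bulk_upgrade_agents_request(agents, version):
    """Build (but do not send) POST /agents/bulk_upgrade.

    "agents" (a list of agent IDs) and "version" (the target stack version)
    are assumed field names; check the bulk_upgrade_agents schema.
    """
    body = json.dumps({"agents": agents, "version": version}).encode("utf-8")
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agents/bulk_upgrade",
        data=body,
        headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
        method="POST",
    )


req = bulk_upgrade_agents_request(["agent-1", "agent-2"], "8.12.0")
```

The 200 response carries an `actionId` that can be used to track the bulk action.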

delete /agents/{agentId}

Delete agent by ID (deleteAgent)

Path parameters

agentId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

delete_agent_200_response

Example data

Content-Type: application/json

{
  "action" : "deleted"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (delete_agent_200_response)

400

Generic Error (fleet_server_health_check_400_response)
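A DELETE with no body, but the `kbn-xsrf` header is still required. A minimal Python sketch under the usual assumptions (placeholder base URL, `/api/fleet` prefix):

```python
import urllib.parse
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def delete_agent_request(agent_id):
    """Build (but do not send) DELETE /agents/{agentId}."""
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agents/{urllib.parse.quote(agent_id)}",
        headers={"kbn-xsrf": "true"},
        method="DELETE",
    )


req = delete_agent_request("agent-1")
```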

get /agents/{agentId}

Get agent by ID (getAgent)

Path parameters

agentId (required)
Path Parameter — default: null

Query parameters

withMetrics (optional)
Query Parameter — Return agent metrics (false by default). default: null

Return type

get_agent_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "default_api_key" : "default_api_key",
    "enrolled_at" : "enrolled_at",
    "access_api_key" : "access_api_key",
    "components" : [ {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    }, {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    } ],
    "user_provided_metadata" : "{}",
    "unenrollment_started_at" : "unenrollment_started_at",
    "policy_id" : "policy_id",
    "policy_revision" : 0.8008281904610115,
    "active" : true,
    "local_metadata" : "{}",
    "last_checkin" : "last_checkin",
    "access_api_key_id" : "access_api_key_id",
    "default_api_key_id" : "default_api_key_id",
    "unenrolled_at" : "unenrolled_at",
    "id" : "id",
    "metrics" : {
      "cpu_avg" : 6.027456183070403,
      "memory_size_byte_avg" : 1.4658129805029452
    }
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_200_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agents/tags

List agent tags (getAgentTags)

Return type

get_agent_tags_response

Example data

Content-Type: application/json

{
  "items" : [ "items", "items" ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_tags_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agents/files/{fileId}/{fileName}

Get file uploaded by agent (getAgentUploadFile)

Path parameters

fileId (required)
Path Parameter — default: null
fileName (required)
Path Parameter — default: null

Return type

get_agent_upload_file_200_response

Example data

Content-Type: application/json

{
  "body" : {
    "items" : {
      "headers" : "",
      "body" : ""
    }
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agent_upload_file_200_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agents

List agents (getAgents)

Query parameters

perPage (optional)
Query Parameter — The number of items to return. default: 20
page (optional)
Query Parameter — default: 1
kuery (optional)
Query Parameter — default: null
showInactive (optional)
Query Parameter — default: null
showUpgradeable (optional)
Query Parameter — default: null
sortField (optional)
Query Parameter — default: null
sortOrder (optional)
Query Parameter — default: null
withMetrics (optional)
Query Parameter — Return agent metrics (false by default). default: null
getStatusSummary (optional)
Query Parameter — default: null

Return type

get_agents_response

Example data

Content-Type: application/json

{
  "total" : 5.962133916683182,
  "statusSummary" : {
    "offline" : 7.061401241503109,
    "inactive" : 2.027123023002322,
    "updating" : 1.0246457001441578,
    "online" : 3.616076749251911,
    "enrolling" : 4.145608029883936,
    "unenrolling" : 7.386281948385884,
    "degraded" : 1.4894159098541704,
    "error" : 9.301444243932576,
    "unenrolled" : 1.2315135367772556
  },
  "perPage" : 2.3021358869347655,
  "page" : 5.637376656633329,
  "list" : [ {
    "default_api_key" : "default_api_key",
    "enrolled_at" : "enrolled_at",
    "access_api_key" : "access_api_key",
    "components" : [ {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    }, {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    } ],
    "user_provided_metadata" : "{}",
    "unenrollment_started_at" : "unenrollment_started_at",
    "policy_id" : "policy_id",
    "policy_revision" : 0.8008281904610115,
    "active" : true,
    "local_metadata" : "{}",
    "last_checkin" : "last_checkin",
    "access_api_key_id" : "access_api_key_id",
    "default_api_key_id" : "default_api_key_id",
    "unenrolled_at" : "unenrolled_at",
    "id" : "id",
    "metrics" : {
      "cpu_avg" : 6.027456183070403,
      "memory_size_byte_avg" : 1.4658129805029452
    }
  }, {
    "default_api_key" : "default_api_key",
    "enrolled_at" : "enrolled_at",
    "access_api_key" : "access_api_key",
    "components" : [ {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    }, {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    } ],
    "user_provided_metadata" : "{}",
    "unenrollment_started_at" : "unenrollment_started_at",
    "policy_id" : "policy_id",
    "policy_revision" : 0.8008281904610115,
    "active" : true,
    "local_metadata" : "{}",
    "last_checkin" : "last_checkin",
    "access_api_key_id" : "access_api_key_id",
    "default_api_key_id" : "default_api_key_id",
    "unenrolled_at" : "unenrolled_at",
    "id" : "id",
    "metrics" : {
      "cpu_avg" : 6.027456183070403,
      "memory_size_byte_avg" : 1.4658129805029452
    }
  } ],
  "items" : [ {
    "default_api_key" : "default_api_key",
    "enrolled_at" : "enrolled_at",
    "access_api_key" : "access_api_key",
    "components" : [ {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    }, {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    } ],
    "user_provided_metadata" : "{}",
    "unenrollment_started_at" : "unenrollment_started_at",
    "policy_id" : "policy_id",
    "policy_revision" : 0.8008281904610115,
    "active" : true,
    "local_metadata" : "{}",
    "last_checkin" : "last_checkin",
    "access_api_key_id" : "access_api_key_id",
    "default_api_key_id" : "default_api_key_id",
    "unenrolled_at" : "unenrolled_at",
    "id" : "id",
    "metrics" : {
      "cpu_avg" : 6.027456183070403,
      "memory_size_byte_avg" : 1.4658129805029452
    }
  }, {
    "default_api_key" : "default_api_key",
    "enrolled_at" : "enrolled_at",
    "access_api_key" : "access_api_key",
    "components" : [ {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    }, {
      "id" : "id",
      "units" : [ {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      }, {
        "payload" : "{}",
        "id" : "id",
        "message" : "message"
      } ],
      "type" : "type",
      "message" : "message"
    } ],
    "user_provided_metadata" : "{}",
    "unenrollment_started_at" : "unenrollment_started_at",
    "policy_id" : "policy_id",
    "policy_revision" : 0.8008281904610115,
    "active" : true,
    "local_metadata" : "{}",
    "last_checkin" : "last_checkin",
    "access_api_key_id" : "access_api_key_id",
    "default_api_key_id" : "default_api_key_id",
    "unenrolled_at" : "unenrolled_at",
    "id" : "id",
    "metrics" : {
      "cpu_avg" : 6.027456183070403,
      "memory_size_byte_avg" : 1.4658129805029452
    }
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (get_agents_response)

400

Generic Error (fleet_server_health_check_400_response)
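A minimal Python sketch combining several of the query parameters documented above (placeholder base URL, assumed `/api/fleet` prefix):

```python
import urllib.parse
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def list_agents_request(per_page=20, page=1, kuery=None,
                        show_inactive=False, with_metrics=False):
    """Build (but do not send) GET /agents with common query parameters."""
    params = {"perPage": per_page, "page": page}
    if kuery is not None:
        params["kuery"] = kuery
    if show_inactive:
        params["showInactive"] = "true"
    if with_metrics:
        params["withMetrics"] = "true"  # also return cpu/memory averages
    url = f"{KIBANA}/api/fleet/agents?" + urllib.parse.urlencode(params)
    return urllib.request.Request(url, headers={"Accept": "application/json"})


req = list_agents_request(show_inactive=True, with_metrics=True)
```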

post /agents

List agents by action IDs (getAgentsByActions)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

get_agents_by_actions_request (required)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

array[array[String]]

Example data

Content-Type: application/json

[ [ "", "" ], [ "", "" ] ]

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK

400

Generic Error (fleet_server_health_check_400_response)

get /agents/setup

Get agent setup info (getAgentsSetupStatus)

Return type

fleet_status_response

Example data

Content-Type: application/json

{
  "missing_requirements" : [ "tls_required", "tls_required" ],
  "package_verification_key_id" : "package_verification_key_id",
  "isReady" : true,
  "missing_optional_features" : [ "encrypted_saved_object_encryption_key_required", "encrypted_saved_object_encryption_key_required" ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (fleet_status_response)

400

Generic Error (fleet_server_health_check_400_response)

get /agents/{agentId}/uploads

List agent uploads (listAgentUploads)

Path parameters

agentId (required)
Path Parameter — default: null

Return type

list_agent_uploads_200_response

Example data

Content-Type: application/json

{
  "body" : {
    "item" : [ {
      "createTime" : "createTime",
      "filePath" : "filePath",
      "name" : "name",
      "actionId" : "actionId",
      "id" : "id",
      "status" : "READY"
    }, {
      "createTime" : "createTime",
      "filePath" : "filePath",
      "name" : "name",
      "actionId" : "actionId",
      "id" : "id",
      "status" : "READY"
    } ]
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (list_agent_uploads_200_response)

400

Generic Error (fleet_server_health_check_400_response)
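A short Python sketch of building this GET request (placeholder base URL, assumed `/api/fleet` prefix). Each item in the response carries the `fileId`/`fileName` details needed for the `GET /agents/files/{fileId}/{fileName}` download endpoint documented above.

```python
import urllib.parse
import urllib.request

KIBANA = "http://localhost:5601"  # placeholder base URL


def agent_uploads_request(agent_id):
    """Build (but do not send) GET /agents/{agentId}/uploads."""
    return urllib.request.Request(
        f"{KIBANA}/api/fleet/agents/{urllib.parse.quote(agent_id)}/uploads",
        headers={"Accept": "application/json"},
    )


req = agent_uploads_request("agent-1")
```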

post /agents/{agentId}/reassign

Reassign agent (reassignAgent)

Path parameters

agentId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

reassign_agent_deprecated_request (required)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti-CSRF (Cross-Site Request Forgery) token. Can be any string value. default: null

Return type

Object

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200

OK (Object)

400

Generic Error (fleet_server_health_check_400_response)
-
-
-
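A minimal sketch of the reassignAgent request shape. The base URL, the `/api/fleet` prefix, and the `policy_id` body field are assumptions for illustration only — consult the reassign_agent_deprecated_request schema for the authoritative body:

```python
import json
import urllib.request

agent_id = "agent-1"  # illustrative agent id
req = urllib.request.Request(
    url=f"http://localhost:5601/api/fleet/agents/{agent_id}/reassign",
    method="POST",
    data=json.dumps({"policy_id": "new-policy"}).encode(),  # assumed field name
    headers={
        "Content-Type": "application/json",  # per the Consumes section
        "kbn-xsrf": "true",                  # required anti-CSRF header
    },
)
print(req.get_method(), req.full_url)
```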
- Up -
put /agents/{agentId}/reassign
-
Reassign agent (reassignAgentDeprecated)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
reassign_agent_deprecated_request reassign_agent_deprecated_request (required)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

-
- - Object -
- - - - -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - Object -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /agents/{agentId}/request_diagnostics
-
Request agent diagnostics (requestDiagnosticsAgent)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- - - -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "actionId" : "actionId"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - bulk_upgrade_agents_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
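requestDiagnosticsAgent takes no request body — only the `kbn-xsrf` header is required — and a 200 response carries the id of the queued action, as in the example data above. A sketch (base URL and `/api/fleet` prefix assumed):

```python
import json
import urllib.request

agent_id = "agent-1"  # illustrative agent id
req = urllib.request.Request(
    url=f"http://localhost:5601/api/fleet/agents/{agent_id}/request_diagnostics",
    method="POST",
    headers={"kbn-xsrf": "true"},  # required anti-CSRF header; no body needed
)

# The 200 body documented above returns the queued action's id.
action_id = json.loads('{"actionId": "actionId"}')["actionId"]
print(req.get_method(), action_id)
```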
- Up -
post /agents/setup
-
Initiate agent setup (setupAgents)
-
- - -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
setup_agents_request setup_agents_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "isInitialized" : true,
-  "nonFatalErrors" : [ {
-    "name" : "name",
-    "message" : "message"
-  }, {
-    "name" : "name",
-    "message" : "message"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - fleet_setup_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /agents/{agentId}/unenroll
-
Unenroll agent (unenrollAgent)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
unenroll_agent_request unenroll_agent_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

-
- - Object -
- - - - -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - Object -

400

- BAD REQUEST - unenroll_agent_400_response -
-
-
-
- Up -
put /agents/{agentId}
-
Update agent by ID (updateAgent)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
update_agent_request update_agent_request (required)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "default_api_key" : "default_api_key",
-    "enrolled_at" : "enrolled_at",
-    "access_api_key" : "access_api_key",
-    "components" : [ {
-      "id" : "id",
-      "units" : [ {
-        "payload" : "{}",
-        "id" : "id",
-        "message" : "message"
-      }, {
-        "payload" : "{}",
-        "id" : "id",
-        "message" : "message"
-      } ],
-      "type" : "type",
-      "message" : "message"
-    }, {
-      "id" : "id",
-      "units" : [ {
-        "payload" : "{}",
-        "id" : "id",
-        "message" : "message"
-      }, {
-        "payload" : "{}",
-        "id" : "id",
-        "message" : "message"
-      } ],
-      "type" : "type",
-      "message" : "message"
-    } ],
-    "user_provided_metadata" : "{}",
-    "unenrollment_started_at" : "unenrollment_started_at",
-    "policy_id" : "policy_id",
-    "policy_revision" : 0.8008281904610115,
-    "active" : true,
-    "local_metadata" : "{}",
-    "last_checkin" : "last_checkin",
-    "access_api_key_id" : "access_api_key_id",
-    "default_api_key_id" : "default_api_key_id",
-    "unenrolled_at" : "unenrolled_at",
-    "id" : "id",
-    "metrics" : {
-      "cpu_avg" : 6.027456183070403,
-      "memory_size_byte_avg" : 1.4658129805029452
-    }
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_agent_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /agents/{agentId}/upgrade
-
Upgrade agent (upgradeAgent)
-
- -

Path parameters

-
-
agentId (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
upgrade_agent upgrade_agent (required)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "force" : true,
-  "version" : "version",
-  "source_uri" : "source_uri"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - upgrade_agent -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
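The upgradeAgent request body uses exactly the fields shown in the example data above (`version`, `source_uri`, `force`); the values below are illustrative, and the precise semantics of `source_uri` and `force` are not spelled out in this reference:

```python
import json

# upgrade_agent body, fields taken from the documented example data.
body = {
    "version": "8.12.0",                                      # illustrative
    "source_uri": "https://artifacts.example.com/downloads",  # illustrative
    "force": False,
}
encoded = json.dumps(body).encode()
print(encoded)
```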

Data Streams

-
-
- Up -
get /data_streams
-
List data streams (dataStreamsList)
-
- - - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "data_streams" : [ {
-    "last_activity_ms" : 0.8008281904610115,
-    "size_in_bytes_formatted" : "size_in_bytes_formatted",
-    "package" : "package",
-    "package_version" : "package_version",
-    "namespace" : "namespace",
-    "size_in_bytes" : 6.027456183070403,
-    "index" : "index",
-    "type" : "type",
-    "dataset" : "dataset",
-    "dashboard" : [ {
-      "id" : "id",
-      "title" : "title"
-    }, {
-      "id" : "id",
-      "title" : "title"
-    } ]
-  }, {
-    "last_activity_ms" : 0.8008281904610115,
-    "size_in_bytes_formatted" : "size_in_bytes_formatted",
-    "package" : "package",
-    "package_version" : "package_version",
-    "namespace" : "namespace",
-    "size_in_bytes" : 6.027456183070403,
-    "index" : "index",
-    "type" : "type",
-    "dataset" : "dataset",
-    "dashboard" : [ {
-      "id" : "id",
-      "title" : "title"
-    }, {
-      "id" : "id",
-      "title" : "title"
-    } ]
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - data_streams_list_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
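Working with the dataStreamsList example payload above: a client will typically aggregate over `data_streams`, for instance summing `size_in_bytes`. This sketch uses a trimmed copy of the documented response:

```python
import json

# Trimmed copy of the documented dataStreamsList 200 body.
payload = json.loads("""
{"data_streams": [
  {"package": "package", "size_in_bytes": 6.027456183070403,
   "dataset": "dataset", "namespace": "namespace"},
  {"package": "package", "size_in_bytes": 6.027456183070403,
   "dataset": "dataset", "namespace": "namespace"}
]}
""")

# Total on-disk footprint across the returned data streams.
total = sum(ds["size_in_bytes"] for ds in payload["data_streams"])
print(total)
```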

Elastic Package Manager EPM

-
-
- Up -
post /epm/bulk_assets
-
Bulk get assets (bulkGetAssets)
-
- - -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
bulk_get_assets_request bulk_get_assets_request (optional)
- -
Body Parameter
- -
- - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ [ {
-    "attributes" : {
-      "description" : "description",
-      "title" : "title"
-    },
-    "id" : "id",
-    "updatedAt" : "updatedAt"
-  }, {
-    "attributes" : {
-      "description" : "description",
-      "title" : "title"
-    },
-    "id" : "id",
-    "updatedAt" : "updatedAt"
-  } ], [ {
-    "attributes" : {
-      "description" : "description",
-      "title" : "title"
-    },
-    "id" : "id",
-    "updatedAt" : "updatedAt"
-  }, {
-    "attributes" : {
-      "description" : "description",
-      "title" : "title"
-    },
-    "id" : "id",
-    "updatedAt" : "updatedAt"
-  } ] ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_bulk_assets_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /epm/packages/_bulk
-
Bulk install packages (bulkInstallPackages)
-
- - -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
bulk_install_packages_request bulk_install_packages_request (optional)
- -
Body Parameter
- -
- - -

Query parameters

-
-
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ {
-    "name" : "name",
-    "version" : "version"
-  }, {
-    "name" : "name",
-    "version" : "version"
-  } ],
-  "items" : [ {
-    "name" : "name",
-    "version" : "version"
-  }, {
-    "name" : "name",
-    "version" : "version"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - bulk_install_packages_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
delete /epm/packages/{pkgName}/{pkgVersion}
-
Delete package (deletePackage)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
pkgVersion (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
install_package_deprecated_request install_package_deprecated_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- -

Query parameters

-
-
ignoreUnverified (optional)
- -
Query Parameter — Ignore if the package fails signature verification default: null
full (optional)
- -
Query Parameter — Return all fields from the package manifest, not just those supported by the Elastic Package Registry default: null
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "items" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - update_package_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
delete /epm/packages/{pkgkey}
-
Delete package (deletePackageDeprecated)
-
- -

Path parameters

-
-
pkgkey (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
install_package_deprecated_request install_package_deprecated_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - install_package_deprecated_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages/{pkgName}/{pkgVersion}
-
Get package (getPackage)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
pkgVersion (required)
- -
Path Parameter — default: null
-
- - - - -

Query parameters

-
-
ignoreUnverified (optional)
- -
Query Parameter — Ignore if the package fails signature verification default: null
full (optional)
- -
Query Parameter — Return all fields from the package manifest, not just those supported by the Elastic Package Registry default: null
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
null
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_package_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/categories
-
List package categories (getPackageCategories)
-
- - - - - -

Query parameters

-
-
prerelease (optional)
- -
Query Parameter — Whether to include prerelease packages in categories count (e.g. beta, rc, preview) default: false
experimental (optional)
- -
Query Parameter — default: false
include_policy_templates (optional)
- -
Query Parameter — default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ {
-    "count" : 0.8008281904610115,
-    "id" : "id",
-    "title" : "title"
-  }, {
-    "count" : 0.8008281904610115,
-    "id" : "id",
-    "title" : "title"
-  } ],
-  "items" : [ {
-    "count" : 6.027456183070403,
-    "id" : "id",
-    "title" : "title"
-  }, {
-    "count" : 6.027456183070403,
-    "id" : "id",
-    "title" : "title"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_categories_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages/{pkgkey}
-
Get package (getPackageDeprecated)
-
- -

Path parameters

-
-
pkgkey (required)
- -
Path Parameter — default: null
-
- - - - -

Query parameters

-
-
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
null
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_package_deprecated_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages/{pkgName}/stats
-
Get package stats (getPackageStats)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
-
- - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : {
-    "agent_policy_count" : 0
-  }
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_package_stats_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
post /epm/packages/{pkgName}/{pkgVersion}
-
Install package (installPackage)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
pkgVersion (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
install_package_request install_package_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- -

Query parameters

-
-
ignoreUnverified (optional)
- -
Query Parameter — Ignore if the package fails signature verification default: null
full (optional)
- -
Query Parameter — Return all fields from the package manifest, not just those supported by the Elastic Package Registry default: null
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "_meta" : {
-    "install_source" : "registry"
-  },
-  "items" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - install_package_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
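installPackage addresses a package by name and version in the path, with the query parameters documented above. This sketch builds the request URL (the `/api/fleet` base path and package name are assumptions for illustration):

```python
from urllib.parse import urlencode

pkg_name, pkg_version = "nginx", "1.0.0"  # illustrative package
query = urlencode({"prerelease": "false", "ignoreUnverified": "false"})
url = f"/api/fleet/epm/packages/{pkg_name}/{pkg_version}?{query}"
print(url)
```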
- Up -
post /epm/packages
-
Install a package by direct upload (installPackageByUpload)
-
- - -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/zip
  • -
  • application/gzip
  • -
- -

Request body

-
-
body file (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "_meta" : {
-    "install_source" : "upload"
-  },
-  "items" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - install_package_by_upload_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
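installPackageByUpload consumes `application/zip` or `application/gzip`, so the request body is the raw archive bytes rather than JSON. This sketch builds a tiny in-memory zip standing in for a package archive (the file name and contents are illustrative, not a real package layout):

```python
import io
import zipfile

# Build an in-memory archive to act as the upload body.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.yml", "name: demo\n")  # illustrative content
archive = buf.getvalue()

headers = {
    "Content-Type": "application/zip",  # per the Consumes section
    "kbn-xsrf": "true",                 # required anti-CSRF header
}
print(len(archive), sorted(headers))
```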
- Up -
post /epm/packages/{pkgkey}
-
Install package (installPackageDeprecated)
-
- -

Path parameters

-
-
pkgkey (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
install_package_deprecated_request install_package_deprecated_request (optional)
- -
Body Parameter
- -
- -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - install_package_deprecated_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages
-
List packages (listAllPackages)
-
- - - - - -

Query parameters

-
-
excludeInstallStatus (optional)
- -
Query Parameter — Whether to exclude the install status of each package. Enabling this option will opt in to caching for the response via cache-control headers. If you don't need up-to-date installation info for a package, and are querying for a list of available packages, providing this flag can improve performance substantially. default: false
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
experimental (optional)
- -
Query Parameter — default: false
category (optional)
- -
Query Parameter — default: null
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "response" : [ {
-    "savedObject" : "{}",
-    "path" : "path",
-    "download" : "download",
-    "name" : "name",
-    "description" : "description",
-    "installationInfo" : {
-      "installed_kibana" : {
-        "id" : "id"
-      },
-      "created_at" : "created_at",
-      "type" : "type",
-      "verification_status" : "verified",
-      "installed_es" : {
-        "deferred" : true,
-        "id" : "id"
-      },
-      "version" : "version",
-      "experimental_data_stream_features" : "",
-      "updated_at" : "updated_at",
-      "install_status" : "installed",
-      "install_kibana_space_id" : "install_kibana_space_id",
-      "verification_key_id" : "verification_key_id",
-      "name" : "name",
-      "install_source" : "registry",
-      "install_format_schema_version" : "install_format_schema_version",
-      "namespaces" : [ "namespaces", "namespaces" ]
-    },
-    "icons" : "icons",
-    "title" : "title",
-    "type" : "type",
-    "version" : "version",
-    "status" : "status"
-  }, {
-    "savedObject" : "{}",
-    "path" : "path",
-    "download" : "download",
-    "name" : "name",
-    "description" : "description",
-    "installationInfo" : {
-      "installed_kibana" : {
-        "id" : "id"
-      },
-      "created_at" : "created_at",
-      "type" : "type",
-      "verification_status" : "verified",
-      "installed_es" : {
-        "deferred" : true,
-        "id" : "id"
-      },
-      "version" : "version",
-      "experimental_data_stream_features" : "",
-      "updated_at" : "updated_at",
-      "install_status" : "installed",
-      "install_kibana_space_id" : "install_kibana_space_id",
-      "verification_key_id" : "verification_key_id",
-      "name" : "name",
-      "install_source" : "registry",
-      "install_format_schema_version" : "install_format_schema_version",
-      "namespaces" : [ "namespaces", "namespaces" ]
-    },
-    "icons" : "icons",
-    "title" : "title",
-    "type" : "type",
-    "version" : "version",
-    "status" : "status"
-  } ],
-  "items" : [ {
-    "savedObject" : "{}",
-    "path" : "path",
-    "download" : "download",
-    "name" : "name",
-    "description" : "description",
-    "installationInfo" : {
-      "installed_kibana" : {
-        "id" : "id"
-      },
-      "created_at" : "created_at",
-      "type" : "type",
-      "verification_status" : "verified",
-      "installed_es" : {
-        "deferred" : true,
-        "id" : "id"
-      },
-      "version" : "version",
-      "experimental_data_stream_features" : "",
-      "updated_at" : "updated_at",
-      "install_status" : "installed",
-      "install_kibana_space_id" : "install_kibana_space_id",
-      "verification_key_id" : "verification_key_id",
-      "name" : "name",
-      "install_source" : "registry",
-      "install_format_schema_version" : "install_format_schema_version",
-      "namespaces" : [ "namespaces", "namespaces" ]
-    },
-    "icons" : "icons",
-    "title" : "title",
-    "type" : "type",
-    "version" : "version",
-    "status" : "status"
-  }, {
-    "savedObject" : "{}",
-    "path" : "path",
-    "download" : "download",
-    "name" : "name",
-    "description" : "description",
-    "installationInfo" : {
-      "installed_kibana" : {
-        "id" : "id"
-      },
-      "created_at" : "created_at",
-      "type" : "type",
-      "verification_status" : "verified",
-      "installed_es" : {
-        "deferred" : true,
-        "id" : "id"
-      },
-      "version" : "version",
-      "experimental_data_stream_features" : "",
-      "updated_at" : "updated_at",
-      "install_status" : "installed",
-      "install_kibana_space_id" : "install_kibana_space_id",
-      "verification_key_id" : "verification_key_id",
-      "name" : "name",
-      "install_source" : "registry",
-      "install_format_schema_version" : "install_format_schema_version",
-      "namespaces" : [ "namespaces", "namespaces" ]
-    },
-    "icons" : "icons",
-    "title" : "title",
-    "type" : "type",
-    "version" : "version",
-    "status" : "status"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - get_packages_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages/limited
-
Get limited package list (listLimitedPackages)
-
- - - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "items" : [ "items", "items" ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - list_limited_packages_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/packages/{pkgName}/{pkgVersion}/{filePath}
-
Get package file (packagesGetFile)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
pkgVersion (required)
- -
Path Parameter — default: null
filePath (required)
- -
Path Parameter — default: null
-
- - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "headers" : "{}",
-  "body" : "{}",
-  "statusCode" : 0.8008281904610115
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - packages_get_file_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
get /epm/verification_key_id
-
Get package signature verification key ID (packagesGetVerificationKeyId)
-
- - - - - - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "headers" : "{}",
-  "body" : {
-    "id" : "id"
-  },
-  "statusCode" : 0.8008281904610115
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - packages_get_verification_key_id_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
- Up -
put /epm/packages/{pkgName}/{pkgVersion}
-
Update package settings (updatePackage)
-
- -

Path parameters

-
-
pkgName (required)
- -
Path Parameter — default: null
pkgVersion (required)
- -
Path Parameter — default: null
-
- -

Consumes

- This API call consumes the following media types via the Content-Type request header: -
    -
  • application/json
  • -
- -

Request body

-
-
update_package_request update_package_request (optional)
- -
Body Parameter
- -
- - -

Query parameters

-
-
ignoreUnverified (optional)
- -
Query Parameter — Ignore if the package fails signature verification default: null
full (optional)
- -
Query Parameter — Return all fields from the package manifest, not just those supported by the Elastic Package Registry default: null
prerelease (optional)
- -
Query Parameter — Whether to return prerelease versions of packages (e.g. beta, rc, preview) default: false
-
- - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "items" : [ {
-    "id" : "id"
-  }, {
-    "id" : "id"
-  } ]
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - update_package_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-

Enrollment APIKeys

-
-
- Up -
post /enrollment_api_keys
-
Create enrollment API key (createEnrollmentApiKeys)
-
- - - - -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "policy_id" : "policy_id",
-    "api_key" : "api_key",
-    "name" : "name",
-    "active" : true,
-    "created_at" : "created_at",
-    "id" : "id",
-    "api_key_id" : "api_key_id"
-  },
-  "action" : "created"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - create_enrollment_api_keys_deprecated_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
-
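Parsing the createEnrollmentApiKeys example response above: the `item.api_key` value is the credential an agent presents when enrolling, and `action` confirms the key was created. A sketch over the documented payload:

```python
import json

# Documented 200 body for createEnrollmentApiKeys.
payload = json.loads("""
{"item": {"policy_id": "policy_id", "api_key": "api_key", "name": "name",
          "active": true, "created_at": "created_at", "id": "id",
          "api_key_id": "api_key_id"},
 "action": "created"}
""")

assert payload["action"] == "created"
enrollment_key = payload["item"]["api_key"]
print(enrollment_key)
```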
- Up -
post /enrollment-api-keys
-
Create enrollment API key (createEnrollmentApiKeysDeprecated)
-
- - - - -

Request headers

-
-
kbn-xsrf (required)
- -
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- -
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "policy_id" : "policy_id",
-    "api_key" : "api_key",
-    "name" : "name",
-    "active" : true,
-    "created_at" : "created_at",
-    "id" : "id",
-    "api_key_id" : "api_key_id"
-  },
-  "action" : "created"
-}
- -

Produces

- This API call produces the following media types according to the Accept request header; - the media type will be conveyed by the Content-Type response header. -
    -
  • application/json
  • -
- -

Responses

-

200

- OK - create_enrollment_api_keys_deprecated_200_response -

400

- Generic Error - fleet_server_health_check_400_response -
-
-
Up
delete /enrollment_api_keys/{keyId}

Delete enrollment API key by ID (deleteEnrollmentApiKey)

Path parameters

keyId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

delete_agent_200_response

Example data

Content-Type: application/json

{
  "action" : "deleted"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_agent_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
delete /enrollment-api-keys/{keyId}

Delete enrollment API key by ID (deleteEnrollmentApiKeyDeprecated)

Path parameters

keyId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

delete_agent_200_response

Example data

Content-Type: application/json

{
  "action" : "deleted"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_agent_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /enrollment_api_keys/{keyId}

Get enrollment API key by ID (getEnrollmentApiKey)

Path parameters

keyId (required)
Path Parameter — default: null

Return type

get_enrollment_api_key_deprecated_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_enrollment_api_key_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /enrollment-api-keys/{keyId}

Get enrollment API key by ID (getEnrollmentApiKeyDeprecated)

Path parameters

keyId (required)
Path Parameter — default: null

Return type

get_enrollment_api_key_deprecated_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_enrollment_api_key_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /enrollment_api_keys

List enrollment API keys (getEnrollmentApiKeys)

Return type

get_enrollment_api_keys_deprecated_200_response

Example data

Content-Type: application/json

{
  "total" : 1.4658129805029452,
  "perPage" : 6.027456183070403,
  "page" : 0.8008281904610115,
  "list" : [ {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }, {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  } ],
  "items" : [ {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }, {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_enrollment_api_keys_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response
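The list response reports `page`, `perPage`, and `total` alongside the key arrays, so a client can page through results. A small sketch over that shape (the concrete numbers below are illustrative, not the generator's sample values):

```python
import json
import math

# Illustrative first page of GET /enrollment_api_keys; the field names come
# from the documented example data, the values are made up for the sketch.
page_1 = json.loads("""
{"total": 3, "perPage": 2, "page": 1,
 "items": [{"id": "key-a", "active": true}, {"id": "key-b", "active": true}]}
""")

# Number of pages needed to fetch every key.
total_pages = math.ceil(page_1["total"] / page_1["perPage"])
active_ids = [k["id"] for k in page_1["items"] if k["active"]]
```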
Up
get /enrollment-api-keys

List enrollment API keys (getEnrollmentApiKeysDeprecated)

Return type

get_enrollment_api_keys_deprecated_200_response

Example data

Content-Type: application/json

{
  "total" : 1.4658129805029452,
  "perPage" : 6.027456183070403,
  "page" : 0.8008281904610115,
  "list" : [ {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }, {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  } ],
  "items" : [ {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  }, {
    "policy_id" : "policy_id",
    "api_key" : "api_key",
    "name" : "name",
    "active" : true,
    "created_at" : "created_at",
    "id" : "id",
    "api_key_id" : "api_key_id"
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_enrollment_api_keys_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response

Fleet Internals

Up
post /health_check

Fleet Server health check (fleetServerHealthCheck)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

fleet_server_health_check_request (required)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

fleet_server_health_check_200_response

Example data

Content-Type: application/json

{
  "name" : "name",
  "host" : "host",
  "status" : "status"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - fleet_server_health_check_200_response

400
Generic Error - fleet_server_health_check_400_response
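A sketch of a health-check round trip. The `fleet_server_health_check_request` schema is not expanded in this document, so the `host` field in the body is an assumption:

```python
import json

# Hypothetical request body for POST /health_check (field name assumed).
payload = json.dumps({"host": "https://fleet-server.example.com:8220"})

# Documented 200 response shape:
resp = json.loads('{"name": "name", "host": "host", "status": "status"}')

# The possible status values are not enumerated in this document.
healthy = resp["status"] == "HEALTHY"
```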
Up
get /settings

Get settings (getSettings)

Return type

fleet_settings_response

Example data

Content-Type: application/json

{
  "item" : {
    "has_seen_add_data_notice" : true,
    "fleet_server_hosts" : [ "fleet_server_hosts", "fleet_server_hosts" ],
    "prerelease_integrations_enabled" : true,
    "id" : "id"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - fleet_settings_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /setup

Initiate Fleet setup (setup)

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

fleet_setup_response

Example data

Content-Type: application/json

{
  "isInitialized" : true,
  "nonFatalErrors" : [ {
    "name" : "name",
    "message" : "message"
  }, {
    "name" : "name",
    "message" : "message"
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - fleet_setup_response

400
Generic Error - fleet_server_health_check_400_response

500
Internal Server Error - setup_500_response
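POST /setup can succeed overall while still reporting problems, so clients should inspect `nonFatalErrors` as well as `isInitialized`. A minimal sketch over the documented response shape:

```python
import json

# Documented 200 response of POST /setup.
resp = json.loads("""
{"isInitialized": true,
 "nonFatalErrors": [{"name": "name", "message": "message"},
                    {"name": "name", "message": "message"}]}
""")

ok = resp["isInitialized"]
# Surface any warnings even when setup reports success.
warnings = [e["message"] for e in resp["nonFatalErrors"]]
```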
Up
put /settings

Update settings (updateSettings)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

update_settings_request (optional)
Body Parameter

Return type

fleet_settings_response

Example data

Content-Type: application/json

{
  "item" : {
    "has_seen_add_data_notice" : true,
    "fleet_server_hosts" : [ "fleet_server_hosts", "fleet_server_hosts" ],
    "prerelease_integrations_enabled" : true,
    "id" : "id"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - fleet_settings_response

400
Generic Error - fleet_server_health_check_400_response
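A hedged sketch of a PUT /settings body. The field names mirror the documented response item, but which of them are actually writable through `update_settings_request` is an assumption, since that schema is not expanded here:

```python
import json

# Hypothetical PUT /settings payload (writable fields assumed).
body = {
    "fleet_server_hosts": ["https://fleet.example.com:8220"],
    "prerelease_integrations_enabled": False,
}
payload = json.dumps(body)
```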

Fleet Server Hosts

Up
delete /fleet_server_hosts/{itemId}

Delete Fleet Server host by ID (deleteFleetServerHosts)

Path parameters

itemId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

delete_package_policy_200_response

Example data

Content-Type: application/json

{
  "id" : "id"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /fleet_server_hosts

List Fleet Server hosts (getFleetServerHosts)

Return type

get_fleet_server_hosts_200_response

Example data

Content-Type: application/json

{
  "total" : 0,
  "perPage" : 1,
  "page" : 6,
  "items" : [ {
    "host_urls" : [ "host_urls", "host_urls" ],
    "is_preconfigured" : true,
    "name" : "name",
    "id" : "id",
    "is_default" : true
  }, {
    "host_urls" : [ "host_urls", "host_urls" ],
    "is_preconfigured" : true,
    "name" : "name",
    "id" : "id",
    "is_default" : true
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_fleet_server_hosts_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /fleet_server_hosts/{itemId}

Get Fleet Server host by ID (getOneFleetServerHosts)

Path parameters

itemId (required)
Path Parameter — default: null

Return type

get_one_fleet_server_hosts_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "host_urls" : [ "host_urls", "host_urls" ],
    "is_preconfigured" : true,
    "name" : "name",
    "id" : "id",
    "is_default" : true
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_one_fleet_server_hosts_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /fleet_server_hosts

Create Fleet Server host (postFleetServerHosts)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

post_fleet_server_hosts_request (optional)
Body Parameter

Return type

post_fleet_server_hosts_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "host_urls" : [ "host_urls", "host_urls" ],
    "is_preconfigured" : true,
    "name" : "name",
    "id" : "id",
    "is_default" : true
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - post_fleet_server_hosts_200_response

400
Generic Error - fleet_server_health_check_400_response
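A sketch of a POST /fleet_server_hosts body. The fields mirror the documented response item (`name`, `host_urls`, `is_default`), though `post_fleet_server_hosts_request` itself is not expanded here, so writability of each field is an assumption:

```python
import json

# Hypothetical request body for POST /fleet_server_hosts.
body = json.dumps({
    "name": "default-fleet-server",
    "host_urls": ["https://fleet.example.com:8220"],
    "is_default": True,
})
```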
Up
put /fleet_server_hosts/{itemId}

Update Fleet Server host by ID (updateFleetServerHosts)

Path parameters

itemId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

update_fleet_server_hosts_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

get_one_fleet_server_hosts_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "host_urls" : [ "host_urls", "host_urls" ],
    "is_preconfigured" : true,
    "name" : "name",
    "id" : "id",
    "is_default" : true
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_one_fleet_server_hosts_200_response

400
Generic Error - fleet_server_health_check_400_response

Kubernetes

Up
get /kubernetes

Get full K8s agent manifest (getFullK8sManifest)

Query parameters

download (optional)
Query Parameter — default: null
fleetServer (optional)
Query Parameter — default: null
enrolToken (optional)
Query Parameter — default: null

Return type

agent_policy_download_200_response

Example data

Content-Type: application/json

{
  "item" : "item"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - agent_policy_download_200_response

400
Generic Error - fleet_server_health_check_400_response
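Building the manifest URL with its documented query parameters; note the parameter really is spelled `enrolToken` (single "l") in this API. The parameter values below are illustrative:

```python
from urllib.parse import urlencode, parse_qs

# GET /kubernetes query string with the three documented parameters.
params = {
    "download": "true",
    "fleetServer": "https://fleet.example.com:8220",
    "enrolToken": "my-enrollment-token",  # note the API's 'enrolToken' spelling
}
url = "/kubernetes?" + urlencode(params)
```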

Outputs

Up
delete /outputs/{outputId}

Delete output by ID (deleteOutput)

Path parameters

outputId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

delete_package_policy_200_response

Example data

Content-Type: application/json

{
  "id" : "id"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /logstash_api_keys

Generate Logstash API key (generateLogstashApiKey)

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

generate_logstash_api_key_200_response

Example data

Content-Type: application/json

{
  "api_key" : "api_key"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - generate_logstash_api_key_200_response

400
Generic Error - fleet_server_health_check_400_response
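The generated key comes back as a bare `api_key` string. A sketch of extracting it for later use (feeding it into a logstash output's configuration is a typical, but assumed, follow-up):

```python
import json

# Documented 200 response of POST /logstash_api_keys.
resp = json.loads('{"api_key": "api_key"}')
api_key = resp["api_key"]
```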
Up
get /outputs/{outputId}

Get output by ID (getOutput)

Path parameters

outputId (required)
Path Parameter — default: null

Return type

output_create_request

Example data

Content-Type: application/json

null

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - output_create_request

400
Generic Error - fleet_server_health_check_400_response
Up
get /outputs

List outputs (getOutputs)

Return type

get_outputs_200_response

Example data

Content-Type: application/json

{
  "total" : 0,
  "perPage" : 1,
  "page" : 6,
  "items" : [ null, null ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_outputs_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /outputs

Create output (postOutputs)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

output_create_request (required)
Body Parameter

Return type

post_outputs_200_response

Example data

Content-Type: application/json

{ }

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - post_outputs_200_response

400
Generic Error - fleet_server_health_check_400_response
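`output_create_request` is not expanded in this document, so the body below is a hypothetical Elasticsearch output; the field names (`type`, `hosts`, `is_default`) and the `"elasticsearch"` type value are assumptions:

```python
import json

# Hypothetical POST /outputs payload for an Elasticsearch output
# (all field names assumed, not taken from this document).
body = json.dumps({
    "name": "my-es-output",
    "type": "elasticsearch",
    "hosts": ["https://es.example.com:9200"],
    "is_default": True,
})
```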
Up
put /outputs/{outputId}

Update output by ID (updateOutput)

Path parameters

outputId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

output_update_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

output_update_request

Example data

Content-Type: application/json

null

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - output_update_request

400
Generic Error - fleet_server_health_check_400_response

Package Policies

Up
post /package_policies/_bulk_get

Bulk get package policies (bulkGetPackagePolicies)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

bulk_get_package_policies_request (optional)
Body Parameter

Query parameters

format (optional)
Query Parameter — Simplified or legacy format for package inputs default: null

Return type

bulk_get_package_policies_200_response

Example data

Content-Type: application/json

{
  "items" : [ null, null ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - bulk_get_package_policies_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /package_policies

Create package policy (createPackagePolicy)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

package_policy_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Query parameters

format (optional)
Query Parameter — Simplified or legacy format for package inputs default: null

Return type

create_package_policy_200_response

Example data

Content-Type: application/json

{ }

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - create_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response

409
Generic Error - fleet_server_health_check_400_response
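A sketch of a create call using the `format` query parameter. The `simplified` value and the body fields are assumptions, since `package_policy_request` and the allowed `format` values are not expanded in this document:

```python
import json
from urllib.parse import urlencode

# Hypothetical POST /package_policies call in the simplified input format.
url = "/package_policies?" + urlencode({"format": "simplified"})
headers = {"kbn-xsrf": "true", "Content-Type": "application/json"}

# Hypothetical body: attach an nginx integration to an agent policy.
body = json.dumps({
    "name": "nginx-1",
    "policy_id": "my-agent-policy",
    "package": {"name": "nginx", "version": "1.0.0"},
})
```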
Up
delete /package_policies/{packagePolicyId}

Delete package policy by ID (deletePackagePolicy)

Path parameters

packagePolicyId (required)
Path Parameter — default: null

Query parameters

force (optional)
Query Parameter — default: null

Return type

delete_package_policy_200_response

Example data

Content-Type: application/json

{
  "id" : "id"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /package_policies

List package policies (getPackagePolicies)

Query parameters

perPage (optional)
Query Parameter — The number of items to return default: 20
page (optional)
Query Parameter — default: 1
kuery (optional)
Query Parameter — default: null
format (optional)
Query Parameter — Simplified or legacy format for package inputs default: null

Return type

get_package_policies_200_response

Example data

Content-Type: application/json

{
  "total" : 0.8008281904610115,
  "perPage" : 1.4658129805029452,
  "page" : 6.027456183070403,
  "items" : [ null, null ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_package_policies_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /package_policies/{packagePolicyId}

Get package policy by ID (getPackagePolicy)

Path parameters

packagePolicyId (required)
Path Parameter — default: null

Query parameters

format (optional)
Query Parameter — Simplified or legacy format for package inputs default: null

Return type

create_package_policy_200_response

Example data

Content-Type: application/json

{ }

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - create_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /package_policies/delete

Delete package policy (postDeletePackagePolicy)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

post_delete_package_policy_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Example data

Content-Type: application/json

[ {
  "success" : true,
  "name" : "name",
  "id" : "id"
}, {
  "success" : true,
  "name" : "name",
  "id" : "id"
} ]

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK

400
Generic Error - fleet_server_health_check_400_response
Up
put /package_policies/{packagePolicyId}

Update package policy by ID (updatePackagePolicy)

Path parameters

packagePolicyId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

package_policy_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Query parameters

format (optional)
Query Parameter — Simplified or legacy format for package inputs default: null

Return type

update_package_policy_200_response

Example data

Content-Type: application/json

{
  "sucess" : true
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - update_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /package_policies/upgrade

Upgrade package policy to a newer package version (upgradePackagePolicy)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

upgrade_package_policy_request (optional)
Body Parameter

Example data

Content-Type: application/json

[ {
  "success" : true,
  "name" : "name",
  "id" : "id"
}, {
  "success" : true,
  "name" : "name",
  "id" : "id"
} ]

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK

400
Generic Error - fleet_server_health_check_400_response

409
Generic Error - fleet_server_health_check_400_response
Up
post /package_policies/upgrade/dryrun

Dry run package policy upgrade (upgradePackagePolicyDryRun)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

upgrade_package_policy_dry_run_request (optional)
Body Parameter

Example data

Content-Type: application/json

[ {
  "hasErrors" : true,
  "agent_diff" : [ [ null, null ], [ null, null ] ],
  "diff" : [ null, null ]
}, {
  "hasErrors" : true,
  "agent_diff" : [ [ null, null ], [ null, null ] ],
  "diff" : [ null, null ]
} ]

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK

400
Generic Error - fleet_server_health_check_400_response
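The dry run returns one result per policy, so a client can upgrade only the policies that came back clean. In the sketch below, the `packagePolicyIds` request field is an assumption, while `hasErrors` comes from the documented example data:

```python
import json

# Hypothetical dry-run request for two package policies (field name assumed).
dry_run_body = json.dumps({"packagePolicyIds": ["policy-1", "policy-2"]})

# Illustrative dry-run response (shape from the documented example data).
dry_run_resp = json.loads("""
[{"hasErrors": false, "agent_diff": [[]], "diff": []},
 {"hasErrors": true,  "agent_diff": [[]], "diff": []}]
""")

# Only upgrade the policies whose dry run reported no errors.
ids = json.loads(dry_run_body)["packagePolicyIds"]
safe_to_upgrade = [pid for pid, r in zip(ids, dry_run_resp) if not r["hasErrors"]]
```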

Proxies

Up
delete /proxies/{itemId}

Delete proxy by ID (deleteFleetProxies)

Path parameters

itemId (required)
Path Parameter — default: null

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null

Return type

delete_package_policy_200_response

Example data

Content-Type: application/json

{
  "id" : "id"
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - delete_package_policy_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /proxies

List proxies (getFleetProxies)

Return type

get_fleet_proxies_200_response

Example data

Content-Type: application/json

{
  "total" : 0,
  "perPage" : 1,
  "page" : 6,
  "items" : [ {
    "proxy_headers" : "{}",
    "certificate_authorities" : "certificate_authorities",
    "certificate_key" : "certificate_key",
    "name" : "name",
    "certificate" : "certificate",
    "id" : "id",
    "url" : "url"
  }, {
    "proxy_headers" : "{}",
    "certificate_authorities" : "certificate_authorities",
    "certificate_key" : "certificate_key",
    "name" : "name",
    "certificate" : "certificate",
    "id" : "id",
    "url" : "url"
  } ]
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_fleet_proxies_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
get /proxies/{itemId}

Get proxy by ID (getOneFleetProxies)

Path parameters

itemId (required)
Path Parameter — default: null

Return type

get_one_fleet_proxies_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "proxy_headers" : "{}",
    "certificate_authorities" : "certificate_authorities",
    "certificate_key" : "certificate_key",
    "name" : "name",
    "certificate" : "certificate",
    "id" : "id",
    "url" : "url"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_one_fleet_proxies_200_response

400
Generic Error - fleet_server_health_check_400_response
Up
post /proxies

Create proxy (postFleetProxies)

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

post_fleet_proxies_request (optional)
Body Parameter

Return type

post_fleet_proxies_200_response

Example data

Content-Type: application/json

{
  "item" : {
    "proxy_headers" : "{}",
    "certificate_authorities" : "certificate_authorities",
    "certificate_key" : "certificate_key",
    "name" : "name",
    "certificate" : "certificate",
    "id" : "id",
    "url" : "url"
  }
}

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - post_fleet_proxies_200_response

400
Generic Error - fleet_server_health_check_400_response
-
- Up -
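A minimal sketch of creating a proxy from Python. The `/api/fleet` base path, host, credentials, and the `kbn-xsrf` header (which Kibana write requests generally expect) are assumptions here; the body fields follow `post_fleet_proxies_request` and the example data above:

```python
import json
import urllib.request

def create_proxy_request(kibana_url: str, body: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST /proxies request."""
    return urllib.request.Request(
        url=f"{kibana_url}/api/fleet/proxies",  # /api/fleet base path is an assumption
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "kbn-xsrf": "true",  # assumed: Kibana write requests expect this header; any string works
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )

# Illustrative values only
req = create_proxy_request(
    "https://localhost:5601",
    {"name": "corp-proxy", "url": "https://proxy.example.com:3128"},
    "<api-key>",
)
```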
put /proxies/{itemId}

Update proxy by ID (updateFleetProxies)

Path parameters

itemId (required)
Path Parameter — default: null

Consumes

This API call consumes the following media types via the Content-Type request header:

  • application/json

Request body

update_fleet_proxies_request (optional)
Body Parameter

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "item" : {
-    "proxy_headers" : "{}",
-    "certificate_authorities" : "certificate_authorities",
-    "certificate_key" : "certificate_key",
-    "name" : "name",
-    "certificate" : "certificate",
-    "id" : "id",
-    "url" : "url"
-  }
-}
- -

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - get_one_fleet_proxies_200_response

400
Generic Error - fleet_server_health_check_400_response

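A sketch of updating a proxy, using the required `kbn-xsrf` header documented above. The `/api/fleet` base path, host, and credentials are assumptions:

```python
import json
import urllib.request

def update_proxy_request(kibana_url: str, item_id: str, body: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a PUT /proxies/{itemId} request."""
    return urllib.request.Request(
        url=f"{kibana_url}/api/fleet/proxies/{item_id}",  # /api/fleet base path is an assumption
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "kbn-xsrf": "true",  # required per the request headers above; any string value
            "Authorization": f"ApiKey {api_key}",
        },
        method="PUT",
    )

# Illustrative values only; body fields follow update_fleet_proxies_request
req = update_proxy_request(
    "https://localhost:5601", "my-proxy-id", {"name": "corp-proxy-renamed"}, "<api-key>"
)
```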

Service Tokens

post /service_tokens

Create service token (generateServiceToken)

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "name" : "name",
-  "value" : "value"
-}
- -

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - generate_service_token_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response

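A sketch of requesting a service token; no request body is required, only the `kbn-xsrf` header documented above. The `/api/fleet` base path, host, and credentials are assumptions:

```python
import urllib.request

def create_service_token_request(kibana_url: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST /service_tokens request (empty body)."""
    return urllib.request.Request(
        url=f"{kibana_url}/api/fleet/service_tokens",  # /api/fleet base path is an assumption
        data=b"",
        headers={
            "kbn-xsrf": "true",  # required per the request headers above; any string value
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )

# On success the response body carries {"name": ..., "value": ...}
# as shown in the example data above.
req = create_service_token_request("https://localhost:5601", "<api-key>")
```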
post /service-tokens

Create service token (generateServiceTokenDeprecated)

As the operation ID indicates, this hyphenated path is the deprecated variant of post /service_tokens.

Request headers

kbn-xsrf (required)
Header Parameter — Kibana's anti Cross-Site Request Forgery token. Can be any string value. default: null
- - - -

Return type

- - - - -

Example data

-
Content-Type: application/json
-
{
-  "name" : "name",
-  "value" : "value"
-}
- -

Produces

This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.

  • application/json

Responses

200
OK - generate_service_token_deprecated_200_response

400
Generic Error - fleet_server_health_check_400_response


Models


Table of Contents

  • Full_agent_policy_output_permissions
  • Full_agent_policy_output_permissions_data
  • Full_agent_policy_output_permissions_data_indices_inner
  • agent - Agent
  • agent_action - Agent action
  • agent_action_cancel_200_response
  • agent_action_oneOf
  • agent_action_oneOf_1
  • agent_action_oneOf_1_data
  • agent_component - Agent component
  • agent_component_status - Agent component status
  • agent_component_unit - Agent component unit
  • agent_component_unit_type - Agent component unit type
  • agent_diagnostics - Agent diagnostics
  • agent_metrics
  • agent_policy - Agent Policy
  • agent_policy_agent_features_inner
  • agent_policy_copy_request
  • agent_policy_create_request - Create agent policy request
  • agent_policy_download_200_response
  • agent_policy_full - Agent policy full response
  • agent_policy_full_200_response
  • agent_policy_full_200_response_item
  • agent_policy_full_oneOf
  • agent_policy_info_200_response
  • agent_policy_list_200_response
  • agent_policy_update_request - Update agent policy request
  • agent_status - Agent status
  • agent_type - Agent type
  • agents_action_status_200_response
  • agents_action_status_200_response_items_inner
  • agents_action_status_200_response_items_inner_latestErrors_inner
  • bulk_get_agent_policies_200_response
  • bulk_get_agent_policies_request
  • bulk_get_assets_request
  • bulk_get_assets_request_assetIds_inner
  • bulk_get_package_policies_200_response
  • bulk_get_package_policies_request
  • bulk_install_packages_request
  • bulk_install_packages_request_packages_inner
  • bulk_install_packages_request_packages_inner_oneOf
  • bulk_install_packages_response - Bulk install packages response
  • bulk_install_packages_response_response_inner
  • bulk_reassign_agents_request
  • bulk_reassign_agents_request_agents
  • bulk_request_diagnostics_request
  • bulk_unenroll_agents_request
  • bulk_update_agent_tags_request
  • bulk_upgrade_agents - Bulk upgrade agents
  • bulk_upgrade_agents_200_response
  • create_agent_policy_200_response
  • create_enrollment_api_keys_deprecated_200_response
  • create_package_policy_200_response
  • data_stream - Data stream
  • data_stream_dashboard_inner
  • data_streams_list_200_response
  • delete_agent_200_response
  • delete_agent_policy_200_response
  • delete_agent_policy_request
  • delete_package_policy_200_response
  • download_sources - Download Source
  • elasticsearch_asset_type - Elasticsearch asset type
  • enrollment_api_key - Enrollment API key
  • fleet_server_health_check_200_response
  • fleet_server_health_check_400_response
  • fleet_server_health_check_request
  • fleet_server_host - Fleet Server Host
  • fleet_settings_response - Fleet settings response
  • fleet_setup_response - Fleet Setup response
  • fleet_setup_response_nonFatalErrors_inner
  • fleet_status_response - Fleet status response
  • full_agent_policy - Full agent policy
  • full_agent_policy_fleet
  • full_agent_policy_fleet_oneOf
  • full_agent_policy_fleet_oneOf_1
  • full_agent_policy_fleet_oneOf_1_kibana
  • full_agent_policy_fleet_oneOf_ssl
  • full_agent_policy_input - Full agent policy input
  • full_agent_policy_input_allOf
  • full_agent_policy_input_allOf_data_stream
  • full_agent_policy_input_allOf_meta
  • full_agent_policy_input_allOf_meta_package
  • full_agent_policy_input_stream - Full agent policy input stream
  • full_agent_policy_input_stream_allOf
  • full_agent_policy_input_stream_allOf_data_stream
  • full_agent_policy_output - Full agent policy
  • full_agent_policy_output_additionalProperties
  • full_agent_policy_output_permissions_1_value
  • full_agent_policy_secret_references_inner
  • generate_logstash_api_key_200_response
  • generate_service_token_deprecated_200_response
  • get_agent_200_response
  • get_agent_data_200_response
  • get_agent_data_200_response_items_inner_value
  • get_agent_status_200_response
  • get_agent_status_deprecated_200_response
  • get_agent_tags_response - Get Agent Tags response
  • get_agent_upload_file_200_response
  • get_agent_upload_file_200_response_body
  • get_agent_upload_file_200_response_body_items
  • get_agents_by_actions_request
  • get_agents_response - Get Agent response
  • get_agents_response_statusSummary
  • get_bulk_assets_response - Bulk get assets response
  • get_bulk_assets_response_response_inner_inner
  • get_bulk_assets_response_response_inner_inner_attributes
  • get_categories_response - Get categories response
  • get_categories_response_items_inner
  • get_categories_response_response_inner
  • get_download_sources_200_response
  • get_enrollment_api_key_deprecated_200_response
  • get_enrollment_api_keys_deprecated_200_response
  • get_fleet_proxies_200_response
  • get_fleet_server_hosts_200_response
  • get_one_download_source_200_response
  • get_one_fleet_proxies_200_response
  • get_one_fleet_server_hosts_200_response
  • get_outputs_200_response
  • get_package_200_response
  • get_package_200_response_allOf
  • get_package_200_response_allOf_1
  • get_package_deprecated_200_response
  • get_package_deprecated_200_response_allOf
  • get_package_deprecated_200_response_allOf_1
  • get_package_policies_200_response
  • get_package_stats_200_response
  • get_packages_response - Get Packages response
  • install_package_200_response
  • install_package_200_response__meta
  • install_package_by_upload_200_response
  • install_package_by_upload_200_response__meta
  • install_package_by_upload_200_response_items_inner
  • install_package_by_upload_200_response_items_inner_type
  • install_package_deprecated_200_response
  • install_package_deprecated_request
  • install_package_request
  • installation_info - Installation info object
  • installation_info_installed_es
  • installation_info_installed_kibana
  • kibana_saved_object_type - Kibana saved object asset type
  • list_agent_uploads_200_response
  • list_agent_uploads_200_response_body
  • list_limited_packages_200_response
  • new_agent_action_200_response
  • new_agent_action_request
  • new_package_policy - New package policy
  • new_package_policy_inputs_inner
  • new_package_policy_package
  • output_create_request - Output
  • output_create_request_elasticsearch - elasticsearch
  • output_create_request_elasticsearch_shipper
  • output_create_request_elasticsearch_ssl
  • output_create_request_kafka - kafka
  • output_create_request_kafka_headers_inner
  • output_create_request_kafka_random
  • output_create_request_kafka_sasl
  • output_create_request_kafka_topics_inner
  • output_create_request_kafka_topics_inner_when
  • output_create_request_logstash - logstash
  • output_update_request - Output
  • output_update_request_elasticsearch - elasticsearch
  • output_update_request_kafka - kafka
  • output_update_request_logstash - logstash
  • package_info - Package information
  • package_info_conditions
  • package_info_conditions_elasticsearch
  • package_info_conditions_kibana
  • package_info_data_streams_inner
  • package_info_data_streams_inner_vars_inner
  • package_info_elasticsearch
  • package_info_elasticsearch_privileges
  • package_info_screenshots_inner
  • package_info_source
  • package_policy - Package policy
  • package_policy_allOf
  • package_policy_allOf_inputs
  • package_policy_request - Package Policy Request
  • package_policy_request_inputs_value
  • package_policy_request_inputs_value_streams_value
  • package_policy_request_package
  • package_usage_stats - Package usage stats
  • packages_get_file_200_response
  • packages_get_verification_key_id_200_response
  • packages_get_verification_key_id_200_response_body
  • post_delete_package_policy_200_response_inner
  • post_delete_package_policy_request
  • post_download_sources_200_response
  • post_download_sources_request
  • post_fleet_proxies_200_response
  • post_fleet_proxies_request
  • post_fleet_server_hosts_200_response
  • post_fleet_server_hosts_request
  • post_outputs_200_response
  • proxies - Fleet Proxy
  • reassign_agent_deprecated_request
  • saved_object_type - Saved Object type
  • search_result - Search result
  • settings - Settings
  • setup_500_response
  • setup_agents_request
  • unenroll_agent_400_response
  • unenroll_agent_request
  • update_agent_request
  • update_download_source_request
  • update_fleet_proxies_request
  • update_fleet_server_hosts_request
  • update_package_200_response
  • update_package_policy_200_response
  • update_package_request
  • update_settings_request
  • upgrade_agent - Upgrade agent
  • upgrade_diff_inner
  • upgrade_diff_inner_allOf
  • upgrade_diff_inner_allOf_allOf
  • upgrade_diff_inner_allOf_allOf_errors_inner
  • upgrade_package_policy_dry_run_200_response_inner
  • upgrade_package_policy_dry_run_request
  • upgrade_package_policy_request

Full_agent_policy_output_permissions

packagePolicyName (optional)
data (optional)

agent - Agent

type
active
enrolled_at
unenrolled_at (optional)
unenrollment_started_at (optional)
access_api_key_id (optional)
default_api_key_id (optional)
policy_id (optional)
policy_revision (optional)
last_checkin (optional)
user_provided_metadata (optional)
local_metadata (optional)
id
access_api_key (optional)
status
default_api_key (optional)
components (optional)
metrics (optional)

agent_action - Agent action

data (optional)
ack_data (optional)
type (optional)

agent_action_oneOf

data (optional)
ack_data (optional)
type (optional)
Enum: UNENROLL, UPGRADE, POLICY_REASSIGN

agent_action_oneOf_1

type (optional)
data (optional)

agent_action_oneOf_1_data

log_level (optional)
Enum: debug, info, warning, error

agent_component - Agent component

id (optional)
type (optional)
status (optional)
message (optional)
units (optional)

agent_component_unit - Agent component unit

id (optional)
type (optional)
status (optional)
message (optional)
payload (optional)

agent_diagnostics - Agent diagnostics

id
name
createTime
filePath
actionId
status
Enum: READY, AWAITING_UPLOAD, DELETED, IN_PROGRESS

agent_metrics

cpu_avg (optional)
Big Decimal Average agent CPU usage during the last 5 minutes, number between 0-1
memory_size_byte_avg (optional)
Big Decimal Average agent memory consumption during the last 5 minutes

agent_policy - Agent Policy

id
name
namespace
description (optional)
monitoring_enabled (optional)
Enum:
data_output_id (optional)
monitoring_output_id (optional)
fleet_server_host_id (optional)
download_source_id (optional)
unenroll_timeout (optional)
inactivity_timeout (optional)
package_policies (optional)
array[package_policy] This field is present only when retrieving a single agent policy, or when retrieving a list of agent policies with the ?full=true parameter
updated_on (optional)
Date format: date-time
updated_by (optional)
revision (optional)
agents (optional)
agent_features (optional)
is_protected (optional)
Boolean Indicates whether the agent policy has tamper protection enabled. Default false.
overrides (optional)
Object Override settings that are defined in the agent policy. Input settings cannot be overridden. The override option should be used only in unusual circumstances and not as a routine procedure.

agent_policy_copy_request

name
description (optional)

agent_policy_create_request - Create agent policy request

id (optional)
name
namespace
description (optional)
monitoring_enabled (optional)
Enum:
data_output_id (optional)
monitoring_output_id (optional)
fleet_server_host_id (optional)
download_source_id (optional)
unenroll_timeout (optional)
inactivity_timeout (optional)
agent_features (optional)
is_protected (optional)
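A minimal `agent_policy_create_request` body built from the fields above; only `name` and `namespace` are required, and all values here are illustrative:

```python
import json

# Minimal create-agent-policy body: name and namespace are the required
# fields of agent_policy_create_request; everything else is optional.
policy = {
    "name": "My agent policy",        # required
    "namespace": "default",           # required
    "description": "Example policy",  # optional
}

body = json.dumps(policy)
```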

agent_policy_download_200_response

item (optional)

agent_policy_full_oneOf

item (optional)

agent_policy_update_request - Update agent policy request

name
namespace
description (optional)
monitoring_enabled (optional)
Enum:
data_output_id (optional)
monitoring_output_id (optional)
fleet_server_host_id (optional)
download_source_id (optional)
unenroll_timeout (optional)
inactivity_timeout (optional)
agent_features (optional)
is_protected (optional)

agents_action_status_200_response_items_inner

actionId
status
Enum: COMPLETE, EXPIRED, CANCELLED, FAILED, IN_PROGRESS, ROLLOUT_PASSED
nbAgentsActioned
Big Decimal number of agents actioned
nbAgentsActionCreated
Big Decimal number of agents included in action from kibana
nbAgentsAck
Big Decimal number of agents that acknowledged the action
nbAgentsFailed
Big Decimal number of agents that failed to execute the action
version (optional)
String agent version number (UPGRADE action)
startTime (optional)
String start time of action (scheduled actions)
type
Enum: POLICY_REASSIGN, UPGRADE, UNENROLL, FORCE_UNENROLL, UPDATE_TAGS, CANCEL, REQUEST_DIAGNOSTICS, SETTINGS, POLICY_CHANGE, INPUT_ACTION
expiration (optional)
completionTime (optional)
cancellationTime (optional)
newPolicyId (optional)
String new policy id (POLICY_REASSIGN action)
policyId (optional)
String policy id (POLICY_CHANGE action)
revision (optional)
String new policy revision (POLICY_CHANGE action)
creationTime
String creation time of action
latestErrors (optional)
array[agents_action_status_200_response_items_inner_latestErrors_inner] latest errors that happened when the agents executed the action

agents_action_status_200_response_items_inner_latestErrors_inner

agentId (optional)
error (optional)
timestamp (optional)

bulk_get_agent_policies_request

ids
array[String] list of agent policy ids
full (optional)
Boolean get full policies with package policies populated
ignoreMissing (optional)

bulk_get_assets_request

assetIds
array[bulk_get_assets_request_assetIds_inner] list of items necessary to fetch assets

bulk_get_assets_request_assetIds_inner

type (optional)
id (optional)

bulk_get_package_policies_request

ids
array[String] list of package policy ids
ignoreMissing (optional)

bulk_install_packages_request

packages
force (optional)
Boolean force install to ignore package verification errors

bulk_install_packages_request_packages_inner

name (optional)
String package name
version (optional)
String package version

bulk_install_packages_request_packages_inner_oneOf

name (optional)
String package name
version (optional)
String package version

bulk_install_packages_response_response_inner

name (optional)
version (optional)

bulk_reassign_agents_request

policy_id
String new agent policy id
batchSize (optional)
agents

bulk_unenroll_agents_request

revoke (optional)
Boolean Revokes API keys of agents
force (optional)
Boolean Unenroll hosted agents too
agents

bulk_update_agent_tags_request

agents
tagsToAdd (optional)
tagsToRemove (optional)
batchSize (optional)

bulk_upgrade_agents - Bulk upgrade agents

version
String version to upgrade to
source_uri (optional)
String alternative upgrade binary download url
rollout_duration_seconds (optional)
Big Decimal rolling upgrade window duration in seconds
start_time (optional)
String start time of upgrade in ISO 8601 format
agents
force (optional)
Boolean Force upgrade, skipping validation (should be used with caution)

bulk_upgrade_agents_200_response

actionId (optional)
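For example, a `bulk_upgrade_agents` request body assembled from the fields above; `version` and `agents` are the required fields, and all values are illustrative:

```python
import json

# Required fields per bulk_upgrade_agents: version and agents;
# rollout_duration_seconds is one of the optional tuning fields above.
request_body = {
    "version": "8.10.4",                     # version to upgrade to (illustrative)
    "agents": ["agent-id-1", "agent-id-2"],  # agent IDs to act on (illustrative)
    "rollout_duration_seconds": 3600,        # optional rolling upgrade window
}

payload = json.dumps(request_body)
```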

create_enrollment_api_keys_deprecated_200_response

item (optional)
action (optional)
Enum: created

data_stream - Data stream

index (optional)
dataset (optional)
namespace (optional)
type (optional)
package (optional)
package_version (optional)
last_activity_ms (optional)
size_in_bytes (optional)
size_in_bytes_formatted (optional)
dashboard (optional)

data_stream_dashboard_inner

id (optional)
title (optional)

data_streams_list_200_response

data_streams (optional)

delete_agent_200_response

action
Enum: deleted

delete_agent_policy_request

agentPolicyId

download_sources - Download Source

id (optional)
is_default
name
host

enrollment_api_key - Enrollment API key

id
api_key_id
api_key
name (optional)
active
policy_id (optional)
created_at

fleet_server_health_check_200_response

name (optional)
status (optional)
host (optional)

fleet_server_health_check_400_response

statusCode (optional)
error (optional)
message (optional)

fleet_server_health_check_request

host (optional)

fleet_server_host - Fleet Server Host

id
name (optional)
is_default
is_preconfigured
host_urls

fleet_status_response - Fleet status response

isReady
missing_requirements
Enum:
missing_optional_features
Enum:
package_verification_key_id (optional)

full_agent_policy - Full agent policy

id
outputs
output_permissions (optional)
fleet (optional)
inputs
revision (optional)
agent (optional)
secret_references (optional)

full_agent_policy_fleet

hosts (optional)
proxy_url (optional)
proxy_headers (optional)
ssl (optional)
kibana (optional)

full_agent_policy_fleet_oneOf

hosts (optional)
proxy_url (optional)
proxy_headers (optional)
ssl (optional)

full_agent_policy_fleet_oneOf_1_kibana

hosts (optional)
protocol (optional)
path (optional)

full_agent_policy_fleet_oneOf_ssl

verification_mode (optional)
certificate (optional)
key (optional)
certificate_authorities (optional)
renegotiation (optional)

full_agent_policy_output - Full agent policy

hosts
ca_sha256
proxy_url (optional)
proxy_headers (optional)
type
additionalProperties (optional)

generate_logstash_api_key_200_response

api_key (optional)

generate_service_token_deprecated_200_response

name (optional)
value (optional)

get_agent_status_200_response

error
events
inactive
unenrolled (optional)
offline
online
other
total
updating
all
active

get_agent_status_deprecated_200_response

error
events
inactive
offline
online
other
total
updating

get_agents_by_actions_request

actionIds (optional)

get_agents_response - Get Agent response

list (optional)
items
total
page
perPage
statusSummary (optional)

get_agents_response_statusSummary

offline (optional)
error (optional)
online (optional)
inactive (optional)
enrolling (optional)
unenrolling (optional)
unenrolled (optional)
updating (optional)
degradedQuote (optional)

get_bulk_assets_response_response_inner_inner

id (optional)
type (optional)
updatedAt (optional)
attributes (optional)

get_bulk_assets_response_response_inner_inner_attributes

title (optional)
description (optional)

get_download_sources_200_response

items (optional)
total (optional)
page (optional)
perPage (optional)

get_fleet_proxies_200_response

items (optional)
total (optional)
page (optional)
perPage (optional)

get_fleet_server_hosts_200_response

items (optional)
total (optional)
page (optional)
perPage (optional)

get_outputs_200_response

items (optional)
total (optional)
page (optional)
perPage (optional)

get_package_200_response

item (optional)
status
Enum: installed, installing, install_failed, not_installed
savedObject
latestVersion (optional)
keepPoliciesUpToDate (optional)
notice (optional)
licensePath (optional)

get_package_200_response_allOf_1

status
Enum: installed, installing, install_failed, not_installed
savedObject
latestVersion (optional)
keepPoliciesUpToDate (optional)
notice (optional)
licensePath (optional)

get_package_deprecated_200_response

response (optional)
status
Enum: installed, installing, install_failed, not_installed
savedObject

get_package_deprecated_200_response_allOf_1

status
Enum: installed, installing, install_failed, not_installed
savedObject

get_package_policies_200_response

items
total (optional)
page (optional)
perPage (optional)

install_package_200_response__meta

install_source (optional)
Enum: registry, upload, bundled

install_package_by_upload_200_response__meta

install_source (optional)
Enum: upload, registry, bundled

install_package_deprecated_request

force (optional)

install_package_request

force (optional)
ignore_constraints (optional)
-

installation_info - Installation info object Up

-
-
-
type (optional)
-
created_at (optional)
-
updated_at (optional)
-
namespaces (optional)
-
installed_kibana
-
installed_es
-
name
-
version
-
install_status
-
Enum:
-
installed
installing
install_failed
-
install_source
-
Enum:
-
registry
upload
bundled
-
install_kibana_space_id (optional)
-
install_format_schema_version (optional)
-
verification_status
-
Enum:
-
verified
unverified
unknown
-
verification_key_id (optional)
-
experimental_data_stream_features (optional)
-
-
-
-

installation_info_installed_es - Up

-
-
-
id (optional)
-
deferred (optional)
-
type (optional)
-
-
- - - - - -
-

new_agent_action_200_response

body (optional)
statusCode (optional)
headers (optional)

new_agent_action_request

action (optional)

new_package_policy - New package policy

enabled (optional)
package (optional)
namespace (optional)
output_id (optional)
inputs
policy_id (optional)
name
description (optional)

new_package_policy_inputs_inner

type
enabled
processors (optional)
streams (optional)
config (optional)
vars (optional)

new_package_policy_package

name
version
title (optional)

output_create_request - Output

id (optional)
is_default (optional)
is_default_monitoring (optional)
name
type
Enum: logstash
hosts
ca_sha256 (optional)
ca_trusted_fingerprint (optional)
config (optional)
config_yaml (optional)
ssl (optional)
proxy_id (optional)
shipper (optional)
version (optional)
key (optional)
compression (optional)
compression_level (optional)
client_id (optional)
auth_type
username (optional)
password (optional)
sasl (optional)
partition (optional)
random (optional)
round_robin (optional)
topics
headers (optional)
timeout (optional)
broker_timeout (optional)
broker_buffer_size (optional)
broker_ack_reliability (optional)

output_create_request_elasticsearch - elasticsearch

id (optional)
is_default (optional)
is_default_monitoring (optional)
name
type (optional)
Enum: elasticsearch
hosts (optional)
ca_sha256 (optional)
ca_trusted_fingerprint (optional)
config (optional)
config_yaml (optional)
ssl (optional)
proxy_id (optional)
shipper (optional)

output_create_request_elasticsearch_shipper

disk_queue_enabled (optional)
disk_queue_path (optional)
disk_queue_max_size (optional)
disk_queue_encryption_enabled (optional)
disk_queue_compression_enabled (optional)
compression_level (optional)
loadbalance (optional)

output_create_request_elasticsearch_ssl

certificate_authorities (optional)
certificate (optional)
key (optional)

output_create_request_kafka - kafka

id (optional)
is_default (optional)
is_default_monitoring (optional)
name
type
Enum: kafka
hosts
ca_sha256 (optional)
ca_trusted_fingerprint (optional)
config (optional)
config_yaml (optional)
ssl (optional)
proxy_id (optional)
shipper (optional)
version (optional)
key (optional)
compression (optional)
compression_level (optional)
client_id (optional)
auth_type
username (optional)
password (optional)
sasl (optional)
partition (optional)
random (optional)
round_robin (optional)
topics
headers (optional)
timeout (optional)
broker_timeout (optional)
broker_buffer_size (optional)
broker_ack_reliability (optional)

output_create_request_kafka_headers_inner

key (optional)
value (optional)

output_create_request_kafka_random

group_events (optional)

output_create_request_kafka_sasl

mechanism (optional)

output_create_request_kafka_topics_inner_when

type (optional)
condition (optional)

output_create_request_logstash - logstash

id (optional)
is_default (optional)
is_default_monitoring (optional)
name
type
Enum: logstash
hosts
ca_sha256 (optional)
ca_trusted_fingerprint (optional)
config (optional)
config_yaml (optional)
ssl (optional)
proxy_id (optional)
shipper (optional)

output_update_request - Output

id (optional)
is_default (optional)
is_default_monitoring (optional)
name
type
Enum: logstash
hosts
ca_sha256 (optional)
ca_trusted_fingerprint (optional)
config (optional)
config_yaml (optional)
ssl (optional)
proxy_id (optional)
shipper (optional)
version (optional)
key (optional)
compression (optional)
compression_level (optional)
client_id (optional)
auth_type (optional)
username (optional)
password (optional)
sasl (optional)
partition (optional)
random (optional)
round_robin (optional)
topics (optional)
headers (optional)
timeout (optional)
broker_timeout (optional)
broker_ack_reliability (optional)
broker_buffer_size (optional)

output_update_request_elasticsearch - elasticsearch Up

-
-
-
id (optional)
-
is_default (optional)
-
is_default_monitoring (optional)
-
name
-
type
-
Enum:
-
elasticsearch
-
hosts
-
ca_sha256 (optional)
-
ca_trusted_fingerprint (optional)
-
config (optional)
-
config_yaml (optional)
-
ssl (optional)
-
proxy_id (optional)
-
shipper (optional)
-
-
-
-

output_update_request_kafka - kafka Up

-
-
-
id (optional)
-
is_default (optional)
-
is_default_monitoring (optional)
-
name
-
type (optional)
-
Enum:
-
kafka
-
hosts (optional)
-
ca_sha256 (optional)
-
ca_trusted_fingerprint (optional)
-
config (optional)
-
config_yaml (optional)
-
ssl (optional)
-
proxy_id (optional)
-
shipper (optional)
-
version (optional)
-
key (optional)
-
compression (optional)
-
compression_level (optional)
-
client_id (optional)
-
auth_type (optional)
-
username (optional)
-
password (optional)
-
sasl (optional)
-
partition (optional)
-
random (optional)
-
round_robin (optional)
-
topics (optional)
-
headers (optional)
-
timeout (optional)
-
broker_timeout (optional)
-
broker_ack_reliability (optional)
-
broker_buffer_size (optional)
-
-
-
-

output_update_request_logstash - logstash Up

-
-
-
id (optional)
-
is_default (optional)
-
is_default_monitoring (optional)
-
name
-
type (optional)
-
Enum:
-
logstash
-
hosts (optional)
-
ca_sha256 (optional)
-
ca_trusted_fingerprint (optional)
-
config (optional)
-
config_yaml (optional)
-
ssl (optional)
-
proxy_id (optional)
-
shipper (optional)
-
-
-
-

package_info - Package information Up

-
-
-
name
-
title
-
version
-
release (optional)
String release label is deprecated, derive from the version instead (packages follow semver)
-
Enum:
-
experimental
beta
ga
-
source (optional)
-
readme (optional)
-
description
-
type
-
categories
-
conditions
-
screenshots (optional)
-
icons (optional)
-
assets
-
internal (optional)
-
format_version
-
data_streams (optional)
-
download
-
path
-
elasticsearch (optional)
-
-
- -
-

package_info_conditions_elasticsearch - Up

-
-
-
subscription (optional)
-
Enum:
-
basic
gold
platinum
enterprise
-
-
-
-

package_info_conditions_kibana - Up

-
-
-
versions (optional)
-
-
-
-

package_info_data_streams_inner - Up

-
-
-
title
-
name
-
release
-
ingeset_pipeline
-
vars (optional)
-
type
-
package
-
-
- - - -
-

package_info_screenshots_inner - Up

-
-
-
src
-
path
-
title (optional)
-
size (optional)
-
type (optional)
-
-
-
-

package_info_source - Up

-
-
-
license (optional)
-
Enum:
-
Apache-2.0
Elastic-2.0
-
-
-
-

package_policy - Package policy Up

-
-
-
id
-
revision
-
inputs
-
enabled (optional)
-
package (optional)
-
namespace (optional)
-
output_id (optional)
-
policy_id (optional)
-
name
-
description (optional)
-
-
-
-

package_policy_allOf - Up

-
-
-
id
-
revision
-
inputs (optional)
-
-
- -
-

package_policy_request - Package Policy Request Up

-
-
-
id (optional)
String Package policy unique identifier
-
name
String Package policy name (should be unique)
-
description (optional)
String Package policy description
-
namespace (optional)
String namespace by default "default"
-
policy_id
String Agent policy ID where that package policy will be added
-
package
-
vars (optional)
Object Package root level variable (see integration documentation for more information)
-
inputs (optional)
map[String, package_policy_request_inputs_value] Package policy inputs (see integration documentation to know what inputs are available)
-
force (optional)
Boolean Force package policy creation even if package is not verified, or if the agent policy is managed.
-
-
-
-

package_policy_request_inputs_value - Up

-
-
-
enabled (optional)
Boolean enable or disable that input, (default to true)
-
vars (optional)
Object Input level variable (see integration documentation for more information)
-
streams (optional)
map[String, package_policy_request_inputs_value_streams_value] Input streams (see integration documentation to know what streams are available)
-
-
-
-

package_policy_request_inputs_value_streams_value - Up

-
-
-
enabled (optional)
Boolean enable or disable that stream, (default to true)
-
vars (optional)
Object Stream level variable (see integration documentation for more information)
-
-
-
-

package_policy_request_package - Up

-
-
-
name
String Package name
-
version
String Package version
-
-
-
-

package_usage_stats - Package usage stats Up

-
-
-
agent_policy_count
-
-
-
-

packages_get_file_200_response - Up

-
-
-
body (optional)
-
statusCode (optional)
-
headers (optional)
-
-
- -
-

packages_get_verification_key_id_200_response_body - Up

-
-
-
id (optional)
String the key ID of the GPG key used to verify package signatures
-
-
- -
-

post_delete_package_policy_request - Up

-
-
-
packagePolicyIds
-
force (optional)
-
-
- -
-

post_download_sources_request - Up

-
-
-
id (optional)
-
name
-
is_default
-
host
-
-
-
-

post_fleet_proxies_200_response - Up

-
-
-
item (optional)
-
-
-
-

post_fleet_proxies_request - Up

-
-
-
id (optional)
-
name
-
url
-
proxy_headers (optional)
-
certificate_authorities (optional)
-
certificate (optional)
-
certificate_key (optional)
-
-
- -
-

post_fleet_server_hosts_request - Up

-
-
-
id (optional)
-
name
-
is_default (optional)
-
host_urls
-
-
- -
-

proxies - Fleet Proxy Up

-
-
-
id (optional)
-
name
-
url
-
proxy_headers (optional)
-
certificate_authorities (optional)
-
certificate (optional)
-
certificate_key (optional)
-
-
- - -
-

search_result - Search result Up

-
-
-
description
-
download
-
icons
-
name
-
path
-
title
-
type
-
version
-
status
-
installationInfo (optional)
-
savedObject (optional)
-
-
-
-

settings - Settings Up

-
-
-
id
-
has_seen_add_data_notice (optional)
-
fleet_server_hosts
-
prerelease_integrations_enabled (optional)
-
-
-
-

setup_500_response - Up

-
-
-
message (optional)
-
-
-
-

setup_agents_request - Up

-
-
-
admin_username
-
admin_password
-
-
-
-

unenroll_agent_400_response - Up

-
-
-
error (optional)
-
message (optional)
-
statusCode (optional)
-
Enum:
-
400
-
-
-
-

unenroll_agent_request - Up

-
-
-
revoke (optional)
-
force (optional)
-
-
-
-

update_agent_request - Up

-
-
-
user_provided_metadata (optional)
-
tags (optional)
-
-
-
-

update_download_source_request - Up

-
-
-
name
-
is_default
-
host
-
-
-
-

update_fleet_proxies_request - Up

-
-
-
name (optional)
-
url (optional)
-
proxy_headers (optional)
-
certificate_authorities (optional)
-
certificate (optional)
-
certificate_key (optional)
-
-
-
-

update_fleet_server_hosts_request - Up

-
-
-
name (optional)
-
is_default (optional)
-
host_urls (optional)
-
-
- - -
-

update_package_request - Up

-
-
-
keepPoliciesUpToDate (optional)
-
-
-
-

update_settings_request - Up

-
-
-
fleet_server_hosts (optional)
array[String] Protocol and path must be the same for each URL
-
has_seen_add_data_notice (optional)
-
additional_yaml_config (optional)
-
-
-
-

upgrade_agent - Upgrade agent Up

-
-
-
version
-
source_uri (optional)
-
force (optional)
Boolean Force upgrade, skipping validation (should be used with caution)
-
-
-
-

upgrade_diff_inner - Up

-
-
-
id
-
revision
-
inputs
-
enabled (optional)
-
package (optional)
-
namespace (optional)
-
output_id (optional)
-
policy_id (optional)
-
name
-
description (optional)
-
errors (optional)
-
missingVars (optional)
-
-
-
-

upgrade_diff_inner_allOf - Up

-
-
-
enabled (optional)
-
package (optional)
-
namespace (optional)
-
output_id (optional)
-
inputs
-
policy_id (optional)
-
name
-
description (optional)
-
errors (optional)
-
missingVars (optional)
-
-
- -
-

upgrade_diff_inner_allOf_allOf_errors_inner - Up

-
-
-
key (optional)
-
message (optional)
-
-
- -
-

upgrade_package_policy_dry_run_request - Up

-
-
-
packagePolicyIds
-
packageVersion (optional)
-
-
-
-

upgrade_package_policy_request - Up

-
-
-
packagePolicyIds
-
-
-
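Read as a client contract, the `package_policy_request` model above implies a minimal request body. The following is a hedged Python sketch of assembling one; field names follow the model, while the helper function and all values are illustrative and not part of the API:

```python
# Minimal sketch of a body for POST /api/fleet/package_policies,
# following the package_policy_request model above. All values are
# illustrative; see the integration docs for real inputs and vars.
def package_policy_body(name, policy_id, pkg_name, pkg_version,
                        namespace="default", inputs=None):
    body = {
        "name": name,            # required, should be unique
        "policy_id": policy_id,  # required: target agent policy ID
        "package": {"name": pkg_name, "version": pkg_version},  # required
        "namespace": namespace,  # optional, defaults to "default"
    }
    if inputs is not None:
        body["inputs"] = inputs  # optional map of input -> {enabled, vars, streams}
    return body

body = package_policy_body("nginx-demo", "2b820230-4b54-11ed", "nginx", "1.5.0")
```

Only `name`, `policy_id`, and `package` are required; everything else can be omitted and the server applies defaults.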
++++

diff --git a/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis.asciidoc b/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis.asciidoc
deleted file mode 100644
index d4b41cb34..000000000
--- a/docs/en/ingest-management/fleet/api-generated/rules/fleet-apis.asciidoc
+++ /dev/null
@@ -1,10 +0,0 @@
[[fleet-apis]]
== Fleet APIs

preview::[]

////
This file includes content that has been generated from https://github.com/elastic/kibana/tree/main/x-pack/plugins/fleet/common/openapi. Any modifications required must be done in that open API specification.
////

include::fleet-apis-passthru.asciidoc[]
\ No newline at end of file
diff --git a/docs/en/ingest-management/fleet/api-generated/template/index.mustache b/docs/en/ingest-management/fleet/api-generated/template/index.mustache
deleted file mode 100644
index 230134157..000000000
--- a/docs/en/ingest-management/fleet/api-generated/template/index.mustache
+++ /dev/null
@@ -1,173 +0,0 @@
////
This content is generated from the open API specification.
Any modifications made to this file will be overwritten.
////

++++
Methods
[ Jump to Models ]
{{! for the tables of content, I cheat and don't use CSS styles.... }}

Table of Contents

{{access}}
{{#apiInfo}}{{#apis}}{{#operations}}
{{baseName}}
{{/operations}}{{/apis}}{{/apiInfo}}

{{#apiInfo}}{{#apis}}{{#operations}}
{{baseName}}

{{#operation}}
{{httpMethod}} {{path}}
{{summary}} ({{nickname}})
{{! notes is operation.description. So why rename it and make it super confusing???? }}
{{notes}}

{{#hasPathParams}}
Path parameters
{{#pathParams}}{{>pathParam}}{{/pathParams}}
{{/hasPathParams}}

{{#hasConsumes}}
Consumes
This API call consumes the following media types via the Content-Type request header:
{{#consumes}}
• {{{mediaType}}}
{{/consumes}}
{{/hasConsumes}}

{{#hasBodyParam}}
Request body
{{#bodyParams}}{{>bodyParam}}{{/bodyParams}}
{{/hasBodyParam}}

{{#hasHeaderParams}}
Request headers
{{#headerParams}}{{>headerParam}}{{/headerParams}}
{{/hasHeaderParams}}

{{#hasQueryParams}}
Query parameters
{{#queryParams}}{{>queryParam}}{{/queryParams}}
{{/hasQueryParams}}

{{#hasFormParams}}
Form parameters
{{#formParams}}{{>formParam}}{{/formParams}}
{{/hasFormParams}}

{{#returnType}}
Return type
{{#hasReference}}{{^returnSimpleType}}{{returnContainer}}[{{/returnSimpleType}}{{returnBaseType}}{{^returnSimpleType}}]{{/returnSimpleType}}{{/hasReference}}{{^hasReference}}{{returnType}}{{/hasReference}}
{{/returnType}}

{{#hasExamples}}
{{#examples}}
Example data
Content-Type: {{{contentType}}}
{{{example}}}
{{/examples}}
{{/hasExamples}}

{{#hasProduces}}
Produces
This API call produces the following media types according to the Accept request header;
the media type will be conveyed by the Content-Type response header.
{{#produces}}
• {{{mediaType}}}
{{/produces}}
{{/hasProduces}}

Responses
{{#responses}}
{{code}}
{{message}} {{^containerType}}{{dataType}}{{/containerType}}
{{#examples}}
Example data
Content-Type: {{{contentType}}}
{{example}}
{{/examples}}
{{/responses}}
{{/operation}}
{{/operations}}{{/apis}}{{/apiInfo}}

Models
[ Jump to Methods ]

Table of Contents
{{#models}}{{#model}}
{{name}}{{#title}} - {{.}}{{/title}}
{{/model}}{{/models}}

{{#models}}{{#model}}
{{name}}{{#title}} - {{.}}{{/title}}
{{#unescapedDescription}}{{.}}{{/unescapedDescription}}
{{#vars}}
{{name}} {{^required}}(optional){{/required}}
{{^isPrimitiveType}}{{dataType}}{{/isPrimitiveType}} {{unescapedDescription}} {{#dataFormat}}format: {{{.}}}{{/dataFormat}}
{{#isEnum}}
Enum:
{{#_enum}}{{this}}{{/_enum}}
{{/isEnum}}
{{/vars}}
{{/model}}{{/models}}
++++

diff --git a/docs/en/ingest-management/fleet/apis.asciidoc b/docs/en/ingest-management/fleet/apis.asciidoc
deleted file mode 100644
index 123973c52..000000000
--- a/docs/en/ingest-management/fleet/apis.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
[role="exclude",id="apis"]
= API

[partintro]
--

preview::[]

The {fleet} API is documented using the OpenAPI specification. The current supported
version of the specification is 3.0. For more information, go to https://openapi-generator.tech/[OpenAPI Generator]

--

include::api-generated/rules/fleet-apis.asciidoc[]
\ No newline at end of file
diff --git a/docs/en/ingest-management/fleet/diagrams/fleet-server-diagram.asciidoc b/docs/en/ingest-management/fleet/diagrams/fleet-server-diagram.asciidoc
deleted file mode 100644
index 88bef2157..000000000
--- a/docs/en/ingest-management/fleet/diagrams/fleet-server-diagram.asciidoc
+++ /dev/null
@@ -1,93 +0,0 @@
++++
[SVG diagram: Elastic Agents enroll with Fleet Server (enroll policy / return policy); Fleet Server gets and returns policies from Elasticsearch; agents send data to Elasticsearch; the Fleet UI in Kibana is also shown.]
++++
\ No newline at end of file
diff --git a/docs/en/ingest-management/fleet/filter-agent-list-by-tags.asciidoc b/docs/en/ingest-management/fleet/filter-agent-list-by-tags.asciidoc
deleted file mode 100644
index c75807661..000000000
--- a/docs/en/ingest-management/fleet/filter-agent-list-by-tags.asciidoc
+++ /dev/null
@@ -1,106 +0,0 @@
[[filter-agent-list-by-tags]]
= Add tags to filter the Agents list

You can add tags to {agent} during or after enrollment, then use the tags to
filter the Agents list shown in {fleet}.

Tags are useful for capturing information that is specific to the installation
environment, such as machine type, location, operating system, environment, and
so on. Tags can be any arbitrary information that will help you filter and
perform operations on {agent}s with the same attributes.

To filter the Agents list by tag, in {kib}, go to **{fleet} > Agents** and click
**Tags**. Select the tags to filter on. The tags are also available in the KQL
field for autocompletion.

[role="screenshot"]
image::images/agent-tags.png[Agents list filtered to show agents with the staging tag]

If you haven't added tags to any {agent}s yet, the list will be empty.

[discrete]
[[add-tags-in-fleet]]
== Add, remove, rename, or delete tags in {fleet}

You can use {fleet} to add, remove, or rename tags applied to one or more
{agent}s.

Want to add tags when enrolling from a host instead? See
<>.

To manage tags in {fleet}:

. On the **Agents** tab, select one or more agents.

. From the **Actions** menu, click **Add / remove tags**.
+
[role="screenshot"]
image::images/add-remove-tags.png[Screenshot of add / remove tags menu]
+
TIP: Make sure you use the correct **Actions** menu. To manage tags for a single
agent, click the ellipsis button under the **Actions** column. To manage tags
for multiple agents, click the **Actions** button to open the bulk actions menu.

.
In the tags menu, perform an action:
+
[options="header"]
|===
|To... | Do this...

|Create a new tag
|Type the tag name and click **Create new tag...**. Notice the tag name has
a check mark to show that the tag has been added to the selected agents.

|Rename a tag
|Hover over the tag name and click the ellipsis button. Type a new name and press Enter.
The tag will be renamed in all agents that use it, even agents that are not
selected.

|Delete a tag
|Hover over the tag name and click the ellipsis button. Click **Delete tag**.
The tag will be deleted from all agents, even agents that are not selected.

|Add or remove a tag from an agent
|Click the tag name to add or clear the check mark. In the **Tags** column,
notice that the tags are added or removed. Note that the menu only shows
tags that are common to all selected agents.

|===

[discrete]
[[add-tags-at-enrollment]]
== Add tags during agent enrollment

When you install or enroll {agent} in {fleet}, you can specify a comma-separated
list of tags to apply to the agent, then use the tags to filter the Agents list
shown in {fleet}.

The following command applies the `macOS` and `staging` tags during
installation:

[source,shell]
----
sudo ./elastic-agent install \
  --url= \
  --enrollment-token= \
  --tag macOS,staging
----

For the full command synopsis, refer to <> and
<>.

The following command applies the `docker` and `dev` tags to {agent} running in
a Docker container:

["source","yaml",subs="attributes"]
----
docker run \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL= \
  --env FLEET_ENROLLMENT_TOKEN= \
  --env ELASTIC_AGENT_TAGS=docker,dev \
  --rm docker.elastic.co/elastic-agent/elastic-agent:{version}
----

For more information about running on containers, refer to the guides under
<>.
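The tag filter shown in the UI can also be applied programmatically. Below is a hedged sketch of building the agents-list request URL with a KQL tag filter, assuming the `kuery` parameter of `GET /api/fleet/agents`; the host and tag values are placeholders:

```python
from urllib.parse import urlencode

def agents_by_tag_url(kibana_host: str, tag: str) -> str:
    """Build a Fleet agents-list URL filtered by tag via a KQL kuery."""
    # KQL expression tags:"<tag>" is URL-encoded into the query string.
    params = urlencode({"kuery": f'tags:"{tag}"'})
    return f"{kibana_host}/api/fleet/agents?{params}"

url = agents_by_tag_url("https://my-kibana-host:9243", "staging")
```

The resulting URL can then be requested with the usual `Authorization: ApiKey ...` and `kbn-xsrf` headers.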
diff --git a/docs/en/ingest-management/fleet/fleet-api-docs.asciidoc b/docs/en/ingest-management/fleet/fleet-api-docs.asciidoc
deleted file mode 100644
index 9571baf49..000000000
--- a/docs/en/ingest-management/fleet/fleet-api-docs.asciidoc
+++ /dev/null
@@ -1,422 +0,0 @@
[[fleet-api-docs]]
= {kib} {fleet} APIs

You can find details for all available {fleet} API endpoints in our generated
{api-kibana}[Kibana API docs].

In this section, we provide examples of some commonly used {fleet} APIs.

[discrete]
[[using-the-console]]
== Using the Console

You can run {fleet} API requests through the {kib} Console.

. Open the {kib} menu and go to **Management -> Dev Tools**.
. In your request, prepend your {fleet} API endpoint with `kbn:`, for example:
+
[source,sh]
----
GET kbn:/api/fleet/agent_policies
----

For more detail about using the {kib} Console, refer to {kibana-ref}/console-kibana.html[Run API requests].

[discrete]
[[authentication]]
== Authentication

Authentication is required to send {fleet} API requests. For more information,
refer to {kibana-ref}/api.html#api-authentication[Authentication].

[discrete]
[[create-agent-policy-api]]
== Create agent policy

To create a new agent policy in {fleet}, call
`POST /api/fleet/agent_policies`.

This cURL example creates an agent policy called `Agent policy 1` in
the default namespace.
- -[source,shell] ----- -curl --request POST \ - --url 'https://my-kibana-host:9243/api/fleet/agent_policies?sys_monitoring=true' \ - --header 'Accept: */*' \ - --header 'Authorization: ApiKey yourbase64encodedkey' \ - --header 'Cache-Control: no-cache' \ - --header 'Connection: keep-alive' \ - --header 'Content-Type: application/json' \ - --header 'kbn-xsrf: xxx' \ - --data '{ - "name": "Agent policy 1", - "description": "", - "namespace": "default", - "monitoring_enabled": [ - "logs", - "metrics" - ] -}' ----- - -**** -To save time, you can use {kib} to generate the API request, then run it -from the Dev Tools console. - -. Go to **{fleet} -> Agent policies**. -. Click **Create agent policy** and give the policy a name. -. Click **Preview API request**. -. Click **Open in Console** and run the request. - -**** - -Example response: - -[source,shell] ----- -{ - "item": { - "id": "2b820230-4b54-11ed-b107-4bfe66d759e4", <1> - "name": "Agent policy 1", - "description": "", - "namespace": "default", - "monitoring_enabled": [ - "logs", - "metrics" - ], - "status": "active", - "is_managed": false, - "revision": 1, - "updated_at": "2022-10-14T00:07:19.763Z", - "updated_by": "1282607447", - "schema_version": "1.0.0" - } -} ----- -<1> Make a note of the policy ID. You'll need the policy ID to add integration -policies. - -[discrete] -[[create-integration-policy-api]] -== Create integration policy - -To create an integration policy (also known as a package policy) and add it to an -existing agent policy, call `POST /api/fleet/package_policies`. - -TIP: You can use the {fleet} API to {security-guide}/create-defend-policy-api.html[Create and customize an {elastic-defend} policy]. 
- -This cURL example creates an integration policy for Nginx and adds it to the -agent policy created in the previous example: - -[source,shell] ----- -curl --request POST \ - --url 'https://my-kibana-host:9243/api/fleet/package_policies' \ - --header 'Authorization: ApiKey yourbase64encodedkey' \ - --header 'Content-Type: application/json' \ - --header 'kbn-xsrf: xx' \ - --data '{ - "name": "nginx-demo-123", - "policy_id": "2b820230-4b54-11ed-b107-4bfe66d759e4", - "package": { - "name": "nginx", - "version": "1.5.0" - }, - "inputs": { - "nginx-logfile": { - "streams": { - "nginx.access": { - "vars": { - "tags": [ - "test" - ] - } - }, - "nginx.error": { - "vars": { - "tags": [ - "test" - ] - } - } - } - } - } -}' ----- - -**** -* To save time, you can use {kib} to generate the API call, then run it -from the Dev Tools console. -+ -. Go to **Integrations**, select an {agent} integration, and click -**Add **. -. Configure the integration settings and select which agent policy to use. -. Click **Preview API request**. -+ -If you're creating the integration policy for a new agent policy, the preview -shows two requests: one to create the agent policy, and another to create the -integration policy. - -. Click **Open in Console** and run the request (or requests). - -* To find out which inputs, streams, and variables are available for an -integration, go to **Integrations**, select an {agent} integration, and click -**API reference**. 
-**** - -Example response (truncated for readability): - -[source,shell] ----- -{ - "item" : { - "created_at" : "2022-10-15T00:41:28.594Z", - "created_by" : "1282607447", - "enabled" : true, - "id" : "92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd", - "inputs" : [ - { - "enabled" : true, - "policy_template" : "nginx", - "streams" : [ - { - "compiled_stream" : { - "exclude_files" : [ - ".gz$" - ], - "ignore_older" : "72h", - "paths" : [ - "/var/log/nginx/access.log*" - ], - "processors" : [ - { - "add_locale" : null - } - ], - "tags" : [ - "test" - ] - }, - "data_stream" : { - "dataset" : "nginx.access", - "type" : "logs" - }, - "enabled" : true, - "id" : "logfile-nginx.access-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd", - "release" : "ga", - "vars" : { - "ignore_older" : { - "type" : "text", - "value" : "72h" - }, - "paths" : { - "type" : "text", - "value" : [ - "/var/log/nginx/access.log*" - ] - }, - "preserve_original_event" : { - "type" : "bool", - "value" : false - }, - "processors" : { - "type" : "yaml" - }, - "tags" : { - "type" : "text", - "value" : [ - "test" - ] - } - } - }, - { - "compiled_stream" : { - "exclude_files" : [ - ".gz$" - ], - "ignore_older" : "72h", - "multiline" : { - "match" : "after", - "negate" : true, - "pattern" : "^\\d{4}\\/\\d{2}\\/\\d{2} " - }, - "paths" : [ - "/var/log/nginx/error.log*" - ], - "processors" : [ - { - "add_locale" : null - } - ], - "tags" : [ - "test" - ] - }, - "data_stream" : { - "dataset" : "nginx.error", - "type" : "logs" - }, - "enabled" : true, - "id" : "logfile-nginx.error-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd", - "release" : "ga", - "vars" : { - "ignore_older" : { - "type" : "text", - "value" : "72h" - }, - "paths" : { - "type" : "text", - "value" : [ - "/var/log/nginx/error.log*" - ] - }, - "preserve_original_event" : { - "type" : "bool", - "value" : false - }, - "processors" : { - "type" : "yaml" - }, - "tags" : { - "type" : "text", - "value" : [ - "test" - ] - } - } - } - ], - "type" : "logfile" - }, - ... 
- { - "enabled" : true, - "policy_template" : "nginx", - "streams" : [ - { - "compiled_stream" : { - "hosts" : [ - "http://127.0.0.1:80" - ], - "metricsets" : [ - "stubstatus" - ], - "period" : "10s", - "server_status_path" : "/nginx_status" - }, - "data_stream" : { - "dataset" : "nginx.stubstatus", - "type" : "metrics" - }, - "enabled" : true, - "id" : "nginx/metrics-nginx.stubstatus-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd", - "release" : "ga", - "vars" : { - "period" : { - "type" : "text", - "value" : "10s" - }, - "server_status_path" : { - "type" : "text", - "value" : "/nginx_status" - } - } - } - ], - "type" : "nginx/metrics", - "vars" : { - "hosts" : { - "type" : "text", - "value" : [ - "http://127.0.0.1:80" - ] - } - } - } - ], - "name" : "nginx-demo-123", - "namespace" : "default", - "package" : { - "name" : "nginx", - "title" : "Nginx", - "version" : "1.5.0" - }, - "policy_id" : "d625b2e0-4c21-11ed-9426-31f0877749b7", - "revision" : 1, - "updated_at" : "2022-10-15T00:41:28.594Z", - "updated_by" : "1282607447", - "version" : "WzI5OTAsMV0=" - } -} ----- - - -[discrete] -[[get-enrollment-token-api]] -== Get enrollment tokens - -To get a list of valid enrollment tokens from {fleet}, call -`GET /api/fleet/enrollment_api_keys`. - -This cURL example returns a list of enrollment tokens. 
- -[source,shell] ----- -curl --request GET \ - --url 'https://my-kibana-host:9243/api/fleet/enrollment_api_keys' \ - --header 'Authorization: ApiKey N2VLRDA0TUJIQ05MaGYydUZrN1Y6d2diMUdwSkRTWGFlSm1rSVZlc2JGQQ==' \ - --header 'Content-Type: application/json' \ - --header 'kbn-xsrf: xx' ----- - -Example response (formatted for readability): - -[source,shell] ----- -{ - "items" : [ - { - "active" : true, - "api_key" : "QlN2UaA0TUJlMGFGbF8IVkhJaHM6eGJjdGtyejJUUFM0a0dGSwlVSzdpdw==", - "api_key_id" : "BSvR04MBe0aFl_HVHIhs", - "created_at" : "2022-10-14T00:07:21.420Z", - "id" : "39703af4-5945-4232-90ae-3161214512fa", - "name" : "Default (39703af4-5945-4232-90ae-3161214512fa)", - "policy_id" : "2b820230-4b54-11ed-b107-4bfe66d759e4" - }, - { - "active" : true, - "api_key" : "Yi1MSTA2TUJIQ05MaGYydV9kZXQ5U2dNWFkyX19sWEdSemFQOUfzSDRLZw==", - "api_key_id" : "b-LI04MBHCNLhf2u_det", - "created_at" : "2022-10-13T23:58:29.266Z", - "id" : "e4768bf2-55a6-433f-a540-51d4ca2d34be", - "name" : "Default (e4768bf2-55a6-433f-a540-51d4ca2d34be)", - "policy_id" : "ee37a8e0-4b52-11ed-b107-4bfe66d759e4" - }, - { - "active" : true, - "api_key" : "b3VLbjA0TUJIQ04MaGYydUk1Z3Q6VzhMTTBITFRTmnktRU9IWDaXWnpMUQ==", - "api_key_id" : "luKn04MBHCNLhf2uI5d4", - "created_at" : "2022-10-13T23:21:30.707Z", - "id" : "d18d2918-bb10-44f2-9f98-df5543e21724", - "name" : "Default (d18d2918-bb10-44f2-9f98-df5543e21724)", - "policy_id" : "c3e31e80-4b4d-11ed-b107-4bfe66d759e4" - }, - { - "active" : true, - "api_key" : "V3VLRTa0TUJIQ05MaGYydVMx4S06WjU5dsZ3YzVRSmFUc5xjSThImi1ydw==", - "api_key_id" : "WuKE04MBHCNLhf2uS1E-", - "created_at" : "2022-10-13T22:43:27.139Z", - "id" : "aad31121-df89-4f57-af84-7c43f72640ee", - "name" : "Default (aad31121-df89-4f57-af84-7c43f72640ee)", - "policy_id" : "72fcc4d0-4b48-11ed-b107-4bfe66d759e4" - }, - ], - "page" : 1, - "perPage" : 20, - "total" : 4 -} ----- diff --git a/docs/en/ingest-management/fleet/fleet-deployment-models.asciidoc 
b/docs/en/ingest-management/fleet/fleet-deployment-models.asciidoc
deleted file mode 100644
index 3ef712bd7..000000000
--- a/docs/en/ingest-management/fleet/fleet-deployment-models.asciidoc
+++ /dev/null
@@ -1,126 +0,0 @@
[[fleet-deployment-models]]
= Deployment models

There are various models for setting up {agents} to work with {es}.
The recommended approach is to use {fleet}, a web-based UI in Kibana, to centrally manage all of your {agents} and their policies.
Using {fleet} requires having an instance of {fleet-server} that acts as the interface between the {fleet} UI and your {agents}.

For an overview of {fleet-server}, including details about how it communicates with {es}, how to ensure high availability, and more, refer to <>.

The requirements for setting up {fleet-server} differ, depending on your particular deployment model:

{serverless-full}::
In a link:{serverless-docs}[{serverless-short}] environment, {fleet-server} is offered as a service; it is configured and scaled automatically without the need for any user intervention.

{ess}::
If you're running {es} and {kib} hosted on {cloud}/ec-getting-started.html[{ess}], no extra setup is required unless you want to scale your deployment. {ess} runs a hosted version of {integrations-server} that includes {fleet-server}. For details about this deployment model, refer to <>.

{ess} with {fleet-server} on-premise::
When you use a hosted {ess} deployment, you may still choose to run {fleet-server} on-premise. For details about this deployment model and setup instructions, refer to <>.

Docker and Kubernetes::
You can deploy {fleet}-managed {agent} in Docker or on Kubernetes. Refer to <> or <> for all of the configuration instructions. For a Kubernetes install, we also have a <> available to simplify the installation. Details for configuring {fleet-server} are included with the {agent} install steps.
{eck}::
You can deploy {fleet}-managed {agent} in an {ecloud} Kubernetes environment that provides configuration and management capabilities for the full {stack}. For details, refer to {eck-ref}/k8s-elastic-agent-fleet.html[Run {fleet}-managed {agent} on ECK].

Self-managed::
For self-managed deployments, you must install and host {fleet-server} yourself. For details about this deployment model and setup instructions, refer to <>.

[[fleet-server]]
== What is {fleet-server}?

{fleet-server} is a component that connects {agent}s to {fleet}. It supports
many {agent} connections and serves as a control plane for updating agent
policies, collecting status information, and coordinating actions across
{agent}s. It also provides a scalable architecture. As the size of your agent
deployment grows, you can deploy additional {fleet-server}s to manage the
increased workload.

* On-premises {fleet-server} is not currently available for use in an link:{serverless-docs}[{serverless-full}] environment.
We recommend using the hosted {fleet-server} that is included and configured automatically in {serverless-short} {observability} and Security projects.

The following diagram shows how {agent}s communicate with {fleet-server} to
retrieve agent policies:

include::./diagrams/fleet-server-diagram.asciidoc[A high level diagram showing the relationship between components of the {stack}.]

. When a new agent policy is created, the {fleet} UI saves the policy to
a {fleet} index in {es}.

. To enroll in the policy, {agent}s send a request to {fleet-server},
using the enrollment key generated for authentication.

. {fleet-server} monitors {fleet} indices, picks up the new agent policy from
{es}, then ships the policy to all {agent}s enrolled in that policy.
{fleet-server} may also write updated policies to the {fleet} index to manage
coordination between agents.

. {agent} uses configuration information in the policy to collect and send data
to {es}.
. {agent} checks in with {fleet-server} for updates, maintaining an open
connection.

. When a policy is updated, {fleet-server} retrieves the updated policy from
{es} and sends it to the connected {agent}s.

. To communicate with {fleet} about the status of {agent}s and the policy
rollout, {fleet-server} writes updates to {fleet} indices.

****
**Does {fleet-server} run inside of {agent}?**

{fleet-server} is a subprocess that runs inside a deployed {agent}. This means
the deployment steps are similar to any {agent}, except that you enroll the
agent in a special {fleet-server} policy. Typically--especially in large-scale
deployments--this agent is dedicated to running {fleet-server} as an {agent}
communication host and is not configured for data collection.
****

[discrete]
[[fleet-security-account]]
== Service account

{fleet-server} uses a service token to communicate with {es}, which contains
a `fleet-server` service account. Each {fleet-server} can use its own service
token, and you can share it across multiple servers (not recommended). The
advantage of using a separate token for each server is that you can invalidate
each one separately.

You can create a service token by either using the {fleet} UI or the {es} API.
For more information, refer to <> or <>, depending on your deployment model.

[discrete]
[[fleet-server-HA-operations]]
== {fleet-server} High-availability operations

{fleet-server} is stateless. Connections to the {fleet-server} can therefore be
load balanced, as long as the {fleet-server} has capacity to accept more
connections. Load balancing is done on a round-robin basis.

How you handle high availability, fault tolerance, and lifecycle management of {fleet-server}
depends on the deployment model you use.
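For the {es} API route mentioned above, the service-token endpoint for the `fleet-server` service account takes a path of the following shape. This is a hedged sketch assuming the {es} create service account token API; the token name is illustrative:

```python
# Sketch: build the Elasticsearch security API path used to mint a
# service token for the elastic/fleet-server service account.
# The token name is a hypothetical example.
def service_token_path(token_name: str) -> str:
    return f"/_security/service/elastic/fleet-server/credential/token/{token_name}"

path = service_token_path("fleet-server-token-1")
```

A separate token per {fleet-server} (distinct `token_name` values) is what makes per-server invalidation possible.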
- -[discrete] -== Learn more - -To learn more about deploying and scaling {fleet-server}, refer to: - -[[add-fleet-server]] -* <> - -* <> - -* <> - -* <> - -* <> - -[discrete] -[[fleet-server-secrets-config]] -== {fleet-server} secrets configuration - -Secrets used to configure {fleet-server} can either be directly specified in configuration or provided through secret files. -See <> for more information. diff --git a/docs/en/ingest-management/fleet/fleet-manage-agents.asciidoc b/docs/en/ingest-management/fleet/fleet-manage-agents.asciidoc deleted file mode 100644 index 1a8494d36..000000000 --- a/docs/en/ingest-management/fleet/fleet-manage-agents.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[manage-agents]] -= {agent}s - -++++ -{agent}s -++++ - -TIP: To learn how to add {agent}s to {fleet}, see -<>. - -To manage your {agent}s, go to *Management > {fleet} > Agents* in {kib}. On the -*Agents* tab, you can perform the following actions: - -[options,header] -|=== -| User action | Result - -|<> -|Unenroll {agent}s from {fleet}. - -|<> -|Set inactivity timeout to move {agent}s to inactive status after being offline for the set amount of time. - -|<> -|Upgrade {agent}s to the latest version. - -|<> -|Migrate {agent}s from one cluster to another. - -|<> -|Monitor {fleet}-managed {agent}s by viewing agent status, logs, and metrics. - -|<> -|Add tags to {agent}, then use the tags to filter the Agents list in {fleet}. - -|=== diff --git a/docs/en/ingest-management/fleet/fleet-server-monitoring.asciidoc b/docs/en/ingest-management/fleet/fleet-server-monitoring.asciidoc deleted file mode 100644 index d598397f4..000000000 --- a/docs/en/ingest-management/fleet/fleet-server-monitoring.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[fleet-server-monitoring]] -= Monitor a self-managed {fleet-server} - -For self-managed {fleet-server}s, monitoring is key because the operation of the -{fleet-server} is paramount to the health of the deployed agents and the -services they offer. 
When {fleet-server} is not operating correctly, it may lead -to delayed check-ins, status information, and updates for the agents it manages. -The monitoring data will tell you when to add capacity for {fleet-server}, and -provide error logs and information to troubleshoot other issues. - -For self-managed clusters, monitoring is on by default when you create a -new agent policy or use the existing Default {fleet-server} agent policy. - -To monitor {fleet-server}: - -. In {fleet}, open the **Agent policies** tab. - -. Click the {fleet-server} policy name to edit the policy. - -. Click the **Settings** tab and verify that **Collect agent logs** and -**Collect agent metrics** are selected. - -. Next, set the **Default namespace** to something like `fleetserver`. -+ -Setting the default namespace lets you segregate {fleet-server} monitoring data -from other collected data. This makes it easier to search and visualize the -monitoring data. -+ -[role="screenshot"] -image::images/fleet-server-agent-policy-page.png[{fleet-server} agent policy] - -. To confirm your change, click **Save changes**. - -To see the metrics collected for the agent running {fleet-server}, go to -**Analytics > Discover**. - -In the following example, `fleetserver` was configured as the namespace, and -you can see the metrics collected: - -[role="screenshot"] -image::images/datastream-namespace.png[Data stream] - -// lint ignore elastic-agent -Go to **Analytics > Dashboard** and search for the predefined dashboard called -**[Elastic Agent] Agent metrics**. Choose this dashboard, and run a query based -on the `fleetserver` namespace. - -The following dashboard shows data for the query `data_stream.namespace: -"fleetserver"`. In this example, you can observe CPU and memory usage as a -metric and then resize the {fleet-server}, if necessary. 
- -[role="screenshot"] -image::images/dashboard-datastream01.png[Dashboard Data stream] - -Note that as an alternative to running the query, you can hide all metrics -except `fleet_server` in the dashboard. diff --git a/docs/en/ingest-management/fleet/fleet-server-scaling.asciidoc b/docs/en/ingest-management/fleet/fleet-server-scaling.asciidoc deleted file mode 100644 index 813a4af0c..000000000 --- a/docs/en/ingest-management/fleet/fleet-server-scaling.asciidoc +++ /dev/null @@ -1,248 +0,0 @@ -[[fleet-server-scalability]] -= {fleet-server} scalability - -This page summarizes the resource and {fleet-server} configuration -requirements needed to scale your deployment of {agent}s. To scale -{fleet-server}, you need to modify settings in your deployment and the -{fleet-server} agent policy. - -TIP: Refer to the <> section for specific recommendations about using {fleet-server} at scale. - -First modify your {fleet} deployment settings in {ecloud}: - -. Log in to {ecloud} and go to your deployment. - -. Under **Deployments > _deployment name_**, click **Edit**. - -. Under {integrations-server}: -+ --- -* Modify the compute resources available to the server to accommodate a higher -scale of {agent}s -* Modify the availability zones to satisfy fault tolerance requirements - -For recommended settings, refer to <>. - -[role="screenshot"] -image::images/fleet-server-hosted-container.png[{fleet-server} hosted agent] --- - -Next modify the {fleet-server} configuration by editing the agent policy: - -. In {fleet}, open the **Agent policies** tab. Click the name of the **{ecloud} agent policy** to edit the policy. - -. Open the **Actions** menu next to the {fleet-server} integration and click -**Edit integration**. -+ -[role="screenshot"] -image::images/elastic-cloud-agent-policy.png[{ecloud} policy] - -. Under {fleet-server}, modify **Max Connections** and other -<> as described in -<>. 
-+ -[role="screenshot"] -image::images/fleet-server-configuration.png[{fleet-server} configuration] - -[discrete] -[[fleet-server-configuration]] -== Advanced {fleet-server} options - -The following advanced settings are available to fine tune your {fleet-server} -deployment. - -`cache`:: - -`num_counters`::: -Size of the hash table. Best practice is to have this set to 10 times the max -connections. - -`max_cost`::: -Total size of the cache. - -`server.timeouts`:: -`checkin_timestamp`::: -How often {fleet-server} updates the "last activity" field for each agent. -Defaults to `30s`. In a large-scale deployment, increasing this -setting may improve performance. If this setting is higher than `2m`, -most agents will be shown as "offline" in the Fleet UI. For a typical setup, -it's recommended that you set this value to less than `2m`. - -`checkin_long_poll`::: -How long {fleet-server} allows a long poll request from an agent before -timing out. Defaults to `5m`. In a large-scale deployment, increasing -this setting may improve performance. - -`server.limits`:: -`policy_throttle`::: -How often a new policy is rolled out to the agents. - -Deprecated: Use the `action_limit` settings instead. - -`action_limit.interval`::: -How quickly {fleet-server} dispatches pending actions to the agents. - -`action_limit.burst`::: -Burst of actions that may be dispatched before falling back to the rate limit defined by `interval`. - -`checkin_limit.max`::: -Maximum number of agents that can call the checkin API concurrently. - -`checkin_limit.interval`::: -How fast the agents can check in to the {fleet-server}. - -`checkin_limit.burst`::: -Burst of check-ins allowed before falling back to the rate defined by -`interval`. - -`checkin_limit.max_body_byte_size`::: -Maximum size in bytes of the checkin API request body. - -`artifact_limit.max`::: -Maximum number of agents that can call the artifact API concurrently. 
It allows
-you to avoid overloading {fleet-server} with artifact API calls.
-
-`artifact_limit.interval`:::
-How often artifacts are rolled out. The default of `100ms` allows 10 artifacts to be
-rolled out per second.
-
-`artifact_limit.burst`:::
-Number of transactions allowed for a burst, controlling oversubscription on the
-outbound buffer.
-
-`artifact_limit.max_body_byte_size`:::
-Maximum size in bytes of the artifact API request body.
-
-`ack_limit.max`:::
-Maximum number of agents that can call the ack API concurrently. It allows
-you to avoid overloading {fleet-server} with ack API calls.
-
-`ack_limit.interval`:::
-How often an acknowledgment (ACK) is sent. The default value of `10ms` enables 100
-ACKs per second to be sent.
-
-`ack_limit.burst`:::
-Burst of ACKs to accommodate (default of 20) before falling back to the rate
-defined in `interval`.
-
-`ack_limit.max_body_byte_size`:::
-Maximum size in bytes of the ack API request body.
-
-`enroll_limit.max`:::
-Maximum number of agents that can call the enroll API concurrently. This setting
-allows you to avoid overloading {fleet-server} with enrollment API
-calls.
-
-`enroll_limit.interval`:::
-Interval between processing enrollment requests. Enrollment is both CPU and RAM
-intensive, so the number of enrollment requests needs to be limited for overall
-system health. The default value of `100ms` allows 10 enrollments per second.
-
-`enroll_limit.burst`:::
-Burst of enrollments to accept before falling back to the rate defined by
-`interval`.
-
-`enroll_limit.max_body_byte_size`:::
-Maximum size in bytes of the enroll API request body.
-
-`status_limit.max`:::
-Maximum number of agents that can call the status API concurrently. This setting allows you to avoid overloading {fleet-server} with status API calls.
-
-`status_limit.interval`:::
-How frequently agents can submit status requests to {fleet-server}.
-
-`status_limit.burst`:::
-Burst of status requests to accommodate before falling back to the rate defined by `interval`.
-
-`status_limit.max_body_byte_size`:::
-Maximum size in bytes of the status API request body.
-
-`upload_start_limit.max`:::
-Maximum number of agents that can call the uploadStart API concurrently. This setting allows you to avoid overloading {fleet-server} with uploadStart API calls.
-
-`upload_start_limit.interval`:::
-How frequently agents can submit file start upload requests to {fleet-server}.
-
-`upload_start_limit.burst`:::
-Burst of file start upload requests to accommodate before falling back to the rate defined by `interval`.
-
-`upload_start_limit.max_body_byte_size`:::
-Maximum size in bytes of the uploadStart API request body.
-
-`upload_end_limit.max`:::
-Maximum number of agents that can call the uploadEnd API concurrently. This setting allows you to avoid overloading {fleet-server} with uploadEnd API calls.
-
-`upload_end_limit.interval`:::
-How frequently agents can submit file end upload requests to {fleet-server}.
-
-`upload_end_limit.burst`:::
-Burst of file end upload requests to accommodate before falling back to the rate defined by `interval`.
-
-`upload_end_limit.max_body_byte_size`:::
-Maximum size in bytes of the uploadEnd API request body.
-
-`upload_chunk_limit.max`:::
-Maximum number of agents that can call the uploadChunk API concurrently. This setting allows you to avoid overloading {fleet-server} with uploadChunk API calls.
-
-`upload_chunk_limit.interval`:::
-How frequently agents can submit file chunk upload requests to {fleet-server}.
-
-`upload_chunk_limit.burst`:::
-Burst of file chunk upload requests to accommodate before falling back to the rate defined by `interval`.
-
-`upload_chunk_limit.max_body_byte_size`:::
-Maximum size in bytes of the uploadChunk API request body.
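Taken together, these options are specified as YAML, for example in the {fleet-server} integration's advanced settings. The following fragment is only a sketch: the values shown are illustrative placeholders, not tuning recommendations.

[source,yaml]
----
server:
  timeouts:
    checkin_timestamp: 30s   # keep below 2m so agents are not shown as offline
    checkin_long_poll: 5m
  limits:
    checkin_limit:
      max: 20000             # concurrent checkin API calls
      interval: 1ms
      burst: 1000
    action_limit:
      interval: 5ms
      burst: 10
cache:
  num_counters: 200000       # roughly 10x the max connections
  max_cost: 104857600        # total cache size in bytes
----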
-
-[discrete]
-[[scaling-recommendations]]
-== Scaling recommendations ({ecloud})
-
-The following table provides the minimum resource requirements and scaling guidelines based
-on the number of agents required by your deployment. Note that these compute
-resources can be spread across multiple availability zones (for example, a 32GB RAM requirement
-can be satisfied with 16GB of RAM in each of 2 different zones).
-
-* <>
-
-[discrete]
-[[resource-requirements-by-number-agents]]
-=== Resource requirements by number of agents
-|===
-| Number of Agents | {fleet-server} Memory | {fleet-server} vCPU | {es} Hot Tier
-| 2,000 | 2GB | up to 8 vCPU | 32GB RAM \| 8 vCPU
-| 5,000 | 4GB | up to 8 vCPU | 32GB RAM \| 8 vCPU
-| 10,000 | 8GB | up to 8 vCPU | 128GB RAM \| 32 vCPU
-| 15,000 | 8GB | up to 8 vCPU | 256GB RAM \| 64 vCPU
-| 25,000 | 8GB | up to 8 vCPU | 256GB RAM \| 64 vCPU
-| 50,000 | 8GB | up to 8 vCPU | 384GB RAM \| 96 vCPU
-| 75,000 | 8GB | up to 8 vCPU | 384GB RAM \| 96 vCPU
-| 100,000 | 16GB | 16 vCPU | 512GB RAM \| 128 vCPU
-|===
-
-A series of scale performance tests is regularly executed to verify these requirements
-and the ability of {fleet} to manage the advertised scale of {agent}s. The tests follow a set
-of acceptance criteria that mimics a typical platform operator workflow: the test cases
-perform agent installations, version upgrades, policy modifications, and add and remove integrations,
-tags, and policies. The acceptance criteria are met when the {agent}s reach a `Healthy` state after any
-of these operations.
-
-[discrete]
-[[agent-policy-scaling-recommendations]]
-== Scaling recommendations
-
-**{agent} policies**
-
-A single instance of {fleet} supports a maximum of 1000 {agent} policies. If more policies are configured, UI performance might be impacted. The maximum number of policies is not affected by the number of spaces in which the policies are used.
- -If you are using {agent} with link:{serverless-docs}[{serverless-full}], the maximum supported number of {agent} policies is 500. - -**{agents}** - -When you use {fleet} to manage a large volume (10k or more) of {agents}, the check-in from each of the multiple agents triggers an {es} authentication request. To help reduce the possibility of cache eviction and to speed up propagation of {agent} policy changes and actions, we recommend setting the {ref}/security-settings.html#api-key-service-settings[API key cache size] in your {es} configuration to 2x the maximum number of agents. - -For example, with 25,000 running {agents} you could set the cache value to `50000`: - -[source,yaml] ----- -xpack.security.authc.api_key.cache.max_keys: 50000 ----- diff --git a/docs/en/ingest-management/fleet/fleet-server-secrets.asciidoc b/docs/en/ingest-management/fleet/fleet-server-secrets.asciidoc deleted file mode 100644 index 25a8fd95f..000000000 --- a/docs/en/ingest-management/fleet/fleet-server-secrets.asciidoc +++ /dev/null @@ -1,264 +0,0 @@ -[[fleet-server-secrets]] -= {fleet-server} Secrets - -{fleet-server} configuration can contain secret values. -You may specify these values directly in the configuration or through secret files. -You can use command line arguments to pass the values or file paths when you are running under {agent}, or you can use environment variables if {agent} is running in a container. - -For examples of how to deploy secret files, refer to our <>. - -NOTE: Stand-alone {fleet-server} is under active development. - -[discrete] -== Secret values - -The following secret values may be used when configuring {fleet-server}. - -Note that the configuration fragments shown below are specified either in the UI as part of the output specification or as part of the {fleet-server} integration settings. - -`service_token`:: -The `service_token` is used to communicate with {es}. 
-+ -It may be specified in the configuration directly as: -+ -[source,yaml] ----- -output.elasticsearch.service_token: my-service-token ----- -+ -Or by a file with: -+ -[source,yaml] ----- -output.elasticsearch.service_token_path: /path/to/token-file ----- -+ -When you are running {fleet-server} under {agent}, you can specify it with either the `--fleet-server-service-token` or the `--fleet-server-service-token-path` flag. -See <> for more details. -+ -If you are <>, you can use the environment variables `FLEET_SERVER_SERVICE_TOKEN` or `FLEET_SERVER_SERVICE_TOKEN_PATH`. - -TLS private key:: -Use the TLS private key to encrypt communications between {fleet-server} and {agent}. -See <> for more details. -+ -Although it is not recommended, you may specify the private key directly in the configuration as: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - ssl.key: | - ----BEGIN CERTIFICATE---- - .... - ----END CERTIFICATE---- ----- -+ -Alternatively, you can provide the path to the private key with the same attribute: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - ssl.key: /path/to/cert.key ----- -+ -When you are running {fleet-server} under {agent}, you can provide the private key path using with the `--fleet-server-cert-key` flag. -See <> for more details. -+ -If you are <>, you can use the environment variable `FLEET_SERVER_CERT_KEY` to specify the private key path. -+ -TLS private key passphrase:: -The private key passphrase is used to decrypt an encrypted private key file. -+ -You can specify the passphrase as a secret file in the configuration with: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - ssl.key_passphrase_path: /path/to/passphrase ----- -+ -When you are running {fleet-server} under {agent}, you can provide the passphrase path using the `--fleet-server-cert-key-passphrase` flag. -See <> for more details. -+ -If you are <>, you can use the environment variable `FLEET_SERVER_CERT_KEY_PASSPHRASE` to specify the file path. 
-+ -APM API Key:: -The APM API Key may be used to gather APM data from {fleet-server}. -+ -You can specify it directly in the instrumentation segment of the configuration: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - instrumentation.api_key: my-apm-api-key ----- -+ -Or by a file with: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - instrumentation.api_key_file: /path/to/apmAPIKey ----- -+ -You may specify the API key by value using the environment variable `ELASTIC_APM_API_KEY`. - -APM secret token:: -The APM secret token may be used to gather APM data from {fleet-server}. -+ -You can specify the secret token directly in the instrumentation segment of the configuration: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - instrumentation.secret_token: my-apm-secret-token ----- -+ -Or by a file with: -+ -[source,yaml] ----- -inputs: - - type: fleet-server - instrumentation.secret_token_file: /path/to/apmSecretToken ----- -+ -You may also specify the token by value using the environment variable `ELASTIC_APM_SECRET_TOKEN`. - -[[secret-files-guide]] -== Secret files guide - -This guide provides step-by-step examples with best practices on how to deploy secret files directly on a host or through the Kubernetes secrets engine. - -[[secret-filesystem]] -=== Secrets on filesystem - -Secret files can be provisioned as plain text files directly on filesystems and referenced or passed through {agent}. - -We recommend these steps to improve security. - -==== File permissions - -File permissions should not allow for global read permissions. - -On MacOS and Linux, you can set file ownership and file permissions with the `chown` and `chmod` commands, respectively. 
-
-{fleet-server} runs as the `root` user on macOS and Linux, so given a file named `mySecret`, you can alter it with:
-[source,sh]
-----
-sudo chown root:root mySecret # set the user:group to root
-sudo chmod 0600 mySecret # set only the read/write permission flags for the user, clear group and global permissions
-----
-
-On Windows, you can use `icacls` to alter the ACL associated with the file:
-[source,powershell]
-----
-Set-Content -Path .\mySecret -Value 'SECRET' -NoNewline # Create the file mySecret with the contents SECRET
-icacls .\mySecret /inheritance:d # Remove inherited permissions from the file
-icacls .\mySecret /remove:g BUILTIN\Administrators # Remove Administrators group permissions
-icacls .\mySecret /remove:g $env:UserName # Remove current user's permissions
-----
-
-==== Temporary filesystem
-
-You can use a temporary filesystem (in RAM) to hold secret files in order to improve security.
-These types of filesystems are normally not included in backups and are cleared if the host is reset.
-If used, the filesystem and secret files need to be reprovisioned with every reset.
-
-On Linux you can use `mount` with the `tmpfs` filesystem to create a temporary filesystem in RAM:
-[source,sh]
-----
-mount -o size=1G -t tmpfs none /mnt/fleet-server-secrets
-----
-
-On macOS you can use a combination of `diskutil` and `hdiutil` to create a RAM disk:
-[source,sh]
-----
-diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://2097152`
-----
-
-Windows systems do not offer built-in options to create a RAM disk, but several third-party programs are available.
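After provisioning a secret, it can be worth sanity-checking both the permissions and the exact byte count, since a stray trailing newline is an easy mistake. This is a sketch for GNU/Linux; on macOS the equivalent of `stat -c '%a'` is `stat -f '%Lp'`.

[source,sh]
----
printf '%s' "MY-SERVICE-TOKEN" > mySecret   # printf appends no trailing newline
chmod 0600 mySecret
stat -c '%a' mySecret   # prints 600
wc -c < mySecret        # 16 bytes; a trailing newline would make it 17
----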
-
-==== Example
-
-Here is a step-by-step example of provisioning a service token on a Linux system:
-[source,sh]
-----
-sudo mkdir -p /mnt/fleet-server-secrets
-sudo mount -o size=1G -t tmpfs none /mnt/fleet-server-secrets
-echo -n MY-SERVICE-TOKEN > /mnt/fleet-server-secrets/service-token
-sudo chown root:root /mnt/fleet-server-secrets/service-token
-sudo chmod 0600 /mnt/fleet-server-secrets/service-token
-----
-
-NOTE: The `-n` flag is used with `echo` to prevent a newline character from being appended at the end of the secret. Be sure that the secret file does not contain a trailing newline character.
-
-=== Secrets in containers
-
-When you are using secret files directly in containers without Kubernetes or another secrets management solution, you can pass the files into containers by mounting the file or directory.
-Provision the file in the same manner as described in <>, and mount it into the container in read-only mode.
-
-For example, when you are using Docker with the {agent} image:
-[source,sh]
-----
-docker run \
-  -v /path/to/creds:/creds:ro \
-  -e FLEET_SERVER_CERT_KEY_PASSPHRASE=/creds/passphrase \
-  -e FLEET_SERVER_SERVICE_TOKEN_PATH=/creds/service-token \
-  --rm docker.elastic.co/elastic-agent/elastic-agent
-----
-
-=== Secrets in Kubernetes
-
-Kubernetes has a https://kubernetes.io/docs/concepts/configuration/secret/[secrets management engine] that can be used to provision secret files to pods.
- -For example, you can create the passphrase secret with: -[source,sh] ----- -kubectl create secret generic fleet-server-key-passphrase \ - --from-literal=value=PASSPHRASE ----- - -And create the service token secret with: -[source,sh] ----- -kubectl create secret generic fleet-server-service-token \ - --from-literal=value=SERVICE-TOKEN ----- - -Then include it in the pod specification, for example, when you are running {fleet-server} under {agent}: -[source,yaml] ----- -spec: - volumes: - - name: key-passphrase - secret: - secretName: fleet-server-key-passphrase - - name: service-token - secret: - secretName: fleet-server-service-token - containers: - - name: fleet-server - image: docker.elastic.co/elastic-agent/elastic-agent - volumeMounts: - - name: key-passphrase - mountPath: /var/secrets/passphrase - - name: service-token - mountPath: /var/secrets/service-token - env: - - name: FLEET_SERVER_CERT_KEY_PASSPHRASE - value: /var/secrets/passphrase/value - - name: FLEET_SERVER_SERVICE_TOKEN_PATH - value: /var/secrets/service-token/value ----- - -==== {agent} Kubernetes secrets provider - -When you are running {fleet-server} under {agent} in {k8s}, you can use {agent}'s <> to insert a {k8s} secret directly into {fleet-server}'s configuration. -Note that due to how {fleet-server} is bootstrapped only the APM secrets (API key or secret token) can be specified with this provider. - diff --git a/docs/en/ingest-management/fleet/fleet-settings-changing-outputs.asciidoc b/docs/en/ingest-management/fleet/fleet-settings-changing-outputs.asciidoc deleted file mode 100644 index 2a9bf6e6b..000000000 --- a/docs/en/ingest-management/fleet/fleet-settings-changing-outputs.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -:type: output-elasticsearch-fleet-settings - -[[fleet-settings-changing-outputs]] -= Considerations when changing outputs - -{fleet} provides the capability to update your <> to add new outputs, and then to assign those new outputs to an {agent} policy. 
However, changing outputs should be done with caution. - -When you change the output configuration within a policy applied to one or more agents, there's a high likelihood of those agents re-ingesting previously processed logs: - -* Changing the output will cause the agents to remove and recreate all existing integrations associated with the new output, which as a result of the change receives a new UUID. -* As a consequence of the newly generated output UUID, the agents will retransmit all events and logs they have been configured to collect, since the data registry will be re-created. - -In cases when an update to an output is required, it's generally preferable to update your existing output rather than create a new one. - -An example of an update being needed would be when switching from a static IP address to a global load balancer (where both endpoints point to the same underlying cluster). In this type of situation, changing to a new output would result in data being re-collected, while updating the existing output would not. diff --git a/docs/en/ingest-management/fleet/fleet-settings-output-elasticsearch.asciidoc b/docs/en/ingest-management/fleet/fleet-settings-output-elasticsearch.asciidoc deleted file mode 100644 index 873da9141..000000000 --- a/docs/en/ingest-management/fleet/fleet-settings-output-elasticsearch.asciidoc +++ /dev/null @@ -1,355 +0,0 @@ -:type: output-elasticsearch-fleet-settings - -[[es-output-settings]] -= {es} output settings - -Specify these settings to send data over a secure connection to {es}. In the {fleet} <>, make sure that {es} output type is selected. - - - -[cols="2*> documentation for default ports and other configuration details. - -// ============================================================================= - -| -[id="es-trusted-fingerprint-yaml-setting"] -**{es} CA trusted fingerprint** - -| HEX encoded SHA-256 of a CA certificate. 
If this certificate is -present in the chain during the handshake, it will be added to the -`certificate_authorities` list and the handshake will continue -normally. To learn more about trusted fingerprints, refer to the -{ref}/configuring-stack-security.html[{es} security documentation]. - -// ============================================================================= - -| -[id="es-agent-proxy-output"] -**Proxy** - -| Select a proxy URL for {agent} to connect to {es}. -To learn about proxy configuration, refer to <>. - -// ============================================================================= - -| -[id="es-output-advanced-yaml-setting"] -**Advanced YAML configuration** - -| YAML settings that will be added to the {es} output section of each policy -that uses this output. Make sure you specify valid YAML. The UI does not -currently provide validation. - -See <> for descriptions of the available settings. - -// ============================================================================= - -| -[id="es-agent-integrations-output"] -**Make this output the default for agent integrations** - -| When this setting is on, {agent}s use this output to send data if no other -output is set in the <>. - -// ============================================================================= - -| -[id="es-agent-monitoring-output"] -**Make this output the default for agent monitoring** - -| When this setting is on, {agent}s use this output to send <> if no other output is set in the <>. - -// ============================================================================= - -| -[id="es-agent-performance-tuning"] -**Performance tuning** - -| Choose one of the menu options to tune your {agent} performance when sending data to an {es} output. You can optimize for throughput, scale, latency, or you can choose a balanced (the default) set of performance specifications. Refer to <> for details about the setting values and their potential impact on performance. 
- -You can also use the <> field to set custom values. Note that if you adjust any of the performance settings described in the following **Advanced YAML configuration** section, the **Performance tuning** option automatically changes to `Custom` and cannot be changed. - -Performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, you need to use the `Custom` **Performance tuning** option and specify the settings in the **Advanced YAML configuration** field. - -For example, if you would like to use the balanced preset values except that you prefer a higher compression level, you can do so as follows: - -. In {fleet}, open the **Settings** tab. -. In the **Outputs** section, select **Add output** to create a new output, or select the edit icon to edit an existing output. -. In the **Add new output** or the **Edit output** flyout, set **Performance tuning** to `Custom`. -. Refer to the list of <>, and add the settings you prefer into the **Advanced YAML configuration** field. For the `balanced` presets, the yaml configuration would be as shown: -+ -[source,yaml] ----- -bulk_max_size: 1600 -worker: 1 -queue.mem.events: 3200 -queue.mem.flush.min_events: 1600 -queue.mem.flush.timeout: 10s -compression_level: 1 -idle_connection_timeout: 3s ----- - -. Adjust any settings as preferred. For example, you can update the `compression_level` setting to `4`. - -When you create an {agent} policy using this output, the output will use the balanced preset options except with the higher compression level, as specified. - -|=== - -[[es-output-settings-yaml-config]] -== Advanced YAML configuration - -[cols="2*>. For the `queue.mem.events`, `queue.mem.flush.min_events` and `queue.mem.flush.timeout` settings, refer to the {filebeat-ref}/configuring-internal-queue.html[internal queue configuration settings] in the {filebeat} documentation. - -`Balanced` represents the new default setting (out of the box behaviour). 
Relative to `Balanced`, the `Optimized for throughput` setting improves EPS by a factor of 4,
-`Optimized for Scale` performs on par, and `Optimized for Latency` shows a 20%
-degradation in EPS (events per second). These relative performance numbers were calculated
-on a performance testbed that operates in a controlled setting, ingesting a large log file.
-
-As mentioned, the `custom` preset allows you to input your own set of parameters for finer
-performance tuning. The following table
-summarizes a few data points and how the resulting EPS compares to the `Balanced` setting mentioned above.
-
-These presets apply only to agents on version 8.12.0 or later.
-
-.Performance tuning: EPS data
-[cols="1,1,1,1,1,1,1"]
-|===
-|worker |bulk_max_size |queue.mem.events |queue.mem.flush.min_events |queue.mem.flush.timeout |idle_connection_timeout |Relative EPS
-
-|1 |1600 |3200 |1600 |5 |15 |1x
-|1 |2048 |4096 |2048 |5 |15 |1x
-|1 |4096 |8192 |4096 |5 |15 |1x
-|2 |1600 |6400 |1600 |5 |15 |2x
-|2 |2048 |8192 |2048 |5 |15 |2x
-|2 |4096 |16384 |4096 |5 |15 |2x
-|4 |1600 |12800 |1600 |5 |15 |3.6x
-|4 |2048 |16384 |2048 |5 |15 |3.6x
-|4 |4096 |32768 |4096 |5 |15 |3.6x
-|8 |1600 |25600 |1600 |5 |15 |5.3x
-|8 |2048 |32768 |2048 |5 |15 |5.1x
-|8 |4096 |65536 |4096 |5 |15 |5.2x
-|16 |1600 |51200 |1600 |5 |15 |5.3x
-|16 |2048 |65536 |2048 |5 |15 |5.2x
-|16 |4096 |131072 |4096 |5 |15 |5.3x
-|===
diff --git a/docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc b/docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc
deleted file mode 100644
index 806d7c0cb..000000000
--- a/docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc
+++ /dev/null
@@ -1,576 +0,0 @@
-:type: output-kafka-fleet-settings
-
-[[kafka-output-settings]]
-= Kafka output settings
-
-Specify these settings to send data over a secure connection to
Kafka. In the {fleet} <>, make sure that the Kafka output type is selected. - -NOTE: If you plan to use {ls} to modify {agent} output data before it's sent to Kafka, please refer to our <> for doing so, further in on this page. - -[discrete] -== General settings - -[cols="2*> documentation for default ports and other configuration details. - -|=== - -[discrete] -== Authentication settings - -Select the mechanism that {agent} uses to authenticate with Kafka. - -[cols="2*> as part of your {agent} input. -Otherwise, setting a custom field is not recommended. - -|=== - -[discrete] -== Header settings - -A header is a key-value pair, and multiple headers can be included with the same key. Only string values are supported. These headers will be included in each produced Kafka message. - -[cols="2*>. - -// ============================================================================= - -| -[id="kafka-output-advanced-yaml-setting"] -**Advanced YAML configuration** - -| YAML settings that will be added to the Kafka output section of each policy -that uses this output. Make sure you specify valid YAML. The UI does not -currently provide validation. - -See <> for descriptions of the available settings. - -// ============================================================================= - -| -[id="kafka-output-agent-integrations"] -**Make this output the default for agent integrations** - -| When this setting is on, {agent}s use this output to send data if no other -output is set in the <>. - -// ============================================================================= - -| -[id="kafka-output-agent-monitoring"] -**Make this output the default for agent monitoring** - -| When this setting is on, {agent}s use this output to send <> if no other output is set in the <>. - -|=== - -[[kafka-output-settings-yaml-config]] -== Advanced YAML configuration - -[cols="2*> documentation for more details. - -[source,yaml] ----- -inputs { - kafka { - ... 
- ecs_compatibility => "disabled" - codec => json { ecs_compatibility => "disabled" } - ... - } -} -... ----- - -:type!: diff --git a/docs/en/ingest-management/fleet/fleet-settings-output-logstash.asciidoc b/docs/en/ingest-management/fleet/fleet-settings-output-logstash.asciidoc deleted file mode 100644 index 66fdea424..000000000 --- a/docs/en/ingest-management/fleet/fleet-settings-output-logstash.asciidoc +++ /dev/null @@ -1,237 +0,0 @@ -:type: output-logstash-fleet-settings - -[[ls-output-settings]] -= {ls} output settings - -Specify these settings to send data over a secure connection to {ls}. You must -also configure a {ls} pipeline that reads encrypted data from {agent}s and sends -the data to {es}. Follow the in-product steps to configure the {ls} pipeline. - -In the {fleet} <>, make sure that the {ls} output type is selected. - -Before using the {ls} output, you need to make sure that for any integrations that have been <>, the integration assets have been installed on the destination cluster. Refer to <> for the steps to add integration assets. - -To learn how to generate certificates, refer to <>. - -To receive the events in {ls}, you also need to create a {ls} configuration pipeline. -The {ls} configuration pipeline listens for incoming {agent} connections, -processes received events, and then sends the events to {es}. - -The following example configures a {ls} pipeline that listens on port `5044` for -incoming {agent} connections and routes received events to {es}. - -The {ls} pipeline definition below is an example. Please refer to the `Additional Logstash -configuration required` steps when creating the {ls} output in the Fleet outputs page. 
- -[source,yaml] ----- -input { - elastic_agent { - port => 5044 - enrich => none # don't modify the events' schema at all - ssl => true - ssl_certificate_authorities => [""] - ssl_certificate => "" - ssl_key => "" - ssl_verify_mode => "force_peer" - } -} -output { - elasticsearch { - hosts => ["https://localhost:9200"] <1> - # cloud_id => "..." - api_key => "" <2> - data_stream => true - ssl => true - # cacert => "" - } -} ----- -<1> The {es} server and the port (`9200`) where {es} is running. -<2> The API Key obtained from the {ls} output creation steps in Fleet. - -[cols="2*> documentation for default ports and other configuration details. - -// ============================================================================= - -| -[id="ls-server-ssl-certificate-authorities-setting"] -**Server SSL certificate authorities** - -| The CA certificate to use to connect to {ls}. This is the CA used to generate -the certificate and key for {ls}. Copy and paste in the full contents for the CA -certificate. - -This setting is optional. - -// ============================================================================= - -| -[id="ls-client-ssl-certificate-setting"] -**Client SSL certificate** - -| The certificate generated for the client. Copy and paste in the full contents -of the certificate. This is the certificate that all the agents will use to connect to {ls}. - -In cases where each client has a unique certificate, the local path to that certificate can be -placed here. The agents will pick the certificate in that location when establishing a connection to -{ls}. - -// ============================================================================= - -| -[id="ls-client-ssl-certificate-key-setting"] -**Client SSL certificate key** - -| The private key generated for the client. This must be a PKCS#8 key. -Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to {ls}.
- -In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. -The agents will pick the certificate key in that location when establishing a connection to {ls}. - -To prevent unauthorized access, the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires {fleet-server} version 8.12 or higher. - -Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See {kibana-ref}/fleet-settings-kb.html#_preconfiguration_settings_for_advanced_use_cases[Preconfiguration settings] in the {kib} Guide to learn more. - -// ============================================================================= - -| -[id="ls-agent-proxy-output"] -**Proxy** - -| Select a proxy URL for {agent} to connect to {ls}. -To learn about proxy configuration, refer to <>. - -// ============================================================================= - -| -[id="ls-output-advanced-yaml-setting"] -**Advanced YAML configuration** - -| YAML settings that will be added to the {ls} output section of each policy -that uses this output. Make sure you specify valid YAML. The UI does not -currently provide validation. - -See <> for descriptions of the available settings. - -// ============================================================================= - -| -[id="ls-agent-integrations-output"] -**Make this output the default for agent integrations** - -| When this setting is on, {agent}s use this output to send data if no other -output is set in the <>. - -Output to {ls} is not supported for agent integrations in a policy used by {fleet-server} or APM.
- -// ============================================================================= - -| -[id="ls-agent-monitoring-output"] -**Make this output the default for agent monitoring** - -| When this setting is on, {agent}s use this output to send <> if no other output is set in the <>. - -Output to {ls} is not supported for agent monitoring in a policy used by {fleet-server} or APM. - -|=== - - -[[ls-output-settings-yaml-config]] -== Advanced YAML configuration - -[cols="2*> as your main {es} cluster. - -[NOTE] -==== -Note the following restrictions with the remote {es} output: - -* Using a remote {es} output with a target cluster that has {cloud}/ec-traffic-filtering-deployment-configuration.html[traffic filters] enabled is not currently supported. -* Using {elastic-defend} is currently not supported when a remote {es} output is configured for an agent. -==== - -To configure a remote {es} cluster for your {agent} data: - -. In {fleet}, open the **Settings** tab. - -. In the **Outputs** section, select **Add output**. - -. In the **Add new output** flyout, provide a name for the output and select **Remote Elasticsearch** as the output type. - -. In the **Hosts** field, add the URL that agents should use to access the remote {es} cluster. - -.. To find the remote host address, in the remote cluster open {kib} and go to **Management -> {fleet} -> Settings**. - -.. Copy the **Hosts** value for the default output. - -.. Back in your main cluster, paste the value you copied into the output **Hosts** field. - -. Create a service token to access the remote cluster. - -.. Below the **Service Token** field, copy the API request. - -.. In the remote cluster, open the {kib} menu and go to **Management -> Dev Tools**. - -.. Run the API request. - -.. Copy the value for the generated token. - -.. Back in your main cluster, paste the value you copied into the output **Service Token** field. -+ -NOTE: To prevent unauthorized access the {es} Service Token is stored as a secret value. 
While secret storage is recommended, you can choose to override this setting and store the service token as plain text in the agent policy definition. Secret storage requires {fleet-server} version 8.12 or higher. This setting can also be stored as a secret value or as plain text for preconfigured outputs. See {kibana-ref}/fleet-settings-kb.html#_preconfiguration_settings_for_advanced_use_cases[Preconfiguration settings] in the {kib} Guide to learn more. - -. Choose whether or not the remote output should be the default for agent integrations or for agent monitoring data. When set, {agent}s use this output to send data if no other output is set in the <>. - -. Select which <> you'd prefer in order to optimize {agent} for throughput, scale, or latency, or leave the default `balanced` setting. - -. Add any <> that you'd like for the output. - -. Click **Save and apply settings**. - -After the output is created, you can update an {agent} policy to use the new remote {es} cluster: - -. In {fleet}, open the **Agent policies** tab. - -. Click the agent policy to edit it, then click **Settings**. - -. To send integrations data, set the **Output for integrations** option to use the output that you configured in the previous steps. - -. To send {agent} monitoring data, set the **Output for agent monitoring** option to use the output that you configured in the previous steps. - -. Click **Save changes**. - -The remote {es} cluster is now configured. - -As a final step before using the remote {es} output, you need to make sure that for any integrations that have been <>, the integration assets have been installed on the remote {es} cluster. Refer to <> for the steps. - -NOTE: When you use a remote {es} output, {fleet-server} performs a test to ensure connectivity to the remote cluster. The result of that connectivity test is used to report the remote {es} output as healthy or unhealthy on the **Fleet** > **Settings** > **Outputs** page, under the **Status** column.
In some cases, the remote {es} output used for data from {agent} may be reachable only by those agents and not by {fleet-server}, so the unhealthy state and an associated `Unable to connect` error that appears on the UI can be ignored. \ No newline at end of file diff --git a/docs/en/ingest-management/fleet/fleet-settings.asciidoc b/docs/en/ingest-management/fleet/fleet-settings.asciidoc deleted file mode 100644 index f68cfe47f..000000000 --- a/docs/en/ingest-management/fleet/fleet-settings.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -[[fleet-settings]] -= {fleet} settings - -NOTE: The settings described here are configurable through the {fleet} UI. Refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}] for a list of -settings that you can configure in the `kibana.yml` configuration file. - -// lint ignore fleet -On the **Settings** tab in **Fleet**, you can configure global settings available -to all {agent}s enrolled in {fleet}. This includes {fleet-server} hosts and -output settings. - -[discrete] -[[fleet-server-hosts-setting]] -== {fleet-server} host settings - -Click **Edit hosts** and specify the host URLs your {agent}s will use to connect -to a {fleet-server}. - -TIP: If the **Edit hosts** option is grayed out, {fleet-server} hosts -are configured outside of {fleet}. For more information, refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}]. - -Not sure if {fleet-server} is running? Refer to <>. - -On self-managed clusters, you must specify one or more URLs. - -On {ecloud}, this field is populated automatically. If you are using -Azure Private Link, GCP Private Service Connect, or AWS PrivateLink and -enrolling the {agent} with a private link URL, ensure that this setting is -configured. Otherwise, {agent} will reset to use a default address instead of -the private link URL. - -NOTE: If a URL is specified without a port, {kib} sets the port to `80` (http) -or `443` (https). 
- -By default, {fleet-server} is typically exposed on the following ports: - -`8220`:: -Default {fleet-server} port for self-managed clusters - -`443` or `9243`:: -Default {fleet-server} port for {ecloud}. View the {fleet} **Settings** tab -to find the actual port that's used. - -IMPORTANT: The exposed ports must be open for ingress and egress in the firewall and -networking rules on the host to allow {agent}s to communicate with {fleet-server}. - -Specify multiple URLs (click **Add row**) to scale out your deployment and provide -automatic failover. If multiple URLs exist, {fleet} shows the first provided URL -for enrollment purposes. Enrolled {agent}s will connect to the URLs in round -robin order until they connect successfully. - -When a {fleet-server} is added or removed from the list, all agent policies -are updated automatically. - -**Examples:** - -* `https://192.0.2.1:8220` -* `https://abae718c1276457893b1096929e0f557.fleet.eu-west-1.aws.qa.cld.elstc.co:443` -* `https://[2001:db8::1]:8220` - -[discrete] -[[output-settings]] -== Output settings - -Add or edit output settings to specify where {agent}s send data. {agent}s -use the default output if you don't select an output in the agent policy. - -TIP: If you have an `Enterprise` link:https://www.elastic.co/subscriptions[{stack} subscription], -you can configure {agent} to -<>. - -NOTE: The {ecloud} internal output is locked and cannot be edited. This -output is used for internal routing to reduce external network charges when -using the {ecloud} agent policy. It also provides visibility for -troubleshooting on {ece}. - -To add or edit an output: - -. Go to **{fleet} -> Settings**. - -. Under **Outputs**, click **Add output** or **Edit**. -+ -image::images/fleet-add-output-button.png[{fleet} Add output button] -+ -The **Add new output** UI opens. - -. Set the output name and type. - -. Specify settings for the output type you selected: -+ -* <> -* <> -* <> -* <> - -. Click **Save and apply settings**. 
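For illustration, the **Advanced YAML configuration** field described in the output-specific settings pages accepts output-level tuning keys. The sketch below is a hypothetical example only, using parameter names and values taken from one row of the performance-tuning table in these docs; it is not a recommendation for any particular workload:

[source,yaml]
----
# Hypothetical advanced YAML sketch -- values mirror one row of the
# performance-tuning table and should be tuned for your own workload.
bulk_max_size: 1600
worker: 2
queue.mem.events: 6400
queue.mem.flush.min_events: 1600
queue.mem.flush.timeout: 5
----

The UI does not validate this YAML, so double-check indentation and key names before saving.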
- -TIP: If the options for editing an output are grayed out, outputs -are configured outside of {fleet}. For more information, refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}]. - -[discrete] -[[fleet-agent-binary-download-settings]] -== Agent binary download settings - -{agent}s must be able to access the {artifact-registry} to download -binaries during upgrades. By default {agent}s download artifacts from the -artifact registry at `https://artifacts.elastic.co/downloads/`. - -For {agent}s that cannot access the internet, you can specify agent binary -download settings, and then configure agents to download their artifacts from -the alternate location. For more information about running {agent}s in a -restricted environment, refer to <>. - -To add or edit the source of binary downloads: - -. Go to **{fleet} -> Settings**. -. Under **Agent Binary Download**, click **Add agent binary source** or **Edit**. -. Set the agent binary source name. -. For **Host**, specify the address where you are hosting the artifacts -repository. -. (Optional) To make this location the default, select -**Make this host the default for all agent policies**. {agent}s -use the default location if you don't select a different agent binary source -in the agent policy. - -[discrete] -[[proxy-settings]] -== Proxies - -You can specify a proxy server to be used in {fleet-server}, {agent} outputs, or for any agent binary download sources. -For full details about proxy configuration refer to <>. - -[discrete] -[[delete-unenrolled-agents-setting]] -== Delete unenrolled agents - -After an {agent} has been unenrolled in {fleet}, a number of documents about the agent are retained just in case the agent needs to be recovered at some point. You can choose to have all data related to an unenrolled agent deleted automatically. - -Note that this option can also be enabled by adding the `xpack.fleet.enableDeleteUnenrolledAgents: true` setting to the {kibana-ref}/[{kib} settings file]. 
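As stated above, the option can also be turned on directly in the {kib} settings file. A minimal example:

[source,yaml]
----
# kibana.yml -- automatically delete data for unenrolled agents
xpack.fleet.enableDeleteUnenrolledAgents: true
----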
- -To enable automatic deletion of unenrolled agents: - -. Go to **{fleet} -> Settings**. -. Under **Advanced Settings**, enable the **Delete unenrolled agents** option. \ No newline at end of file diff --git a/docs/en/ingest-management/fleet/fleet.asciidoc b/docs/en/ingest-management/fleet/fleet.asciidoc deleted file mode 100644 index 9bea11d5a..000000000 --- a/docs/en/ingest-management/fleet/fleet.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[manage-agents-in-fleet]] -= Centrally manage {agent}s in {fleet} - -++++ -Manage {agent}s in {fleet} -++++ - -**** -The {fleet} app in {kib} supports both {agent} infrastructure management and -agent policy management. You can use {fleet} to: - -* Manage {agent} binaries and specify settings installed on the host that -determine whether the {agent} is enrolled in {fleet}, what version of the -agent is running, and which agent policy is used. - -* Manage agent policies that specify agent configuration settings, which -integrations are running, whether agent monitoring is turned on, input -settings, and so on. - -Advanced users who don't want to use {fleet} for central management can use an -external infrastructure management solution and -<> instead. -**** - -[IMPORTANT] -==== -{fleet} currently requires a {kib} user with `All` privileges on -{fleet} and {integrations}. Since many Integrations assets are shared across -spaces, users need the {kib} privileges in all spaces. Refer to <> to learn how to create a user role with the required privileges to access {fleet} and {integrations}. -==== - -To learn how to add {agent}s to {fleet}, refer to -<>. - -To use {fleet} go to *Management > {fleet}* in {kib}. The following table -describes the main management actions you can perform in {fleet}: - -[options,header] -|=== -| Component | Management actions - -|<> -|Configure global settings available to all {agent}s managed by {fleet}, -including {fleet-server} hosts and output settings. 
- -|<> -|Enroll, unenroll, upgrade, add tags, and view {agent} status and logs. - -|<> -|Create and edit agent policies and add integrations to them. - -|<> -|Create and revoke enrollment tokens. - -|{security-guide}/agent-tamper-protection.html[Uninstall tokens] -|({elastic-defend} integration only) Access tokens to allow uninstalling {agent} from endpoints with Agent tamper protection enabled. - -|<> -|View data streams and navigate to dashboards to analyze your data. - -|=== \ No newline at end of file diff --git a/docs/en/ingest-management/fleet/images/add-fleet-server-advanced.png b/docs/en/ingest-management/fleet/images/add-fleet-server-advanced.png deleted file mode 100644 index d43b3dc3b..000000000 Binary files a/docs/en/ingest-management/fleet/images/add-fleet-server-advanced.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/add-fleet-server.png b/docs/en/ingest-management/fleet/images/add-fleet-server.png deleted file mode 100644 index 6158bf5b5..000000000 Binary files a/docs/en/ingest-management/fleet/images/add-fleet-server.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/agent-health-status.png b/docs/en/ingest-management/fleet/images/agent-health-status.png deleted file mode 100644 index 34a35ab4e..000000000 Binary files a/docs/en/ingest-management/fleet/images/agent-health-status.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/collect-agent-diagnostics1.png b/docs/en/ingest-management/fleet/images/collect-agent-diagnostics1.png deleted file mode 100644 index b0c39fadc..000000000 Binary files a/docs/en/ingest-management/fleet/images/collect-agent-diagnostics1.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/collect-agent-diagnostics2.png b/docs/en/ingest-management/fleet/images/collect-agent-diagnostics2.png deleted file mode 100644 index b49d0734a..000000000 Binary files 
a/docs/en/ingest-management/fleet/images/collect-agent-diagnostics2.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/dashboard-datastream01.png b/docs/en/ingest-management/fleet/images/dashboard-datastream01.png deleted file mode 100644 index 3d971b6eb..000000000 Binary files a/docs/en/ingest-management/fleet/images/dashboard-datastream01.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/datastream-namespace.png b/docs/en/ingest-management/fleet/images/datastream-namespace.png deleted file mode 100644 index 8b47155e1..000000000 Binary files a/docs/en/ingest-management/fleet/images/datastream-namespace.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/elastic-agent-status-rule.png b/docs/en/ingest-management/fleet/images/elastic-agent-status-rule.png deleted file mode 100644 index b438ddb0f..000000000 Binary files a/docs/en/ingest-management/fleet/images/elastic-agent-status-rule.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/elastic-cloud-agent-policy.png b/docs/en/ingest-management/fleet/images/elastic-cloud-agent-policy.png deleted file mode 100644 index c71930c71..000000000 Binary files a/docs/en/ingest-management/fleet/images/elastic-cloud-agent-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-add-output-button.png b/docs/en/ingest-management/fleet/images/fleet-add-output-button.png deleted file mode 100644 index c6193c759..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-add-output-button.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-epr-proxy.png b/docs/en/ingest-management/fleet/images/fleet-epr-proxy.png deleted file mode 100644 index edf85e75d..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-epr-proxy.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-proxy-settings-ui.png 
b/docs/en/ingest-management/fleet/images/fleet-proxy-settings-ui.png deleted file mode 100644 index ec840bb4f..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-proxy-settings-ui.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-server-agent-policy-page.png b/docs/en/ingest-management/fleet/images/fleet-server-agent-policy-page.png deleted file mode 100644 index fed328d88..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-server-agent-policy-page.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-server-cloud-deployment.png b/docs/en/ingest-management/fleet/images/fleet-server-cloud-deployment.png deleted file mode 100644 index cdbcfb2f4..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-server-cloud-deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-server-configuration.png b/docs/en/ingest-management/fleet/images/fleet-server-configuration.png deleted file mode 100644 index a730b4c3c..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-server-configuration.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-server-on-prem-deployment.png b/docs/en/ingest-management/fleet/images/fleet-server-on-prem-deployment.png deleted file mode 100644 index 8df092927..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-server-on-prem-deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/fleet-server-on-prem-es-cloud.png b/docs/en/ingest-management/fleet/images/fleet-server-on-prem-es-cloud.png deleted file mode 100644 index 5e05b0f2f..000000000 Binary files a/docs/en/ingest-management/fleet/images/fleet-server-on-prem-es-cloud.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/integrations-server-hosted-container.png 
b/docs/en/ingest-management/fleet/images/integrations-server-hosted-container.png deleted file mode 100644 index da0603124..000000000 Binary files a/docs/en/ingest-management/fleet/images/integrations-server-hosted-container.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/images/upgrade-agent-custom.png b/docs/en/ingest-management/fleet/images/upgrade-agent-custom.png deleted file mode 100644 index 32e443a83..000000000 Binary files a/docs/en/ingest-management/fleet/images/upgrade-agent-custom.png and /dev/null differ diff --git a/docs/en/ingest-management/fleet/migrate-elastic-agent.asciidoc b/docs/en/ingest-management/fleet/migrate-elastic-agent.asciidoc deleted file mode 100644 index 5cf17463f..000000000 --- a/docs/en/ingest-management/fleet/migrate-elastic-agent.asciidoc +++ /dev/null @@ -1,221 +0,0 @@ -[[migrate-elastic-agent]] -= Migrate {fleet}-managed {agent}s from one cluster to another - -++++ -Migrate {agent}s -++++ - -There are situations where you may need to move your installed {agent}s from being managed in one cluster to being managed in another cluster. - -For a seamless migration, we advise that you create an identical agent policy in the new cluster that is configured in the same manner as the original cluster. There are a few methods to do this. - -This guide takes you through the steps to migrate your {agent}s by snapshotting a source cluster and restoring it on a target cluster. These instructions assume that you have an {ecloud} deployment, but they can be applied to on-premise clusters as well. - -[discrete] -[[migrate-elastic-agent-take-snapshot]] -== Take a snapshot of the source cluster - -Refer to the full {ref}/snapshot-restore.html[Snapshot and restore] documentation for full details. In short, to create a new snapshot in an {ecloud} deployment: - -. In {kib}, open the main menu, then click *Manage this deployment*. -. In the deployment menu, select *Snapshots*. -. Click *Take snapshot now*. 
-+ -[role="screenshot"] -image::images/migrate-agent-take-snapshot.png[Deployments Snapshots page] - -[discrete] -[[migrate-elastic-agent-create-target]] -== Create a new target cluster from the snapshot - -You can create a new cluster based on the snapshot taken in the previous step, and then migrate your {agent}s and {fleet} to the new cluster. For best results, it's recommended that the new target cluster be at the same version as the cluster that the agents are migrating from. - -. Open the {ecloud} console and select *Create deployment*. -. Select *Restore snapshot data*. -. In the *Restore from* field, select your source deployment. -. Choose your deployment settings and, ideally, choose the same {stack} version as the source cluster. -. Click *Create deployment*. -+ -[role="screenshot"] -image::images/migrate-agent-new-deployment.png[Create a deployment page] - -[discrete] -[[migrate-elastic-agent-target-settings]] -== Update settings in the target cluster - -When the target cluster is available, you'll need to adjust a few settings. Take some time to examine the {fleet} setup in the new cluster. - -. Open the {kib} menu and select *Fleet*. -. On the *Agents* tab, your agents should be visible; however, they'll appear as `Offline`. This is because these agents have not yet enrolled in the new, target cluster, and are still enrolled in the original, source cluster. -+ -[role="screenshot"] -image::images/migrate-agent-agents-offline.png[Agents tab in Fleet showing offline agents] - -. Open the {fleet} *Settings* tab. -. Examine the configurations captured there for {fleet}. Note that these settings are copied from the snapshot of the source cluster and may not apply to the target cluster, so they need to be modified accordingly.
-+ -In the following example, both the *Fleet Server hosts* and the *Outputs* settings are copied over from the source cluster: -+ -[role="screenshot"] -image::images/migrate-agent-host-output-settings.png[Settings tab in Fleet showing source deployment host and output settings] -+ -The next steps explain how to obtain the relevant {fleet-server} host and {es} output details applicable to the new target cluster in {ecloud}. - -[discrete] -[[migrate-elastic-agent-elasticsearch-output]] -=== Modify the {es} output - -. In the new target cluster on {ecloud}, in the *Outputs* section, on the {fleet} *Settings* tab, you will find an internal output named `Elastic Cloud internal output`. The host address is in the form: -+ -`https://.containerhost:9244` -+ -Record this `` from the target cluster. In the example shown, the ID is `fcccb85b651e452aa28703a59aea9b00`. - -. Also in the *Outputs* section, notice that the default {es} output (that was copied over from the source cluster) is also in the form: -+ -`https://.:443`. -+ -Modify the {es} output so that the cluster ID is the same as that for `Elastic Cloud internal output`. In this example we also rename the output to `New Elasticsearch`. -+ -[role="screenshot"] -image::images/migrate-agent-elasticsearch-output.png[Outputs section showing the new Elasticsearch host setting] -+ -In this example, the `New Elasticsearch` output and the `Elastic Cloud internal output` now have the same cluster ID, namely `fcccb85b651e452aa28703a59aea9b00`. - -You have now created an {es} output that agents can use to write data to the new, target cluster. For on-premise environments not using {ecloud}, you should similarly be able to use the host address of the new cluster. - -[discrete] -[[migrate-elastic-agent-fleet-host]] -=== Modify the {fleet-server} host - -Like the {es} host, the {fleet-server} host has also changed with the new target cluster. 
Note that if you're deploying {fleet-server} on premise, the host has probably not changed address and this setting does not need to be modified. We still recommend that you ensure the agents are able to reach the on-premise {fleet-server} host (which they should be able to, as they were able to connect to it prior to the migration). - -The {ecloud} {fleet-server} host has a similar format to the {es} output: - -`https://.fleet..io` - -To configure the correct {ecloud} {fleet-server} host, you will need to find the target cluster's full `deployment-id`, and use it to replace the original `deployment-id` that was copied over from the source cluster. - -The easiest way to find the `deployment-id` is from the deployment URL: - -. From the {kib} menu select *Manage this deployment*. -. Copy the deployment ID from the URL in your browser's address bar. -+ -[role="screenshot"] -image::images/migrate-agent-deployment-id.png[Deployment management page, showing the browser URL] -+ -In this example, the new deployment ID is `eed4ae8e2b604fae8f8d515479a16b7b`. -+ -Using that value for `deployment-id`, the new {fleet-server} host URL is: -+ -`https://eed4ae8e2b604fae8f8d515479a16b7b.fleet.us-central1.gcp.cloud.es.io:443` - -. In the target cluster, under *Fleet server hosts*, replace the original host URL with the new value. -+ -[role="screenshot"] -image::images/migrate-agent-fleet-server-host.png[Fleet server hosts showing the new host URL] - -[discrete] -[[migrate-elastic-agent-reset-policy]] -=== Reset the {ecloud} policy - -On your target cluster, certain settings from the original {ecloud} {agent} policy may still be retained, and need to be updated to reference the new cluster. For example, in the APM policy installed to the {ecloud} {agent} policy, the original and outdated APM URL is preserved. This can be fixed by running the `reset_preconfigured_agent_policies` API request.
Note that when you reset the policy, all APM Integration settings are reset, including the secret key or any tail-based sampling. - -To reset the {ecloud} {agent} policy: - -. Choose one of the API requests below and submit it through a terminal window. -** If you're using {kib} version 8.11 or higher, run: -+ -[source,shell] ----- -curl --request POST \ ---url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \ --u username:password \ ---header 'Content-Type: application/json' \ ---header 'kbn-xsrf: as' \ ---header 'elastic-api-version: 1' ----- -** If you're using a {kib} version below 8.11, run: -+ -[source,shell] ----- -curl --request POST \ ---url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \ --u username:password \ ---header 'Content-Type: application/json' \ ---header 'kbn-xsrf: as' ----- -+ -After running the command, your {ecloud} agent policy settings should all be updated appropriately. - -[NOTE] -==== -After running the command, a warning message may appear in {fleet} indicating that {fleet-server} is not healthy. As well, the {agent} associated with the {ecloud} agent policy may disappear from the list of agents. To remedy this, you can restart {integrations-server}: - -. From the {kib} menu, select *Manage this deployment*. -. In the deployment menu, select *Integrations Server*. -. On the *Integrations Server* page, select *Force Restart*. - -After the restart, {integrations-server} will enroll a new {agent} for the {ecloud} agent policy and {fleet-server} should return to a healthy state. -==== - -[discrete] -[[migrate-elastic-agent-confirm-policy]] -=== Confirm your policy settings - -Now that the {fleet} settings are correctly set up, it pays to ensure that the {agent} policy is also correctly pointing to the correct entities. - -. In the target cluster, go to *Fleet -> Agent policies*. -. Select a policy to verify. -. 
Open the *Settings* tab. -. Ensure that *Fleet Server*, *Output for integrations*, and *Output for agent monitoring* are all set to the newly created entities. -+ -[role="screenshot"] -image::images/migrate-agent-policy-settings.png[An agent policy's settings showing the newly created entities] - -NOTE: If you modified the {fleet-server} and the output in place, these will have been updated accordingly. However, if new entities were created, ensure that the correct ones are referenced here. - -[discrete] -[[migrate-elastic-agent-migrated-policies]] -== Agent policies in the new target cluster - -By creating the new target cluster from a snapshot, all of your policies should have been created along with all of the agents. These agents will be offline because the actual agents are not checking in with the new, target cluster (yet) and are still communicating with the source cluster. - -The agents can now be re-enrolled into these policies and migrated over to the new, target cluster. - -[discrete] -[[migrate-elastic-agent-migrated-agents]] -== Migrate {agent}s to the new target cluster - -To ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new, target cluster. - -This is best performed one policy at a time. For a given policy, you need to capture the enrollment token and the URL for the agent to connect to. You can find these by running the in-product steps to add a new agent. - -. On the target cluster, open *Fleet* and select *Add agent*. -. Select your newly created policy. -. In the section *Install {agent} on your host*, find the sample install command. This contains the details you'll need to enroll the agents, namely the enrollment token and the {fleet-server} URL. -. Copy the portion of the install command containing these values. That is, `--url=<fleet-server-url> --enrollment-token=<enrollment-token>`.
- -+ -[role="screenshot"] -image::images/migrate-agent-install-command.png[Install command from the Add Agent UI] - -. On the host machines where the current agents are installed, enroll the agents again using this copied URL and the enrollment token: -+ -[source,shell] ----- -sudo elastic-agent enroll --url=<fleet-server-url> --enrollment-token=<enrollment-token> ----- -+ -The command output should look like the following: -+ -[role="screenshot"] -image::images/migrate-agent-install-command-output.png[Install command output] - -. The agent on each host will now check into the new {fleet-server} and appear in the new target cluster. In the source cluster, the agents will go offline as they won't be sending any check-ins. -+ -[role="screenshot"] -image::images/migrate-agent-newly-enrolled-agents.png[Newly enrolled agents in the target cluster] - -. Repeat this procedure for each {agent} policy. - -If all has gone well, you've successfully migrated your {fleet}-managed {agent}s to a new cluster. diff --git a/docs/en/ingest-management/fleet/monitor-elastic-agent.asciidoc b/docs/en/ingest-management/fleet/monitor-elastic-agent.asciidoc deleted file mode 100644 index d4cedef8b..000000000 --- a/docs/en/ingest-management/fleet/monitor-elastic-agent.asciidoc +++ /dev/null @@ -1,337 +0,0 @@ -[[monitor-elastic-agent]] -= Monitor {agent}s - -{fleet} provides built-in capabilities for monitoring your fleet of {agent}s. -In {fleet}, you can: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -Agent monitoring is turned on by default in the agent policy unless you -turn it off. Want to turn off agent monitoring to stop collecting logs and -metrics? See <>. - -Want to receive an alert when your {agent} health status changes? -Refer to <> and our <>. - -For more detail about how agents communicate their status to {fleet}, refer to <>.
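The status information surfaced in the {fleet} UI can also be fetched programmatically. The following is a minimal sketch, assuming the {kib} Fleet API exposes a `GET /api/fleet/agent_status` endpoint and using a hypothetical host and credentials; it only assembles and prints the request rather than claiming this is the documented workflow:

```shell
# Hypothetical Kibana host and credentials; adjust for your deployment.
KIBANA_URL="https://my-kibana.example.com:5601"
CREDENTIALS="username:password"

# Assumed endpoint returning aggregate agent status counts.
STATUS_ENDPOINT="${KIBANA_URL}/api/fleet/agent_status"

# Assemble the request; the kbn-xsrf header is required by Kibana HTTP APIs.
REQUEST="curl -s -u ${CREDENTIALS} -H 'kbn-xsrf: true' ${STATUS_ENDPOINT}"
echo "${REQUEST}"
```

Running the printed command against a real deployment would return JSON counts of agents per status, which you could feed into your own dashboards or scripts.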
- -[discrete] -[[view-agent-status]] -= View agent status overview - -To view the overall status of your {fleet}-managed agents, in {kib}, go to -**Management -> {fleet} -> Agents**. - -[role="screenshot"] -image::images/kibana-fleet-agents.png[Agents tab showing status of each {agent}] - -IMPORTANT: The **Agents** tab in **{fleet}** displays a maximum of 10,000 agents, shown on 500 pages with 20 rows per page. -If you have more than 10,000 agents, we recommend using the filtering and sorting options described in this section to narrow the table to fewer than 10,000 rows. - -{agent}s can have the following statuses: - -|=== - -| *Healthy* | {agent}s are enrolled and checked in. There are no agent policy updates or automatic agent binary updates in progress, but the agent binary may still be out of date. {agent}s continuously check in to the {fleet-server} for required updates. - -| *Unhealthy* | {agent}s have errors or are running in a degraded state. An agent may be reported as `unhealthy` as a result of a configuration problem on the host system. For example, an {agent} may not have the correct permissions required to run an integration that has been added to the {agent} policy. In this case, you may need to investigate and address the situation. - -| *Updating* | {agent}s are updating the agent policy, updating the binary, or enrolling or unenrolling from {fleet}. - -| *Offline* | {agent}s have stayed in an unhealthy status for a period of time. Offline agents' API keys remain valid. You can still see these {agent}s in the {fleet} UI and investigate them for further diagnosis if required. - -| *Inactive* | {agent}s have been offline for longer than the time set in your <>. These {agent}s are valid, but have been removed from the main {fleet} UI. - -| *Unenrolled* | {agent}s have been manually unenrolled and their API keys have been removed from the system. You can <> an offline {agent} using {agent} actions if you determine it's offline and no longer valid.
- -These agents need to re-enroll in {fleet} to be operational again. - -|=== - -The following diagram shows the flow of {agent} statuses: - -image::images/agent-status-diagram.png[Diagram showing the flow of Fleet Agent statuses] - -To filter the list of agents by status, click the **Status** dropdown and select -one or more statuses. - -[role="screenshot"] -image::images/agent-status-filter.png[Agent Status dropdown with multiple statuses selected] - -For advanced filtering, use the search bar to create structured queries -using {kibana-ref}/kuery-query.html[{kib} Query Language]. For example, enter -`local_metadata.os.family : "darwin"` to see only agents running on macOS. - -You can also sort the list of agents by host, last activity time, or version, by clicking on the table headings for those fields. - -To perform a bulk action on more than 10,000 agents, you can select the **Select everything on all pages** button. - -[discrete] -[[view-agent-details]] -= View details for an agent - -In {fleet}, you can access the detailed status of an individual agent and the integrations that are associated with it through the agent policy. - -. In {fleet}, open the **Agents** tab. - -. In the **Host** column, click the agent's name. - -On the **Agent details** tab, the **Overview** pane shows details about the agent and its performance, including its memory and CPU usage, last activity time, and last checkin message. To access metrics visualizations, you can also <>. - -image::images/agent-detail-overview.png[Agent details overview pane with various metrics] - -The **Integrations** pane shows the status of the integrations that have been added to the agent policy. Expand any integration to view its health status. Any errors or warnings are displayed as alerts. 
- -image::images/agent-detail-integrations-health.png[Agent details integrations pane with health status] - -To gather more detail about a particular error or warning, from the **Actions** menu select **View agent JSON**. The JSON contains all of the raw agent data tracked by Fleet. - -NOTE: Currently, the **Integrations** pane shows the health status only for agent inputs. Health status is not yet available for agent outputs. - -[discrete] -[[view-agent-activity]] -= View agent activity - -You can view a chronological list of all operations performed by your {agent}s. - -On the **Agents** tab, click **Agent activity**. All agent operations are shown, beginning from the most recent, including any in progress operations. - -[role="screenshot"] -image::images/agent-activity.png[Agent activity panel, showing the operations for an {agent}] - -[discrete] -[[view-agent-logs]] -= View agent logs - -When {fleet} reports an agent status like `Offline` or `Unhealthy`, you might -want to view the agent logs to diagnose potential causes. If agent monitoring -is configured to collect logs (the default), you can view agent logs in {fleet}. - -. In {fleet}, open the **Agents** tab. - -. In the **Host** column, click the agent's name. - -. On the **Agent details** tab, verify that **Monitor logs** is enabled. If -it's not, refer to <>. - -. Click the **Logs** tab. -+ -[role="screenshot"] -image::images/view-agent-logs.png[View agent logs under agent details] - -On the **Logs** tab you can filter, search, and explore the agent logs: - -* Use the search bar to create structured queries using -{kibana-ref}/kuery-query.html[{kib} Query Language]. -* Choose one or more datasets to show logs for specific programs, such as -{filebeat} or {fleet-server}. -+ -[role="screenshot"] -image::images/kibana-fleet-datasets.png[{fleet} showing datasets for logging] -* Change the log level to filter the view by log levels. Want to see debugging -logs? Refer to <>. 
-* Change the time range to view historical logs. -* Click **Open in Logs** to tail agent log files in real time. For more -information about logging, refer to -{observability-guide}/tail-logs.html[Tail log files]. - -[discrete] -[[change-logging-level]] -= Change the logging level - -The logging level for monitored agents is set to `info` by default. You can -change the agent logging level, for example, to turn on debug logging remotely: - -. After navigating to the **Logs** tab as described in <>, -scroll down to find the **Agent logging level** setting. -+ -[role="screenshot"] -image::images/agent-set-logging-level.png[{Logs} tab showing the agent logging level setting] - -. Select an *Agent logging level*: -+ -|=== -a|`error` | Logs errors and critical errors. -a|`warning`| Logs warnings, errors, and critical errors. -a|`info`| Logs informational messages, including the number of events that are published. -Also logs any warnings, errors, or critical errors. -a|`debug`| Logs debug messages, including a detailed printout of all events flushed. Also -logs informational messages, warnings, errors, and critical errors. -|=== - -. Click **Apply changes** to apply the updated logging level to the agent. - -[discrete] -[[collect-agent-diagnostics]] -= Collect {agent} diagnostics - -{fleet} provides the ability to remotely generate and gather an {agent}'s diagnostics bundle. -An agent can gather and upload diagnostics if it is online in a `Healthy` or `Unhealthy` state. -To download the diagnostics bundle for local viewing: - -. In {fleet}, open the **Agents** tab. - -. In the **Host** column, click the agent's name. - -. Select the **Diagnostics** tab and click the **Request diagnostics .zip** button. -+ -[role="screenshot"] -image::images/collect-agent-diagnostics1.png[Collect agent diagnostics under agent details] - -. In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you'd like detailed CPU data. 
-+ -[role="screenshot"] -image::images/collect-agent-diagnostics2.png[Collect agent diagnostics confirmation pop-up] - -. Click the **Request diagnostics** button. - -When available, the new diagnostic bundle will be listed on this page, as well as any in-progress or previously collected bundles for the {agent}. - -Note that the bundles are stored in {es} and are removed automatically after 7 days. You can also delete any previously created bundle by clicking the `trash can` icon. - -[discrete] -[[view-agent-metrics]] -= View the {agent} metrics dashboard - -When agent monitoring is configured to collect metrics (the default), you can -use the **[Elastic Agent] Agent metrics** dashboard in {kib} to view details -about {agent} resource usage, event throughput, and errors. This information can -help you identify problems and make decisions about scaling your deployment. - -To view agent metrics: - -. In {fleet}, open the **Agents** tab. - -. In the **Host** column, click the agent's name. - -. On the **Agent details** tab, verify that **Monitor metrics** is enabled. If -it's not, refer to <>. - -. Click **View more agent metrics** to navigate to the -**[Elastic Agent] Agent metrics** dashboard. -+ -[role="screenshot"] -image::images/selected-agent-metrics-dashboard.png[Screen capture showing {agent} metrics] - -The dashboard uses standard {kib} visualizations that you can extend to meet -your needs. - -[discrete] -[[change-agent-monitoring]] -= Change {agent} monitoring settings - -Agent monitoring is turned on by default in the agent policy. To change agent -monitoring settings for all agents enrolled in a specific agent policy: - -. In {fleet}, open the **Agent policies** tab. - -. Click the agent policy to edit it, then click **Settings**. - -. Under **Agent monitoring**, deselect (or select) one or both of these -settings: **Collect agent logs** and **Collect agent metrics**. - -. 
Under **Advanced monitoring options** you can configure additional settings including an HTTP monitoring endpoint, diagnostics rate limiting, and diagnostics file upload limits. Refer to <> for details. - -. Save your changes. - -To turn off agent monitoring when creating a new agent policy: - -. In the **Create agent policy** flyout, expand **Advanced options**. - -. Under **Agent monitoring**, deselect **Collect agent logs** and -**Collect agent metrics**. - -. When you're done configuring the agent policy, click **Create agent policy**. - -[discrete] -[[external-elasticsearch-monitoring]] -= Send {agent} monitoring data to a remote {es} cluster - -You may want to store all of the health and status data about your {agents} in a remote {es} cluster, so that it's separate and independent from the deployment where you use {fleet} to manage the agents. - -To do so, follow the steps in <>. After the new output is configured, follow the steps to update the {agent} policy and make sure that the **Output for agent monitoring** setting is enabled. {agent} monitoring data will use the remote {es} output that you configured. - -[discrete] -[[fleet-alerting]] -= Enable alerts and ML jobs based on {fleet} and {agent} status - -You can access the health status of {fleet}-managed {agents} and other {fleet} settings through internal {fleet} indices. This enables you to leverage various applications within the {stack} that can be triggered by the provided information. For instance, you can now create alerts and machine learning (ML) jobs based on these specific fields. Refer to the {kibana-ref}/alerting-getting-started.html[Alerting documentation] or see the <> on this page to learn how to define rules that can trigger actions when certain conditions are met. - -This functionality allows you to effectively track an agent's status, and identify scenarios where it has gone offline, is experiencing health issues, or is facing challenges related to input or output. 
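As a concrete starting point, the per-state agent counts can be read straight from the `metrics-fleet_server.agent_status-default` data stream with an ordinary search request. This is a minimal sketch with a hypothetical {es} host and credentials; it only builds and prints the request:

```shell
# Hypothetical Elasticsearch host; adjust for your deployment.
ES_URL="https://my-es.example.com:9243"

# Query body: fetch the most recent agent-status document, keeping only
# the offline and unhealthy counts.
QUERY_BODY='{"size": 1, "sort": [{"@timestamp": "desc"}], "_source": ["fleet.agents.offline", "fleet.agents.unhealthy"]}'

# Assemble the search request against the Fleet status data stream.
REQUEST="curl -s -u username:password -H 'Content-Type: application/json' -d '${QUERY_BODY}' ${ES_URL}/metrics-fleet_server.agent_status-default/_search"
echo "${REQUEST}"
```

The same query body could back an **Elasticsearch query** rule, which is the mechanism used in the alerting example below.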
- -The following datastreams and fields are available. - -Datastream:: -`metrics-fleet_server.agent_status-default` -+ -This data stream publishes the number of {agents} in various states. -+ -**Fields** -+ - * `@timestamp` - * `fleet.agents.total` - A count of all agents - * `fleet.agents.enrolled` - A count of all agents currently enrolled - * `fleet.agents.unenrolled` - A count of agents currently unenrolled - * `fleet.agents.healthy` - A count of agents currently healthy - * `fleet.agents.offline` - A count of agents currently offline - * `fleet.agents.updating` - A count of agents currently in the process of updating - * `fleet.agents.unhealthy` - A count of agents currently unhealthy - * `fleet.agents.inactive` - A count of agents currently inactive -+ -NOTE: Other fields regarding agent status, based on input and output health, are currently under consideration for future development. - -Datastream:: -`metrics-fleet_server.agent_versions-default` -+ -This data stream publishes a separate document for each version number, with a count of enrolled agents on that version. -+ -**Fields** -+ - * `@timestamp` - * `fleet.agent.version` - A keyword field containing the version number - * `fleet.agent.count` - A count of agents on the specified version - -[discrete] -[[fleet-alerting-example]] -== Example: Enable an alert for offline {agent}s - -You can set up an alert to notify you when one or more {agent}s go offline: - -. In {kib}, navigate to **Management > Stack Management > Rules**. -. Click **Create rule**. -. Select **Elasticsearch query** as the rule type. -. Choose a name for the rule, for example `Elastic Agent status`. -. Select **KQL or Lucene** as the query type. -. Select `DATA VIEW metrics-*` as the data view. -. Define your query, for example: `fleet.agents.offline >= 1`. -. Set the alert group, threshold, and time window.
For example: -** WHEN: `count()` -** OVER: `all documents` -** IS ABOVE: `0` -** FOR THE LAST `5 minutes` -+ -This generates an alert when the `fleet.agents.offline` field reports one or more agents as offline during the last five minutes. -. Set the number of documents to send, for example: -** SIZE: 100 -. Set **Check every** to the frequency at which the rule condition should be evaluated. The default setting is one minute. -. Select an action to occur when the rule conditions are met. For example, to send an email when the alert occurs, select the Email connector type and specify: -** Email connector: `Elastic-Cloud-SMTP` -** Action frequency: `For each alert` and `On check intervals` -** Run when: `Query matched` -** To: -** Subject: -. Click **Save**. - -The new rule will be enabled and an email will be sent to the specified recipient when the alert conditions are met. - -From the **Rules** page you can select the rule you created to enable or disable it, and to view the rule details including a list of active alerts and an alert history. - -image::images/elastic-agent-status-rule.png[A screen capture showing the details for the new Elastic Agent status rule] diff --git a/docs/en/ingest-management/fleet/set-elastic-agent-enrollment-timeout.asciidoc b/docs/en/ingest-management/fleet/set-elastic-agent-enrollment-timeout.asciidoc deleted file mode 100644 index a2759c7be..000000000 --- a/docs/en/ingest-management/fleet/set-elastic-agent-enrollment-timeout.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[[set-enrollment-timeout]] -= Set unenrollment timeout for ephemeral hosts - -++++ -Set unenrollment timeout -++++ - -IMPORTANT: This setting is only valid for {fleet}-managed agents. - -Set the unenrollment timeout in the agent policy to specify the amount of time -{fleet} waits for an {agent} to check in before forcing unenrollment.
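If you prefer to script this rather than use the UI, the timeout can likely be set through the {fleet} agent policy API. The following is a hedged sketch, assuming a `PUT /api/fleet/agent_policies/<id>` endpoint that accepts an `unenroll_timeout` field; the host, credentials, policy ID, and policy name are all hypothetical, and the script only builds and prints the request:

```shell
# Hypothetical Kibana host and policy ID.
KIBANA_URL="https://my-kibana.example.com:5601"
POLICY_ID="ephemeral-hosts-policy"

# Timeout in seconds, matching the example value used in this section.
TIMEOUT_SECONDS=1440

# Assumed request body for updating the policy's unenrollment timeout.
BODY="{\"name\": \"Ephemeral hosts\", \"namespace\": \"default\", \"unenroll_timeout\": ${TIMEOUT_SECONDS}}"
echo "curl -s -X PUT -u username:password -H 'Content-Type: application/json' -H 'kbn-xsrf: true' -d '${BODY}' ${KIBANA_URL}/api/fleet/agent_policies/${POLICY_ID}"
```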
- -This setting is useful for {agent}s running on ephemeral hosts, such as docker -containers, which can disappear at any time. - -To set the unenrollment timeout: - -. In *{fleet}*, select *Agent policies*. - -. Click the policy name, then click *Settings*. - -. In the *Unenrollment timeout* field, enter a value in seconds. -+ -Set the unenrollment timeout to a value that's high enough to prevent {fleet} from -accidentally unenrolling agents during a temporary outage. For example, `1440`. - -[[why-set-timeout]] -== Why set an unenrollment timeout? - -Containers might be killed, deleted, or moved, preventing agents from -unenrolling gracefully. If there's no unenrollment timeout specified, the agent -will continue to appear as `Offline` in {fleet}, and its API keys will remain -active until explicitly revoked. To remove these "orphan" agents from the main -Agent list and make them inactive, you need to force unenroll them in {fleet}. -However, in some environments, like {ess} on {ecloud}, you may not be allowed to -unenroll agents, which means the agents will appear as `Offline` indefinitely. - -To avoid this problem, set the unenrollment timeout in the agent policy for -{agent}s running on ephemeral hosts. If an {agent} fails to check in with -{fleet} during the timeout period, {fleet} will: - -* force unenroll the agent -* set the agent status to `Inactive` -* invalidate the API keys used by the agent - -The agent's ID and logs will be preserved in {es}, but container logs will be -lost. - -NOTE: The {ecloud} agent policy, which is used by {fleet-server} in -{ecloud}, already has an unenrollment timeout specified. You cannot change this -setting. - -== What if I don't set an unenrollment timeout? - -If there is no unenrollment timeout specified and an {agent} fails to check in, -the agent remains enrolled in {fleet} and its status is set to `Offline`. 
When -the agent checks in with {fleet} again, the status changes to `Healthy`, and the -agent receives updates, as needed. - -In most cases, you should avoid setting an unenrollment timeout for {agent}s -running on laptops or servers that may be shut down, paused, or restarted for -short periods of time before coming back online. - -== What if an {agent} is accidentally unenrolled? - -You must reenroll the {agent} in {fleet} to resume normal functioning. The -{agent} will enroll using a new agent ID and API keys, which means the Agent -will appear as a new entry in the Agents list. When you click to see -the Agent details, including logs, you'll see details for the new agent. - -If you reenroll an {agent} inside the same container, it uses the local log -files, disk queue, and registry from the previous run and resumes sending data -to {es}. This minimizes data loss and duplication. - -== How do I see logs for inactive {agent}s? - -Logs for inactive {agent}s are preserved in {es}. To view them: - -. In *{fleet}*, select *Agents*. - -. In the *Status* menu, select *Inactive*. - -. Click the name of the inactive agent to see agent details, then click *Logs*. - -. If necessary, adjust the time range to see older logs. - diff --git a/docs/en/ingest-management/fleet/set-inactivity-timeout.asciidoc b/docs/en/ingest-management/fleet/set-inactivity-timeout.asciidoc deleted file mode 100644 index 8680bfb04..000000000 --- a/docs/en/ingest-management/fleet/set-inactivity-timeout.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[set-inactivity-timeout]] -= Set inactivity timeout - -The inactivity timeout moves {agent}s to inactive status after being offline for a set amount of time. -Inactive {agent}s are still valid {agent}s, but are removed from the main {fleet} UI, helping you declutter it and better manage your {agent}s. - -When {fleet-server} receives a check-in from an inactive {agent}, it returns to healthy status.
- -For example, if an employee is on holiday with their laptop off, -the {agent} will transition to offline then inactive once the inactivity timeout limit is reached. -This prevents the inactive {agent} from cluttering the {fleet} UI. -When the employee returns, the {agent} checks in and returns to healthy status with valid API keys. - -If an {agent} is no longer valid, you can manually <> inactive {agent}s to revoke the API keys. -Unenrolled agents need to be re-enrolled to be operational again. - -For more on {agent} statuses, see <>. - - -[discrete] -[[setting-inactivity-timeout]] -== Set the inactivity timeout - -Set the inactivity timeout in the {agent} policy to the amount of time after which you want an offline {agent} to become inactive. - -To set the inactivity timeout: - -. In *{fleet}*, select *Agent policies*. - -. Click the policy name, then click *Settings*. - -. In the *Inactivity timeout* field, enter a value in seconds. The default value is 1209600 seconds or two weeks. \ No newline at end of file diff --git a/docs/en/ingest-management/fleet/unenroll-elastic-agent.asciidoc b/docs/en/ingest-management/fleet/unenroll-elastic-agent.asciidoc deleted file mode 100644 index ea36d0871..000000000 --- a/docs/en/ingest-management/fleet/unenroll-elastic-agent.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[unenroll-elastic-agent]] -= Unenroll {agent}s - -You can unenroll {agent}s to invalidate the API key used to connect to {es}. - -. In {fleet}, select *Agents*. - -. To unenroll a single agent, choose *Unenroll agent* from the *Actions* menu -next to the agent you want to unenroll. - -. To unenroll multiple agents, bulk select the agents and click -*Unenroll agents*. -+ -Unable to select multiple agents? Confirm that your subscription level supports -selective agent unenrollment in {fleet}. For more information, refer to -{subscriptions}[{stack} subscriptions]. - -Unenrolled agents will continue to run, but will not be able to send data. 
They will show this error instead: `invalid api key to authenticate with fleet`. - -TIP: If unenrollment hangs, select *Force unenroll* to invalidate all API -keys related to the agent and change the status to `inactive` so that the agent -no longer appears in {fleet}. diff --git a/docs/en/ingest-management/fleet/upgrade-elastic-agent.asciidoc b/docs/en/ingest-management/fleet/upgrade-elastic-agent.asciidoc deleted file mode 100644 index 3cca74e96..000000000 --- a/docs/en/ingest-management/fleet/upgrade-elastic-agent.asciidoc +++ /dev/null @@ -1,313 +0,0 @@ -[[upgrade-elastic-agent]] -= Upgrade {fleet}-managed {agent}s - -++++ -Upgrade {agent}s -++++ - -TIP: Want to upgrade a standalone agent instead? See <>. - -With {fleet} upgrade capabilities, you can view and select agents that are out -of date, and trigger selected agents to download, install, and run the new -version. You can trigger upgrades to happen immediately, or specify a -maintenance window during which the upgrades will occur. If your {stack} -subscription level supports it, you can schedule upgrades to occur at a specific -date and time. - -In most failure cases the {agent} may retry an upgrade after a short wait. The -wait durations between retries are: 1m, 5m, 10m, 15m, 30m, and 1h. During this -time, the {agent} may show up as "retrying" in the {fleet} UI. Additionally, if an agent -upgrade is detected to have stalled, you can restart the upgrade process -for a <> or in bulk for -<>. - -This approach simplifies the process of keeping your agents up to date. It also -saves you time because you don't need third-party tools or processes to -manage upgrades. - -By default, {agent}s require internet access to perform binary upgrades from -{fleet}. However, you can host your own artifacts repository and configure -{agent}s to download binaries from it. For more information, refer to -<>. - -NOTE: The upgrade feature is not supported for upgrading DEB/RPM packages or Docker images.
-Refer to <> to upgrade a DEB or RPM package manually. - -For a detailed view of the {agent} upgrade process and the interactions between {fleet}, {agent}, and {es}, refer to the link:https://github.com/elastic/elastic-agent/blob/main/docs/upgrades.md[Communications amongst components] diagram in the `elastic-agent` GitHub repository. - -[discrete] -[[upgrade-agent-restrictions]] -== Restrictions - -Note the following restrictions when upgrading an {agent}: - -* {agent} cannot be upgraded to a version higher than the highest currently installed version of {fleet-server}. When you upgrade a set of {agents} that are currently at the same version, you should first upgrade any agents that are acting as {fleet-server} (any agents that have a {fleet-server} policy associated with them). -* To be upgradeable, {agent} must not be running inside a container. -* To be upgradeable in a Linux environment, {agent} must be running as a service. The Linux Tar install instructions for {agent} provided in {fleet} include the commands to run it as a service. {agent} RPM and DEB system packages cannot be upgraded through {fleet}. - -These restrictions apply whether you are upgrading {agents} individually or in bulk. If an upgrade isn't eligible, {fleet} generates a warning message when you attempt the upgrade. - -[discrete] -[[upgrade-agent]] -== Upgrading {agent} - -To upgrade your {agent}s, go to *Management > {fleet} > Agents* in {kib}. You -can perform the following upgrade-related actions: - -[options="header"] -|=== -| User action | Result - -|<> -|Upgrade a single agent to a specific version. - -|<> -|Do a rolling upgrade of multiple agents over a specific time period. - -|<> -|Schedule an upgrade of one or more agents to begin at a specific time. - -|<> -|View the detailed status of an agent upgrade, including upgrade metrics and agent logs. - -|<> -|Restart an upgrade process that has stalled for a single agent.
- -|<> -|Do a bulk restart of the upgrade process for a set of agents. - -|=== - -[discrete] -[[upgrade-an-agent]] -== Upgrade a single {agent} - -. On the **Agents** tab, agents that can be upgraded are identified with an **Upgrade available** indicator. -+ -[role="screenshot"] -image::images/upgrade-available-indicator.png[Indicator on the UI showing that the agent can be upgraded] -+ -You can also click the **Upgrade available** button to filter the list of agents to only those that currently can be upgraded. -. From the **Actions** menu next to the agent, choose **Upgrade agent**. -+ -[role="screenshot"] -image::images/upgrade-single-agent.png[Menu for upgrading a single {agent}] - -. In the Upgrade agent window, select or specify an upgrade version and click -**Upgrade agent**. -+ -In certain cases, the latest available {agent} version may not be recognized by {kib}. For instance, this occurs when the {kib} version is lower than the {agent} version. You can specify a custom version for {agent} to upgrade to by entering the version into the *Upgrade version* text field. -+ -[role="screenshot"] -image::images/upgrade-agent-custom.png[Menu for upgrading a single {agent}] - -[discrete] -[[rolling-agent-upgrade]] -== Do a rolling upgrade of multiple {agent}s - -You can do rolling upgrades to avoid exhausting network resources when updating -a large number of {agent}s. - -. On the **Agents** tab, select multiple agents, and click **Actions**. - -. From the **Actions** menu, choose to upgrade the agents. - -. In the Upgrade agents window, select an upgrade version. - -. Select the amount of time available for the maintenance window. The upgrades -are spread out uniformly across this maintenance window to avoid exhausting -network resources. -+ -To force selected agents to upgrade immediately when the upgrade is -triggered, select **Immediately**. Avoid using this setting for batches of more -than 10 agents. - -. Upgrade the agents.
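A rolling upgrade like the one above can also be triggered from a script. The following is a hedged sketch, assuming the {fleet} API exposes a `POST /api/fleet/agents/bulk_upgrade` endpoint that accepts a `rollout_duration_seconds` field; the host, credentials, and agent IDs are hypothetical, and the script only builds and prints the request:

```shell
# Hypothetical Kibana host; endpoint and field names are assumptions.
KIBANA_URL="https://my-kibana.example.com:5601"
VERSION="8.12.0"
WINDOW_SECONDS=3600   # spread upgrades across a one-hour maintenance window

# Assumed request body: agent IDs, target version, and rollout window.
BODY="{\"agents\": [\"agent-id-1\", \"agent-id-2\"], \"version\": \"${VERSION}\", \"rollout_duration_seconds\": ${WINDOW_SECONDS}}"
echo "curl -s -X POST -u username:password -H 'Content-Type: application/json' -H 'kbn-xsrf: true' -d '${BODY}' ${KIBANA_URL}/api/fleet/agents/bulk_upgrade"
```

Omitting the rollout window would correspond to the **Immediately** option in the UI, which the text above advises against for batches of more than 10 agents.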
- -[discrete] -[[schedule-agent-upgrade]] -== Schedule an upgrade - -. On the **Agents** tab, select one or more agents, and click **Actions**. - -. From the **Actions** menu, choose to schedule an upgrade. -+ -[role="screenshot"] -image::images/schedule-upgrade.png[Menu for scheduling {agent} upgrades] -+ -If the schedule option is grayed out, it may not be available at your -subscription level. For more information, refer to {subscriptions}[{stack} -subscriptions]. - -. In the Upgrade window, select an upgrade version. - -. Select a maintenance window. For more information, refer to -<>. - -. Set the date and time when you want the upgrade to begin. - -. Click **Schedule**. - -[discrete] -[[view-upgrade-status]] -== View upgrade status - -On the Agents tab, when you trigger an upgrade, agents that are upgrading have the status `Updating` until the upgrade is complete, and then the status changes back to `Healthy`. - -Agents on version 8.12 and higher that are currently upgrading additionally show a detailed upgrade status indicator. - -[role="screenshot"] -image::images/upgrade-states.png[Detailed state of an upgrading agent] - -The following table explains the upgrade states in the order that they can occur. - -.{agent} upgrade states -|=== -| State | Description - -| Upgrade requested | {agent} has received the upgrade notice from {fleet}. -| Upgrade scheduled | {agent} has received the upgrade notice from {fleet} and the upgrade will start at the indicated time. -| Upgrade downloading | {agent} is downloading the archive containing the new version artifact. -| Upgrade extracting | {agent} is extracting the new version artifact from the downloaded archive. -| Upgrade replacing | {agent} is currently replacing the former, pre-upgrade agent artifact with the new one. -| Upgrade restarting | {agent} has been replaced with a new version and is now restarting in order to apply the update. 
-| Upgrade monitoring | The newly upgraded {agent} has started and is being monitored for errors.
-| Upgrade rolled back | The upgrade was unsuccessful. {agent} is being rolled back to the former, pre-upgrade version.
-| Upgrade failed | An error has been detected in the newly upgraded {agent} and the attempt to roll the upgrade back to the previous version has failed.
-
-|===
-
-Under routine circumstances an {agent} upgrade happens quickly, and you typically will not see the agent transition through each of the upgrade states. The detailed upgrade status is especially useful when you need to diagnose an agent that has become stuck, or only appears to be stuck, during the upgrade process.
-
-Next to the upgrade status indicator, you can hover your cursor over the information icon to get more detail about the upgrade.
-
-[role="screenshot"]
-image::images/upgrade-detailed-state01.png[Granular upgrade details shown as hover text (agent has requested an upgrade)]
-
-[role="screenshot"]
-image::images/upgrade-detailed-state02.png[Granular upgrade details shown as hover text (agent is restarting to apply the update)]
-
-Note that when you upgrade agents from versions below 8.12, the upgrade details are not provided.
-
-[role="screenshot"]
-image::images/upgrade-non-detailed.png[An earlier release agent showing only the updating state without additional details]
-
-When upgrading many agents, you can fine-tune the maintenance window by
-viewing stats and metrics about the upgrade:
-
-. On the **Agents** tab, click the host name to view agent details. If you
-don't see the host name, try refreshing the page.
-. Click **View more agent metrics** to open the **[{agent}] Agent metrics** dashboard.
-
-If an upgrade appears to have stalled, you can <>.
-
-If an upgrade fails, you can view the agent logs to find the reason:
-
-.. In {fleet}, in the **Host** column, click the agent's name.
-.. Open the **Logs** tab.
-.. Search for failures.
-+
-[role="screenshot"]
-image::images/upgrade-failure.png[Agent logs showing upgrade failure]
-
-[discrete]
-[[restart-upgrade-single]]
-== Restart an upgrade for a single agent
-
-An {agent} upgrade may sometimes stall. This can happen for various
-reasons, such as network connectivity issues or a delayed shutdown.
-
-When an {agent} upgrade is detected to be stuck, a warning indicator
-appears in the UI. When this occurs, you can restart the upgrade from either the
-*Agents* tab on the main {fleet} page or from the details page for any individual
-agent.
-
-Note that a 10-minute cooldown period is required between restart attempts.
-After launching a restart action, you need to wait for the cooldown to complete before
-initiating another restart.
-
-Restart from the main {fleet} page:
-
-. From the **Actions** menu next to an agent that is stuck in an `Updating`
-state, choose **Restart upgrade**.
-. In the **Restart upgrade** window, select an upgrade version and click
-**Upgrade agent**.
-
-Restart from an agent details page:
-
-. In {fleet}, in the **Host** column, click the agent's name. On the
-**Agent details** tab, a warning notice appears if the agent is detected to have
-stalled during an upgrade.
-. Click *Restart upgrade*.
-. In the **Restart upgrade** window, select an upgrade version and click
-**Upgrade agent**.
-
-[discrete]
-[[restart-upgrade-multiple]]
-== Restart an upgrade for multiple agents
-
-When the upgrade process for multiple agents is detected to have stalled,
-you can restart the upgrades in bulk. As with
-<>,
-a 10-minute cooldown period is enforced between restarts.
-
-. On the **Agents** tab, select any set of agents that are indicated to be stuck, and click **Actions**.
-. From the **Actions** menu, select **Restart upgrade agents**.
-. In the **Restart upgrade...** window, select an upgrade version.
-. Select the amount of time available for the maintenance window. 
The upgrades
-are spread out uniformly across this maintenance window to avoid exhausting
-network resources.
-+
-To force selected agents to upgrade immediately when the upgrade is
-triggered, select **Immediately**. Avoid using this setting for batches of more
-than 10 agents.
-. Restart the upgrades.
-
-[discrete]
-[[upgrade-system-packages]]
-== Upgrade RPM and DEB system packages
-
-If you have installed and enrolled {agent} using either a DEB (for a Debian-based Linux distribution) or RPM (for a RedHat-based Linux distribution) install package, the upgrade cannot be managed by {fleet}.
-Instead, you can perform the upgrade using the following steps.
-
-For installation steps refer to <>.
-
-[discrete]
-=== Upgrade a DEB {agent} installation
-
-. Download the {agent} Debian install package for the release that you want to upgrade to:
-+
-[source,terminal,subs="attributes"]
-----
-curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-amd64.deb
-----
-
-. Upgrade {agent} to the target release:
-+
-[source,terminal,subs="attributes"]
-----
-sudo dpkg -i elastic-agent-{version}-amd64.deb
-----
-
-. Confirm in {fleet} that the agent has been upgraded to the target version.
-Note that the **Upgrade agent** option in the **Actions** menu next to the agent will be disabled since {fleet}-managed upgrades are not supported for this package type.
-
-[discrete]
-=== Upgrade an RPM {agent} installation
-
-. Download the {agent} RPM install package for the release that you want to upgrade to:
-+
-[source,terminal,subs="attributes"]
-----
-curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-x86_64.rpm
-----
-
-. Upgrade {agent} to the target release:
-+
-[source,terminal,subs="attributes"]
-----
-sudo rpm -U elastic-agent-{version}-x86_64.rpm
-----
-
-. Confirm in {fleet} that the agent has been upgraded to the target version. 
-Note that the **Upgrade agent** option in the **Actions** menu next to the agent will be disabled since {fleet}-managed upgrades are not supported for this package type.
diff --git a/docs/en/ingest-management/images/add-agent-to-hosts.png b/docs/en/ingest-management/images/add-agent-to-hosts.png
deleted file mode 100644
index 7ad181667..000000000
Binary files a/docs/en/ingest-management/images/add-agent-to-hosts.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-fleet-server-to-policy.png b/docs/en/ingest-management/images/add-fleet-server-to-policy.png
deleted file mode 100644
index eb2976d5f..000000000
Binary files a/docs/en/ingest-management/images/add-fleet-server-to-policy.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-integration-standalone.png b/docs/en/ingest-management/images/add-integration-standalone.png
deleted file mode 100644
index 0ccb1a258..000000000
Binary files a/docs/en/ingest-management/images/add-integration-standalone.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-integration.png b/docs/en/ingest-management/images/add-integration.png
deleted file mode 100644
index 86de6040a..000000000
Binary files a/docs/en/ingest-management/images/add-integration.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-logstash-output.png b/docs/en/ingest-management/images/add-logstash-output.png
deleted file mode 100644
index 39eaf6e53..000000000
Binary files a/docs/en/ingest-management/images/add-logstash-output.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-processor.png b/docs/en/ingest-management/images/add-processor.png
deleted file mode 100644
index 1d7330d40..000000000
Binary files a/docs/en/ingest-management/images/add-processor.png and /dev/null differ
diff --git a/docs/en/ingest-management/images/add-remove-tags.png b/docs/en/ingest-management/images/add-remove-tags.png
deleted file mode 100644
index 026568033..000000000
Binary files a/docs/en/ingest-management/images/add-remove-tags.png and /dev/null differ diff --git a/docs/en/ingest-management/images/add_resource_metadata.png b/docs/en/ingest-management/images/add_resource_metadata.png deleted file mode 100644 index 67a6e2c2d..000000000 Binary files a/docs/en/ingest-management/images/add_resource_metadata.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-activity.png b/docs/en/ingest-management/images/agent-activity.png deleted file mode 100644 index 1802173ef..000000000 Binary files a/docs/en/ingest-management/images/agent-activity.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-architecture.png b/docs/en/ingest-management/images/agent-architecture.png deleted file mode 100644 index e667e5955..000000000 Binary files a/docs/en/ingest-management/images/agent-architecture.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-detail-integrations-health.png b/docs/en/ingest-management/images/agent-detail-integrations-health.png deleted file mode 100644 index ec0aa5b96..000000000 Binary files a/docs/en/ingest-management/images/agent-detail-integrations-health.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-detail-overview.png b/docs/en/ingest-management/images/agent-detail-overview.png deleted file mode 100644 index bae0cf5e9..000000000 Binary files a/docs/en/ingest-management/images/agent-detail-overview.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-metrics-dashboard.png b/docs/en/ingest-management/images/agent-metrics-dashboard.png deleted file mode 100644 index 65f3754b8..000000000 Binary files a/docs/en/ingest-management/images/agent-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-monitoring-assets.png b/docs/en/ingest-management/images/agent-monitoring-assets.png deleted file mode 100644 index 33649b124..000000000 Binary files 
a/docs/en/ingest-management/images/agent-monitoring-assets.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-monitoring-settings.png b/docs/en/ingest-management/images/agent-monitoring-settings.png deleted file mode 100644 index f8fe9b74f..000000000 Binary files a/docs/en/ingest-management/images/agent-monitoring-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-output-settings.png b/docs/en/ingest-management/images/agent-output-settings.png deleted file mode 100644 index e9b234710..000000000 Binary files a/docs/en/ingest-management/images/agent-output-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-policy-custom-field.png b/docs/en/ingest-management/images/agent-policy-custom-field.png deleted file mode 100644 index 5c66512fd..000000000 Binary files a/docs/en/ingest-management/images/agent-policy-custom-field.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-privilege-mode.png b/docs/en/ingest-management/images/agent-privilege-mode.png deleted file mode 100644 index b6596cb51..000000000 Binary files a/docs/en/ingest-management/images/agent-privilege-mode.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-proxy-server-managed-deployment.png b/docs/en/ingest-management/images/agent-proxy-server-managed-deployment.png deleted file mode 100644 index d4eca6801..000000000 Binary files a/docs/en/ingest-management/images/agent-proxy-server-managed-deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-proxy-server.png b/docs/en/ingest-management/images/agent-proxy-server.png deleted file mode 100644 index ef94ca561..000000000 Binary files a/docs/en/ingest-management/images/agent-proxy-server.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-set-logging-level.png b/docs/en/ingest-management/images/agent-set-logging-level.png deleted file mode 100644 index 
f90e446f0..000000000 Binary files a/docs/en/ingest-management/images/agent-set-logging-level.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-status-diagram.png b/docs/en/ingest-management/images/agent-status-diagram.png deleted file mode 100644 index abbdd2a03..000000000 Binary files a/docs/en/ingest-management/images/agent-status-diagram.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-status-filter.png b/docs/en/ingest-management/images/agent-status-filter.png deleted file mode 100644 index 78624b971..000000000 Binary files a/docs/en/ingest-management/images/agent-status-filter.png and /dev/null differ diff --git a/docs/en/ingest-management/images/agent-tags.png b/docs/en/ingest-management/images/agent-tags.png deleted file mode 100644 index fcfc82ee4..000000000 Binary files a/docs/en/ingest-management/images/agent-tags.png and /dev/null differ diff --git a/docs/en/ingest-management/images/apply-agent-policy.png b/docs/en/ingest-management/images/apply-agent-policy.png deleted file mode 100644 index 694e7f150..000000000 Binary files a/docs/en/ingest-management/images/apply-agent-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/ca-certs.png b/docs/en/ingest-management/images/ca-certs.png deleted file mode 100644 index a3c63403a..000000000 Binary files a/docs/en/ingest-management/images/ca-certs.png and /dev/null differ diff --git a/docs/en/ingest-management/images/ca.png b/docs/en/ingest-management/images/ca.png deleted file mode 100644 index 02a8d4bd9..000000000 Binary files a/docs/en/ingest-management/images/ca.png and /dev/null differ diff --git a/docs/en/ingest-management/images/client-certs.png b/docs/en/ingest-management/images/client-certs.png deleted file mode 100644 index c71b91d0b..000000000 Binary files a/docs/en/ingest-management/images/client-certs.png and /dev/null differ diff --git a/docs/en/ingest-management/images/component-templates-list.png 
b/docs/en/ingest-management/images/component-templates-list.png deleted file mode 100644 index de5c9d26e..000000000 Binary files a/docs/en/ingest-management/images/component-templates-list.png and /dev/null differ diff --git a/docs/en/ingest-management/images/copy-api-key.png b/docs/en/ingest-management/images/copy-api-key.png deleted file mode 100644 index a5b69c3ca..000000000 Binary files a/docs/en/ingest-management/images/copy-api-key.png and /dev/null differ diff --git a/docs/en/ingest-management/images/create-agent-policy.png b/docs/en/ingest-management/images/create-agent-policy.png deleted file mode 100644 index c5fd3df95..000000000 Binary files a/docs/en/ingest-management/images/create-agent-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/create-component-template.png b/docs/en/ingest-management/images/create-component-template.png deleted file mode 100644 index 0e8600c27..000000000 Binary files a/docs/en/ingest-management/images/create-component-template.png and /dev/null differ diff --git a/docs/en/ingest-management/images/create-index-template.png b/docs/en/ingest-management/images/create-index-template.png deleted file mode 100644 index f39eb2d96..000000000 Binary files a/docs/en/ingest-management/images/create-index-template.png and /dev/null differ diff --git a/docs/en/ingest-management/images/create-standalone-agent-role.png b/docs/en/ingest-management/images/create-standalone-agent-role.png deleted file mode 100644 index c59ffe896..000000000 Binary files a/docs/en/ingest-management/images/create-standalone-agent-role.png and /dev/null differ diff --git a/docs/en/ingest-management/images/create-token.png b/docs/en/ingest-management/images/create-token.png deleted file mode 100644 index d139f8357..000000000 Binary files a/docs/en/ingest-management/images/create-token.png and /dev/null differ diff --git a/docs/en/ingest-management/images/data-stream-info.png b/docs/en/ingest-management/images/data-stream-info.png deleted 
file mode 100644 index ff57ec44c..000000000 Binary files a/docs/en/ingest-management/images/data-stream-info.png and /dev/null differ diff --git a/docs/en/ingest-management/images/data-stream-overview.png b/docs/en/ingest-management/images/data-stream-overview.png deleted file mode 100644 index 503661862..000000000 Binary files a/docs/en/ingest-management/images/data-stream-overview.png and /dev/null differ diff --git a/docs/en/ingest-management/images/download-agent-policy.png b/docs/en/ingest-management/images/download-agent-policy.png deleted file mode 100644 index b7126d395..000000000 Binary files a/docs/en/ingest-management/images/download-agent-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-edit-proxy-secure-settings.png b/docs/en/ingest-management/images/elastic-agent-edit-proxy-secure-settings.png deleted file mode 100644 index 29ae77999..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-edit-proxy-secure-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-proxy-edit-agent-binary-source.png b/docs/en/ingest-management/images/elastic-agent-proxy-edit-agent-binary-source.png deleted file mode 100644 index 1a60e023c..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-proxy-edit-agent-binary-source.png and /dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-proxy-edit-fleet-server.png b/docs/en/ingest-management/images/elastic-agent-proxy-edit-fleet-server.png deleted file mode 100644 index a389a242d..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-proxy-edit-fleet-server.png and /dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-proxy-edit-output.png b/docs/en/ingest-management/images/elastic-agent-proxy-edit-output.png deleted file mode 100644 index 3828393de..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-proxy-edit-output.png and 
/dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-proxy-edit-proxy.png b/docs/en/ingest-management/images/elastic-agent-proxy-edit-proxy.png deleted file mode 100644 index abf40d5fa..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-proxy-edit-proxy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/elastic-agent-proxy-gateway-secure.png b/docs/en/ingest-management/images/elastic-agent-proxy-gateway-secure.png deleted file mode 100644 index b40a2598a..000000000 Binary files a/docs/en/ingest-management/images/elastic-agent-proxy-gateway-secure.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-agents.png b/docs/en/ingest-management/images/fleet-agents.png deleted file mode 100644 index 9d453aeba..000000000 Binary files a/docs/en/ingest-management/images/fleet-agents.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-policy-hidden-secret.png b/docs/en/ingest-management/images/fleet-policy-hidden-secret.png deleted file mode 100644 index a39fc3483..000000000 Binary files a/docs/en/ingest-management/images/fleet-policy-hidden-secret.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-proxy-server.png b/docs/en/ingest-management/images/fleet-proxy-server.png deleted file mode 100644 index 3d83e7623..000000000 Binary files a/docs/en/ingest-management/images/fleet-proxy-server.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-server-certs.png b/docs/en/ingest-management/images/fleet-server-certs.png deleted file mode 100644 index bfb60776c..000000000 Binary files a/docs/en/ingest-management/images/fleet-server-certs.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-server-hosted-container.png b/docs/en/ingest-management/images/fleet-server-hosted-container.png deleted file mode 100644 index 96332494f..000000000 Binary files 
a/docs/en/ingest-management/images/fleet-server-hosted-container.png and /dev/null differ diff --git a/docs/en/ingest-management/images/fleet-server-prompt-managed.png b/docs/en/ingest-management/images/fleet-server-prompt-managed.png deleted file mode 100644 index 6e3d068c2..000000000 Binary files a/docs/en/ingest-management/images/fleet-server-prompt-managed.png and /dev/null differ diff --git a/docs/en/ingest-management/images/green-check.svg b/docs/en/ingest-management/images/green-check.svg deleted file mode 100644 index 23094411c..000000000 --- a/docs/en/ingest-management/images/green-check.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/en/ingest-management/images/gsub_cronjob.png b/docs/en/ingest-management/images/gsub_cronjob.png deleted file mode 100644 index a5e1d2b89..000000000 Binary files a/docs/en/ingest-management/images/gsub_cronjob.png and /dev/null differ diff --git a/docs/en/ingest-management/images/gsub_deployment.png b/docs/en/ingest-management/images/gsub_deployment.png deleted file mode 100644 index d3b166c7d..000000000 Binary files a/docs/en/ingest-management/images/gsub_deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-add-agent-standalone01.png b/docs/en/ingest-management/images/guide-add-agent-standalone01.png deleted file mode 100644 index 9c0f9635c..000000000 Binary files a/docs/en/ingest-management/images/guide-add-agent-standalone01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-add-nginx-integration.png b/docs/en/ingest-management/images/guide-add-nginx-integration.png deleted file mode 100644 index ba0a9c04f..000000000 Binary files a/docs/en/ingest-management/images/guide-add-nginx-integration.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-agent-logs-flowing.png b/docs/en/ingest-management/images/guide-agent-logs-flowing.png deleted file mode 100644 index 96101c1d9..000000000 Binary files 
a/docs/en/ingest-management/images/guide-agent-logs-flowing.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-agent-metrics-flowing.png b/docs/en/ingest-management/images/guide-agent-metrics-flowing.png deleted file mode 100644 index f8eb2ae0b..000000000 Binary files a/docs/en/ingest-management/images/guide-agent-metrics-flowing.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-agent-policies.png b/docs/en/ingest-management/images/guide-agent-policies.png deleted file mode 100644 index 651392026..000000000 Binary files a/docs/en/ingest-management/images/guide-agent-policies.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-create-agent-policy.png b/docs/en/ingest-management/images/guide-create-agent-policy.png deleted file mode 100644 index 35a39a76c..000000000 Binary files a/docs/en/ingest-management/images/guide-create-agent-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-create-first-deployment.png b/docs/en/ingest-management/images/guide-create-first-deployment.png deleted file mode 100644 index ed12eec89..000000000 Binary files a/docs/en/ingest-management/images/guide-create-first-deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-install-agent-on-host.png b/docs/en/ingest-management/images/guide-install-agent-on-host.png deleted file mode 100644 index a502cfdff..000000000 Binary files a/docs/en/ingest-management/images/guide-install-agent-on-host.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-integrations-page.png b/docs/en/ingest-management/images/guide-integrations-page.png deleted file mode 100644 index 8a817ed6d..000000000 Binary files a/docs/en/ingest-management/images/guide-integrations-page.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-nginx-browser-breakdown.png b/docs/en/ingest-management/images/guide-nginx-browser-breakdown.png deleted 
file mode 100644 index 7f93ef7fb..000000000 Binary files a/docs/en/ingest-management/images/guide-nginx-browser-breakdown.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-nginx-integration-added.png b/docs/en/ingest-management/images/guide-nginx-integration-added.png deleted file mode 100644 index 7db08404a..000000000 Binary files a/docs/en/ingest-management/images/guide-nginx-integration-added.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-nginx-logs-dashboard.png b/docs/en/ingest-management/images/guide-nginx-logs-dashboard.png deleted file mode 100644 index 1a9cb48b4..000000000 Binary files a/docs/en/ingest-management/images/guide-nginx-logs-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-nginx-policy.png b/docs/en/ingest-management/images/guide-nginx-policy.png deleted file mode 100644 index 82a1680ad..000000000 Binary files a/docs/en/ingest-management/images/guide-nginx-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-nginx-welcome.png b/docs/en/ingest-management/images/guide-nginx-welcome.png deleted file mode 100644 index 82a137efb..000000000 Binary files a/docs/en/ingest-management/images/guide-nginx-welcome.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-sign-up-trial.png b/docs/en/ingest-management/images/guide-sign-up-trial.png deleted file mode 100644 index 10036e1fc..000000000 Binary files a/docs/en/ingest-management/images/guide-sign-up-trial.png and /dev/null differ diff --git a/docs/en/ingest-management/images/guide-system-metrics-dashboard.png b/docs/en/ingest-management/images/guide-system-metrics-dashboard.png deleted file mode 100644 index 24c7faf66..000000000 Binary files a/docs/en/ingest-management/images/guide-system-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/images/index-template-system-auth.png 
b/docs/en/ingest-management/images/index-template-system-auth.png deleted file mode 100644 index c4bedaff0..000000000 Binary files a/docs/en/ingest-management/images/index-template-system-auth.png and /dev/null differ diff --git a/docs/en/ingest-management/images/ingest_pipeline_custom_k8s.png b/docs/en/ingest-management/images/ingest_pipeline_custom_k8s.png deleted file mode 100644 index 9b97f7c30..000000000 Binary files a/docs/en/ingest-management/images/ingest_pipeline_custom_k8s.png and /dev/null differ diff --git a/docs/en/ingest-management/images/integration-root-requirement.png b/docs/en/ingest-management/images/integration-root-requirement.png deleted file mode 100644 index 597214d8d..000000000 Binary files a/docs/en/ingest-management/images/integration-root-requirement.png and /dev/null differ diff --git a/docs/en/ingest-management/images/integrations.png b/docs/en/ingest-management/images/integrations.png deleted file mode 100644 index 6f43f16e5..000000000 Binary files a/docs/en/ingest-management/images/integrations.png and /dev/null differ diff --git a/docs/en/ingest-management/images/k8skibanaUI.png b/docs/en/ingest-management/images/k8skibanaUI.png deleted file mode 100644 index d1ea9a8db..000000000 Binary files a/docs/en/ingest-management/images/k8skibanaUI.png and /dev/null differ diff --git a/docs/en/ingest-management/images/k8sscaling.png b/docs/en/ingest-management/images/k8sscaling.png deleted file mode 100644 index 6b219eae4..000000000 Binary files a/docs/en/ingest-management/images/k8sscaling.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-agent-flyout.png b/docs/en/ingest-management/images/kibana-agent-flyout.png deleted file mode 100644 index c96b394ca..000000000 Binary files a/docs/en/ingest-management/images/kibana-agent-flyout.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-agents.png b/docs/en/ingest-management/images/kibana-fleet-agents.png deleted file mode 100644 index 
666000043..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-agents.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-datasets.png b/docs/en/ingest-management/images/kibana-fleet-datasets.png deleted file mode 100644 index c1d790b5e..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-datasets.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-datastreams.png b/docs/en/ingest-management/images/kibana-fleet-datastreams.png deleted file mode 100644 index 73aa22392..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-datastreams.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-enable.png b/docs/en/ingest-management/images/kibana-fleet-enable.png deleted file mode 100644 index 4a464d282..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-enable.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-enroll.png b/docs/en/ingest-management/images/kibana-fleet-enroll.png deleted file mode 100644 index 036558ea4..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-enroll.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-log-levels.png b/docs/en/ingest-management/images/kibana-fleet-log-levels.png deleted file mode 100644 index eb4930bc7..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-log-levels.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-policies-default-yaml.png b/docs/en/ingest-management/images/kibana-fleet-policies-default-yaml.png deleted file mode 100644 index 15b19225f..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-policies-default-yaml.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-privileges-all.png b/docs/en/ingest-management/images/kibana-fleet-privileges-all.png deleted file mode 
100644 index 128b1862b..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-privileges-all.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-privileges-read.png b/docs/en/ingest-management/images/kibana-fleet-privileges-read.png deleted file mode 100644 index 7288e9974..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-privileges-read.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-fleet-privileges.png b/docs/en/ingest-management/images/kibana-fleet-privileges.png deleted file mode 100644 index cea848dab..000000000 Binary files a/docs/en/ingest-management/images/kibana-fleet-privileges.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-ingest-manager-configurations-default-yaml.png b/docs/en/ingest-management/images/kibana-ingest-manager-configurations-default-yaml.png deleted file mode 100644 index 0ac09f170..000000000 Binary files a/docs/en/ingest-management/images/kibana-ingest-manager-configurations-default-yaml.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kibana-ingest-manager-integrations-list-installed.png b/docs/en/ingest-management/images/kibana-ingest-manager-integrations-list-installed.png deleted file mode 100644 index 3f8984bcb..000000000 Binary files a/docs/en/ingest-management/images/kibana-ingest-manager-integrations-list-installed.png and /dev/null differ diff --git a/docs/en/ingest-management/images/kubernetes_metadata.png b/docs/en/ingest-management/images/kubernetes_metadata.png deleted file mode 100644 index b832cb358..000000000 Binary files a/docs/en/ingest-management/images/kubernetes_metadata.png and /dev/null differ diff --git a/docs/en/ingest-management/images/logstash-certs.png b/docs/en/ingest-management/images/logstash-certs.png deleted file mode 100644 index e3568606f..000000000 Binary files a/docs/en/ingest-management/images/logstash-certs.png and /dev/null differ diff --git 
a/docs/en/ingest-management/images/logstash-output-certs.png b/docs/en/ingest-management/images/logstash-output-certs.png deleted file mode 100644 index 7bfc7f2bb..000000000 Binary files a/docs/en/ingest-management/images/logstash-output-certs.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-agents-offline.png b/docs/en/ingest-management/images/migrate-agent-agents-offline.png deleted file mode 100644 index bc63da15f..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-agents-offline.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-deployment-id.png b/docs/en/ingest-management/images/migrate-agent-deployment-id.png deleted file mode 100644 index 7240ca186..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-deployment-id.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-elasticsearch-output.png b/docs/en/ingest-management/images/migrate-agent-elasticsearch-output.png deleted file mode 100644 index 0f34e22a1..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-elasticsearch-output.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-fleet-server-host.png b/docs/en/ingest-management/images/migrate-agent-fleet-server-host.png deleted file mode 100644 index d88a4e5d0..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-fleet-server-host.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-host-output-settings.png b/docs/en/ingest-management/images/migrate-agent-host-output-settings.png deleted file mode 100644 index 44d8fd866..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-host-output-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-install-command-output.png b/docs/en/ingest-management/images/migrate-agent-install-command-output.png deleted file mode 
100644 index 2841b4e0d..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-install-command-output.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-install-command.png b/docs/en/ingest-management/images/migrate-agent-install-command.png deleted file mode 100644 index 630fb3264..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-install-command.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-new-deployment.png b/docs/en/ingest-management/images/migrate-agent-new-deployment.png deleted file mode 100644 index 2e0d7cf45..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-new-deployment.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-newly-enrolled-agents.png b/docs/en/ingest-management/images/migrate-agent-newly-enrolled-agents.png deleted file mode 100644 index 895026eeb..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-newly-enrolled-agents.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-policy-settings.png b/docs/en/ingest-management/images/migrate-agent-policy-settings.png deleted file mode 100644 index 32ca060fe..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-policy-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migrate-agent-take-snapshot.png b/docs/en/ingest-management/images/migrate-agent-take-snapshot.png deleted file mode 100644 index 518f0a749..000000000 Binary files a/docs/en/ingest-management/images/migrate-agent-take-snapshot.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-add-integration-policy.png b/docs/en/ingest-management/images/migration-add-integration-policy.png deleted file mode 100644 index 07ae2ace6..000000000 Binary files a/docs/en/ingest-management/images/migration-add-integration-policy.png and /dev/null differ diff --git 
a/docs/en/ingest-management/images/migration-add-nginx-integration.png b/docs/en/ingest-management/images/migration-add-nginx-integration.png deleted file mode 100644 index 7986548a5..000000000 Binary files a/docs/en/ingest-management/images/migration-add-nginx-integration.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-add-processor.png b/docs/en/ingest-management/images/migration-add-processor.png deleted file mode 100644 index 4d2715c2e..000000000 Binary files a/docs/en/ingest-management/images/migration-add-processor.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-agent-data-streams01.png b/docs/en/ingest-management/images/migration-agent-data-streams01.png deleted file mode 100644 index a9780a7fb..000000000 Binary files a/docs/en/ingest-management/images/migration-agent-data-streams01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-agent-details01.png b/docs/en/ingest-management/images/migration-agent-details01.png deleted file mode 100644 index 81f246eed..000000000 Binary files a/docs/en/ingest-management/images/migration-agent-details01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-agent-status-healthy01.png b/docs/en/ingest-management/images/migration-agent-status-healthy01.png deleted file mode 100644 index 12396b7eb..000000000 Binary files a/docs/en/ingest-management/images/migration-agent-status-healthy01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-event-from-agent.png b/docs/en/ingest-management/images/migration-event-from-agent.png deleted file mode 100644 index d663d2334..000000000 Binary files a/docs/en/ingest-management/images/migration-event-from-agent.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-event-from-filebeat.png b/docs/en/ingest-management/images/migration-event-from-filebeat.png deleted file mode 100644 index b89cc64d3..000000000 Binary 
files a/docs/en/ingest-management/images/migration-event-from-filebeat.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-index-lifecycle-policies.png b/docs/en/ingest-management/images/migration-index-lifecycle-policies.png deleted file mode 100644 index 534424ba3..000000000 Binary files a/docs/en/ingest-management/images/migration-index-lifecycle-policies.png and /dev/null differ diff --git a/docs/en/ingest-management/images/migration-preserve-raw-event.png b/docs/en/ingest-management/images/migration-preserve-raw-event.png deleted file mode 100644 index daa4aa5c3..000000000 Binary files a/docs/en/ingest-management/images/migration-preserve-raw-event.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-cloud-proxy.png b/docs/en/ingest-management/images/mutual-tls-cloud-proxy.png deleted file mode 100644 index 3b1805776..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-cloud-proxy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-cloud.png b/docs/en/ingest-management/images/mutual-tls-cloud.png deleted file mode 100644 index 1064a3aeb..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-cloud.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-fs-onprem-proxy.png b/docs/en/ingest-management/images/mutual-tls-fs-onprem-proxy.png deleted file mode 100644 index cc2affef2..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-fs-onprem-proxy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-fs-onprem.png b/docs/en/ingest-management/images/mutual-tls-fs-onprem.png deleted file mode 100644 index c0f633f1d..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-fs-onprem.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-on-prem.png b/docs/en/ingest-management/images/mutual-tls-on-prem.png deleted file mode 100644 index 
0fababfa7..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-on-prem.png and /dev/null differ diff --git a/docs/en/ingest-management/images/mutual-tls-onprem-advanced-yaml.png b/docs/en/ingest-management/images/mutual-tls-onprem-advanced-yaml.png deleted file mode 100644 index ca7953ea8..000000000 Binary files a/docs/en/ingest-management/images/mutual-tls-onprem-advanced-yaml.png and /dev/null differ diff --git a/docs/en/ingest-management/images/pod-latency.png b/docs/en/ingest-management/images/pod-latency.png deleted file mode 100644 index ed2a3de46..000000000 Binary files a/docs/en/ingest-management/images/pod-latency.png and /dev/null differ diff --git a/docs/en/ingest-management/images/privileged-and-unprivileged-agents.png b/docs/en/ingest-management/images/privileged-and-unprivileged-agents.png deleted file mode 100644 index 8b929a8a1..000000000 Binary files a/docs/en/ingest-management/images/privileged-and-unprivileged-agents.png and /dev/null differ diff --git a/docs/en/ingest-management/images/proxy_url_settings.png b/docs/en/ingest-management/images/proxy_url_settings.png deleted file mode 100644 index 383bdcc53..000000000 Binary files a/docs/en/ingest-management/images/proxy_url_settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/red-x.svg b/docs/en/ingest-management/images/red-x.svg deleted file mode 100644 index 5426fb2af..000000000 --- a/docs/en/ingest-management/images/red-x.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/en/ingest-management/images/review-component-template01.png b/docs/en/ingest-management/images/review-component-template01.png deleted file mode 100644 index d057d1a15..000000000 Binary files a/docs/en/ingest-management/images/review-component-template01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/review-component-template02.png b/docs/en/ingest-management/images/review-component-template02.png deleted file mode 100644 index 
5f9309bad..000000000 Binary files a/docs/en/ingest-management/images/review-component-template02.png and /dev/null differ diff --git a/docs/en/ingest-management/images/revoke-token.png b/docs/en/ingest-management/images/revoke-token.png deleted file mode 100644 index b8a24eb29..000000000 Binary files a/docs/en/ingest-management/images/revoke-token.png and /dev/null differ diff --git a/docs/en/ingest-management/images/rolling-agent-upgrade.png b/docs/en/ingest-management/images/rolling-agent-upgrade.png deleted file mode 100644 index 8e6dd889b..000000000 Binary files a/docs/en/ingest-management/images/rolling-agent-upgrade.png and /dev/null differ diff --git a/docs/en/ingest-management/images/root-integration-and-unprivileged-agents.png b/docs/en/ingest-management/images/root-integration-and-unprivileged-agents.png deleted file mode 100644 index cb7d3bd4c..000000000 Binary files a/docs/en/ingest-management/images/root-integration-and-unprivileged-agents.png and /dev/null differ diff --git a/docs/en/ingest-management/images/schedule-upgrade.png b/docs/en/ingest-management/images/schedule-upgrade.png deleted file mode 100644 index 6714e7818..000000000 Binary files a/docs/en/ingest-management/images/schedule-upgrade.png and /dev/null differ diff --git a/docs/en/ingest-management/images/selected-agent-metrics-dashboard.png b/docs/en/ingest-management/images/selected-agent-metrics-dashboard.png deleted file mode 100644 index e4b545d80..000000000 Binary files a/docs/en/ingest-management/images/selected-agent-metrics-dashboard.png and /dev/null differ diff --git a/docs/en/ingest-management/images/show-token.png b/docs/en/ingest-management/images/show-token.png deleted file mode 100644 index 902b0e904..000000000 Binary files a/docs/en/ingest-management/images/show-token.png and /dev/null differ diff --git a/docs/en/ingest-management/images/state-pod.png b/docs/en/ingest-management/images/state-pod.png deleted file mode 100644 index ab7040837..000000000 Binary files 
a/docs/en/ingest-management/images/state-pod.png and /dev/null differ diff --git a/docs/en/ingest-management/images/system-managed.png b/docs/en/ingest-management/images/system-managed.png deleted file mode 100644 index 7786d958e..000000000 Binary files a/docs/en/ingest-management/images/system-managed.png and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-mutual-all.jpg b/docs/en/ingest-management/images/tls-overview-mutual-all.jpg deleted file mode 100644 index 88ef37a68..000000000 Binary files a/docs/en/ingest-management/images/tls-overview-mutual-all.jpg and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-mutual-fs-agent.png b/docs/en/ingest-management/images/tls-overview-mutual-fs-agent.png deleted file mode 100644 index 4d05d5c1f..000000000 Binary files a/docs/en/ingest-management/images/tls-overview-mutual-fs-agent.png and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-mutual-fs-es.png b/docs/en/ingest-management/images/tls-overview-mutual-fs-es.png deleted file mode 100644 index 0cd7746bf..000000000 Binary files a/docs/en/ingest-management/images/tls-overview-mutual-fs-es.png and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-oneway-all.jpg b/docs/en/ingest-management/images/tls-overview-oneway-all.jpg deleted file mode 100644 index e65f24add..000000000 Binary files a/docs/en/ingest-management/images/tls-overview-oneway-all.jpg and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-oneway-fs-agent.png b/docs/en/ingest-management/images/tls-overview-oneway-fs-agent.png deleted file mode 100644 index 7d5326c27..000000000 Binary files a/docs/en/ingest-management/images/tls-overview-oneway-fs-agent.png and /dev/null differ diff --git a/docs/en/ingest-management/images/tls-overview-oneway-fs-es.png b/docs/en/ingest-management/images/tls-overview-oneway-fs-es.png deleted file mode 100644 index 6b2477384..000000000 Binary 
files a/docs/en/ingest-management/images/tls-overview-oneway-fs-es.png and /dev/null differ diff --git a/docs/en/ingest-management/images/unified-view-selector.png b/docs/en/ingest-management/images/unified-view-selector.png deleted file mode 100644 index b38d989b7..000000000 Binary files a/docs/en/ingest-management/images/unified-view-selector.png and /dev/null differ diff --git a/docs/en/ingest-management/images/unprivileged-agent-warning.png b/docs/en/ingest-management/images/unprivileged-agent-warning.png deleted file mode 100644 index cb0f1da80..000000000 Binary files a/docs/en/ingest-management/images/unprivileged-agent-warning.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-available-indicator.png b/docs/en/ingest-management/images/upgrade-available-indicator.png deleted file mode 100644 index 695b98d51..000000000 Binary files a/docs/en/ingest-management/images/upgrade-available-indicator.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-detailed-state01.png b/docs/en/ingest-management/images/upgrade-detailed-state01.png deleted file mode 100644 index 0e2ed9bf9..000000000 Binary files a/docs/en/ingest-management/images/upgrade-detailed-state01.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-detailed-state02.png b/docs/en/ingest-management/images/upgrade-detailed-state02.png deleted file mode 100644 index 0eeac452a..000000000 Binary files a/docs/en/ingest-management/images/upgrade-detailed-state02.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-failure.png b/docs/en/ingest-management/images/upgrade-failure.png deleted file mode 100644 index 922fbc1d2..000000000 Binary files a/docs/en/ingest-management/images/upgrade-failure.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-integration-policies-automatically.png b/docs/en/ingest-management/images/upgrade-integration-policies-automatically.png deleted file mode 
100644 index c22e6e124..000000000 Binary files a/docs/en/ingest-management/images/upgrade-integration-policies-automatically.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-integration.png b/docs/en/ingest-management/images/upgrade-integration.png deleted file mode 100644 index 17c0c6954..000000000 Binary files a/docs/en/ingest-management/images/upgrade-integration.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-non-detailed-old.png b/docs/en/ingest-management/images/upgrade-non-detailed-old.png deleted file mode 100644 index dd65a8c23..000000000 Binary files a/docs/en/ingest-management/images/upgrade-non-detailed-old.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-non-detailed.png b/docs/en/ingest-management/images/upgrade-non-detailed.png deleted file mode 100644 index 8e411d121..000000000 Binary files a/docs/en/ingest-management/images/upgrade-non-detailed.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-package-policy.png b/docs/en/ingest-management/images/upgrade-package-policy.png deleted file mode 100644 index 4a4faf598..000000000 Binary files a/docs/en/ingest-management/images/upgrade-package-policy.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-policy-editor.png b/docs/en/ingest-management/images/upgrade-policy-editor.png deleted file mode 100644 index 9c848b10c..000000000 Binary files a/docs/en/ingest-management/images/upgrade-policy-editor.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-resolve-conflicts.png b/docs/en/ingest-management/images/upgrade-resolve-conflicts.png deleted file mode 100644 index 4c4ff7742..000000000 Binary files a/docs/en/ingest-management/images/upgrade-resolve-conflicts.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-single-agent.png b/docs/en/ingest-management/images/upgrade-single-agent.png deleted file mode 
100644 index 879c0af14..000000000 Binary files a/docs/en/ingest-management/images/upgrade-single-agent.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-states.png b/docs/en/ingest-management/images/upgrade-states.png deleted file mode 100644 index 70cf59a91..000000000 Binary files a/docs/en/ingest-management/images/upgrade-states.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-status.png b/docs/en/ingest-management/images/upgrade-status.png deleted file mode 100644 index 27082ae2f..000000000 Binary files a/docs/en/ingest-management/images/upgrade-status.png and /dev/null differ diff --git a/docs/en/ingest-management/images/upgrade-view-previous-config.png b/docs/en/ingest-management/images/upgrade-view-previous-config.png deleted file mode 100644 index 7202ca735..000000000 Binary files a/docs/en/ingest-management/images/upgrade-view-previous-config.png and /dev/null differ diff --git a/docs/en/ingest-management/images/user-settings.png b/docs/en/ingest-management/images/user-settings.png deleted file mode 100755 index bcb630403..000000000 Binary files a/docs/en/ingest-management/images/user-settings.png and /dev/null differ diff --git a/docs/en/ingest-management/images/view-agent-logs.png b/docs/en/ingest-management/images/view-agent-logs.png deleted file mode 100644 index dd457a186..000000000 Binary files a/docs/en/ingest-management/images/view-agent-logs.png and /dev/null differ diff --git a/docs/en/ingest-management/index.asciidoc b/docs/en/ingest-management/index.asciidoc deleted file mode 100644 index aed381011..000000000 --- a/docs/en/ingest-management/index.asciidoc +++ /dev/null @@ -1,246 +0,0 @@ -include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[] -include::{docs-root}/shared/attributes.asciidoc[] - -:doctype: book - -= {fleet} and {agent} Guide - -include::overview.asciidoc[leveloffset=+1] - -include::serverless-restrictions.asciidoc[leveloffset=+2] - 
-include::beats-agent-comparison.asciidoc[leveloffset=+1] - -include::quick-starts.asciidoc[leveloffset=+1] - -include::migrate-beats-to-agent.asciidoc[leveloffset=+1] - -include::migrate-auditbeat-to-agent.asciidoc[leveloffset=+2] - -include::fleet/fleet-deployment-models.asciidoc[leveloffset=+1] - -include::fleet/add-fleet-server-cloud.asciidoc[leveloffset=+2] - -include::fleet/add-fleet-server-on-prem.asciidoc[leveloffset=+2] - -include::fleet/add-fleet-server-mixed.asciidoc[leveloffset=+2] - -include::fleet/add-fleet-server-kubernetes.asciidoc[leveloffset=+2] - -include::fleet/fleet-server-scaling.asciidoc[leveloffset=+2] - -include::fleet/fleet-server-secrets.asciidoc[leveloffset=+2] - -include::fleet/fleet-server-monitoring.asciidoc[leveloffset=+2] - -include::elastic-agent/install-elastic-agent.asciidoc[leveloffset=+1] - -include::elastic-agent/install-fleet-managed-elastic-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/install-standalone-elastic-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/upgrade-standalone-elastic-agent.asciidoc[leveloffset=+3] - -include::elastic-agent/install-elastic-agent-in-container.asciidoc[leveloffset=+2] - -include::elastic-agent/elastic-agent-container.asciidoc[leveloffset=+3] - -include::elastic-agent/running-on-kubernetes-managed-by-fleet.asciidoc[leveloffset=+3] - -include::elastic-agent/install-on-kubernetes-using-helm.asciidoc[leveloffset=+3] - -include::elastic-agent/example-kubernetes-standalone-agent-helm.asciidoc[leveloffset=+3] - -include::elastic-agent/example-kubernetes-fleet-managed-agent-helm.asciidoc[leveloffset=+3] - -include::elastic-agent/advanced-kubernetes-managed-by-fleet.asciidoc[leveloffset=+3] - -include::elastic-agent/configuring-kubernetes-metadata.asciidoc[leveloffset=+3] - -include::elastic-agent/running-on-gke-managed-by-fleet.asciidoc[leveloffset=+3] - -include::elastic-agent/running-on-eks-managed-by-fleet.asciidoc[leveloffset=+3] - 
-include::elastic-agent/running-on-aks-managed-by-fleet.asciidoc[leveloffset=+3] - -include::elastic-agent/running-on-kubernetes-standalone.asciidoc[leveloffset=+3] - -include::elastic-agent/scaling-on-kubernetes.asciidoc[leveloffset=+3] - -include::elastic-agent/ingest-pipeline-kubernetes.asciidoc[leveloffset=+3] - -include::elastic-agent/configuration/env/container-envs.asciidoc[leveloffset=+3] - -include::elastic-agent/otel-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/otel-agent-transform.asciidoc[leveloffset=+2] - -include::elastic-agent/elastic-agent-unprivileged-mode.asciidoc[leveloffset=+2] - -include::elastic-agent/install-agent-msi.asciidoc[leveloffset=+2] - -include::elastic-agent/installation-layout.asciidoc[leveloffset=+2] - -include::fleet/air-gapped.asciidoc[leveloffset=+2] - -include::fleet-agent-proxy-support.asciidoc[leveloffset=+2] - -include::fleet-agent-proxy-when-to-configure.asciidoc[leveloffset=+3] - -include::fleet-agent-proxy-host-variables.asciidoc[leveloffset=+3] - -include::fleet-agent-proxy-managed.asciidoc[leveloffset=+3] - -include::fleet-agent-proxy-standalone.asciidoc[leveloffset=+3] - -include::fleet-agent-proxy-package-registry.asciidoc[leveloffset=+3] - -include::elastic-agent/uninstall-elastic-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/start-stop-elastic-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/elastic-agent-encryption.asciidoc[leveloffset=+2] - -include::security/generate-certificates.asciidoc[leveloffset=+1] - -include::security/certificates.asciidoc[leveloffset=+2] - -include::security/certificates-rotation.asciidoc[leveloffset=+2] - -include::security/mutual-tls.asciidoc[leveloffset=+2] - -include::security/tls-overview.asciidoc[leveloffset=+2] - -include::security/logstash-certificates.asciidoc[leveloffset=+2] - -include::fleet/fleet.asciidoc[leveloffset=+1] - -include::fleet/fleet-settings.asciidoc[leveloffset=+2] - 
-include::fleet/fleet-settings-output-elasticsearch.asciidoc[leveloffset=+3] - -include::fleet/fleet-settings-output-logstash.asciidoc[leveloffset=+3] - -include::fleet/fleet-settings-output-kafka.asciidoc[leveloffset=+3] - -include::fleet/fleet-settings-remote-elasticsearch.asciidoc[leveloffset=+3] - -include::fleet/fleet-settings-changing-outputs.asciidoc[leveloffset=+3] - -include::fleet/fleet-manage-agents.asciidoc[leveloffset=+2] - -include::fleet/unenroll-elastic-agent.asciidoc[leveloffset=+3] - -include::fleet/set-inactivity-timeout.asciidoc[leveloffset=+3] - -include::fleet/upgrade-elastic-agent.asciidoc[leveloffset=+3] - -include::fleet/migrate-elastic-agent.asciidoc[leveloffset=+3] - -include::fleet/monitor-elastic-agent.asciidoc[leveloffset=+3] - -include::fleet/agent-health-status.asciidoc[leveloffset=+3] - -include::fleet/filter-agent-list-by-tags.asciidoc[leveloffset=+3] - -include::agent-policies.asciidoc[leveloffset=+2] - -include::create-agent-policies-no-UI.asciidoc[leveloffset=+3] - -include::override-policy-settings.asciidoc[leveloffset=+3] - -include::agent-policies-environment-variables.asciidoc[leveloffset=+3] - -include::security/fleet-roles-and-privileges.asciidoc[leveloffset=+2] - -include::security/enrollment-tokens.asciidoc[leveloffset=+2] - -include::fleet/fleet-api-docs.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/elastic-agent-configuration.asciidoc[leveloffset=+1] - -include::elastic-agent/configuration/create-standalone-agent-policy.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/structure-config-file.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/inputs/input-configuration.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/inputs/simplified-input-configuration.asciidoc[leveloffset=+3] - -include::elastic-agent/configuration/inputs/inputs-list.asciidoc[leveloffset=+3] - -include::elastic-agent/elastic-agent-dynamic-inputs.asciidoc[leveloffset=+3] - 
-include::elastic-agent/configuration/providers/elastic-agent-providers.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/outputs/output-configuration.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/authentication/ssl-settings.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/elastic-agent-standalone-logging.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/elastic-agent-standalone-features.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/elastic-agent-standalone-download.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/examples/config-file-examples.asciidoc[leveloffset=+2] - -include::elastic-agent/grant-access-to-elasticsearch.asciidoc[leveloffset=+2] - -include::elastic-agent/example-standalone-monitor-nginx-serverless.asciidoc[leveloffset=+2] - -include::elastic-agent/example-standalone-monitor-nginx-ess.asciidoc[leveloffset=+2] - -include::elastic-agent/debug-standalone-elastic-agent.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/autodiscovery/elastic-agent-kubernetes-autodiscovery.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/autodiscovery/kubernetes-conditions-autodiscover.asciidoc[leveloffset=+3] - -include::elastic-agent/configuration/autodiscovery/kubernetes-hints-autodiscover.asciidoc[leveloffset=+3] - -include::elastic-agent/configuration/elastic-agent-monitoring.asciidoc[leveloffset=+2] - -include::elastic-agent/configuration/yaml/elastic-agent-reference-yaml.asciidoc[leveloffset=+2] - -include::integrations/integrations.asciidoc[leveloffset=+1] - -include::integrations/package-signatures.asciidoc[leveloffset=+2] - -include::integrations/add-integration-to-policy.asciidoc[leveloffset=+2] - -include::integrations/view-integration-policies.asciidoc[leveloffset=+2] - -include::integrations/edit-or-delete-integration-policy.asciidoc[leveloffset=+2] - -include::integrations/install-integration-assets.asciidoc[leveloffset=+2] - 
-include::integrations/view-integration-assets.asciidoc[leveloffset=+2] - -include::integrations/integration-level-outputs.asciidoc[leveloffset=+2] - -include::integrations/upgrade-integration.asciidoc[leveloffset=+2] - -include::integrations/managed-integrations-content.asciidoc[leveloffset=+2] - -include::integrations/integrations-assets-best-practices.asciidoc[leveloffset=+2] - -include::data-streams.asciidoc[leveloffset=+2] - -include::processors/processors.asciidoc[leveloffset=+1] - -include::commands.asciidoc[leveloffset=+1] - -include::troubleshooting/troubleshooting-intro.asciidoc[leveloffset=+1] - -include::troubleshooting/troubleshooting.asciidoc[leveloffset=+2] - -include::troubleshooting/faq.asciidoc[leveloffset=+2] - -include::release-notes/release-notes-8.18.asciidoc[leveloffset=+1] - -include::elastic-agent/install-fleet-managed-agent.asciidoc[leveloffset=+2] - -include::fleet/apis.asciidoc[] - -include::redirects.asciidoc[] diff --git a/docs/en/ingest-management/integrations/add-integration-to-policy.asciidoc b/docs/en/ingest-management/integrations/add-integration-to-policy.asciidoc deleted file mode 100644 index 6d7e1ad15..000000000 --- a/docs/en/ingest-management/integrations/add-integration-to-policy.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -[[add-integration-to-policy]] -= Add an integration to an {agent} policy - -++++ -Add an integration to an {agent} policy -++++ - -An <> consists of one or more integrations that are applied to the agents enrolled in that policy. -When you add an integration, the policy created for that integration can be shared with multiple {agent} policies. -This reduces the number of integration policies that you need to actively manage. - -To add a new integration to one or more {agent} policies: - -. In {kib}, go to the **Integrations** page. -. The Integrations page shows {agent} integrations along with other types, such as {beats}.
Scroll down and select **Elastic Agent only** to view only integrations that work with {agent}. -. Search for and select an integration. You can select a category to narrow your search. -. Click **Add **. -. You can opt to install an {agent} if you haven't already, or choose **Add integration only** to proceed. -. In Step 1 on the **Add ** page, you can select the configuration settings specific to the integration. -. In Step 2 on the page, you have two options: -.. If you'd like to create a new policy for your {agent}s, on the **New hosts** tab specify a name for the new agent policy and choose whether or not to collect system logs and metrics. -Collecting logs and metrics will add the System integration to the new agent policy. -.. If you already have an {agent} policy created, on the **Existing hosts** tab use the drop-down menu to specify one or more agent policies that you'd like to add the integration to. -. Click **Save and continue** to confirm your settings. - -This action installs the integration (if it's not already installed) and adds it to the {agent} policies that you specified. -{fleet} distributes the new integration policy to all {agent}s that are enrolled in the agent policies. - -You can update the settings for an installed integration at any time: - -. In {kib}, go to the **Integrations** page. -. On the **Integration policies** tab, for the integration that you'd like to update open the **Actions** menu and select **Edit integration**. -. On the **Edit ** page you can update any configuration settings and also update the list of {agent} policies to which the integration is added. -+ -If you clear the **Agent policies** field, the integration will be removed from any {agent} policies to which it had been added. -+ -To identify any integrations that have been "orphaned", that is, not associated with any {agent} policies, check the **Agent policies** column on the **Integration policies** tab.
-Any integrations that are installed but not associated with an {agent} policy are labeled as `No agent policies`. - -If you haven't deployed any {agent}s yet or set up agent policies, start with -one of our quick start guides: - -* {observability-guide}/logs-metrics-get-started.html[Get started with logs and metrics] -* {observability-guide}/ingest-traces.html[Get started with application traces and APM] diff --git a/docs/en/ingest-management/integrations/edit-or-delete-integration-policy.asciidoc b/docs/en/ingest-management/integrations/edit-or-delete-integration-policy.asciidoc deleted file mode 100644 index 0ec85b2ad..000000000 --- a/docs/en/ingest-management/integrations/edit-or-delete-integration-policy.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[edit-or-delete-integration-policy]] -= Edit or delete an {agent} integration policy - -++++ -Edit or delete an integration policy -++++ - - -To edit or delete an integration policy: - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search -for and select the integration. - -. Click the *Policies* tab to see the list of integration policies. - -. Scroll to a specific integration policy. -Open the *Actions* menu and select *Edit integration* or *Delete integration*. -+ -Editing or deleting an integration is permanent and cannot be undone. -If you make a mistake, you can always re-configure or re-add an integration. - -Any saved changes are immediately distributed and applied to all {agent}s -enrolled in the given policy.
\ No newline at end of file diff --git a/docs/en/ingest-management/integrations/install-integration-assets.asciidoc b/docs/en/ingest-management/integrations/install-integration-assets.asciidoc deleted file mode 100644 index e2af836ae..000000000 --- a/docs/en/ingest-management/integrations/install-integration-assets.asciidoc +++ /dev/null @@ -1,76 +0,0 @@ -[[install-uninstall-integration-assets]] -= Install and uninstall {agent} integration assets - -++++ -Install and uninstall integration assets -++++ - -{agent} integrations come with a number of assets, such as dashboards, saved -searches, and visualizations for analyzing data. When you add an integration to -an agent policy in {fleet}, the assets are installed automatically. If you're -building a policy file by hand, you need to install required assets such as -index templates. - -[discrete] -[[install-integration-assets]] -== Install integration assets - -. In {kib}, go to the **Integrations** page and open the **Browse integrations** tab. Search for -and select an integration. You can select a category to narrow your search. - -. Click the **Settings** tab. - -. Click **Install assets** to set up the {kib} and {es} assets. - -Note that it's currently not possible to have multiple versions of the same integration installed. When you upgrade an integration, the previous version assets are removed and replaced by the current version. - -[IMPORTANT] -.Current limitations with integrations and {kib} spaces -==== -{agent} integration assets can be installed only on a single {kib} {kibana-ref}/xpack-spaces.html[space]. If you want to access assets in a different space, you can {kibana-ref}/managing-saved-objects.html#managing-saved-objects-copy-to-space[copy them]. However, many integrations include markdown panels with dynamically generated links to other dashboards. When assets are copied between spaces, these links may not behave as expected and can result in a 404 `Dashboard not found` error. 
Refer to known issue {kibana-issue}175072[#175072] for details. - -These limitations and future plans for {fleet}'s integrations support in multi-space environments are currently being discussed in {kibana-issue}175831[#175831]. Feedback is very welcome. For now, we recommend reviewing the specific integration documentation for any space-related considerations. -==== - -[discrete] -[[uninstall-integration-assets]] -== Uninstall integration assets - -Uninstall an integration to remove all {kib} and {es} assets that were installed -by the integration. - -. Before uninstalling an integration, -<> from any -{agent} policies that use it. -+ -Any {agent}s enrolled in the policy will stop using the deleted integration. - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search for -and select an integration. - -. Click the **Settings** tab. - -. Click **Uninstall ** to remove all {kib} and {es} assets that -were installed by this integration. - -[discrete] -[[reinstall-integration-assets]] -== Reinstall integration assets - -You may need to reinstall an integration package to resolve a specific problem, -such as: - -* An asset was edited manually, and you want to reset assets to their original -state. -* A temporary problem (like a network issue) occurred during package -installation or upgrade. -* A package was installed in a prior version that had a bug in the install code. - -To reinstall integration assets: - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search for -and select an integration. - -. Click the **Settings** tab. - -. Click **Reinstall ** to set up the {kib} and {es} assets. 
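For automation, the install, uninstall, and reinstall steps above can also be performed through the {fleet} API rather than the {kib} UI. A minimal sketch; the package name and version are illustrative only, and the exact endpoint shape should be checked against the {fleet} API reference for your stack version:

```console
# Install (or reinstall) the assets for a specific package version
POST kbn:/api/fleet/epm/packages/nginx/1.14.0
{
  "force": true
}

# Uninstall the package and remove its {kib} and {es} assets
DELETE kbn:/api/fleet/epm/packages/nginx/1.14.0
```

Outside the Dev Tools console, these requests additionally require authentication and a `kbn-xsrf` header.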
diff --git a/docs/en/ingest-management/integrations/integration-level-outputs.asciidoc b/docs/en/ingest-management/integrations/integration-level-outputs.asciidoc
deleted file mode 100644
index 84541f6fa..000000000
--- a/docs/en/ingest-management/integrations/integration-level-outputs.asciidoc
+++ /dev/null
@@ -1,46 +0,0 @@
-[[integration-level-outputs]]
-= Set integration-level outputs
-
-If you have an `Enterprise` link:https://www.elastic.co/subscriptions[{stack} subscription], you can configure {agent} data to be sent to different outputs for different integration policies. Note that the output clusters that you send data to must also be on the same subscription level.
-
-Integration-level outputs are useful in several scenarios. For example:
-
-* You may want to send security logs monitored by an {agent} to one {ls} output, while informational logs are sent to another {ls} output.
-* If you operate multiple {beats} on a system and want to migrate these to {agent}, integration-level outputs enable you to maintain the distinct outputs that are currently used by each Beat.
-
-[discrete]
-== Order of precedence
-
-For each {agent}, the agent policy configures sending data to the following outputs in decreasing order of priority:
-
-. The output set in the <>.
-. The output set in the integration's parent <>.
-This includes the case where an integration policy belongs to multiple {agent} policies.
-. The global, default data output set in the <>.
-
-[discrete]
-== Configure the output for an integration policy
-
-To configure an integration-level output for {agent} data:
-
-. In {kib}, go to **Integrations**.
-. On the **Installed integrations** tab, select the integration that you'd like to update.
-. Open the **Integration policies** tab.
-. From the **Actions** menu next to the integration, select *Edit integration*.
-. In the **integration settings** section, expand **Advanced options**.
-. 
Use the **Output** dropdown menu to select an output specific to this integration policy. -. Click **Save and continue** to confirm your changes. - -[discrete] -== View the output configured for an integration - -To view which {agent} output is set for an integration policy: - -. In {fleet}, open the **Agent policies** tab. -. Select an {agent} policy. -. On the **Integrations** tab, the **Output** column indicates the output used for each integration policy. -If data is configured to be sent to either the global output defined in {fleet} settings or to the integration's parent {agent} policy, this is indicated in a tooltip. - - - - diff --git a/docs/en/ingest-management/integrations/integrations-assets-best-practices.asciidoc b/docs/en/ingest-management/integrations/integrations-assets-best-practices.asciidoc deleted file mode 100644 index 06e38e457..000000000 --- a/docs/en/ingest-management/integrations/integrations-assets-best-practices.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[integrations-assets-best-practices]] -= Best practices for integration assets - -When you use integrations with {fleet} and {agent} there are some restrictions to be aware of. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[assets-restrictions-standalone]] -== Using integration assets with standalone {agent} - -When you use standalone {agent} with integrations, the integration assets added to the {agent} policy must be installed on the destination {es} cluster. - -* If {kib} is available, the integration assets can be <>. - -* If {kib} is not available (for instance if you have a remote cluster without a {kib} instance), then the integration assets need to be installed manually. - -[discrete] -[[assets-restrictions-without-agent]] -== Using integration assets without {agent} - -{fleet} integration assets are meant to work only with {agent}. 
-
-The {fleet} integration assets are not designed to work with arbitrary logs or metrics collected by other products, such as {filebeat}, {metricbeat}, or {ls}.
-
-[discrete]
-[[assets-restrictions-custom-integrations]]
-== Using {fleet} and {agent} integration assets in custom integrations
-
-While it's possible to include {fleet} and {agent} integration assets in a custom integration, this is neither recommended nor supported. Assets from another integration should not be referenced directly from a custom integration.
-
-As an example scenario, one may want to ingest Redis logs from Kafka. This can be done using the {integrations-docs}/redis-intro[Redis integration], but only certain files and paths are allowed. It's technically possible to use the {integrations-docs}/kafka_log[Custom Kafka Logs integration] with a custom ingest pipeline, referencing the ingest pipeline of the Redis integration to ingest logs into the index templates of the Custom Kafka Logs integration data streams.
-
-However, referencing assets of one integration from another custom integration is neither recommended nor supported. A configuration as described above can break when the integration is upgraded, as can happen automatically.
-
-[discrete]
-[[assets-restrictions-copying]]
-== Copying {fleet} and {agent} integration assets
-
-As an alternative to referencing assets from another integration within a custom integration, assets such as index templates and ingest pipelines can be copied so that they become standalone.
-
-This way, because the assets are not managed by another integration, there is less risk of a configuration breaking or of an integration asset being deleted when the other integration is upgraded.
-
-Note, however, that creating standalone integration assets based on {fleet} and {agent} integrations is considered a custom configuration that is neither tested nor supported. Whenever possible, it's recommended to use standard integrations.
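As a hedged illustration of the copying approach described above, an ingest pipeline can be duplicated into a standalone copy with standard {es} APIs; the pipeline and target names below are examples only:

```console
# Fetch the pipeline that the integration installed
GET _ingest/pipeline/logs-nginx.access-1.14.0

# Create a standalone copy under your own name,
# pasting in the body returned by the GET request
PUT _ingest/pipeline/my-nginx-access-pipeline
{
  "description": "Standalone copy of the pipeline fetched above",
  "processors": [ ... ]
}
```

The same pattern applies to index and component templates via the `_index_template` and `_component_template` APIs.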
-
-[discrete]
-[[assets-restrictions-editing-assets]]
-== Editing assets managed by {fleet}
-
-{fleet}-managed integration assets should not be edited. Examples of these assets include an integration index template, the `@package` component templates, and ingest pipelines that are bundled with integrations. Any changes made to these assets will be overwritten when the integration is upgraded.
-
-[discrete]
-[[assets-restrictions-custom-component-templates]]
-== Creating custom component templates
-
-While creating a `@custom` component template for a package integration is supported, it involves risks that can prevent data from being ingested correctly. This practice can lead to broken indexing, data loss, and breaking of integration package upgrades.
-
-For example:
-
- * If the `@package` component template of an integration is changed from a "normal" data stream to `TSDB` or `LogsDB`, some of the custom settings or mappings introduced may not be compatible with these indexing modes.
 * If the type of an ECS field is overridden from, for example, `keyword` to `text`, aggregations based on that field may no longer work in built-in dashboards.
-
-A similar caution against custom index mappings is noted in <>.
-
-[discrete]
-[[assets-restrictions-custom-ingest-pipeline]]
-== Creating a custom ingest pipeline
-
-If you create a custom ingest pipeline (as documented in the <> tutorial), Elastic is not responsible for ensuring that it indexes and behaves as expected. Creating a custom pipeline involves custom processing of the incoming data, which should be done with caution and tested carefully.
-
-Refer to <> to learn more.
-
-[discrete]
-[[assets-restrictions-cloning-index-template]]
-== Cloning the index template of an integration package
-
-When you clone the index template of an integration package, this involves risk: any changes made to the original index template when it is upgraded will not be propagated to the cloned version.
That is, the structure of the new index template is effectively frozen at the moment that it is cloned. Cloning an index template of an integration package can therefore lead to broken indexing, data loss, and breaking of integration package upgrades.
-
-Additionally, cloning index templates to add or inject additional component templates cannot be tested by Elastic, so we cannot guarantee that the template will work in future releases.
-
-If you want to change the ILM policy, the number of shards, or other settings for the data streams of one or more integrations, but the changes do not need to be specific to a given namespace, it's highly recommended to use the `package@custom` component templates, as described in <> and <> of the Customize data retention policies tutorial, so as to avoid the problems mentioned above.
-
-If you want to change these settings for the data streams in one or more integrations and the changes **need to be namespace specific**, then you can do so following the steps in <> of the Customize data retention policies tutorial, but be aware of the restrictions mentioned above.
diff --git a/docs/en/ingest-management/integrations/integrations.asciidoc b/docs/en/ingest-management/integrations/integrations.asciidoc
deleted file mode 100644
index 2555d375a..000000000
--- a/docs/en/ingest-management/integrations/integrations.asciidoc
+++ /dev/null
@@ -1,63 +0,0 @@
-[[integrations]]
-= Manage {agent} integrations
-
-++++
-Manage integrations
-++++
-
-****
-Integrations are available for a wide array of popular services and platforms. To
-see the full list of available integrations, go to the *Integrations* page
-in {kib}, or visit {integrations-docs}[Elastic Integrations].
-
-{agent} integrations provide a simple, unified way to collect data from popular
-apps and services, and protect systems from security threats.
-
-Each integration comes prepackaged with assets that support all of your
-observability needs:
-
-* Data ingestion, storage, and transformation rules
-* Configuration options
-* Pre-built, custom dashboards and visualizations
-* Documentation
-****
-
-[NOTE]
-====
-Some integrations may behave differently from one space to another, and some work only in the default space. We recommend reviewing the specific integration documentation for any space-related considerations.
-====
-
-The following table shows the main actions you can perform in the *Integrations*
-app in {kib}. You can perform some of these actions from other places in {kib},
-too.
-
-[options="header"]
-|===
-| User action | Result
-
-|<>
-|Configure an integration for a specific use case and add it to an {agent} policy.
-
-|<>
-|View the integration policies created for a specific integration.
-
-|<>
-|Change settings or delete the integration policy.
-
-|<>
-|Install, uninstall, and reinstall integration assets in {kib}.
-
-|<>
-|View the {kib} assets installed for a specific integration.
-
-|<>
-|Upgrade an integration to the latest version.
-
-|===
-
-[NOTE]
-====
-The *Integrations* app in {kib} needs access to the public {package-registry} to
-discover integrations. If your deployment has network restrictions, you can
-{fleet-guide}/air-gapped.html#air-gapped-diy-epr[deploy your own self-managed {package-registry}].
-====
\ No newline at end of file
diff --git a/docs/en/ingest-management/integrations/managed-integrations-content.asciidoc b/docs/en/ingest-management/integrations/managed-integrations-content.asciidoc
deleted file mode 100644
index bc2073d10..000000000
--- a/docs/en/ingest-management/integrations/managed-integrations-content.asciidoc
+++ /dev/null
@@ -1,20 +0,0 @@
-[[managed-integrations-content]]
-= Managed integrations content
-
-++++
-Managed integrations content
-++++
-
-Most integration content installed by {fleet} isn't editable.
This content is tagged with a **Managed** badge in the {kib} UI. Managed content itself cannot be edited or deleted; however, managed visualizations, dashboards, and saved searches can be cloned.
-
-[role="screenshot"]
-image::images/system-managed.png[An image of the new managed badge.]
-
-When a managed dashboard is cloned, any linked or referenced panels become part of the clone without relying on external sources. The panels are integrated into the cloned dashboard as standalone components: entirely self-contained copies with no dependencies on the original configuration. Clones can be customized and modified without accidentally affecting the original.
-
-[NOTE]
-====
-The cloned managed content retains the managed badge, but is independent from the original.
-====
-
-For managed content tied to a specific visualization editor, such as Lens, TSVB, or Maps, the clones retain the original reference configurations. To clone the visualization, open it in the editor and make your edits. When you finish editing, you're prompted to save the edits as a new visualization. The same applies to editing any saved searches in a managed visualization.
\ No newline at end of file
diff --git a/docs/en/ingest-management/integrations/package-signatures.asciidoc b/docs/en/ingest-management/integrations/package-signatures.asciidoc
deleted file mode 100644
index 3ac7f4cc0..000000000
--- a/docs/en/ingest-management/integrations/package-signatures.asciidoc
+++ /dev/null
@@ -1,68 +0,0 @@
-[[package-signatures]]
-= Package signatures
-
-All integration packages published by Elastic have package signatures that
-prevent malicious attackers from tampering with package content. When you
-install an Elastic integration, {kib} downloads the package and verifies the
-package signature against a public key. If the package is unverified, you can
-choose to force install it.
However, it's strongly recommended that you avoid -installing unverified packages. - -IMPORTANT: By installing an unverified package, you acknowledge that you -assume any risk involved. - -To force installation of an unverified package: - -* When using the {integrations} UI, you'll be prompted to confirm that you want -to install the unverified integration. Click **Install anyway** to force -installation. - -* When using the {fleet} API, if you attempt to install an unverified package, -you'll see a 400 response code with a verification failed message. To force -installation, set the URL parameter `ignoreUnverified=true`. For more -information, refer to <>. - -After installation, unverified {integrations} are flagged on the -**Installed integrations** tab of the {integrations} UI. - -[discrete] -[[why-verify-packages]] -== Why is package verification necessary? - -Integration packages contain instructions, such as ILM policies, transforms, and -mappings, that can significantly modify the structure of your {es} indices. -Relying solely on HTTPS DNS name validation to prove provenance of the package -is not a safe practice. A determined attacker could forge a certificate and -serve up packages intended to disrupt the target. - -Installing verified packages ensures that your integration software has not been -corrupted or otherwise tampered with. - -[discrete] -[[what-does-unverified-mean]] -== What does it mean for a package to be unverified? - -Here are some situations where an integration package will fail verification -during installation: - -* The package zip file on the Elastic server has been tampered with. -* The user has been maliciously redirected to a fake Elastic package registry. -* The public Elastic key has been compromised, and Elastic has signed packages -with an updated key. - -Here are some reasons why an integration might be flagged as unverified after -installation: - -* The integration package failed verification, but was force installed. 
-* The integration package was installed before {fleet} added support for package -signature verification. - -[discrete] -[[what-if-key-changes]] -== What if the Elastic key changes in the future? - -In the unlikely event that the Elastic signing key changes in the future, any -verified integration packages will continue to show as verified until new -packages are installed or existing ones are upgraded. If this happens, you can -set the `xpack.fleet.packageVerification.gpgKeyPath` setting in the `kibana.yml` -configuration file to use the new key. diff --git a/docs/en/ingest-management/integrations/upgrade-integration.asciidoc b/docs/en/ingest-management/integrations/upgrade-integration.asciidoc deleted file mode 100644 index b149bd533..000000000 --- a/docs/en/ingest-management/integrations/upgrade-integration.asciidoc +++ /dev/null @@ -1,161 +0,0 @@ -[[upgrade-integration]] -= Upgrade an {agent} integration - -++++ -Upgrade an integration -++++ - -IMPORTANT: By default, {kib} requires an internet connection to download -integration packages from the {package-registry}. Make sure the {kib} -server can connect to `https://epr.elastic.co` on port `443`. If network -restrictions prevent {kib} from reaching the public {package-registry}, -you can use a proxy server or host your own {package-registry}. To learn -more, refer to <>. - -Elastic releases {agent} integration updates periodically. To use new features -and capabilities, upgrade the installed integration to the latest version and -optionally upgrade integration policies to use the new version. - -TIP: In larger deployments, you should test integration upgrades on a sample -{agent} before rolling out a larger upgrade initiative. - -[discrete] -[[upgrade-integration-to-latest-version]] -== Upgrade an integration to the latest version - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search -for and select the integration you want to upgrade. 
Notice there is a warning
-icon next to the version number to indicate a new version is available.
-
-. Click the *Settings* tab and notice the message about the new version.
-+
-[role="screenshot"]
-image::images/upgrade-integration.png[Settings tab under Integrations shows how to upgrade the integration]
-
-. Before upgrading the integration, decide whether to upgrade integration
-policies to the latest version, too. To use new features and capabilities,
-you'll need to upgrade existing integration policies. However, the upgrade may
-introduce changes, such as field changes, that require you to resolve conflicts.
-+
---
-* Select *Upgrade integration policies* to upgrade any eligible integration
-policies when the integration is upgraded.
-
-* To continue using the older package version, deselect
-*Upgrade integration policies*. You can still choose to
-<>
-later on.
---
-
-. Click *Upgrade to latest version*.
-+
-If you selected *Upgrade integration policies* and there are conflicts,
-<>
-and resolve the conflicts in the policy editor.
-
-. After the upgrade is complete, verify that the installed version and latest
-version match.
-
-NOTE: You must upgrade standalone agents separately. If you used {kib} to create
-and download your standalone agent policy, see <>.
-
-[discrete]
-[[upgrade-integration-policies-automatically]]
-== Keep integration policies up to date automatically
-
-Some integration packages, like System, are installed by default during {fleet} setup.
-These integrations are upgraded automatically when {fleet} detects that a new version is available.
-
-The following integrations are installed automatically when you select certain options in the {fleet} UI.
-All of them have an option to upgrade integration policies automatically, too:
-
- * {integrations-docs}/elastic_agent[Elastic Agent] - installed automatically when the default **Collect agent logs** or **Collect agent metrics** option is enabled in an {agent} policy.
- * {integrations-docs}/fleet_server[Fleet Server] - installed automatically when {fleet-server} is set up through the {fleet} UI.
 * {integrations-docs}/system[System] - installed automatically when the default **Collect system logs and metrics** option is enabled in an {agent} policy.
-
-The {integrations-docs}/endpoint[Elastic Defend] integration also has an option to upgrade integration policies automatically.
-
-Note that for the following integrations, when the integration is upgraded automatically, the integration policy is upgraded automatically as well. This behavior cannot be disabled.
-
-* {integrations-docs}/apm[Elastic APM]
-* {integrations-docs}/cloud_security_posture#cloud-security-posture-management-cspm[Cloud Security Posture Management]
-* {observability-guide}/monitor-uptime-synthetics.html[Elastic Synthetics]
-
-For integrations that support the option to auto-upgrade the integration policy, when this option is selected (the default), {fleet} automatically upgrades your policies when a new version of the integration is available.
-If there are conflicts during the upgrade, your integration policies will not be upgraded, and you'll need to
-<>.
-
-To keep integration policies up to date automatically:
-
-. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search for
-and select the integration you want to configure.
-
-. Click *Settings* and make sure
-*Keep integration policies up to date automatically* is selected.
-+
-[role="screenshot"]
-image::images/upgrade-integration-policies-automatically.png[Settings tab under Integrations shows how to keep integration policies up to date automatically]
-+
-If this option isn't available on the *Settings* tab, this feature is not
-available for the integration you're viewing.
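To audit upgrade status across many integrations at once, the list of installed packages can also be queried through the {fleet} API instead of checking each *Settings* tab. A sketch only; the endpoint shape and response fields vary across stack versions:

```console
GET kbn:/api/fleet/epm/packages
```

In the response, comparing each package's latest available `version` against its reported installed version indicates which integrations have upgrades pending.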
- - -[discrete] -[[upgrade-integration-policies-manually]] -== Upgrade integration policies manually - -If you can't upgrade integration policies when you upgrade the integration, -upgrade them manually. - -. Click the *Policies* tab and find the integration policies you want to -upgrade. -+ -[role="screenshot"] -image::images/upgrade-package-policy.png[Policies tab under Integrations shows how to upgrade the package policy] - -. Click *Upgrade* to begin the upgrade process. -+ -The upgrade will open in the policy editor. -+ -[role="screenshot"] -image::images/upgrade-policy-editor.png[Upgrade integration example in the policy editor] - -. Make any required configuration changes and, if necessary, resolve conflicts. -For more information, refer to <>. - -. Repeat this process for each policy with an out-of-date integration. - -Too many conflicts to resolve? Refer to the -<> for manual -steps. - -[discrete] -[[resolve-conflicts]] -== Resolve conflicts - -When attempting to upgrade an integration policy, it's possible that -breaking changes or conflicts exist between versions of an integration. For -example, if a new version of an integration has a required field and doesn't -specify a default value, {fleet} cannot perform the upgrade without user input. -Conflicts are also likely if an experimental package greatly restructures its -fields and configuration settings between releases. - -If {fleet} detects a conflict while automatically upgrading an integration -policy, it will not attempt to upgrade it. You'll need to: - -. <>. - -. Use the policy editor to fix any conflicts or errors. -+ -[role="screenshot"] -image::images/upgrade-resolve-conflicts.png[Resolve field conflicts in the policy editor] - -.. Under *Review field conflicts*, notice that you can click -*previous configuration* to view the raw JSON representation of the old -integration policy and compare values. This feature is useful when fields have -been deprecated or removed between releases. 
-+ -[role="screenshot"] -image::images/upgrade-view-previous-config.png[View previous configuration to resolve conflicts] - -.. In the policy editor, fix any errors and click *Upgrade integration*. diff --git a/docs/en/ingest-management/integrations/view-integration-assets.asciidoc b/docs/en/ingest-management/integrations/view-integration-assets.asciidoc deleted file mode 100644 index cb1f7fb96..000000000 --- a/docs/en/ingest-management/integrations/view-integration-assets.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[view-integration-assets]] -= View {agent} integration assets - -++++ -View integration assets -++++ - -{agent} integrations come with a number of assets, such as dashboards, saved -searches, and visualizations for analyzing data. - -To view integration assets: - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select -the integration you want to view. - -. Click the *Assets* tab to see a list of assets. If this tab does not exist, -the assets are not installed. - -If {agent} is already streaming data to {es}, you can follow the links to -view the assets in {kib}. \ No newline at end of file diff --git a/docs/en/ingest-management/integrations/view-integration-policies.asciidoc b/docs/en/ingest-management/integrations/view-integration-policies.asciidoc deleted file mode 100644 index ff4a35dec..000000000 --- a/docs/en/ingest-management/integrations/view-integration-policies.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[view-integration-policies]] -= View {agent} integration policies - -++++ -View integration policies -++++ - -An integration policy is created when you add an {integrations-docs}[integration] to an {agent} -policy. - -To view details about all the integration policies for a specific integration: - -. In {kib}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select -the integration you want to view. You can select a category to narrow your search. 
- -. Click the *Policies* tab to see the list of integration policies. - -Open the *Actions* menu to see other actions you can perform from this view. diff --git a/docs/en/ingest-management/migrate-auditbeat-to-agent.asciidoc b/docs/en/ingest-management/migrate-auditbeat-to-agent.asciidoc deleted file mode 100644 index 4d31ecde7..000000000 --- a/docs/en/ingest-management/migrate-auditbeat-to-agent.asciidoc +++ /dev/null @@ -1,131 +0,0 @@ -:osquery-docs: https://www.osquery.io/schema/5.1.0 - -[[migrate-auditbeat-to-agent]] -= Migrate from {auditbeat} to {agent} - -Before you begin, read <> to learn how to deploy -{agent} and install integrations. - -Then come back to this page to learn about the integrations available to replace -functionality provided by {auditbeat}. - -[discrete] -[[compatibility]] -== Compatibility - -The integrations that provide replacements for `auditd` and `file_integrity` -modules are only available in {stack} version 8.3 and later. - -[discrete] -[[use-integrations]] -== Replace {auditbeat} modules with {agent} integrations - -The following table describes the integrations you can use instead of -{auditbeat} modules and datasets. - -[options="header"] -|=== -| If you use... | You can use this instead... | Notes - -.2+| {auditbeat-ref}/auditbeat-module-auditd.html[Auditd] module - -| {integrations-docs}/auditd_manager[Auditd Manager] integration -| This integration is a direct replacement of the module. You can port rules and -configuration to this integration. Starting in {stack} 8.4, you can also set the -`immutable` flag in the audit configuration. - -| {integrations-docs}/auditd[Auditd Logs] integration -| Use this integration if you don't need to manage rules. It only parses logs from -the audit daemon `auditd`. 
Please note that the events created by this integration -are different than the ones created by -{integrations-docs}/auditd_manager[Auditd Manager], since the latter merges all -related messages in a single event while {integrations-docs}/auditd[Auditd Logs] -creates one event per message. - -| {auditbeat-ref}/auditbeat-module-file_integrity.html[File Integrity] module -| {integrations-docs}/fim[File Integrity Monitoring] integration -| This integration is a direct replacement of the module. It reports real-time -events, but cannot report who made the changes. If you need to track this -information, use {security-guide}/install-endpoint.html[{elastic-defend}] -instead. - -| {auditbeat-ref}/auditbeat-module-system.html[System] module -| It depends... -| There is not a single integration that collects all this information. - -| {auditbeat-ref}/auditbeat-dataset-system-host.html[System.host] dataset -| {integrations-docs}/osquery[Osquery] or {integrations-docs}/osquery_manager[Osquery Manager] integration -a| Schedule collection of information like: - -* {osquery-docs}/#system_info[system_info] for hostname, unique ID, and architecture -* {osquery-docs}/#os_version[os_version] -* {osquery-docs}/#interface_addresses[interface_addresses] for IPs and MACs - -.3+| {auditbeat-ref}/auditbeat-dataset-system-login.html[System.login] dataset - -| {security-guide}/install-endpoint.html[Endpoint] -| Report login events. - -| {integrations-docs}/osquery[Osquery] or {integrations-docs}/osquery_manager[Osquery Manager] integration -| Use the {osquery-docs}/#last[last] table for Linux and macOS. - -| {fleet} {integrations-docs}/system[system] integration -| Collect login events for Windows through the {integrations-docs}/system#security[Security event log]. - -.2+| {auditbeat-ref}/auditbeat-dataset-system-package.html[System.package] dataset - |{integrations-docs}/system_audit[System Audit] integration - a| This integration is a direct replacement of the System Package dataset. 
Starting in {stack} 8.7, you can port rules and configuration settings to this - integration. - This integration currently schedules collection of information such as: - -* {osquery-docs}/#rpm_packages[rpm_packages] -* {osquery-docs}/#deb_packages[deb_packages] -* {osquery-docs}/#homebrew_packages[homebrew_packages] - -| {integrations-docs}/osquery[Osquery] or {integrations-docs}/osquery_manager[Osquery Manager] integration -a| Schedule collection of information like: - -* {osquery-docs}/#rpm_packages[rpm_packages] -* {osquery-docs}/#deb_packages[deb_packages] -* {osquery-docs}/#homebrew_packages[homebrew_packages] -* {osquery-docs}/#apps[apps] (MacOS) -* {osquery-docs}/#programs[programs] (Windows) -* {osquery-docs}/#npm_packages[npm_packages] -* {osquery-docs}/#atom_packages[atom_packages] -* {osquery-docs}/#chocolatey_packages[chocolatey_packages] -* {osquery-docs}/#portage_packages[portage_packages] -* {osquery-docs}/#python_packages[python_packages] - -.3+| {auditbeat-ref}/auditbeat-dataset-system-process.html[System.process] dataset - -| {security-guide}/install-endpoint.html[Endpoint] -| Best replacement because out of the box it reports events for -every process in {ecs-ref}/index.html[ECS] format and has excellent -integration in {kibana-ref}/index.html[Kibana]. - -| {integrations-docs}/winlog[Custom Windows event log] and -{integrations-docs}/windows#sysmonoperational[Sysmon] integrations -| Provide process data. - -|{integrations-docs}/osquery[Osquery] or -{integrations-docs}/osquery_manager[Osquery Manager] integration -| Collect data from the {osquery-docs}/#process[process] table on some OSes -without polling. - -.2+| {auditbeat-ref}/auditbeat-dataset-system-socket.html[System.socket] dataset - -| {security-guide}/install-endpoint.html[Endpoint] -| Best replacement because it supports monitoring network connections on Linux, -Windows, and MacOS. Includes process and user metadata. 
Currently does not -do flow accounting (byte and packet counts) or domain name enrichment (but does -collect DNS queries separately). - -| {integrations-docs}/osquery[Osquery] or {integrations-docs}/osquery_manager[Osquery Manager] integration -| Monitor socket events via the {osquery-docs}/#socket_events[socket_events] table -for Linux and MacOS. - -| {auditbeat-ref}/auditbeat-dataset-system-user.html[System.user] dataset -| {integrations-docs}/osquery[Osquery] or {integrations-docs}/osquery_manager[Osquery Manager] integration -| Monitor local users via the {osquery-docs}/#user[user] table for Linux, Windows, and MacOS. - -|=== \ No newline at end of file diff --git a/docs/en/ingest-management/migrate-beats-to-agent.asciidoc b/docs/en/ingest-management/migrate-beats-to-agent.asciidoc deleted file mode 100644 index 7a5a1a833..000000000 --- a/docs/en/ingest-management/migrate-beats-to-agent.asciidoc +++ /dev/null @@ -1,479 +0,0 @@ -[[migrate-beats-to-agent]] -= Migrate from {beats} to {agent} - -Learn how to replace your existing {filebeat} and {metricbeat} deployments -with {agent}, our single agent for logs, metrics, security, and threat -prevention. - -[discrete] -[[why-migrate-to-elastic-agent]] -== Why migrate to {agent}? - -{agent} and {beats} provide similar functionality for log collection and -host monitoring, but {agent} has some distinct advantages over {beats}. - -* *Easier to deploy and manage.* Instead of deploying multiple {beats}, -you deploy a single {agent}. The {agent} downloads, configures, and manages any -underlying programs required to collect and parse your data. - -* *Easier to configure.* You no longer have to define and manage separate -configuration files for each Beat running on a host. Instead you define a single -agent policy that specifies which integration settings to use, and the {agent} -generates the configuration required by underlying programs, like {beats}. 
- -* *Central management.* Unlike {beats}, which require you to set up your own -automation strategy for upgrades and configuration management, {agent}s can be -managed from a central location in {kib} called {fleet}. In {fleet}, you can -view the status of running {agent}s, update agent policies and push them to your -hosts, and even trigger binary upgrades. - -* *Endpoint protection.* Probably the biggest advantage of using {agent} is that -it enables you to protect your endpoints from security threats. - -[discrete] -== Limitations and requirements - -There are currently some limitations and requirements to be aware of before -migrating to {agent}: - -* *No support for configuring the {beats} internal queue.* -Each Beat has an internal queue that stores events before batching and -publishing them to the output. To improve data throughput, {beats} users can set -{filebeat-ref}/configuring-internal-queue.html[configuration options] to tune -the performance of the internal queue. However, the endless fine tuning -required to configure the queue is cumbersome and not always fruitful. Instead -of expecting users to configure the internal queue, {agent} uses sensible -defaults. This means you won't be able to migrate internal queue configurations -to {agent}. - -For more information about {agent} limitations, see -<>. - -[discrete] -[[prepare-for-migration]] -== Prepare for the migration - -Before you begin: - -. Review your existing {beats} configurations and make a list of the -integrations that are required. For example, if your existing implementation -collects logs and metrics from Nginx, add Nginx to your list. - -. Make a note of any processors or custom configurations you want to migrate. -Some of these customizations may no longer be needed or possible in {agent}. - -. Decide if it's the right time to migrate to {agent}. Review the information -under <>. 
Make sure the integrations you need -are supported and Generally Available, and that the output and features you -require are also supported. - -If everything checks out, proceed to the next step. Otherwise, you might want -to continue using {beats} and possibly deploy {agent} alongside {beats} to -use features like endpoint protection. - -[discrete] -== Set up {fleet-server} (self-managed deployments only) - -To use {fleet} for central management, a <> must be -running and accessible to your hosts. - -If you're using {ecloud}, you can skip this step because {ecloud} runs a hosted -version of {fleet-server}. - -Otherwise, follow the steps for self-managed deployments described -in <> or <>, depending on your -deployment model, and then return to this page when you are done. - -[discrete] -== Deploy {agent} to your hosts to collect logs and metrics - -To begin the migration, deploy an {agent} to a host where {beats} shippers are -running. It's recommended that you set up and test the migration in a -development environment before deploying across your infrastructure. - -You can continue to run {beats} alongside {agent} until you're satisfied with -the data it's sending to {es}. - -Read <> to learn how to deploy an {agent}. To -save time, return to this page when the {agent} is deployed, healthy, and -successfully sending data. - -Here's a high-level overview to help you understand the deployment process. - -.Overview of the {agent} deployment process -***** - -During the deployment process you: - -* *Create an agent policy.* An agent policy is similar to a configuration file, -but unlike a regular configuration file, which needs to be maintained on many -different host systems, you can configure and maintain the agent policy in a -central location in {fleet} and apply it to multiple {agent}s.
- -* *Add integrations to the agent policy.* {agent} integrations provide a simple, -unified way to collect data from popular apps and services, and protect systems -from security threats. To define the agent policy, you add integrations for each -service or system you want to monitor. For example, to collect logs and metrics -from a system running Nginx, you might add the Nginx integration and the System -integration. -+ -*What happens when you add an integration to an agent policy?* The assets for the -integration, such as dashboards and ingest pipelines, get installed if they -aren't already. Plus the settings you specify for the integration, such as how -to connect to the source system or locate logs, are added as an integration -policy. -+ -For the example described earlier, you would have an agent policy that -contains two integration policies: one for collecting Nginx logs and metrics, -and another for collecting system logs and metrics. - -* *Install {agent} on the host and enroll it in the agent policy.* When you -enroll the {agent} in an agent policy, the agent gets added to {fleet}, where -you can monitor and manage the agent. - -***** - -TIP: It's best to add one integration at a time and test it before adding more -integrations to your agent policy. The System integration is a good way to -get started if it's supported on your OS. - -[discrete] -== View agent details and inspect the data streams - -After deploying an {agent} to a host, view details about the agent and inspect -the data streams it creates. To learn more about the benefits of using data streams, -refer to <>. - -. On the *Agents* tab in *{fleet}*, confirm that the {agent} status is `Healthy`. -+ -[role="screenshot"] -image::images/migration-agent-status-healthy01.png[Screen showing that agent status is Healthy] - -. Click the host name to examine the {agent} details. 
This page shows the -integrations that are currently installed, the policy the agent is enrolled in, -and information about the host machine: -+ -[role="screenshot"] -image::images/migration-agent-details01.png[Screen showing that agent status is Healthy] - -. Go back to the main {fleet} page and click the *Data streams* tab. You should -be able to see the data streams for various logs and metrics from the host. This -is out-of-the-box without any extra configuration or dashboard creation: -+ -[role="screenshot"] -image::images/migration-agent-data-streams01.png[Screen showing data streams created by the {agent}] - -. Go to *Analytics > Discover* and examine the data streams. Note that documents -indexed by {agent} match these patterns: -+ --- -* `logs-*` -* `metrics-*` --- -+ -If {beats} are installed on the host machine, the data in {es} will be -duplicated, with one set coming from {beats} and another from {agent} for the -_same_ data source. -+ -For example, filter on `filebeat-*` to see the data ingested by {filebeat}. -+ -[role="screenshot"] -image::images/migration-event-from-filebeat.png[Screen showing event from {filebeat}] -+ -Next, filter on `logs-*`. Notice that the document contains `data_stream.*` -fields that come from logs ingested by the {agent}. -+ -[role="screenshot"] -image::images/migration-event-from-agent.png[Screen showing event from {agent}] -+ -NOTE: This duplication is superfluous and will consume extra storage space on -your {es} deployment. After you've finished migrating all your configuration -settings to {agent}, you'll remove {beats} to prevent redundant messages. - - -[discrete] -== Add integrations to the agent policy - -Now that you've deployed an {agent} to your host and it's successfully sending -data to {es}, add another integration. For guidance on which integrations you -need, look at the list you created earlier when you -<>. 
- -For example, if the agent policy you created earlier includes the System -integration, but you also want to monitor Nginx: - -. From the main menu in {kib}, click *Add integrations* and add the Nginx -integration. -+ -[role="screenshot"] -image::images/migration-add-nginx-integration.png[Screen showing the Nginx integration] - -. Configure the integration, then apply it to the agent policy you used earlier. -Make sure you expand collapsed sections to see all the settings like log paths. -+ -[role="screenshot"] -image::images/migration-add-integration-policy.png[Screen showing Nginx configuration] -+ -When you save and deploy your changes, the agent policy is updated to include a -new integration policy for Nginx. All {agent}s enrolled in the agent policy get -the updated policy, and the {agent} running on your host will begin collecting -Nginx data. -+ -NOTE: Integration policy names must be globally unique across all agent -policies. - -. Go back to *{fleet} > Agents* and verify that the agent status is still -healthy. Click the host name to drill down into the agent details. From there, -you can see the agent policy and integration policies that are applied. -+ -If the agent status is not Healthy, click *Logs* to view the agent log and -troubleshoot the problem. - -. Go back to the main *{fleet}* page, and click *Data streams* to inspect the -data streams and navigate to the pre-built dashboards installed with the -integration. - -Notice again that the data is duplicated because you still have {beats} -running and sending data. - -[discrete] -== Migrate processor configurations - -Processors enable you to filter and enhance the data before it’s sent to the -output. Each processor receives an event, applies a defined action to the event, -and returns the event. If you define a list of processors, they are executed in -the order they are defined. 
Elastic provides a -{filebeat-ref}/defining-processors.html[rich set of processors] that are -supported by all {beats} and by {agent}. - -Prior to migrating from {beats}, you defined processors in the configuration -file for each Beat. After migrating to {agent}, however, the {beats} -configuration files are redundant. All configuration is policy-driven from -{fleet} (or for advanced use cases, specified in a standalone agent policy). Any -processors you defined previously in the {beats} configuration need to be added -to an integration policy; they cannot be defined in the {beats} configuration. - -IMPORTANT: Globally-defined processors are not currently supported by {agent}. -You must define processors in each integration policy where they are required. - -To add processors to an integration policy: - -. In {fleet}, open the *Agent policies* tab and click the policy name to view its integration policies. - -. Click the name of the integration policy to edit it. - -. Click the down arrow next to enabled streams, and under *Advanced options*, -add the processor definition. The processor will be applied to the data set -where it's defined. -+ -[role="screenshot"] -image::images/migration-add-processor.png[Screen showing how to add a processor to an integration policy] -+ -For example, the following processor adds geographically specific metadata to host events: -+ -[source,yaml] ----- -- add_host_metadata: - cache.ttl: 5m - geo: - name: nyc-dc1-rack1 - location: 40.7128, -74.0060 - continent_name: North America - country_iso_code: US - region_name: New York - region_iso_code: NY - city_name: New York ----- - -In {kib}, look at the data again to confirm it contains the fields you expect. - -[discrete] -== Preserve raw events - -In some cases, {beats} modules preserve the original, raw event, which consumes -more storage space, but may be a requirement for compliance or forensic use -cases. - -In {agent}, this behavior is optional and disabled by default. 
- -If you must preserve the raw event, edit the integration policy, and for each -enabled data stream, click the *Preserve original event* toggle. - -[role="screenshot"] -image::images/migration-preserve-raw-event.png[Screen showing how to preserve the original event in an integration policy] - -Do this for every data stream with a raw event you want to preserve. - -[discrete] -== Migrate custom dashboards - -Elastic integration packages provide many assets, such as pre-built dashboards, -to make it easier for you to visualize your data. In some cases, however, you -might have custom dashboards you want to migrate. - -Because {agent} uses different data streams, the fields exported by an {agent} -are slightly different from those exported by {beats}. Any custom dashboards that -you created for {beats} need to be modified or recreated to use the new fields. - -You have a couple of options for migrating custom dashboards: - -* (Recommended) Recreate the custom dashboards based on the new data streams. - -* <> -and continue using custom dashboards. - -[discrete] -[[create-index-aliases]] -=== Create index aliases to point to data streams - -You may want to continue using your custom dashboards if the dashboards -installed with an integration are not adequate. To do this, use index aliases to -feed data streams into your existing custom dashboards. - -For example, custom dashboards that point to `filebeat-` or `metricbeat-` can be -aliased to use the equivalent data streams, `logs-` and `metrics-`. - -To use aliases: - -. Add a `filebeat` alias to the `logs-` data stream. For example: -+ -[source,json] ----- -POST _aliases -{ - "actions": [ - { - "add": { - "index": "logs-*", - "alias": "filebeat-" - } - } - ] -} ----- - -. Add a `metricbeat` alias to the `metrics-` data stream.
-+ -[source,json] ----- -POST _aliases -{ - "actions": [ - { - "add": { - "index": "metrics-*", - "alias": "metricbeat-" - } - } - ] -} ----- - -IMPORTANT: These aliases must be added to both the index template and existing -indices. - -Note that custom dashboards will show duplicated data until you remove {beats} -from your hosts. - -For more information, see the {ref}/aliases.html[Aliases documentation]. - -[discrete] -== Migrate index lifecycle policies - -{ilm-cap} ({ilm}) policies in {es} enable you to manage indices -according to your performance, resiliency, and retention requirements. To learn -more about {ilm}, refer to the -{ref}/index-lifecycle-management.html[{ilm} documentation]. - -{ilm} is configured by default in {beats} (version 7.0 and later) and in {agent} -(all versions). To view the index lifecycle policies defined in {es}, go to -*Management > Index Lifecycle Policies*. - -[role="screenshot"] -image::images/migration-index-lifecycle-policies.png[Screen showing index lifecycle policies] - -If you used {ilm} with {beats}, you'll see index lifecycle policies like -*filebeat* and *metricbeat* in the list. After migrating to {agent}, you'll see -policies named *logs* and *metrics*, which encapsulate the {ilm} policies for all -`logs-*` and `metrics-*` index templates. - -When you migrate from {beats} to {agent}, you have a couple of options for -migrating index policy settings: - -* *Modify the newly created index lifecycle policies (recommended).* As -mentioned earlier, {ilm} is enabled by default when the {agent} is installed. -Index lifecycle policies are created and added to the index templates for -data streams created by integrations. -+ -If you have existing index lifecycle policies for {beats}, it's highly -recommended that you modify the lifecycle policies for {agent} to match your -previous policy. To do this: -+ --- -. 
In {kib}, go to *{stack-manage-app} > Index Lifecycle Policies* and search for a -{beats} policy, for example, *filebeat*. Under *Linked indices*, notice you can -view indices linked to the policy. Click the policy name to see the settings. - -. Click the *logs* policy and, if necessary, edit the settings to match the old -policy. - -. Under *Index Lifecycle Policies*, search for another {beats} policy, for -example, *metricbeat*. - -. Click the *metrics* policy and edit the settings to match the old policy. --- -+ -Optionally delete the {beats} index lifecycle policies when they are no longer -used by an index. - -* *Keep the {beats} policy and apply it to the index templates created for data -streams.* To preserve an existing policy, modify it, as needed, and apply it to -all the index templates created for data streams: -+ --- -. Under *Index Lifecycle Policies*, find the {beats} policy, for example, -*filebeat*. - -. In the *Actions* column, click the *Add policy to index template* icon. - -. Under *Index template*, choose a data stream index template, then add the -policy. - -. Repeat this procedure, as required, to apply the policy to other data stream -index templates. --- - -.What if you're not using {ilm} with {beats}? -**** -You can begin to use {ilm} now with {agent}. Under *Index lifecycle policies*, -click the policy name and edit the settings to meet your requirements. -**** - -[discrete] -== Remove {beats} from your host - -Any installed {beats} shippers will continue to work until you remove them. This -allows you to roll out the migration in stages. You will continue to see -duplicated data until you remove {beats}. - -When you're satisfied with your {agent} deployment, remove {beats} from your -hosts. All the data collected before installing the {agent} will still be -available in {es} until you delete the data or it's removed according to the -data retention policies you've specified for {ilm}. - -To remove {beats} from your host: - -. 
Stop the service by using the correct command for your system. - -. (Optional) Back up your {beats} configuration files in case you need to refer -to them in the future. - -. Delete the {beats} installation directory. If necessary, stop any orphan -processes that are running after you stopped the service. - -. If you added firewall rules to allow {beats} to communicate on your network, -remove them. - -. After you've removed all {beats}, revoke any API keys or remove privileges for -any {beats} users created to send data to {es}. diff --git a/docs/en/ingest-management/override-policy-settings.asciidoc b/docs/en/ingest-management/override-policy-settings.asciidoc deleted file mode 100644 index 1b050008b..000000000 --- a/docs/en/ingest-management/override-policy-settings.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -[[enable-custom-policy-settings]] -= Enable custom settings in an agent policy - -In certain cases it can be useful to enable custom settings that are not available in {fleet}, and that override the default behavior for {agent}. - -WARNING: Use these custom settings with caution as they are intended for special cases. We do not test all possible combinations of settings which will be passed down to the components of {agent}, so it is possible that certain custom configurations can result in breakages. - -* <> - -[discrete] -[[configure-agent-download-timeout]] -== Configure the agent download timeout - -You can configure the amount of time that {agent} waits for an upgrade package download to complete. This is useful in the case of a slow or intermittent network connection.
- -[source,shell] --- -PUT kbn:/api/fleet/agent_policies/ -{ - "name": "Test policy", - "namespace": "default", - "overrides": { - "agent": { - "download": { - "timeout": "120s" - } - } - } -} --- diff --git a/docs/en/ingest-management/overview.asciidoc b/docs/en/ingest-management/overview.asciidoc deleted file mode 100644 index 8967c131d..000000000 --- a/docs/en/ingest-management/overview.asciidoc +++ /dev/null @@ -1,181 +0,0 @@ -[[fleet-overview]] -= {fleet} and {agent} overview - -[discrete] -[[elastic-agent]] -== {agent} - -{agent} is a single, unified way to add monitoring for logs, metrics, and other -types of data to a host. It can also protect hosts from security threats, query -data from operating systems, forward data from remote services or hardware, and -more. A single agent makes it easier and faster to deploy monitoring across your -infrastructure. Each agent has a single policy you can update to add -integrations for new data sources, security protections, and more. - -As the following diagram illustrates, {agent} can monitor the host where it's -deployed, and it can collect and forward data from remote services and hardware -where direct deployment is not possible. - -image::images/agent-architecture.png[Image showing {agent} collecting data from local host and remote services] - -To learn about installation options, refer to <>. - -IMPORTANT: Using {fleet} and {agent} with link:{serverless-docs}[{serverless-full}]? Please note these <>. - -TIP: Looking for a general guide that explores all of your options for ingesting data? Check out {cloud}/ec-cloud-ingest-data.html[Adding data to Elasticsearch]. - -[discrete] -[[unified-integrations]] -== {integrations} - -{integrations-docs}[Elastic integrations] provide an easy way to connect Elastic to external services and systems, and quickly get insights or take action. 
-They can collect new sources of data, and they often ship -with out-of-the-box assets like dashboards, visualizations, and pipelines to -extract structured fields out of logs and events. This makes it easier to get insights -within seconds. Integrations are available for popular services and platforms -like Nginx or AWS, as well as many generic input types like log files. - -{kib} provides a web-based UI to add and manage integrations. You can browse a -unified view of available integrations that shows both {agent} and {beats} -integrations. - -[role="screenshot"] -image::images/integrations.png[Integrations page] - -[discrete] -[[configuring-integrations]] -== {agent} policies - -Agent policies specify which integrations you want to run and on which hosts. -You can apply an {agent} policy to multiple -agents, making it even easier to manage configuration at scale. - -[role="screenshot"] -image::images/add-integration.png[Add integration page] - -When you add an integration, you configure inputs for logs and metrics, such as the path to your Nginx access -logs. When you're done, you save the integration to an {agent} -policy. The next time enrolled agents check in, they receive the update. -Having the policies automatically deployed is more convenient -than doing it yourself by using SSH, Ansible playbooks, or some other tool. - -For more information, refer to <>. - -If you prefer infrastructure as code, you may use YAML files and APIs. -{fleet} has an API-first design. Anything you can do in the UI, you -can also do using the {fleet-guide}/fleet-api-docs.html[`API`]. -This makes it easy to automate and integrate with other systems. - -[discrete] -[[package-registry-intro]] -== {package-registry} - -The {package-registry} is an online package hosting service for the {agent} -integrations available in {kib}. 
- -{kib} connects to the {package-registry} at `epr.elastic.co` using the Elastic -Package Manager, downloads the latest integration package, and stores its assets -in {es}. This process typically requires an internet connection because -integrations are updated and released periodically. You can find more information about running the {package-registry} in air-gapped -environments in the section about {fleet-guide}/air-gapped.html[Air-gapped environments]. - -[discrete] -[[artifact-registry-intro]] -== Elastic Artifact Registry - -{fleet} and {agent} require access to the public Elastic Artifact Registry. The {agent} running on any of your internal hosts should have access to `artifacts.elastic.co` in order to perform self-upgrades and install certain components that are needed for some of the data integrations. - -Additionally, access to `artifacts.security.elastic.co` is needed for {agent} updates and security artifacts when using {elastic-defend}. - -You can find more information about running the above-mentioned resources in air-gapped -environments in the section about {fleet-guide}/air-gapped.html[Air-gapped environments]. - -[discrete] -[[central-management]] -== Central management in {fleet} - -{fleet} provides a web-based UI in {kib} for centrally managing {agent}s and -their policies. - -You can see the state of all your {agent}s in {fleet}. On the **Agents** page, -you can see which agents are healthy or unhealthy, and the last time they -checked in. You can also see the version of the {agent} binary and policy. - -[role="screenshot"] -image::images/kibana-fleet-agents.png[Agents page] - -{fleet} serves as the communication channel back to the {agent}s. Agents check -in for the latest updates on a regular basis. You can have any number of agents -enrolled into each agent policy, which allows you to scale up to -thousands of hosts. - -When you make a change to an agent policy, all the agents receive the update -during their next check-in.
You no longer have to distribute policy updates -yourself. - -When you're ready to upgrade your {agent} binaries or integrations, you can -initiate upgrades in {fleet}, and the {agent}s running on your hosts will -upgrade automatically. - -[discrete] -[[selective-agent-management]] -=== Roll out changes to many {agent}s quickly - -Some subscription levels support bulk select operations, including: - -* Selective binary updates -* Selective agent policy reassignment -* Selective agent unenrollment - -This capability enables you to apply changes and trigger updates across many -{agent}s so you can roll out changes quickly across your organization. - -For more information, refer to {subscriptions}[{stack} subscriptions]. - -[discrete] -[[fleet-server-intro]] -== {fleet-server} - -{fleet-server} is the mechanism to connect {agent}s to {fleet}. It allows for -a scalable infrastructure and is supported in {ecloud} and self-managed clusters. -{fleet-server} is a separate process that communicates with the deployed {agent}s. -It can be started from any available x64 architecture {agent} artifact. - -For more information, refer to <>. - -.{fleet-server} with {serverless-full} -**** -On-premises {fleet-server} is not currently available for use with -link:{serverless-docs}[{serverless-full}] projects. In a {serverless-short} -environment we recommend using {fleet-server} on {ecloud}. -**** - -[discrete] -[[fleet-communication-layer]] -== {es} as the communication layer - -All communication between the {fleet} UI and {fleet-server} happens through -{es}. {fleet} writes policies, actions, and any changes to the `fleet-*` -indices in {es}. Each {fleet-server} monitors the indices, picks up changes, and -ships them to the {agent}s. To communicate to {fleet} about the status of the -{agent}s and the policy rollout, the {fleet-server}s write updates to the -`fleet-*` indices. 
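Because anything you can do in the {fleet} UI can also be done through the API, the agent state that {fleet-server} reports back by way of these indices can be inspected programmatically as well. As a minimal sketch using the {fleet} agents endpoint (the `kuery` filter value here is illustrative, not prescriptive):

[source,shell]
--
GET kbn:/api/fleet/agents?kuery=status:online
--

The response lists each matching agent along with its enrolled policy, status, and last check-in time.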
- -[discrete] -[[agent-self-protection]] -== {agent} self-protection - -On macOS and Windows, when the {elastic-defend} integration is added to the -agent policy, {elastic-endpoint} can prevent malware from executing on -the host. For more information, refer to -{security-guide}/es-overview.html#self-protection[{elastic-endpoint} self-protection]. - -[discrete] -[[data-streams-intro]] -== Data streams make index management easier - -The data collected by {agent} is stored in indices that are more granular than -you'd get by default with the {beats} shippers or APM Server. This gives you more visibility into the -sources of data volume, and control over lifecycle management policies and index -permissions. These indices are called <>. - diff --git a/docs/en/ingest-management/processors/indexers-and-matchers.asciidoc b/docs/en/ingest-management/processors/indexers-and-matchers.asciidoc deleted file mode 100644 index 735e02236..000000000 --- a/docs/en/ingest-management/processors/indexers-and-matchers.asciidoc +++ /dev/null @@ -1,129 +0,0 @@ -[discrete] -[[kubernetes-indexers-and-matchers]] -== Indexers and matchers - -The `add_kubernetes_metadata` processor has two basic building blocks: - -* Indexers -* Matchers - -[discrete] -=== Indexers - -Indexers use Pod metadata to create unique identifiers for each one of the Pods. - -Available indexers are: - -`container`:: Identifies the Pod metadata using the IDs of its containers. -`ip_port`:: Identifies the Pod metadata using combinations of its IP and its exposed ports. -When using this indexer, metadata is identified using the combination of `ip:port` for each of the ports exposed by all containers of the pod. The `ip` is the IP of the pod. -`pod_name`:: Identifies the Pod metadata using its namespace and its name as -`namespace/pod_name`. -`pod_uid`:: Identifies the Pod metadata using the UID of the Pod. 
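Typically only one indexer is needed for a given log source. As a minimal sketch (not a complete processor configuration), the default indexers can be disabled and a single indexer, such as `pod_name`, enabled explicitly:

[source,yaml]
----
- add_kubernetes_metadata:
    default_indexers.enabled: false
    indexers:
      - pod_name:
----

A matcher, described next, is still required to build the lookup keys that are compared against the identifiers this indexer creates.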
- -[discrete] -=== Matchers - -Matchers are used to construct the lookup keys that match the identifiers -created by indexers. - -Available matchers are: - -`field_format`:: Looks up Pod metadata using a key created with a string format -that can include event fields. -+ -This matcher has an option `format` to define the string format. This string -format can contain placeholders for any field in the event. -+ -For example, the following configuration uses the `ip_port` indexer to identify -the Pod metadata by combinations of the Pod IP and its exposed ports, and uses -the destination IP and port in events as match keys: -+ -[source,yaml] ----- -- add_kubernetes_metadata: - ... - default_indexers.enabled: false - default_matchers.enabled: false - indexers: - - ip_port: - matchers: - - field_format: - format: '%{[destination.ip]}:%{[destination.port]}' ----- - -`fields`:: Looks up Pod metadata using the value of specific fields as the key. -When multiple fields are defined, the first one included in the event is used. -+ -This matcher has an option `lookup_fields` to define the fields whose value will -be used for lookup. -+ -For example, the following configuration uses the `ip_port` indexer to identify -Pods, and defines a matcher that uses the destination IP or the server IP for the -lookup, whichever it finds first in the event: -+ -[source,yaml] ----- -- add_kubernetes_metadata: - ... - default_indexers.enabled: false - default_matchers.enabled: false - indexers: - - ip_port: - matchers: - - fields: - lookup_fields: ['destination.ip', 'server.ip'] ----- - -`logs_path`:: Looks up Pod metadata using identifiers extracted from the log -path stored in the `log.file.path` field. -+ --- -This matcher has the following configuration settings: - -`logs_path`:: (Optional) Base path of container logs.
If not specified, it uses -the default logs path of the platform where Agent is running: -`/var/lib/docker/containers/` on Linux, `C:\\ProgramData\\Docker\\containers` on Windows. -If you change the default value, the container ID must follow directly after `logs_path` and a -`/`, where `container_id` is a 64-character-long -hexadecimal string. - -`resource_type`:: (Optional) Type of the resource to obtain the ID of. -Valid `resource_type` values: -* `pod`: to make the lookup based on the Pod UID. When `resource_type` is set to -`pod`, `logs_path` must be set as well. Supported paths in this case: -** `/var/lib/kubelet/pods/` used to read logs from volumes mounted into the Pod; -those logs end up under `/var/lib/kubelet/pods//volumes//...` -To use `/var/lib/kubelet/pods/` as a `logs_path`, `/var/lib/kubelet/pods` must be -mounted into the filebeat Pods. -** `/var/log/pods/` -Note: when using `resource_type: 'pod'`, logs are enriched only with Pod -metadata (Pod ID, Pod name, and so on), not container metadata. -* `container`: to make the lookup based on the container ID, `logs_path` must -be set to `/var/log/containers/`. -`resource_type` defaults to `container`. -- -+ -To use the `logs_path` matcher, the agent's input path must be a subdirectory -of the directory defined in the `logs_path` configuration setting. -+ -The default configuration can look up the metadata using the container ID -when the logs are collected from the default Docker logs path -(`/var/lib/docker/containers//...` on Linux). -+ -For example, the following configuration would use the Pod UID when the logs are -collected from `/var/lib/kubelet/pods//...`. -+ -[source,yaml] ----- -- add_kubernetes_metadata: - ...
-    default_indexers.enabled: false
-    default_matchers.enabled: false
-    indexers:
-      - pod_uid:
-    matchers:
-      - logs_path:
-          logs_path: '/var/lib/kubelet/pods'
-          resource_type: 'pod'
----
diff --git a/docs/en/ingest-management/processors/processor-add_cloud_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_cloud_metadata.asciidoc
deleted file mode 100644
index 3f42bec73..000000000
--- a/docs/en/ingest-management/processors/processor-add_cloud_metadata.asciidoc
+++ /dev/null
@@ -1,225 +0,0 @@
-[[add-cloud-metadata-processor]]
-= Add cloud metadata
-
-++++
-add_cloud_metadata
-++++
-
-TIP: Inputs that collect logs and metrics use this processor by default, so you
-do not need to configure it explicitly.
-
-The `add_cloud_metadata` processor enriches each event with instance metadata
-from the machine's hosting provider. At startup the processor queries a list of
-hosting providers and caches the instance metadata.
-
-The following providers are supported:
-
-* Amazon Web Services (AWS)
-* Digital Ocean
-* Google Compute Engine (GCE)
-* https://www.qcloud.com/?lang=en[Tencent Cloud] (QCloud)
-* Alibaba Cloud (ECS)
-* Huawei Cloud (ECS)
-* Azure Virtual Machine
-* Openstack Nova
-
-The Alibaba Cloud and Tencent providers are disabled by default, because
-they require access to a remote host. Use the `providers` setting to select
-the list of providers to query.
-
-[discrete]
-== Example
-
-This configuration enables the processor:
-
-[source,yaml]
----
-  - add_cloud_metadata: ~
----
-
-
-The metadata that is added to events varies by hosting provider. For examples,
-refer to <<provider-specific-examples>>.
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `timeout`
-| No
-| `3s`
-| Maximum amount of time to wait for a successful response when detecting the hosting provider. If a timeout occurs, no instance metadata is added to the events.
This makes it possible to enable this processor for all your deployments (in the cloud or on-premise). - -| `providers` -| No -| -a| List of provider names to use. If `providers` is not configured, -all providers that do not access a remote endpoint are enabled by default. -The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, -by setting it to a comma-separated list of provider names. - -The list of supported provider names includes: - -* `alibaba` or `ecs` for the Alibaba Cloud provider (disabled by default). -* `azure` for Azure Virtual Machine (enabled by default). -* `digitalocean` for Digital Ocean (enabled by default). -* `aws` or `ec2` for Amazon Web Services (enabled by default). -* `gcp` for Google Compute Engine (enabled by default). -* `openstack` or `nova` for Openstack Nova (enabled by default). -* `openstack-ssl` or `nova-ssl` for Openstack Nova when SSL metadata APIs are enabled (enabled by default). -* `tencent` or `qcloud` for Tencent Cloud (disabled by default). -* `huawei` for Huawei Cloud (enabled by default). - -| `overwrite` -| No -| `false` -| Whether to overwrite existing cloud fields. If `true`, the processor -overwrites existing `cloud.*` fields. - -|=== - -The `add_cloud_metadata` processor supports SSL options to configure the http -client used to query cloud metadata. - -For more information, refer to <>, specifically -the settings under <> and <>. - -[discrete] -[[provider-specific-examples]] -== Provider-specific metadata examples - -The following -sections show examples for each of the supported providers. 
- -[discrete] -=== AWS - -[source,json] ----- -{ - "cloud": { - "account.id": "123456789012", - "availability_zone": "us-east-1c", - "instance.id": "i-4e123456", - "machine.type": "t2.medium", - "image.id": "ami-abcd1234", - "provider": "aws", - "region": "us-east-1" - } -} ----- - -[discrete] -=== Digital Ocean - -[source,json] ----- -{ - "cloud": { - "instance.id": "1234567", - "provider": "digitalocean", - "region": "nyc2" - } -} ----- - -[discrete] -=== GCP - -[source,json] ----- -{ - "cloud": { - "availability_zone": "us-east1-b", - "instance.id": "1234556778987654321", - "machine.type": "f1-micro", - "project.id": "my-dev", - "provider": "gcp" - } -} ----- - -[discrete] -=== Tencent Cloud - -[source,json] ----- -{ - "cloud": { - "availability_zone": "gz-azone2", - "instance.id": "ins-qcloudv5", - "provider": "qcloud", - "region": "china-south-gz" - } -} ----- - -[discrete] -=== Huawei Cloud - -[source,json] ----- -{ - "cloud": { - "availability_zone": "cn-east-2b", - "instance.id": "37da9890-8289-4c58-ba34-a8271c4a8216", - "provider": "huawei", - "region": "cn-east-2" - } -} ----- - -[discrete] -=== Alibaba Cloud - -This metadata is only available when VPC is selected as the network type of the -ECS instance. 
- -[source,json] ----- -{ - "cloud": { - "availability_zone": "cn-shenzhen", - "instance.id": "i-wz9g2hqiikg0aliyun2b", - "provider": "ecs", - "region": "cn-shenzhen-a" - } -} ----- - -[discrete] -=== Azure Virtual Machine - -[source,json] ----- -{ - "cloud": { - "provider": "azure", - "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e", - "instance.name": "test-az-vm", - "machine.type": "Standard_D3_v2", - "region": "eastus2" - } -} ----- - -[discrete] -=== Openstack Nova - -[source,json] ----- -{ - "cloud": { - "instance.name": "test-998d932195.mycloud.tld", - "instance.id": "i-00011a84", - "availability_zone": "xxxx-az-c", - "provider": "openstack", - "machine.type": "m2.large" - } -} ----- diff --git a/docs/en/ingest-management/processors/processor-add_cloudfoundry_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_cloudfoundry_metadata.asciidoc deleted file mode 100644 index 730088758..000000000 --- a/docs/en/ingest-management/processors/processor-add_cloudfoundry_metadata.asciidoc +++ /dev/null @@ -1,109 +0,0 @@ -[[add_cloudfoundry_metadata-processor]] -= Add Cloud Foundry metadata - -++++ -add_cloudfoundry_metadata -++++ - -The `add_cloudfoundry_metadata` processor annotates each event with relevant -metadata from Cloud Foundry applications. - -For events to be annotated with Cloud Foundry metadata, they must have a field -called `cloudfoundry.app.id` that contains a reference to a Cloud Foundry -application, and the configured Cloud Foundry client must be able to retrieve -information for the application. - -Each event is annotated with: - -* Application Name -* Space ID -* Space Name -* Organization ID -* Organization Name - -NOTE: Pivotal Application Service and Tanzu Application Service include this -metadata in all events from the firehose since version 2.8. In these cases the -metadata in the events is used, and `add_cloudfoundry_metadata` processor -doesn't modify these fields. 
- -For efficient annotation, application metadata retrieved by the Cloud Foundry -client is stored in a persistent cache on the filesystem. This is done so the -metadata can persist across restarts of {agent} and its underlying programs. For -control over this cache, use the `cache_duration` and `cache_retry_delay` -settings. - -[discrete] -== Example - -[source,yaml] -------------------------------------------------------------------------------- - - add_cloudfoundry_metadata: - api_address: https://api.dev.cfdev.sh - client_id: uaa-filebeat - client_secret: verysecret - ssl: - verification_mode: none - # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate. - #ssl: - # certificate_authorities: ["/etc/pki/cf/ca.pem"] - # certificate: "/etc/pki/cf/cert.pem" - # key: "/etc/pki/cf/cert.key" -------------------------------------------------------------------------------- - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `api_address` -| No -| `http://api.bosh-lite.com` -| URL of the Cloud Foundry API. - -| `doppler_address` -| No -| `${api_address}/v2/info` -| URL of the Cloud Foundry Doppler Websocket. - -| `uaa_address` -| No -| `${api_address}/v2/info` -| URL of the Cloud Foundry UAA API. - -| `rlp_address` -| No -| `${api_address}/v2/info` -| URL of the Cloud Foundry RLP Gateway. - -| `client_id` -| Yes -| -| Client ID to authenticate with Cloud Foundry. - -|`client_secret` -| Yes -| -| Client Secret to authenticate with Cloud Foundry. - -|`cache_duration` -| No -| `120s` -| Maximum amount of time to cache an application's metadata. - -|`cache_retry_delay` -| No -| `20s` -| Time to wait before trying to obtain an application's metadata again in case of error. - -|`ssl` -| No -| -| SSL configuration to use when connecting to Cloud Foundry. 
For a list of -available settings, refer to <>, specifically -the settings under <> and <>. - -|=== diff --git a/docs/en/ingest-management/processors/processor-add_docker_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_docker_metadata.asciidoc deleted file mode 100644 index c60407fc5..000000000 --- a/docs/en/ingest-management/processors/processor-add_docker_metadata.asciidoc +++ /dev/null @@ -1,122 +0,0 @@ -[[add_docker_metadata-processor]] -= Add Docker metadata - -++++ -add_docker_metadata -++++ - -TIP: Inputs that collect logs and metrics use this processor by default, so you -do not need to configure it explicitly. - -The `add_docker_metadata` processor annotates each event with relevant metadata -from Docker containers. At startup the processor detects a Docker environment -and caches the metadata. - -For events to be annotated with Docker metadata, the configuration must be -valid, and the processor must be able to reach the Docker API. - -Each event is annotated with: - -* Container ID -* Name -* Image -* Labels - -[NOTE] -===== -When running {agent} in a container, you need to provide access to Docker’s unix -socket in order for the `add_docker_metadata` processor to work. You can do this -by mounting the socket inside the container. For example: - -`docker run -v /var/run/docker.sock:/var/run/docker.sock ...` - -To avoid privilege issues, you may also need to add `--user=root` to the `docker -run` flags. Because the user must be part of the Docker group in order to access -`/var/run/docker.sock`, root access is required if {agent} is running as -non-root inside the container. - -If the Docker daemon is restarted, the mounted socket will become invalid, and metadata -will stop working. 
When this happens, you can do one of the following:
-
-* Restart {agent} every time Docker is restarted
-* Mount the entire `/var/run` directory (instead of just the socket)
-=====
-
-[discrete]
-== Example
-
-[source,yaml]
-------------------------------------------------------------------------------
-  - add_docker_metadata:
-      host: "unix:///var/run/docker.sock"
-      #match_fields: ["system.process.cgroup.id"]
-      #match_pids: ["process.pid", "process.parent.pid"]
-      #match_source: true
-      #match_source_index: 4
-      #match_short_id: true
-      #cleanup_timeout: 60
-      #labels.dedot: false
-      # To connect to Docker over TLS you must specify a client and CA certificate.
-      #ssl:
-      #  certificate_authority: "/etc/pki/root/ca.pem"
-      #  certificate: "/etc/pki/client/cert.pem"
-      #  key: "/etc/pki/client/cert.key"
-------------------------------------------------------------------------------
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `host`
-| No
-| `unix:///var/run/docker.sock`
-| Docker socket (UNIX or TCP socket).
-
-| `ssl`
-| No
-|
-| SSL configuration to use when connecting to the Docker socket. For a list of
-available settings, refer to <>, specifically
-the settings under <> and <>.
-
-| `match_fields`
-| No
-|
-| List of fields to match a container ID. At least one of the fields must hold a container ID to get the event enriched.
-
-| `match_pids`
-| No
-| `["process.pid", "process.parent.pid"]`
-| List of fields that contain process IDs. If the process is running in Docker, the event will be enriched.
-
-| `match_source`
-| No
-| `true`
-| Whether to match the container ID from a log path present in the `log.file.path` field.
-
-| `match_short_id`
-| No
-| `false`
-| Whether to match the container short ID from a log path present in the `log.file.path` field.
This setting allows you to match directory names that have the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`.
-
-| `match_source_index`
-| No
-| `4`
-| Index in the source path split by a forward slash (`/`) to find the container ID. For example, the default, `4`, matches the container ID in `/var/lib/docker/containers/<container_id>/*.log`.
-
-| `cleanup_timeout`
-| No
-| `60s`
-| Time of inactivity before container metadata is cleaned up and forgotten.
-
-| `labels.dedot`
-| No
-| `false`
-| Whether to replace dots (`.`) in labels with underscores (`_`).
-
-|===
diff --git a/docs/en/ingest-management/processors/processor-add_fields.asciidoc b/docs/en/ingest-management/processors/processor-add_fields.asciidoc
deleted file mode 100644
index 84ab6a883..000000000
--- a/docs/en/ingest-management/processors/processor-add_fields.asciidoc
+++ /dev/null
@@ -1,73 +0,0 @@
-[[add_fields-processor]]
-= Add fields
-
-++++
-add_fields
-++++
-
-The `add_fields` processor adds fields to the event. Fields can be scalar
-values, arrays, dictionaries, or any nested combination of these. The
-`add_fields` processor overwrites the target field if it already exists. By
-default, the fields that you specify are grouped under the `fields`
-sub-dictionary in the event. To group the fields under a different
-sub-dictionary, use the `target` setting. To store the fields as top-level
-fields, set `target: ''`.
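-
-For example, to store fields at the top level of the event rather than under
-`fields`, leave `target` empty (the field name and value here are illustrative):
-
-[source,yaml]
----
-  - add_fields:
-      target: ''
-      fields:
-        env: staging
----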
- -[discrete] -== Examples - -This configuration: - -[source,yaml] ------------------------------------------------------------------------------- - - add_fields: - target: project - fields: - name: myproject - id: '574734885120952459' ------------------------------------------------------------------------------- - -Adds these fields to any event: - -[source,json] -------------------------------------------------------------------------------- -{ - "project": { - "name": "myproject", - "id": "574734885120952459" - } -} -------------------------------------------------------------------------------- - -This configuration alters the event metadata: - -[source,yaml] ------------------------------------------------------------------------------- - - add_fields: - target: '@metadata' - fields: - op_type: "index" ------------------------------------------------------------------------------- - -When the event is ingested by {es}, the document will have `op_type: "index"` -set as a metadata field. - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `target` -| No -| `fields` -| Sub-dictionary to put all fields into. Set `target` to `@metadata` to add values to the event metadata instead of fields. - -| `fields` -| Yes -| -| Fields to be added. -|=== diff --git a/docs/en/ingest-management/processors/processor-add_host_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_host_metadata.asciidoc deleted file mode 100644 index 34b6c889d..000000000 --- a/docs/en/ingest-management/processors/processor-add_host_metadata.asciidoc +++ /dev/null @@ -1,136 +0,0 @@ -[[add_host_metadata-processor]] -= Add Host metadata - -++++ -add_host_metadata -++++ - -TIP: Inputs that collect logs and metrics use this processor by default, so you -do not need to configure it explicitly. 
-
-The `add_host_metadata` processor annotates each event with relevant metadata
-from the host machine.
-
-NOTE: If you are using {agent} to monitor external systems, use the
-<> processor instead of
-`add_host_metadata`.
-
-[discrete]
-== Example
-
-[source,yaml]
----
-  - add_host_metadata:
-      cache.ttl: 5m
-      geo:
-        name: nyc-dc1-rack1
-        location: 40.7128, -74.0060
-        continent_name: North America
-        country_iso_code: US
-        region_name: New York
-        region_iso_code: NY
-        city_name: New York
----
-
-The fields added to the event look like this:

-[source,json]
----
-{
-   "host":{
-      "architecture":"x86_64",
-      "name":"example-host",
-      "id":"",
-      "os":{
-         "family":"darwin",
-         "type":"macos",
-         "build":"16G1212",
-         "platform":"darwin",
-         "version":"10.12.6",
-         "kernel":"16.7.0",
-         "name":"Mac OS X"
-      },
-      "ip": ["192.168.0.1", "10.0.0.1"],
-      "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"],
-      "geo": {
-         "continent_name": "North America",
-         "country_iso_code": "US",
-         "region_name": "New York",
-         "region_iso_code": "NY",
-         "city_name": "New York",
-         "name": "nyc-dc1-rack1",
-         "location": "40.7128, -74.0060"
-      }
-   }
-}
----
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-IMPORTANT: If `host.*` fields already exist in the event, they are overwritten by
-default unless you set `replace_fields` to `false` in the processor
-configuration.
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `netinfo.enabled`
-| No
-| `true`
-| Whether to include IP addresses and MAC addresses as fields `host.ip` and `host.mac`.
-
-| `cache.ttl`
-| No
-| `5m`
-| Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether.
-
-| `geo.name`
-| No
-|
-| User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar.
-
-| `geo.location`
-| No
-|
-| Longitude and latitude in comma-separated format.
- -| `geo.continent_name` -| No -| -| Name of the continent. - -| `geo.country_name` -| No -| -| Name of the country. - -| `geo.region_name` -| No -| -| Name of the region. - -| `geo.city_name` -| No -| -| Name of the city. - -| `geo.country_iso_code` -| No -| -| ISO country code. - -| `geo.region_iso_code` -| No -| -| ISO region code. - -| `replace_fields` -| No -| `true` -| Whether to replace original host fields from the event. If set `false`, original host fields from the event are not replaced by host fields from `add_host_metadata`. - -|=== diff --git a/docs/en/ingest-management/processors/processor-add_id.asciidoc b/docs/en/ingest-management/processors/processor-add_id.asciidoc deleted file mode 100644 index 6e163fe4b..000000000 --- a/docs/en/ingest-management/processors/processor-add_id.asciidoc +++ /dev/null @@ -1,36 +0,0 @@ -[[add_id-processor]] -= Generate an ID for an event - -++++ -add_id -++++ - -The `add_id` processor generates a unique ID for an event. - -[discrete] -== Example - -[source,yaml] ------------------------------------------------------ - - add_id: ~ ------------------------------------------------------ - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `target_field` -| No -| `@metadata._id` -| Field where the generated ID will be stored. - -| `type` -| No -| `elasticsearch` -| Type of ID to generate. Currently only `elasticsearch` is supported. The `elasticsearch` type uses the same algorithm that {es} uses to auto-generate document IDs. 
-|=== diff --git a/docs/en/ingest-management/processors/processor-add_kubernetes_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_kubernetes_metadata.asciidoc deleted file mode 100644 index b5d2a6f43..000000000 --- a/docs/en/ingest-management/processors/processor-add_kubernetes_metadata.asciidoc +++ /dev/null @@ -1,219 +0,0 @@ -[[add_kubernetes_metadata-processor]] -= Add Kubernetes metadata - -++++ -add_kubernetes_metadata -++++ - -TIP: Inputs that collect logs and metrics use this processor by default, so you -do not need to configure it explicitly. - -The `add_kubernetes_metadata` processor annotates each event with relevant -metadata based on which Kubernetes Pod the event originated from. At startup it -detects an `in_cluster` environment and caches the Kubernetes-related metadata. - -For events to be annotated with Kubernetes-related metadata, the Kubernetes -configuration must be valid. - -Each event is annotated with: - -* Pod Name -* Pod UID -* Namespace -* Labels - -In addition, the node and namespace metadata are added to the Pod metadata. - -The `add_kubernetes_metadata` processor has two basic building blocks: - -* Indexers -* Matchers - -Indexers use Pod metadata to create unique identifiers for each one of the Pods. -These identifiers help to correlate the metadata of the observed Pods with -actual events. For example, the `ip_port` indexer can take a Kubernetes Pod and -create identifiers for it based on all its `pod_ip:container_port` combinations. - -Matchers use information in events to construct lookup keys that match the -identifiers created by the indexers. For example, when the `fields` matcher -takes `["metricset.host"]` as a lookup field, it constructs a lookup key with -the value of the field `metricset.host`. When one of these lookup keys matches -with one of the identifiers, the event is enriched with the metadata of the -identified Pod. 
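-
-Putting the two pieces together, a minimal configuration that pairs the
-`ip_port` indexer with a `fields` matcher might look like this (a sketch only;
-the default indexers and matchers are disabled so the pairing is explicit):
-
-[source,yaml]
----
-- add_kubernetes_metadata:
-    default_indexers.enabled: false
-    default_matchers.enabled: false
-    indexers:
-      - ip_port:
-    matchers:
-      - fields:
-          lookup_fields: ["metricset.host"]
----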
- -For more information about available indexers and matchers, plus some examples, -refer to <>. - -[discrete] -== Examples - -This configuration enables the processor when {agent} is run as a Pod in -Kubernetes. - -[source,yaml,subs="attributes+"] -------------------------------------------------------------------------------- - - add_kubernetes_metadata: - # Defining indexers and matchers manually is required for {beatname_lc}, for instance: - #indexers: - # - ip_port: - #matchers: - # - fields: - # lookup_fields: ["metricset.host"] - #labels.dedot: true - #annotations.dedot: true -------------------------------------------------------------------------------- - -This configuration enables the processor on an {agent} running as a -process on the Kubernetes node: - -[source,yaml,subs="attributes+"] -------------------------------------------------------------------------------- - - add_kubernetes_metadata: - host: - # If kube_config is not set, KUBECONFIG environment variable will be checked - # and if not present it will fall back to InCluster - kube_config: ${HOME}/.kube/config - # Defining indexers and matchers manually is required for {beatname_lc}, for instance: - #indexers: - # - ip_port: - #matchers: - # - fields: - # lookup_fields: ["metricset.host"] - #labels.dedot: true - #annotations.dedot: true -------------------------------------------------------------------------------- - -This configuration disables the default indexers and matchers, and -then enables different indexers and matchers: - -[source,yaml] -------------------------------------------------------------------------------- - - add_kubernetes_metadata: - host: - # If kube_config is not set, KUBECONFIG environment variable will be checked - # and if not present it will fall back to InCluster - kube_config: ~/.kube/config - default_indexers.enabled: false - default_matchers.enabled: false - indexers: - - ip_port: - matchers: - - fields: - lookup_fields: ["metricset.host"] - #labels.dedot: true 
-      #annotations.dedot: true
-------------------------------------------------------------------------------
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `host`
-| No
-|
-| Node to scope {agent} to in case it cannot be accurately detected, as when running {agent} in host network mode.
-
-| `scope`
-| No
-| `node`
-| Whether the processor should have visibility at the node level (`node`) or at the entire cluster level (`cluster`).
-
-|`namespace`
-| No
-|
-| Namespace to collect the metadata from. If no namespace is specified, collects metadata from all namespaces.
-
-|`add_resource_metadata`
-| No
-|
-a| Filters and configuration for adding extra metadata to the event. This setting accepts the following settings:
-
-* `node` or `namespace`: Labels and annotations filters for the extra metadata coming from node and namespace.
-By default all labels are included, but annotations are not.
-To change the default behavior, you can set `include_labels`, `exclude_labels`, and `include_annotations`.
-These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output.
-Wildcards are supported in these settings: set `use_regex_include: true` in combination with `include_labels`, or `use_regex_exclude: true` in combination with `exclude_labels`.
-To turn off enrichment of `node` or `namespace` metadata individually, set `enabled: false`.
-* `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is not added by default. To enable this behavior, set `deployment: true`.
-* `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is not added by default. To enable this behavior, set `cronjob: true`.
-
-[%collapsible]
-.Expand this to see an example
-====
-
-[source,yaml]
----
-  add_resource_metadata:
-    namespace:
-      include_labels: ["namespacelabel1"]
-      # use_regex_include: false
-      # use_regex_exclude: false
-      # exclude_labels: ["namespacelabel2"]
-      #labels.dedot: true
-      #annotations.dedot: true
-    node:
-      # use_regex_include: false
-      include_labels: ["nodelabel2"]
-      include_annotations: ["nodeannotation1"]
-      # use_regex_exclude: false
-      # exclude_annotations: ["nodeannotation2"]
-      #labels.dedot: true
-      #annotations.dedot: true
-    deployment: true
-    cronjob: true
----
-====
-
-| `kube_config`
-| No
-| `KUBECONFIG` environment variable, if present
-| Config file to use as the configuration for the Kubernetes client.
-
-| `kube_client_options`
-| No
-|
-a| Additional configuration options for the Kubernetes client.
-Currently client QPS and burst are supported. If this setting is not configured,
-the Kubernetes client's
-https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants[default QPS and burst] settings are used.
-
-[%collapsible]
-.Expand this to see an example
-====
-
-[source,yaml]
----
-  kube_client_options:
-    qps: 5
-    burst: 10
----
-====
-
-| `cleanup_timeout`
-| No
-| `60s`
-| Time of inactivity before stopping the running configuration for a container.
-
-|`sync_period`
-| No
-|
-| Timeout for listing historical resources.
-
-| `labels.dedot`
-| No
-| `true`
-| Whether to replace dots (`.`) in labels with underscores (`_`).
-
-| `annotations.dedot`
-| No
-| `true`
-| Whether to replace dots (`.`) in annotations with underscores (`_`).
-
-|===
-
-include::indexers-and-matchers.asciidoc[]
diff --git a/docs/en/ingest-management/processors/processor-add_labels.asciidoc b/docs/en/ingest-management/processors/processor-add_labels.asciidoc
deleted file mode 100644
index 5e6c7904a..000000000
--- a/docs/en/ingest-management/processors/processor-add_labels.asciidoc
+++ /dev/null
@@ -1,61 +0,0 @@
-[[add_labels-processor]]
-= Add labels
-
-++++
-add_labels
-++++
-
-The `add_labels` processor adds a set of key-value pairs to an event. The
-processor flattens nested configuration objects like arrays or dictionaries into
-a fully qualified name by merging nested names with a dot (`.`). Array entries
-create numeric names starting with 0. Labels are always stored under the Elastic
-Common Schema compliant `labels` sub-dictionary.
-
-[discrete]
-== Example
-
-This configuration:
-
-[source,yaml]
----
-  - add_labels:
-      labels:
-        number: 1
-        with.dots: test
-        nested:
-          with.dots: nested
-        array:
-          - do
-          - re
-          - with.field: mi
----
-
-Adds these fields to every event:
-
-[source,json]
----
-{
-  "labels": {
-    "number": 1,
-    "with.dots": "test",
-    "nested.with.dots": "nested",
-    "array.0": "do",
-    "array.1": "re",
-    "array.2.with.field": "mi"
-  }
-}
----
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-| `labels`
-| Yes
-|
-| Dictionaries of labels to be added.
-|===
diff --git a/docs/en/ingest-management/processors/processor-add_locale.asciidoc b/docs/en/ingest-management/processors/processor-add_locale.asciidoc
deleted file mode 100644
index 8a62af991..000000000
--- a/docs/en/ingest-management/processors/processor-add_locale.asciidoc
+++ /dev/null
@@ -1,48 +0,0 @@
-[[add_locale-processor]]
-= Add the local time zone
-
-++++
-add_locale
-++++
-
-The `add_locale` processor enriches each event with either the machine's time
-zone offset from UTC or the name of the time zone.
The processor adds an
-`event.timezone` value to each event.
-
-[discrete]
-== Examples
-
-This configuration adds the processor with the default settings:
-
-[source,yaml]
-------------------------------------------------------------------------------
-  - add_locale: ~
------------------------------------------------------------------------------
-
-This configuration adds the processor and configures it to add the time zone
-abbreviation to events:
-
-[source,yaml]
------------------------------------------------------------------------------
-  - add_locale:
-      format: abbreviation
-----------------------------------------------------------------------------
-
-NOTE: The `add_locale` processor differentiates between daylight saving time
-(DST) and regular time. For example, `CEST` indicates DST and `CET` is
-regular time.
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `format`
-| No
-| `offset`
-| Whether an `offset` or time zone `abbreviation` is added to the event.
-|===
diff --git a/docs/en/ingest-management/processors/processor-add_network_direction.asciidoc b/docs/en/ingest-management/processors/processor-add_network_direction.asciidoc
deleted file mode 100644
index b168885be..000000000
--- a/docs/en/ingest-management/processors/processor-add_network_direction.asciidoc
+++ /dev/null
@@ -1,54 +0,0 @@
-[[add_network_direction-processor]]
-= Add network direction
-
-++++
-add_network_direction
-++++
-
-The `add_network_direction` processor attempts to compute the perimeter-based
-network direction when given a source and destination IP address and a list of
-internal networks.
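-
-As a sketch of the resulting enrichment, assuming `internal_networks: [ private ]`
-and illustrative addresses, an event with a private source and a public
-destination would be annotated like this:
-
-[source,json]
----
-{
-  "source": { "ip": "10.0.0.5" },
-  "destination": { "ip": "203.0.113.10" },
-  "network": { "direction": "outbound" }
-}
----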
- -[discrete] -== Example - -[source,yaml] -------- - - add_network_direction: - source: source.ip - destination: destination.ip - target: network.direction - internal_networks: [ private ] -------- - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `source` -| Yes -| -| Source IP. - -| `destination` -| Yes -| -| Destination IP. - -| `target` -| Yes -| -| Target field where the network direction will be written. - -| `internal_networks` -| Yes -| -| List of internal networks. The value can contain either CIDR blocks or a list of special values enumerated in the network section of <>. - -|=== - diff --git a/docs/en/ingest-management/processors/processor-add_nomad_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_nomad_metadata.asciidoc deleted file mode 100644 index 0960a59db..000000000 --- a/docs/en/ingest-management/processors/processor-add_nomad_metadata.asciidoc +++ /dev/null @@ -1,207 +0,0 @@ -[[add_nomad_metadata-processor]] -= Add Nomad metadata - -++++ -add_nomad_metadata -++++ - -experimental[] - -The `add_nomad_metadata` processor adds fields with relevant metadata for -applications deployed in Nomad. - -Each event is annotated with the following information: - -* Allocation name, identifier, and status -* Job name and type -* Namespace where the job is deployed -* Datacenter and region where the agent running the allocation is located. - -[discrete] -== Example - -[source,yaml] ----- - - add_nomad_metadata: ~ ----- - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `address` -| No -| `http://127.0.0.1:4646` -| URL of the agent API used to request the metadata. - -|`namespace` -| No -| -| Namespace to watch. If set, only events for allocations in this namespace are annotated. 
-
-|`region`
-| No
-|
-| Region to watch. If set, only events for allocations in this region are annotated.
-
-|`secret_id`
-| No
-|
-a| SecretID to use when connecting with the agent API.
-This is an example ACL policy to apply to the token.
-
-[source,hcl]
-----
-namespace "*" {
-  policy = "read"
-}
-node {
-  policy = "read"
-}
-agent {
-  policy = "read"
-}
-----
-
-|`refresh_interval`
-| No
-| `30s`
-| Interval used to update the cached metadata.
-
-| `cleanup_timeout`
-| No
-| `60s`
-| Time to wait before cleaning up an allocation's associated resources after it has been removed.
-This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs.
-
-|`scope`
-| No
-| `node`
-| Scope of the resources to watch.
-Specify `node` to get metadata for the allocations in a single agent, or `global`, to get metadata for allocations running on any agent.
-
-| `node`
-| No
-|
-a| When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.
-
-For example, you can use the following configuration when {agent} is collecting events from all the allocations in the cluster:
-
-[source,yaml]
-----
- - add_nomad_metadata:
-     scope: global
-----
-
-|===
-
-[discrete]
-== Indexers and matchers
-
-Indexers and matchers are used to correlate fields in events with actual
-metadata. {agent} uses this information to know what metadata to include
-in each event.
-
-[discrete]
-=== Indexers
-
-Indexers use allocation metadata to create unique identifiers for each one of
-the allocations.
-
-Available indexers are:
-
-`allocation_name`:: Identifies allocations by their name and namespace (as
-`/`)
-
-`allocation_uuid`:: Identifies allocations by their unique identifier.
-
-[discrete]
-=== Matchers
-
-Matchers are used to construct the lookup keys that match with the identifiers
-created by indexers.
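The indexer/matcher split can be sketched in a few lines. This is a hypothetical illustration of the pattern only, not {agent}'s implementation, and the allocation and field names are made up:

```python
# Hypothetical sketch of the indexer/matcher pattern: an indexer derives a
# unique key from allocation metadata, and a matcher derives the same kind of
# key from an event, so cached metadata can be attached to matching events.
allocations = [{"Namespace": "default", "Name": "web", "JobID": "web-job"}]

def allocation_name_indexer(alloc):
    # Key allocations by namespace and name.
    return "{}/{}".format(alloc["Namespace"], alloc["Name"])

cache = {allocation_name_indexer(a): a for a in allocations}

def field_format_matcher(event):
    # Build the lookup key from fields already present in the event.
    return "{}/{}".format(event["labels"]["nomad_namespace"],
                          event["fields"]["nomad_alloc_name"])

event = {"labels": {"nomad_namespace": "default"},
         "fields": {"nomad_alloc_name": "web"}}
metadata = cache.get(field_format_matcher(event))  # metadata to annotate the event with
```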
-
-[discrete]
-==== `field_format`
-
-Looks up allocation metadata using a key created with a string format that can
-include event fields.
-
-This matcher has an option `format` to define the string format. This string
-format can contain placeholders for any field in the event.
-
-For example, the following configuration uses the `allocation_name` indexer to
-identify the allocation metadata by its name and namespace, and uses custom
-fields existing in the event as match keys:
-
-[source,yaml]
-----
-- add_nomad_metadata:
-    ...
-    default_indexers.enabled: false
-    default_matchers.enabled: false
-    indexers:
-      - allocation_name:
-    matchers:
-      - field_format:
-          format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}'
-----
-
-[discrete]
-==== `fields`
-
-Looks up allocation metadata using the value of specific fields as the lookup key.
-When multiple fields are defined, the first one included in the event is used.
-
-This matcher has an option `lookup_fields` to define the fields whose value will
-be used for lookup.
-
-For example, the following configuration uses the `allocation_uuid` indexer to
-identify allocations, and defines a matcher that looks for the allocation UUID
-in several fields, using the first one it finds in the event:
-
-[source,yaml]
-----
-- add_nomad_metadata:
-    ...
-    default_indexers.enabled: false
-    default_matchers.enabled: false
-    indexers:
-      - allocation_uuid:
-    matchers:
-      - fields:
-          lookup_fields: ['host.name', 'fields.nomad_alloc_uuid']
-----
-
-[discrete]
-==== `logs_path`
-
-Looks up allocation metadata using identifiers extracted from the log path
-stored in the `log.file.path` field.
-
-This matcher has an optional `logs_path` option with the base path of the
-directory containing the logs for the local agent.
-
-The default configuration can look up the metadata using the allocation
-UUID when the logs are collected under `/var/lib/nomad`.
-
-For example, the following configuration would use the allocation UUID when the logs
-are collected from `/var/lib/NomadClient001/alloc//alloc/logs/...`.
-
-[source,yaml]
-----
-- add_nomad_metadata:
-    ...
-    default_indexers.enabled: false
-    default_matchers.enabled: false
-    indexers:
-      - allocation_uuid:
-    matchers:
-      - logs_path:
-          logs_path: '/var/lib/NomadClient001'
-----
diff --git a/docs/en/ingest-management/processors/processor-add_observer_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_observer_metadata.asciidoc
deleted file mode 100644
index b42fa764a..000000000
--- a/docs/en/ingest-management/processors/processor-add_observer_metadata.asciidoc
+++ /dev/null
@@ -1,119 +0,0 @@
-[[add_observer_metadata-processor]]
-= Add Observer metadata
-
-++++
-add_observer_metadata
-++++
-
-beta[]
-
-The `add_observer_metadata` processor annotates each event with relevant
-metadata from the observer machine.
-
-[discrete]
-== Example
-
-[source,yaml]
-----
- - add_observer_metadata:
-     cache.ttl: 5m
-     geo:
-       name: nyc-dc1-rack1
-       location: 40.7128, -74.0060
-       continent_name: North America
-       country_iso_code: US
-       region_name: New York
-       region_iso_code: NY
-       city_name: New York
-----
-
-The fields added to the event look like this:
-
-[source,json]
-----
-{
-  "observer" : {
-    "hostname" : "avce",
-    "type" : "heartbeat",
-    "vendor" : "elastic",
-    "ip" : [
-      "192.168.1.251",
-      "fe80::64b2:c3ff:fe5b:b974"
-    ],
-    "mac" : [
-      "dc:c1:02:6f:1b:ed"
-    ],
-    "geo": {
-      "continent_name": "North America",
-      "country_iso_code": "US",
-      "region_name": "New York",
-      "region_iso_code": "NY",
-      "city_name": "New York",
-      "name": "nyc-dc1-rack1",
-      "location": "40.7128, -74.0060"
-    }
-  }
-}
-----
-
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `netinfo.enabled`
-| No
-| `true`
-| Whether to include IP addresses and MAC addresses as
fields `observer.ip` and `observer.mac`. - -| `cache.ttl` -| No -| `5m` -| Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether. - -| `geo.name` -| No -| -| User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar. - -| `geo.location` -| No -| -| Longitude and latitude in comma-separated format. - -| `geo.continent_name` -| No -| -| Name of the continent. - -| `geo.country_name` -| No -| -| Name of the country. - -| `geo.region_name` -| No -| -| Name of the region. - -| `geo.city_name` -| No -| -| Name of the city. - -| `geo.country_iso_code` -| No -| -| ISO country code. - -| `geo.region_iso_code` -| No -| -| ISO region code. - -|=== \ No newline at end of file diff --git a/docs/en/ingest-management/processors/processor-add_process_metadata.asciidoc b/docs/en/ingest-management/processors/processor-add_process_metadata.asciidoc deleted file mode 100644 index b4140f76d..000000000 --- a/docs/en/ingest-management/processors/processor-add_process_metadata.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[[add_process_metadata-processor]] -= Add process metadata - -++++ -add_process_metadata -++++ - -The `add_process_metadata` processor enriches events with information from running -processes, identified by their process ID (PID). 
-
-[discrete]
-== Example
-
-[source,yaml]
-----
- - add_process_metadata:
-     match_pids: [system.process.ppid]
-     target: system.process.parent
-----
-
-The fields added to the event look as follows:
-
-[source,json]
-----
-"process": {
-  "name": "systemd",
-  "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22",
-  "exe": "/usr/lib/systemd/systemd",
-  "args": ["/usr/lib/systemd/systemd", "--switched-root", "--system", "--deserialize", "22"],
-  "pid": 1,
-  "parent": {
-    "pid": 0
-  },
-  "start_time": "2018-08-22T08:44:50.684Z",
-  "owner": {
-    "name": "root",
-    "id": "0"
-  }
-},
-"container": {
-  "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1"
-},
-----
-
-Optionally, the process environment can be included, too:
-[source,json]
-----
-  ...
-  "env": {
-    "HOME": "/",
-    "TERM": "linux",
-    "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64",
-    "LANG": "en_US.UTF-8",
-  }
-  ...
-----
-
-[discrete]
-== Configuration settings
-
-include::processors.asciidoc[tag=processor-limitations]
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `match_pids`
-| Yes
-|
-| List of fields to look up for a PID. The processor searches the list sequentially until the field is found in the current event, and the PID lookup is then applied to the value of this field.
-
-| `target`
-| No
-| event root
-| Destination prefix where the `process` object will be created.
-
-| `include_fields`
-| No
-|
-| List of fields to add. By default, adds all available fields except `process.env`.
-
-| `ignore_missing`
-| No
-| `true`
-| Whether to ignore missing fields. If `false`, discards events that don't contain any of the fields specified in `match_pids` and then generates an error. If `true`, missing fields are ignored.
-
-| `overwrite_keys`
-| No
-| `false`
-| Whether to overwrite existing keys. If `false` and a target field already exists, it is not overwritten, and an error is logged. If `true`, the target field is overwritten.
- -| `restricted_fields` -| No -| `false` -| Whether to output restricted fields. If `false`, to avoid leaking sensitive data, the `process.env` field is not output. If `true`, the field will be present in the output. - -| `host_path` -| No -| root directory (`/`) of host -| Host path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, set the `host_path` to overwrite the default. - -| `cgroup_prefixes` -| No -| `/kubepods` and `/docker` -| Prefix where the container ID is inside cgroup. For different runtime configurations of Kubernetes or Docker, set `cgroup_prefixes` to overwrite the defaults. - -| `cgroup_regex` -| No -| -a| Regular expression with capture group for capturing the container ID from the cgroup path. For example: - -. `^\/.+\/.+\/.+\/([0-9a-f]{64}).*` matches the container ID of a cgroup -like `/kubepods/besteffort/pod665fb997-575b-11ea-bfce-080027421ddf/b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1` -. `^\/.+\/.+\/.+\/docker-([0-9a-f]{64}).scope` matches the container ID of a cgroup -like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/docker-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope` -. `^\/.+\/.+\/.+\/crio-([0-9a-f]{64}).scope` matches the container ID of a cgroup -like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/crio-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope` - -If `cgroup_regex` is not set, the container ID is extracted from the cgroup file based on the `cgroup_prefixes` setting. - -| `cgroup_cache_expire_time` -| No -| `30s` -| Time in seconds before cgroup cache elements expire. To disable the cgroup cache, set this to `0`. In some container runtime technologies, like runc, the container's process is also a process in the host kernel and will be affected by PID rollover/reuse. 
Set the expire time to a value that is smaller than the PIDs wrap around time to avoid the wrong container ID. -|=== diff --git a/docs/en/ingest-management/processors/processor-add_tags.asciidoc b/docs/en/ingest-management/processors/processor-add_tags.asciidoc deleted file mode 100644 index 10babed3f..000000000 --- a/docs/en/ingest-management/processors/processor-add_tags.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ -[[add_tags-processor]] -= Add tags - -++++ -add_tags -++++ - -The `add_tags` processor adds tags to a list of tags. If the target field -already exists, the tags are appended to the existing list of tags. - -[discrete] -== Example - -This configuration: - -[source,yaml] ----- - - add_tags: - tags: [web, production] - target: "environment" ----- - -Adds the `environment` field to every event: - -[source,json] ----- -{ - "environment": ["web", "production"] -} ----- - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `tags` -| Yes -| -| List of tags to add. - -| `target` -| No -| `tags` -| Field the tags will be added to. Setting tags in `@metadata` is not supported. -|=== diff --git a/docs/en/ingest-management/processors/processor-communityid.asciidoc b/docs/en/ingest-management/processors/processor-communityid.asciidoc deleted file mode 100644 index 4ce7ea2ca..000000000 --- a/docs/en/ingest-management/processors/processor-communityid.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[community_id-processor]] -= Community ID Network Flow Hash - -++++ -community_id -++++ - -The `community_id` processor computes a network flow hash according to the -https://github.com/corelight/community-id-spec[Community ID Flow Hash -specification]. - -The flow hash is useful for correlating all network events related to a -single flow. 
For example, you can filter on a community ID value and you might -get back the Netflow records from multiple collectors and layer 7 protocol -records from the Network Packet Capture integration. - -By default the processor is configured to read the flow parameters from the -appropriate Elastic Common Schema (ECS) fields. If you are processing ECS data, -no parameters are required. - -[discrete] -== Examples - -[source,yaml] ----- - - community_id: ----- - -If the data does not conform to ECS, you can customize the field names that the -processor reads from. You can also change the target field that the computed -hash is written to. For example: - -[source,yaml] ----- - - community_id: - fields: - source_ip: my_source_ip - source_port: my_source_port - destination_ip: my_dest_ip - destination_port: my_dest_port - iana_number: my_iana_number - transport: my_transport - icmp_type: my_icmp_type - icmp_code: my_icmp_code - target: network.community_id ----- - -If the necessary fields are not present in the event, the processor silently -continues without adding the target field. - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| No -| -a| Field names that the processor reads from: - -`source_ip`:: Field containing the source IP address. -`source_port`:: Field containing the source port. -`destination_ip`:: Field containing the destination IP address. -`destination_port`:: Field containing the destination port. -`iana_number`:: Field containing the IANA number. The following protocol numbers -are currently supported: 1 ICMP, 2 IGMP, 6 TCP, 17 UDP, 47 GRE, 58 ICMP IPv6, 88 -EIGRP, 89 OSPF, 103 PIM, and 132 SCTP. -`transport`:: Field containing the transport protocol. Used only when the -`iana_number` field is not present. -`icmp_type`:: Field containing the ICMP type. -`icmp_code`:: Field containing the ICMP code. 
- -| `target` -| No -| -| Field that the computed hash is written to. - -| `seed` -| No -| -| Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The -seed can prevent hash collisions between network domains, such as a staging and -production network that use the same addressing scheme. This setting results in -a 16-bit unsigned integer that gets incorporated into all generated hashes. - -|=== diff --git a/docs/en/ingest-management/processors/processor-convert.asciidoc b/docs/en/ingest-management/processors/processor-convert.asciidoc deleted file mode 100644 index 875b2f31a..000000000 --- a/docs/en/ingest-management/processors/processor-convert.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -[[convert-processor]] -= Convert field type - -++++ -convert -++++ - -The `convert` processor converts a field in the event to a different type, such -as converting a string to an integer. - -The supported types include: `integer`, `long`, `float`, `double`, `string`, -`boolean`, and `ip`. - -The `ip` type is effectively an alias for `string`, but with an added validation -that the value is an IPv4 or IPv6 address. - -[discrete] -== Example - -[source,yaml] ----- - - convert: - fields: - - {from: "src_ip", to: "source.ip", type: "ip"} - - {from: "src_port", to: "source.port", type: "integer"} - ignore_missing: true - fail_on_error: false ----- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -| List of fields to convert. The list must contain at least one item. Each item must have a `from` key that specifies the source field. The `to` key is optional and specifies where to assign the converted value. If `to` is omitted, the `from` field is updated in-place. 
The `type` key specifies the data type to convert the value to. If `type` is omitted, the processor copies or renames the field without any type conversion. - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing `from` keys. If `true` and the `from` key is not found in the event, the processor continues to the next field. If `false`, the processor returns an error and does not process the remaining fields. - -| `fail_on_error` -| No -| `true` -| Whether to fail when a type conversion error occurs. If `false`, type conversion failures are ignored, and the processor continues to the next field. - -| `tag` -| No -| -| Identifier for this processor. Useful for debugging. - -| `mode` -| No -| `copy` -| When both `from` and `to` are defined for a field, `mode` controls whether to `copy` or `rename` the field when the type conversion is successful. -|=== diff --git a/docs/en/ingest-management/processors/processor-copy_fields.asciidoc b/docs/en/ingest-management/processors/processor-copy_fields.asciidoc deleted file mode 100644 index 1ac93b924..000000000 --- a/docs/en/ingest-management/processors/processor-copy_fields.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[copy_fields-processor]] -= Copy fields - -++++ -copy_fields -++++ - -The `copy_fields` processor takes the value of a field and copies it to a new -field. - -You cannot use this processor to replace an existing field. If the target -field already exists, you must <> or -<> the field before using `copy_fields`. 
- -[discrete] -== Example - -This configuration: - -[source,yaml] ----- - - copy_fields: - fields: - - from: message - to: event.original - fail_on_error: false - ignore_missing: true ----- - -Copies the original `message` field to `event.original`: - -[source,json] ------ -{ - "message": "my-interesting-message", - "event": { - "original": "my-interesting-message" - } -} ------ - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -| List of `from` and `to` pairs to copy from and to. You can use the `@metadata.` prefix to copy values from or to event metadata. - -| `fail_on_error` -| No -| `true` -| Whether to fail if an error occurs. If `true` and an error occurs, any changes are reverted, and the original is returned. If `false`, processing continues even if an error occurs. - -| `ignore_missing` -| No -| `false` -| Whether to ignore events that lack the source field. If `false`, the processing of an event will fail if a field is missing. -|=== diff --git a/docs/en/ingest-management/processors/processor-decode_base64_field.asciidoc b/docs/en/ingest-management/processors/processor-decode_base64_field.asciidoc deleted file mode 100644 index b2c6ecec4..000000000 --- a/docs/en/ingest-management/processors/processor-decode_base64_field.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -[[decode_base64_field-processor]] -= Decode Base64 fields - -++++ -decode_base64_field -++++ - -The `decode_base64_field` processor specifies a field to base64 decode. - -To overwrite fields, either rename the target field or use the `drop_fields` -processor to drop the field, and then rename the field. - -[discrete] -== Example - -In this example, `field1` is decoded in `field2`. 
- -[source,yaml] ----- - - decode_base64_field: - field: - from: "field1" - to: "field2" - ignore_missing: false - fail_on_error: true ----- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -a| Contains: - -* `from: "old-key"`, where `from` is the origin -* `to: "new-key"`, where `to` is the target field name - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing keys. If `true`, missing keys that should be base64 decoded are ignored and no error is logged. If `false`, an error is logged and the behavior of `fail_on_error` is applied. - -| `fail_on_error` -| No -| `true` -| Whether to fail if an error occurs. If `true` and an error occurs, an error is logged and the event is dropped. If `false`, an error is logged, but the event is not modified. -|=== - -See <> for a list of supported conditions. diff --git a/docs/en/ingest-management/processors/processor-decode_cef.asciidoc b/docs/en/ingest-management/processors/processor-decode_cef.asciidoc deleted file mode 100644 index 7b352ee2f..000000000 --- a/docs/en/ingest-management/processors/processor-decode_cef.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[decode_cef-processor]] -= Decode CEF - -++++ -decode_cef -++++ - -The `decode_cef` processor decodes Common Event Format (CEF) messages. - -NOTE: This processor only works with log inputs. - -[discrete] -== Example - -In this example, the `message` field is decoded as CEF after it is renamed to -`event.original`. It is best to rename `message` to `event.original` because the -decoded CEF data contains its own `message` field. 
-
-[source,yaml]
-----
- - rename:
-     fields:
-       - {from: "message", to: "event.original"}
- - decode_cef:
-     field: event.original
-----
-
-[discrete]
-== Configuration settings
-
-// set this attribute for processors that reference event fields
-:works-with-fields:
-include::processors.asciidoc[tag=processor-limitations]
-// reset the attribute to null
-:works-with-fields!:
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `field`
-| No
-| `message`
-| Source field containing the CEF message to be parsed.
-
-| `target_field`
-| No
-| `cef`
-| Target field where the parsed CEF object will be written.
-
-| `ecs`
-| No
-| `true`
-| Whether to generate Elastic Common Schema (ECS) fields from the CEF data. Certain CEF header and extension values will be used to populate ECS fields.
-
-| `timezone`
-| No
-| `UTC`
-| IANA time zone name (for example, `America/New_York`) or fixed time offset (for example, `+0200`) to use when parsing times that do not contain a time zone. Specify `Local` to use the machine's local time zone.
-
-| `ignore_missing`
-| No
-| `false`
-| Whether to ignore errors when the source field is missing.
-
-| `ignore_failure`
-| No
-| `false`
-| Whether to ignore failures when the source field does not contain a CEF message.
-
-| `id`
-| No
-|
-| Identifier for this processor instance. Useful for debugging.
-|===
diff --git a/docs/en/ingest-management/processors/processor-decode_csv_fields.asciidoc b/docs/en/ingest-management/processors/processor-decode_csv_fields.asciidoc
deleted file mode 100644
index 44712a351..000000000
--- a/docs/en/ingest-management/processors/processor-decode_csv_fields.asciidoc
+++ /dev/null
@@ -1,73 +0,0 @@
-[[decode_csv_fields-processor]]
-= Decode CSV fields
-
-++++
-decode_csv_fields
-++++
-
-experimental[]
-
-The `decode_csv_fields` processor decodes fields containing records in
-comma-separated format (CSV). It outputs the values as an array of strings.
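As an illustration of the decoding itself, a single CSV record becomes an array of strings. This sketch uses Python's standard `csv` module rather than the processor's implementation, but the result has the same shape:

```python
import csv
import io

# One CSV record, including a quoted field that contains the separator.
record = 'Hello,"World, again",42'
decoded = next(csv.reader(io.StringIO(record), delimiter=","))
# decoded is a list of strings; no type conversion is attempted,
# so "42" stays a string.
```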
- -NOTE: This processor only works with log inputs. - -[discrete] -== Example - -[source,yaml] ------------------------------------------------------ - - decode_csv_fields: - fields: - message: decoded.csv - separator: "," - ignore_missing: false - overwrite_keys: true - trim_leading_space: false - fail_on_error: true ------------------------------------------------------ - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -| A mapping from the source field containing the CSV data to the destination field to which the decoded array will be written. - -| `separator` -| No -| comma character (`,`) -| Character to use as a column separator. To use a TAB character, set this value to "\t". - -| `ignore_missing` -| No -| `false` -| Whether to ignore events that lack the source field. If `false`, events missing the source field will fail processing. - -| `overwrite_keys` -| No -| `false` -| Whether the target field is overwritten if it already exists. If `false`, processing of an event fails if the target field already exists. - -| `trim_leading_space` -| No -| `false` -| Whether extra space after the separator is trimmed from values. This works even if the separator is also a space. - -| `fail_on_error` -| No -| `true` -| Whether to fail if an error occurs. If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues even if an error occurs. 
-
-|===
diff --git a/docs/en/ingest-management/processors/processor-decode_duration.asciidoc b/docs/en/ingest-management/processors/processor-decode_duration.asciidoc
deleted file mode 100644
index cf33ddf42..000000000
--- a/docs/en/ingest-management/processors/processor-decode_duration.asciidoc
+++ /dev/null
@@ -1,31 +0,0 @@
-[[decode_duration-processor]]
-= Decode duration
-
-++++
-decode_duration
-++++
-
-The `decode_duration` processor decodes a Go-style duration string into a specific `format`.
-
-For more information about the Go `time.Duration` string style, refer to the https://pkg.go.dev/time#Duration[Go documentation].
-
-[discrete]
-== Example
-
-[source,yaml]
-----
-processors:
-  - decode_duration:
-      field: "app.rpc.cost"
-      format: "milliseconds"
-----
-
-[discrete]
-== Configuration settings
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `field`
-| Yes
-|
-| Event field containing the duration string to decode.
-
-| `format`
-| Yes
-| `milliseconds`
-| Format to convert the duration to. Supported formats: `milliseconds`, `seconds`, `minutes`, and `hours`.
-|===
-
diff --git a/docs/en/ingest-management/processors/processor-decode_json_fields.asciidoc b/docs/en/ingest-management/processors/processor-decode_json_fields.asciidoc
deleted file mode 100644
index 3e606528c..000000000
--- a/docs/en/ingest-management/processors/processor-decode_json_fields.asciidoc
+++ /dev/null
@@ -1,78 +0,0 @@
-[[decode-json-fields]]
-= Decode JSON fields
-
-++++
-decode_json_fields
-++++
-
-The `decode_json_fields` processor decodes fields containing JSON strings and
-replaces the strings with valid JSON objects.
-
-[discrete]
-== Example
-
-[source,yaml]
------------------------------------------------------
- - decode_json_fields:
-     fields: ["field1", "field2", ...]
-     process_array: false
-     max_depth: 1
-     target: ""
-     overwrite_keys: false
-     add_error_key: true
------------------------------------------------------
-
-[discrete]
-== Configuration settings
-
-// set this attribute for processors that reference event fields
-:works-with-fields:
-include::processors.asciidoc[tag=processor-limitations]
-// reset the attribute to null
-:works-with-fields!:
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `fields`
-| Yes
-|
-| Fields containing JSON strings to decode.
-
-| `process_array`
-| No
-| `false`
-| Whether to process arrays.
-
-| `max_depth`
-| No
-| `1`
-| Maximum parsing depth. A value of `1` decodes the JSON objects in fields indicated in `fields`. A value of `2` also decodes the objects embedded in the fields of these parsed documents.
-
-| `target`
-| No
-|
-| Field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify `target` with an empty string (`target: ""`). Note that the `null` value (`target:`) is treated as if the field was not set.
-
-| `overwrite_keys`
-| No
-| `false`
-| Whether existing keys in the event are overwritten by keys from the decoded JSON object.
-
-| `expand_keys`
-| No
-|
-| Whether keys in the decoded JSON should be recursively de-dotted and expanded into a hierarchical object structure. For example, `{"a.b.c": 123}` would be expanded into `{"a":{"b":{"c":123}}}`.
-
-| `add_error_key`
-| No
-| `false`
-| If `true` and an error occurs while decoding JSON keys, an `error` field is added to the event with the error message. If `false`, no error information is added to the event.
-
-| `document_id`
-| No
-|
-| JSON key that's used as the document ID. If configured, the field will be removed from the original JSON document and stored in `@metadata._id`.
-
-|===
\ No newline at end of file
diff --git a/docs/en/ingest-management/processors/processor-decode_xml.asciidoc b/docs/en/ingest-management/processors/processor-decode_xml.asciidoc
deleted file mode 100644
index 95b1f7e92..000000000
--- a/docs/en/ingest-management/processors/processor-decode_xml.asciidoc
+++ /dev/null
@@ -1,133 +0,0 @@
-[[decode_xml-processor]]
-= Decode XML
-
-++++
-decode_xml
-++++
-
-The `decode_xml` processor decodes XML data that is stored under the `field`
-key. It outputs the result into the `target_field`.
-
-[discrete]
-== Examples
-
-This example demonstrates how to decode an XML string contained in the `message`
-field and write the resulting fields into the root of the document. Any fields
-that already exist are overwritten.
-
-[source,yaml]
-----
- - decode_xml:
-     field: message
-     target_field: ""
-     overwrite_keys: true
-----
-
-By default any decoding errors that occur will stop the processing chain, and
-the error will be added to the `error.message` field. To ignore all errors and
-continue to the next processor, set `ignore_failure: true`. To specifically
-ignore failures caused by `field` not existing, set `ignore_missing: true`.
-
-[source,yaml]
-----
- - decode_xml:
-     field: example
-     target_field: xml
-     ignore_missing: true
-     ignore_failure: true
-----
-
-By default the names of all keys converted from XML are converted to lowercase.
-To disable this behavior, set `to_lower: false`, for example:
-
-[source,yaml]
-----
- - decode_xml:
-     field: message
-     target_field: xml
-     to_lower: false
-----
-
-Example XML input:
-
-[source,xml]
-----
-<catalog>
-  <book seq="1">
-    <author>William H. Gaddis</author>
-    <title>The Recognitions</title>
-    <review>One of the great seminal American novels of the 20th century.</review>
-  </book>
-</catalog>
-----
-
-Will produce the following output:
-
-[source,json]
-----
-{
-  "xml": {
-    "catalog": {
-      "book": {
-        "author": "William H. Gaddis",
-        "review": "One of the great seminal American novels of the 20th century.",
-        "seq": "1",
-        "title": "The Recognitions"
-      }
-    }
-  }
-}
-----
-
-
-[discrete]
-== Configuration settings
-
-// set this attribute for processors that reference event fields
-:works-with-fields:
-include::processors.asciidoc[tag=processor-limitations]
-// reset the attribute to null
-:works-with-fields!:
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `field`
-| Yes
-| `message`
-| Source field containing the XML.
-
-| `target_field`
-| No
-|
-| The field under which the decoded XML will be written. By default the decoded XML object replaces the field from which it was read. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). Note that the `null` value (`target_field:`) is treated as if the field was not set at all.
-
-| `overwrite_keys`
-| No
-| `true`
-| Whether keys that already exist in the event are overwritten by keys from the decoded XML object.
-
-| `to_lower`
-| No
-| `true`
-| Whether to convert all keys to lowercase.
-
-| `document_id`
-| No
-|
-| XML key to use as the document ID. If configured, the field will be removed from the original XML document and stored in `@metadata._id`.
-
-| `ignore_missing`
-| No
-| `false`
-| Whether to ignore errors when the specified field does not exist.
-
-| `ignore_failure`
-| No
-| `false`
-| Whether to ignore all errors produced by the processor.
-
-|===
-
-See <> for a list of supported conditions.
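The key-lowering and nesting behavior can be approximated with Python's standard library. This is a rough sketch of the idea only, not the processor's implementation, and the real processor handles many more edge cases (mixed content, repeated elements, document IDs):

```python
import xml.etree.ElementTree as ET

def xml_to_dict(elem, to_lower=True):
    """Convert an XML element into nested dicts, mimicking decode_xml's output shape."""
    key = (lambda k: k.lower()) if to_lower else (lambda k: k)
    node = {key(k): v for k, v in elem.attrib.items()}  # attributes become keys
    for child in elem:
        node[key(child.tag)] = (xml_to_dict(child, to_lower)
                                if (len(child) or child.attrib)   # nested element
                                else (child.text or "").strip())  # leaf text value
    return node

doc = ET.fromstring("<Catalog><Book Seq='1'><Title>The Recognitions</Title></Book></Catalog>")
result = {doc.tag.lower(): xml_to_dict(doc)}
```

With `to_lower` left at its default, the mixed-case tags `Catalog`, `Book`, and `Seq` all come out as lowercase keys, matching the documented behavior.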
diff --git a/docs/en/ingest-management/processors/processor-decode_xml_wineventlog.asciidoc b/docs/en/ingest-management/processors/processor-decode_xml_wineventlog.asciidoc deleted file mode 100644 index 1162e15fc..000000000 --- a/docs/en/ingest-management/processors/processor-decode_xml_wineventlog.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[decode_xml_wineventlog-processor]] -= Decode XML Wineventlog - -++++ -decode_xml_wineventlog -++++ - -experimental[] - -The `decode_xml_wineventlog` processor decodes Windows Event Log data in XML -format that is stored under the `field` key. It outputs the result into the -`target_field`. - -[discrete] -== Examples - -[source,yaml] -------------------------------------------------------------------------------- - - decode_xml_wineventlog: - field: event.original - target_field: winlog -------------------------------------------------------------------------------- - -[source,json] -------------------------------------------------------------------------------- -{ - "event": { - "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon 
ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success" - } -} -------------------------------------------------------------------------------- - -Will produce the following output: - -[source,json] -------------------------------------------------------------------------------- -{ - "event": { - "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success", - "action": "Special Logon", - "code": "4672", - "kind": "event", - "outcome": "success", - "provider": "Microsoft-Windows-Security-Auditing", - }, - "host": { - "name": "vagrant", - }, - "log": { - "level": 
"information", - }, - "winlog": { - "channel": "Security", - "outcome": "success", - "activity_id": "{ffb23523-1f32-0000-c335-b2ff321fd701}", - "level": "information", - "event_id": 4672, - "provider_name": "Microsoft-Windows-Security-Auditing", - "record_id": 11303, - "computer_name": "vagrant", - "keywords_raw": 9232379236109516800, - "opcode": "Info", - "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", - "event_data": { - "SubjectUserSid": "S-1-5-18", - "SubjectUserName": "SYSTEM", - "SubjectDomainName": "NT AUTHORITY", - "SubjectLogonId": "0x3e7", - "PrivilegeList": "SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege" - }, - "task": "Special Logon", - "keywords": [ - "Audit Success" - ], - "message": "Special privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege", - "process": { - "pid": 652, - "thread": { - "id": 4660 - } - } - } -} -------------------------------------------------------------------------------- - -See <> for a list of supported conditions. 
- -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| `message` -| Source field containing the XML. - -| `target_field` -| Yes -| `winlog` -| The field under which the decoded XML will be written. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). - -| `overwrite_keys` -| No -| `true` -| Whether keys that already exist in the event are overwritten by keys from the decoded XML object. - -| `map_ecs_fields` -| No -| `true` -| Whether to map additional ECS fields when possible. Note that ECS field keys are placed outside of `target_field`. - -| `ignore_missing` -| No -| `false` -| Whether to return an error if a specified field does not exist. - -| `ignore_failure` -| No -| `false` -| Whether to ignore all errors produced by the processor. 
- -|=== - -[discrete] -[[wineventlog-field-mappings]] -== Field mappings - -The field mappings are as follows: - -[cols="1,1,2",options="header"] -|======================================================== -| Event field | Source XML element | Notes -| winlog.event_id | | -| winlog.provider_name | | `Name` attribute -| winlog.record_id | | -| winlog.task | | -| winlog.computer_name | | -| winlog.keywords | | list of each `Keyword` -| winlog.opcodes | | -| winlog.provider_guid | | `Guid` attribute -| winlog.version | | -| winlog.time_created | | `SystemTime` attribute -| winlog.outcome | | "success" if bit 0x20000000000000 is set, "failure" if 0x10000000000000 is set -| winlog.level | | converted to lowercase -| winlog.message | | line endings removed -| winlog.user.identifier | | -| winlog.user.domain | | -| winlog.user.name | | -| winlog.user.type | | converted from integer to String -| winlog.event_data | | map where `Name` attribute in Data element is key, and value is the value of the Data element -| winlog.user_data | | map where `Name` attribute in Data element is key, and value is the value of the Data element -| winlog.activity_id | | -| winlog.related_activity_id | | -| winlog.kernel_time | | -| winlog.process.pid | | -| winlog.process.thread.id | | -| winlog.processor_id | | -| winlog.processor_time | | -| winlog.session_id | | -| winlog.user_time | | -| winlog.error.code | | -|======================================================== - - -If `map_ecs_fields` is enabled, then the following field mappings are also performed: - -[cols="1,1,2",options="header"] -|======================================================== -| ECS field | Source event field | Notes -| | | `Name` attribute -| event.action | | -| event.host.name | | -| event.outcome | winlog.outcome | -| log.level | winlog.level | -| message | winlog.message | -| error.code | winlog.error.code | -| error.message | winlog.error.message | -|======================================================== diff --git a/docs/en/ingest-management/processors/processor-decompress_gzip_field.asciidoc b/docs/en/ingest-management/processors/processor-decompress_gzip_field.asciidoc deleted file mode 100644 index c972a955a..000000000 --- 
a/docs/en/ingest-management/processors/processor-decompress_gzip_field.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[[decompress_gzip_field-processor]] -= Decompress gzip fields - -++++ -decompress_gzip_field -++++ - -The `decompress_gzip_field` processor specifies a field to gzip decompress. - -To overwrite fields, either first rename the target field, or use the -`drop_fields` processor to drop the field, and then decompress the field. - -[discrete] -== Example - -In this example, `field1` is decompressed in `field2`. - -[source,yaml] -------- - - decompress_gzip_field: - field: - from: "field1" - to: "field2" - ignore_missing: false - fail_on_error: true -------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -a| Contains: - -* `from: "old-key"`, where `from` is the origin -* `to: "new-key"`, where `to` is the target field name - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing keys. If `true`, no error is logged if a key that should be decompressed is missing. - -| `fail_on_error` -| No -|`true` -| If `true` and an error occurs, decompression of fields is stopped, and the original event is returned. If `false`, decompression continues even if an error occurs during decoding. - -|=== - -See <> for a list of supported conditions. 
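Per the note above about overwriting, one way to decompress a field in place is to rename it first and then decompress it back into the original name. A sketch (the field names are illustrative):

[source,yaml]
----
  - rename:
      fields:
        - from: "field1"
          to: "field1_gz"
  - decompress_gzip_field:
      field:
        from: "field1_gz"
        to: "field1"
      fail_on_error: true
----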
diff --git a/docs/en/ingest-management/processors/processor-detect_mime_type.asciidoc b/docs/en/ingest-management/processors/processor-detect_mime_type.asciidoc deleted file mode 100644 index a6165cbc9..000000000 --- a/docs/en/ingest-management/processors/processor-detect_mime_type.asciidoc +++ /dev/null @@ -1,50 +0,0 @@ -[[detect_mime_type-processor]] -= Detect mime type - -++++ -detect_mime_type -++++ - -The `detect_mime_type` processor attempts to detect a mime type for a field that -contains a given stream of bytes. - -[discrete] -== Example - -In this example, `http.request.body.content` is used as the source, and -`http.request.mime_type` is set to the detected mime type. - -[source,yaml] -------- - - detect_mime_type: - field: http.request.body.content - target: http.request.mime_type -------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -| Field used as the data source. - -| `target` -| Yes -| -| Field to populate with the detected type. You can use the `@metadata.` prefix -to set the value in the event metadata instead of fields. - -|=== - -See <> for a list of supported conditions. diff --git a/docs/en/ingest-management/processors/processor-dissect.asciidoc b/docs/en/ingest-management/processors/processor-dissect.asciidoc deleted file mode 100644 index de76d099a..000000000 --- a/docs/en/ingest-management/processors/processor-dissect.asciidoc +++ /dev/null @@ -1,129 +0,0 @@ -[[dissect-processor]] -= Dissect strings - -++++ -dissect -++++ - -The `dissect` processor tokenizes incoming strings using defined patterns. 
- -[discrete] -== Example - -[source,yaml] -------- - - dissect: - tokenizer: "%{key1} %{key2} %{key3|convert_datatype}" - field: "message" - target_prefix: "dissect" -------- - -For a full example, see <>. - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `tokenizer` -| Yes -| -| Field used to define the *dissection* pattern. You can provide an optional convert datatype after the key by using a pipe character (`\|`) as a separator to convert the value from `string` to `integer`, `long`, `float`, `double`, `boolean`, or `IP`. - -| `field` -| No -| `message` -| Event field to tokenize. - -| `target_prefix` -| No -| `dissect` -| Name of the field where the values will be extracted. When an empty string is defined, the processor creates the keys at the root of the event. When the target key already exists in the event, the processor won't replace it and log an error; you need to either drop or rename the key before using dissect, or enable the `overwrite_keys` flag. - -| `ignore_failure` -| No -| `false` -| Whether to return an error if the tokenizer fails to match the message field. If `true`, the processor silently restores the original event, allowing execution of subsequent processors (if any). If `false`, the processor logs an error, preventing execution of other processors. - -| `overwrite_keys` -| No -| `false` -| Whether to overwrite existing keys. If `true`, the processor overwrites existing keys in the event. If `false`, the processor fails if the key already exists. - -| `trim_values` -| No -| `none` -a| Enables the trimming of the extracted values. Useful to remove leading and trailing spaces. Possible values are: - -* `none`: no trimming is performed. 
-* `left`: values are trimmed on the left (leading). -* `right`: values are trimmed on the right (trailing). -* `all`: values are trimmed for leading and trailing. - -| `trim_chars` -| No -| (`" "`) to trim space characters -| Set of characters to trim from values when `trim_values` is enabled. To trim multiple characters, set this value to a string containing all characters to trim. For example, `trim_chars: " \t"` trims spaces and tabs. - -|=== - -For tokenization to be successful, all keys must be found and extracted. If a key -cannot be found, an error is logged, and no modification is done on the original -event. - -NOTE: A key can contain any characters except reserved suffix or prefix modifiers: `/`,`&`, `+`, `#` -and `?`. - -See <> for a list of supported conditions. - -[discrete] -[[dissect-example]] -== Dissect example - -For this example, imagine that an application generates the following messages: - -[source,sh] ----- -"321 - App01 - WebServer is starting" -"321 - App01 - WebServer is up and running" -"321 - App01 - WebServer is scaling 2 pods" -"789 - App02 - Database will be restarted in 5 minutes" -"789 - App02 - Database is up and running" -"789 - App02 - Database is refreshing tables" ----- - -Use the `dissect` processor to split each message into three fields, for example, `service.pid`, -`service.name`, and `service.status`: - -[source,yaml] ----- - - dissect: - tokenizer: '"%{service.pid|integer} - %{service.name} - %{service.status}"' - field: "message" - target_prefix: "" ----- - -This configuration produces fields like: - -[source,json] ----- -"service": { - "pid": 321, - "name": "App01", - "status": "WebServer is up and running" -}, ----- - -`service.name` is an ECS {ref}/keyword.html[keyword field], which means that you -can use it in {es} for filtering, sorting, and aggregations. - -When possible, use ECS-compatible field names. For more information, see the -{ecs-ref}/index.html[Elastic Common Schema] documentation. 
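The trimming options above can be combined with any tokenizer; a sketch (the tokenizer and field names are illustrative) that strips spaces and tabs from both ends of each extracted value:

[source,yaml]
----
  - dissect:
      tokenizer: "%{service.name} - %{service.status}"
      field: "message"
      target_prefix: ""
      trim_values: all
      trim_chars: " \t"
----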
diff --git a/docs/en/ingest-management/processors/processor-dns.asciidoc b/docs/en/ingest-management/processors/processor-dns.asciidoc deleted file mode 100644 index 3a75419bd..000000000 --- a/docs/en/ingest-management/processors/processor-dns.asciidoc +++ /dev/null @@ -1,144 +0,0 @@ -[[dns-processor]] -= DNS Reverse Lookup - -++++ -dns -++++ - -The `dns` processor performs reverse DNS lookups of IP addresses. It caches the -responses that it receives in accordance with the time-to-live (TTL) value -contained in the response. It also caches failures that occur during lookups. -Each instance of this processor maintains its own independent cache. - -The processor uses its own DNS resolver to send requests to nameservers and does -not use the operating system's resolver. It does not read any values contained -in `/etc/hosts`. - -This processor can significantly slow down your pipeline's throughput if you -have a high-latency network or a slow upstream nameserver. The cache will help -with performance, but if the addresses being resolved have a high cardinality, -cache benefits are diminished due to the high miss ratio. - -For example, if each DNS lookup takes 2 milliseconds, the maximum -throughput you can achieve is 500 events per second (1000 milliseconds / 2 -milliseconds). If you have a high cache hit ratio, your throughput can be -higher. - -[discrete] -== Examples - -This is a minimal configuration example that resolves the IP addresses contained -in two fields. - -[source,yaml] ----- - - dns: - type: reverse - fields: - source.ip: source.hostname - destination.ip: destination.hostname ----- - -This example shows all configuration options. 
- -[source,yaml] ----- - - dns: - type: reverse - action: append - transport: tls - fields: - server.ip: server.hostname - client.ip: client.hostname - success_cache: - capacity.initial: 1000 - capacity.max: 10000 - min_ttl: 1m - failure_cache: - capacity.initial: 1000 - capacity.max: 10000 - ttl: 1m - nameservers: ['192.0.2.1', '203.0.113.1'] - timeout: 500ms - tag_on_failure: [_dns_reverse_lookup_failed] ----- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `type` -| Yes -| -| Type of DNS lookup to perform. The only supported type is `reverse`, which queries for a PTR record. - -| `action` -| No -| `append` -| Defines the behavior of the processor when the target field already exists in the event. The options are `append` and `replace`. - -| `fields` -| Yes -| -| Mapping of source field names to target field names. The value of the source field is used in the DNS query, and the result is written to the target field. - -| `success_cache.capacity.initial` -| No -| `1000` -| Initial number of items that the success cache is allocated to hold. When initialized, the processor will allocate memory for this number of items. - -| `success_cache.capacity.max` -| No -| `10000` -| Maximum number of items that the success cache can hold. When the maximum capacity is reached, a random item is evicted. - -| `success_cache.min_ttl` -| Yes -| `1m` -| Duration of the minimum alternative cache TTL for successful DNS responses. Ensures that `TTL=0` successful reverse DNS responses can be cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - -| `failure_cache.capacity.initial` -| No -| `1000` -| Initial number of items that the failure cache is allocated to hold. 
When initialized, the processor will allocate memory for this number of items. - -| `failure_cache.capacity.max` -| No -| `10000` -| Maximum number of items that the failure cache can hold. When the maximum capacity is reached, a random item is evicted. - -| `failure_cache.ttl` -| No -| `1m` -| Duration for which failures are cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - -| `nameservers` -| Yes (on Windows) -| -| List of nameservers to query. If there are multiple servers, the resolver queries them in the order listed. If none are specified, it reads the nameservers listed in `/etc/resolv.conf` once at initialization. On Windows you must always supply at least one nameserver. - -| `timeout` -| No -| `500ms` -| Duration after which a DNS query times out. This is the timeout for each DNS request, so if you have two nameservers, the total timeout will be 2 times this value. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - -| `tag_on_failure` -| No -| `false` -| List of tags to add to the event when any lookup fails. The tags are only added once even if multiple lookups fail. By default no tags are added upon failure. - -| `transport` -| No -| `udp` -| Type of transport connection that should be used: `tls` (DNS over TLS) or `udp`. - -|=== diff --git a/docs/en/ingest-management/processors/processor-drop_event.asciidoc b/docs/en/ingest-management/processors/processor-drop_event.asciidoc deleted file mode 100644 index 2f1792f1a..000000000 --- a/docs/en/ingest-management/processors/processor-drop_event.asciidoc +++ /dev/null @@ -1,24 +0,0 @@ -[[drop_event-processor]] -= Drop events - -++++ -drop_event -++++ - -The `drop_event` processor drops the entire event if the associated condition -is fulfilled. The condition is mandatory, because without one, all the events -are dropped. - -[discrete] -== Example - -[source,yaml] ------- - - drop_event: - when: - condition ------- - -See <> for a list of supported conditions. 
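As a sketch with a concrete condition in place of the `condition` placeholder (the message prefix is illustrative), the following drops every event whose message starts with `DBG:`:

[source,yaml]
----
  - drop_event:
      when:
        regexp:
          message: "^DBG:"
----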
- -include::processors.asciidoc[tag=processor-limitations] diff --git a/docs/en/ingest-management/processors/processor-drop_fields.asciidoc b/docs/en/ingest-management/processors/processor-drop_fields.asciidoc deleted file mode 100644 index 849c4217f..000000000 --- a/docs/en/ingest-management/processors/processor-drop_fields.asciidoc +++ /dev/null @@ -1,53 +0,0 @@ -[[drop_fields-processor]] -= Drop fields from events - -++++ -drop_fields -++++ - -The `drop_fields` processor specifies which fields to drop if a certain -condition is fulfilled. The condition is optional. If it's missing, the -specified fields are always dropped. The `@timestamp` and `type` fields cannot -be dropped, even if they show up in the `drop_fields` list. - -[discrete] -== Example - -[source,yaml] ------------------------------------------------------ - - drop_fields: - when: - condition - fields: ["field1", "field2", ...] - ignore_missing: false ------------------------------------------------------ - -NOTE: If you define an empty list of fields under `drop_fields`, no fields -are dropped. - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -| If non-empty, the list of field names to remove. Any element in the array can be a regular expression delimited by two slashes (`/reg_exp/`) to match and remove more than one field. - -| `ignore_missing` -| No -| `false` -| If `true`, the processor ignores missing fields and does not return an error. - -|=== - -See <> for a list of supported conditions. 
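As a sketch of the regular-expression form of `fields` (the field names are illustrative), the following removes `field1` plus every field whose name starts with `field2.`:

[source,yaml]
----
  - drop_fields:
      fields: ["field1", "/^field2\\..*/"]
      ignore_missing: true
----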
diff --git a/docs/en/ingest-management/processors/processor-extract_array.asciidoc b/docs/en/ingest-management/processors/processor-extract_array.asciidoc deleted file mode 100644 index 17e04c246..000000000 --- a/docs/en/ingest-management/processors/processor-extract_array.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[extract_array-processor]] -= Extract array - -++++ -extract_array -++++ - -experimental[] - -The `extract_array` processor populates fields with values read from an array -field. - -[discrete] -== Example - -The following example populates `source.ip` with the first element of -the `my_array` field, `destination.ip` with the second element, and -`network.transport` with the third. - -[source,yaml] ------------------------------------------------------ - - extract_array: - field: my_array - mappings: - source.ip: 0 - destination.ip: 1 - network.transport: 2 ------------------------------------------------------ - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -| The array field whose elements are to be extracted. - -| `mappings` -| Yes -| -| Maps each field name to an array index. Use 0 for the first element in the array. Multiple fields can be mapped to the same array element. - -| `ignore_missing` -| No -| `false` -| Whether to ignore events where the array field is missing. If `false`, processing of an event fails if the specified field does not exist. - -| `overwrite_keys` -| No -| `false` -| Whether to overwrite target fields specified in the mapping if the fields already exist. If `false`, processing fails if a target field already exists. 
- -| `fail_on_error` -| No -| `true` -| If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues despite errors. - -| `omit_empty` -| No -| `false` -| Whether empty values are extracted from the array. If `true`, instead of the target field being set to an empty value, it is left unset. The empty string (`""`), an empty array (`[]`), or an empty object (`{}`) are considered empty values. - -|=== diff --git a/docs/en/ingest-management/processors/processor-fingerprint.asciidoc b/docs/en/ingest-management/processors/processor-fingerprint.asciidoc deleted file mode 100644 index 783ee2239..000000000 --- a/docs/en/ingest-management/processors/processor-fingerprint.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[fingerprint-processor]] -= Generate a fingerprint of an event - -++++ -fingerprint -++++ - -The `fingerprint` processor generates a fingerprint of an event based on a -specified subset of its fields. - -The value that is hashed is constructed as a concatenation of the field name and -field value separated by `|`. For example `|field1|value1|field2|value2|`. - -Nested fields are supported in the following format: `"field1.field2"`, for example: `["log.path.file", "foo"]` - -[discrete] -== Example - -[source,yaml] ------------------------------------------------------ - - fingerprint: - fields: ["field1", "field2", ...] ------------------------------------------------------ - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -| List of fields to use as the source for the fingerprint. The list will be alphabetically sorted by the processor. 
- -| `ignore_missing` -| No -| `false` -| Whether to ignore missing fields. - -| `target_field` -| No -| `fingerprint` -| Field in which the generated fingerprint should be stored. - -| `method` -| No -| `sha256` -| Algorithm to use for computing the fingerprint. Must be one of: `md5`, `sha1`, `sha256`, `sha384`, `sha512`, or `xxhash`. - -| `encoding` -| No -| `hex` -| Encoding to use on the fingerprint value. Must be one of: `hex`, `base32`, or `base64`. - -|=== diff --git a/docs/en/ingest-management/processors/processor-include_fields.asciidoc b/docs/en/ingest-management/processors/processor-include_fields.asciidoc deleted file mode 100644 index 126348572..000000000 --- a/docs/en/ingest-management/processors/processor-include_fields.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[include_fields-processor]] -= Keep fields from events - -++++ -include_fields -++++ - -The `include_fields` processor specifies which fields to export if a certain -condition is fulfilled. The condition is optional. If it's missing, the -specified fields are always exported. The `@timestamp`, `@metadata`, and `type` -fields are always exported, even if they are not defined in the `include_fields` -list. - -[discrete] -== Example - -[source,yaml] -------- - - include_fields: - when: - condition - fields: ["field1", "field2", ...] -------- - -See <> for a list of supported conditions. - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -You can specify multiple `include_fields` processors under the `processors` -section. - -NOTE: If you define an empty list of fields under `include_fields`, only -the required fields, `@timestamp` and `type`, are exported. 
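As a sketch with a concrete condition in place of the `condition` placeholder (the field names are illustrative), the following keeps only the HTTP-related fields when the response code equals 200:

[source,yaml]
----
  - include_fields:
      when:
        equals:
          http.response.code: 200
      fields: ["http", "url"]
----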
diff --git a/docs/en/ingest-management/processors/processor-move_fields.asciidoc b/docs/en/ingest-management/processors/processor-move_fields.asciidoc deleted file mode 100644 index 99a1c6cba..000000000 --- a/docs/en/ingest-management/processors/processor-move_fields.asciidoc +++ /dev/null @@ -1,97 +0,0 @@ -[[move_fields-processor]] -= Move fields - -++++ -move_fields -++++ - -The `move_fields` processor moves event fields from one object into another. It can also rearrange fields or add a prefix to fields. - -The processor extracts fields from `from`, then uses `fields` and `exclude` as filters to choose which fields to move into the `to` field. - -[discrete] -== Example - -For example, given the following event: - -[source,json] ----- -{ - "app": { - "method": "a", - "elapsed_time": 100, - "user_id": 100, - "message": "i'm a message" - } -} ----- - -To move `method` and `elapsed_time` into another object, use this configuration: - -[source,yaml] ----- -processors: - - move_fields: - from: "app" - fields: ["method", "elapsed_time"] - to: "rpc." ----- - -Your final event will be: - -[source,json] ----- -{ - "app": { - "user_id": 100, - "message": "i'm a message", - "rpc": { - "method": "a", - "elapsed_time": 100 - } - } -} ----- - - -To add a prefix to the whole event: - -[source,json] ----- -{ - "app": { "method": "a"}, - "cost": 100 -} ----- - -Use this configuration: - -[source,yaml] ----- -processors: - - move_fields: - to: "my_prefix_" ----- - -Your final event will be: - -[source,json] ----- -{ - "my_prefix_app": { "method": "a"}, - "my_prefix_cost": 100 -} ----- - -[discrete] -== Configuration settings - -[options="header"] -|====== -| Name | Required | Default | Description | -| `from` | no | | Which field you want to extract. This field and any nested fields will be moved into `to` unless they are filtered out. If empty, indicates the event root. | -| `fields` | no | | Which fields to extract from `from` and move to `to`. An empty list indicates all fields. 
| -| `ignore_missing` | no | false | Ignore "not found" errors when extracting fields. | -| `exclude` | no | | A list of fields to exclude and not move. | -| `to` | yes | | Destination for the moved fields. Fields extracted from `from` are moved under this field; when `from` is empty, `to` is used as a prefix for the fields at the event root. | -|====== \ No newline at end of file diff --git a/docs/en/ingest-management/processors/processor-parse_aws_vpc_flow_log.asciidoc b/docs/en/ingest-management/processors/processor-parse_aws_vpc_flow_log.asciidoc deleted file mode 100644 index f39004afb..000000000 --- a/docs/en/ingest-management/processors/processor-parse_aws_vpc_flow_log.asciidoc +++ /dev/null @@ -1,263 +0,0 @@ -[[processor-parse-aws-vpc-flow-log]] -= Parse AWS VPC Flow Log - -++++ -parse_aws_vpc_flow_log -++++ - -The `parse_aws_vpc_flow_log` processor decodes AWS VPC Flow log messages. - -[discrete] -== Example - -The following example configuration decodes the `message` field using the -default version 2 VPC flow log format. - -[source,yaml] ----- -processors: - - parse_aws_vpc_flow_log: - format: version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status - field: message ----- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| No -| `message` -| Source field containing the VPC flow log message. - -| `target_field` -| No -| `aws.vpcflow` -| Target field for the VPC flow log object. This applies only to the original VPC flow log fields. ECS fields are written to the standard location. - -| `format` -| Yes -| -| VPC flow log format. This supports VPC flow log fields from versions 2 through 5. It will accept a string or a list of strings. 
Each format must have a unique number of fields to enable matching it to a flow log message. - -| `mode` -| No -| `ecs` -a| Controls which fields are generated. The available options are: - -* `original`: generates the fields specified in the format string. -* `ecs`: maps the original fields to ECS and removes the original fields that are mapped to ECS. -* `ecs_and_original`: maps the original fields to ECS and retains all the original fields. - -To learn more, refer to <<modes>>. - -| `ignore_missing` -| No -| false -| Whether to ignore a missing source field. - -| `ignore_failure` -| No -| false -| Whether to ignore failures while parsing and transforming the flow log message. - -| `id` -| No -| -| Instance ID for debugging purposes. -|=== - -[discrete] -[[modes]] -== Modes - -This section provides more information about available modes. - -[discrete] -=== Original - -This mode returns the same fields found in the `format` string. It will drop any -fields whose value is a dash (`-`). It converts the strings into the appropriate -data types. These are the known field names and their data types. - -NOTE: The AWS VPC flow field names use underscores instead of dashes within -{agent}. You may configure the `format` using field names that contain either.
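The matching described above — pairing a message's space-separated values with the configured format fields, normalizing dashes to underscores in field names, and dropping unset (`-`) values — can be sketched as follows. This is an illustrative sketch only: the `parse_flow_log` helper is invented for this example, and the type conversion the processor performs is omitted.

```python
def parse_flow_log(message, format_string):
    """Toy sketch of 'original' mode parsing (not the agent's code)."""
    # Field names may use dashes or underscores; normalize to underscores.
    fields = [name.replace("-", "_") for name in format_string.split()]
    values = message.split()
    if len(fields) != len(values):
        # Formats are matched to messages by their unique field count.
        raise ValueError("message does not match format")
    # Drop fields whose value is a dash ('-'), i.e. unset fields.
    return {k: v for k, v in zip(fields, values) if v != "-"}

# A version 2 flow log record parsed with the doc's default v2 format.
record = parse_flow_log(
    "2 123456789010 eni-1235b8ca1 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK",
    "version account-id interface-id srcaddr dstaddr srcport dstport "
    "protocol packets bytes start end action log-status",
)
```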
- -[options="header"] -|=== -| VPC Flow Log Field | Data Type | -| account_id | string | -| action | string | -| az_id | string | -| bytes | long | -| dstaddr | ip | -| dstport | integer | -| end | timestamp | -| flow_direction | string | -| instance_id | string | -| interface_id | string | -| log_status | string | -| packets | long | -| pkt_dst_aws_service | string | -| pkt_dstaddr | ip | -| pkt_src_aws_service | string | -| pkt_srcaddr | ip | -| protocol | integer | -| region | string | -| srcaddr | ip | -| srcport | integer | -| start | timestamp | -| sublocation_id | string | -| sublocation_type | string | -| subnet_id | string | -| tcp_flags | integer | -| tcp_flags_array* | integer | -| traffic_path | integer | -| type | string | -| version | integer | -| vpc_id | string | -|=== - -[discrete] -=== ECS - -This mode maps the original VPC flow log fields into their associated Elastic -Common Schema (ECS) fields. It removes the original fields that were mapped to -ECS to reduced duplication. These are the field associations. There may be some -transformations applied to derive the ECS field. 
- -[options="header"] -|=== -| VPC Flow Log Field | ECS Field | -| account_id | cloud.account.id | -| action | event.outcome | -| action | event.action | -| action | event.type | -| az_id | cloud.availability_zone | -| bytes | network.bytes | -| bytes | source.bytes | -| dstaddr | destination.address | -| dstaddr | destination.ip | -| dstport | destination.port | -| end | @timestamp | -| end | event.end | -| flow_direction | network.direction | -| instance_id | cloud.instance.id | -| packets | network.packets | -| packets | source.packets | -| protocol | network.iana_number | -| protocol | network.transport | -| region | cloud.region | -| srcaddr | network.type | -| srcaddr | source.address | -| srcaddr | source.ip | -| srcport | source.port | -| start | event.start | -|=== - -[discrete] -=== ECS and Original - -This mode maps the fields into ECS and retains all the original fields. Below -is an example document produced using `ecs_and_orignal` mode. - -[source,json] ----- -{ - "@timestamp": "2021-03-26T03:29:09Z", - "aws": { - "vpcflow": { - "account_id": "64111117617", - "action": "REJECT", - "az_id": "use1-az5", - "bytes": 1, - "dstaddr": "10.200.0.0", - "dstport": 33004, - "end": "2021-03-26T03:29:09Z", - "flow_direction": "ingress", - "instance_id": "i-0axxxxxx1ad77", - "interface_id": "eni-069xxxxxb7a490", - "log_status": "OK", - "packets": 52, - "pkt_dst_aws_service": "CLOUDFRONT", - "pkt_dstaddr": "10.200.0.80", - "pkt_src_aws_service": "AMAZON", - "pkt_srcaddr": "89.160.20.156", - "protocol": 17, - "region": "us-east-1", - "srcaddr": "89.160.20.156", - "srcport": 50041, - "start": "2021-03-26T03:28:12Z", - "sublocation_id": "fake-id", - "sublocation_type": "wavelength", - "subnet_id": "subnet-02d645xxxxxxxdbc0", - "tcp_flags": 1, - "tcp_flags_array": [ - "fin" - ], - "traffic_path": 1, - "type": "IPv4", - "version": 5, - "vpc_id": "vpc-09676f97xxxxxb8a7" - } - }, - "cloud": { - "account": { - "id": "64111117617" - }, - "availability_zone": "use1-az5", - 
"instance": { - "id": "i-0axxxxxx1ad77" - }, - "region": "us-east-1" - }, - "destination": { - "address": "10.200.0.0", - "ip": "10.200.0.0", - "port": 33004 - }, - "event": { - "action": "reject", - "end": "2021-03-26T03:29:09Z", - "outcome": "failure", - "start": "2021-03-26T03:28:12Z", - "type": [ - "connection", - "denied" - ] - }, - "message": "5 64111117617 eni-069xxxxxb7a490 89.160.20.156 10.200.0.0 50041 33004 17 52 1 1616729292 1616729349 REJECT OK vpc-09676f97xxxxxb8a7 subnet-02d645xxxxxxxdbc0 i-0axxxxxx1ad77 1 IPv4 89.160.20.156 10.200.0.80 us-east-1 use1-az5 wavelength fake-id AMAZON CLOUDFRONT ingress 1", - "network": { - "bytes": 1, - "direction": "ingress", - "iana_number": "17", - "packets": 52, - "transport": "udp", - "type": "ipv4" - }, - "related": { - "ip": [ - "89.160.20.156", - "10.200.0.0", - "10.200.0.80" - ] - }, - "source": { - "address": "89.160.20.156", - "bytes": 1, - "ip": "89.160.20.156", - "packets": 52, - "port": 50041 - } -} ----- - diff --git a/docs/en/ingest-management/processors/processor-rate_limit.asciidoc b/docs/en/ingest-management/processors/processor-rate_limit.asciidoc deleted file mode 100644 index 861124e31..000000000 --- a/docs/en/ingest-management/processors/processor-rate_limit.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[rate_limit-processor]] -= Rate limit the flow of events -beta[] - -++++ -rate_limit -++++ - -The `rate_limit` processor limits the throughput of events based on -the specified configuration. - -In the current implementation, rate-limited events are dropped. Future -implementations may allow rate-limited events to be handled differently. 
- -[discrete] -== Examples - -[source,yaml] ------------------------------------------------------ -- rate_limit: - limit: "10000/m" ------------------------------------------------------ - -[source,yaml] ------------------------------------------------------ -- rate_limit: - fields: - - "cloudfoundry.org.name" - limit: "400/s" ------------------------------------------------------ - -[source,yaml] ------------------------------------------------------ -- if.equals.cloudfoundry.org.name: "acme" - then: - - rate_limit: - limit: "500/s" ------------------------------------------------------ - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `limit` -| Yes -| -| The rate limit. Supported time units for the rate are `s` (per second), `m` (per minute), and `h` (per hour). - -| `fields` -| No -| -| List of fields. The rate limit will be applied to each distinct value derived by combining the values of these fields. - -|=== diff --git a/docs/en/ingest-management/processors/processor-registered_domain.asciidoc b/docs/en/ingest-management/processors/processor-registered_domain.asciidoc deleted file mode 100644 index 8944d0515..000000000 --- a/docs/en/ingest-management/processors/processor-registered_domain.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[registered_domain-processor]] -= Registered Domain - -++++ -registered_domain -++++ - -The `registered_domain` processor reads a field containing a hostname and then -writes the "registered domain" contained in the hostname to the target field. -For example, given `www.google.co.uk`, the processor would output `google.co.uk`. -In other words, the "registered domain" is the effective top-level domain -(`co.uk`) plus one level (`google`). 
Optionally, the processor can store the -rest of the domain, the `subdomain`, into another target field. - -This processor uses the Mozilla Public Suffix list to determine the value. - -[discrete] -== Example - -[source,yaml] ---- - - registered_domain: - field: dns.question.name - target_field: dns.question.registered_domain - target_etld_field: dns.question.top_level_domain - target_subdomain_field: dns.question.subdomain - ignore_missing: true - ignore_failure: true ---- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -| Source field containing a fully qualified domain name (FQDN). - -| `target_field` -| Yes -| -| Target field for the registered domain value. - -| `target_etld_field` -| No -| -| Target field for the effective top-level domain value. - -| `target_subdomain_field` -| No -| -| Target field for the subdomain value. - -| `ignore_missing` -| No -| `false` -| Whether to ignore errors when the source field is missing. - -| `ignore_failure` -| No -| `false` -| Whether to ignore all errors produced by the processor. - -| `id` -| No -| -| Identifier for this processor instance. Useful for debugging. - -|=== diff --git a/docs/en/ingest-management/processors/processor-rename.asciidoc b/docs/en/ingest-management/processors/processor-rename.asciidoc deleted file mode 100644 index fc53f2b82..000000000 --- a/docs/en/ingest-management/processors/processor-rename.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -[[rename-processor]] -= Rename fields from events - -++++ -rename -++++ - -The `rename` processor specifies a list of fields to rename. This processor -cannot be used to overwrite fields.
To overwrite fields, either first rename the -target field, or use the `drop_fields` processor to drop the field, and then -rename the field. - -TIP: You can rename fields to resolve field name conflicts. For example, if an -event has two fields, `c` and `c.b` (where `b` is a subfield of `c`), assigning -scalar values results in an {es} error at ingest time. The assignment -`{"c": 1,"c.b": 2}` would result in an error because `c` is an object and cannot -be assigned a scalar value. To prevent this conflict, rename `c` to `c.value` -before assigning values. - -[discrete] -== Example - -[source,yaml] -------- - - rename: - fields: - - from: "a.g" - to: "e.d" - ignore_missing: false - fail_on_error: true -------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -|`fields` -| Yes -| -a| Contains: - -* `from: "old-key"`, where `from` is the original field name. You can use the `@metadata.` prefix in this field to rename keys in the event metadata instead of event fields. -* `to: "new-key"`, where `to` is the target field name. - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing keys. If `true`, no error is logged when a key that should be renamed is missing. - -| `fail_on_error` -| No -| `true` -| Whether to fail renaming if an error occurs. If `true` and an error occurs, the renaming of fields is stopped, and the original event is returned. If `false`, renaming continues even if an error occurs during renaming. - -|=== - -See <> for a list of supported conditions. - -You can specify multiple `rename` processors under the `processors` -section. 
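The rename semantics above — move a dotted field to a new dotted path, but never overwrite an existing target — can be sketched as follows. This is illustrative only: `rename_field` is an invented helper, not the agent's code.

```python
def rename_field(event, src, dst):
    """Toy sketch of renaming one dotted field in a nested event dict.

    Like the `rename` processor, it refuses to overwrite an existing
    target field.
    """
    def pop_path(d, path):
        *parents, leaf = path.split(".")
        for part in parents:
            d = d[part]          # raises KeyError if a parent is missing
        return d.pop(leaf)       # raises KeyError if the leaf is missing

    def put_path(d, path, value):
        *parents, leaf = path.split(".")
        for part in parents:
            d = d.setdefault(part, {})
        if leaf in d:
            raise KeyError("target field %r already exists" % path)
        d[leaf] = value

    put_path(event, dst, pop_path(event, src))
    return event

# Mirrors the example configuration above: rename a.g to e.d.
event = rename_field({"a": {"g": 1}}, "a.g", "e.d")
```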
diff --git a/docs/en/ingest-management/processors/processor-replace.asciidoc b/docs/en/ingest-management/processors/processor-replace.asciidoc deleted file mode 100644 index 270d2ae49..000000000 --- a/docs/en/ingest-management/processors/processor-replace.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[replace-fields]] -= Replace fields from events - -++++ -replace -++++ - -The `replace` processor takes a list of fields to search for a matching -value and replaces the matching value with a specified string. - -The `replace` processor cannot be used to create a completely new value. - -TIP: You can use this processor to truncate a field value or replace -it with a new string value. You can also use this processor to mask PII -information. - -[discrete] -== Example - -The following example changes the path from `/usr/bin` to `/usr/local/bin`: - -[source,yaml] -------- - - replace: - fields: - - field: "file.path" - pattern: "/usr/" - replacement: "/usr/local/" - ignore_missing: false - fail_on_error: true -------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `fields` -| Yes -| -a| List of one or more items. Each item contains a `field: field-name`, -`pattern: regex-pattern`, and `replacement: replacement-string`, where: - -* `field` is the original field name. You can use the `@metadata.` prefix in this field to replace values in the event metadata instead of event fields. -* `pattern` is the regex pattern to match the field's value -* `replacement` is the replacement string to use to update the field's value - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing fields. If `true`, no error is logged if the specified field is missing. 
- -|`fail_on_error` -| No -| `true` -| Whether to fail replacement of field values if an error occurs. -If `true` and there's an error, the replacement of field values is stopped, and the original event is returned. -If `false`, replacement continues even if an error occurs during replacement. - -|=== - -See <> for a list of supported conditions. diff --git a/docs/en/ingest-management/processors/processor-script.asciidoc b/docs/en/ingest-management/processors/processor-script.asciidoc deleted file mode 100644 index a4bf2e324..000000000 --- a/docs/en/ingest-management/processors/processor-script.asciidoc +++ /dev/null @@ -1,191 +0,0 @@ -[[script-processor]] -= Script Processor - -++++ -script -++++ - -The `script` processor executes Javascript code to process an event. The processor -uses a pure Go implementation of ECMAScript 5.1 and has no external -dependencies. This can be useful in situations where one of the other processors -doesn't provide the functionality you need to filter events. - -The processor can be configured by embedding Javascript in your configuration -file or by pointing the processor at external files. - -[discrete] -== Examples - -[source,yaml] ----- - - script: - lang: javascript - source: > - function process(event) { - event.Tag("js"); - } ----- - -This example loads `filter.js` from disk: - -[source,yaml] ----- - - script: - lang: javascript - file: ${path.config}/filter.js ----- - -Parameters can be passed to the script by adding `params` to the config. -This allows for a script to be made reusable. When using `params` the -code must define a `register(params)` function to receive the parameters. 
- -[source,yaml] ----- - - script: - lang: javascript - tag: my_filter - params: - threshold: 15 - source: > - var params = {threshold: 42}; - function register(scriptParams) { - params = scriptParams; - } - function process(event) { - if (event.Get("severity") < params.threshold) { - event.Cancel(); - } - } ----- - -If the script defines a `test()` function, it will be invoked when the processor -is loaded. Any exceptions thrown will cause the processor to fail to load. This -can be used to make assertions about the behavior of the script. - -[source,javascript] ----- -function process(event) { - if (event.Get("event.code") === 1102) { - event.Put("event.action", "cleared"); - } - return event; -} - -function test() { - var event = process(new Event({event: {code: 1102}})); - if (event.Get("event.action") !== "cleared") { - throw "expected event.action === cleared"; - } -} ----- - -[discrete] -== Configuration settings - -include::processors.asciidoc[tag=processor-limitations] - -[options="header"] -|=== -| Name | Required | Default | Description - -| `lang` -| Yes -| -| The value of this field must be `javascript`. - -| `tag` -| No -| -| Optional identifier added to log messages. If defined, this tag enables metrics logging for this instance of the processor. The metrics include the number of exceptions and a histogram of the execution times for the `process` function. - -| `source` -| -| -| Inline Javascript source code. - -| `file` -| -| -| Path to a script file to load. Relative paths are interpreted as relative to the `path.config` directory. Globs are expanded. - -| `files` -| -| -| List of script files to load. The scripts are concatenated together. Relative paths are interpreted as relative to the `path.config` directory. Globs are expanded. - -| `params` -| -| -| A dictionary of parameters that are passed to the `register` of the script. 
- -| `tag_on_exception` -| -| `_js_exception` -| Tag to add to events in case the Javascript code causes an exception while processing an event. - -| `timeout` -| -| no timeout -| An execution timeout for the `process` function. When the `process` function takes longer than the `timeout` period, the function is interrupted. You can set this option to prevent a script from running for too long (like preventing an infinite `while` loop). -| `max_cached_sessions` -| -| `4` -| The maximum number of Javascript VM sessions that will be cached to avoid reallocation. - -|=== - -[discrete] -== Event API - -The `Event` object passed to the `process` method has the following API. - -[frame="topbot",options="header"] -|=== -|Method |Description - -|`Get(string)` -|Get a value from the event (either a scalar or an object). If the key does not -exist `null` is returned. If no key is provided then an object containing all -fields is returned. - -*Example*: `var value = event.Get(key);` - -|`Put(string, value)` -|Put a value into the event. If the key was already set then the -previous value is returned. It throws an exception if the key cannot be set -because one of the intermediate values is not an object. - -*Example*: `var old = event.Put(key, value);` - -|`Rename(string, string)` -|Rename a key in the event. The target key must not exist. It -returns true if the source key was successfully renamed to the target key. - -*Example*: `var success = event.Rename("source", "target");` - -|`Delete(string)` -|Delete a field from the event. It returns true on success. - -*Example*: `var deleted = event.Delete("user.email");` - -|`Cancel()` -|Flag the event as cancelled which causes the processor to drop -event. - -*Example*: `event.Cancel(); return;` - -|`Tag(string)` -|Append a tag to the `tags` field if the tag does not already -exist. Throws an exception if `tags` exists and is not a string or a list of -strings. 
- -*Example*: `event.Tag("user_event");` - -|`AppendTo(string, string)` -|`AppendTo` is a specialized `Put` method that converts the existing value to an -array and appends the value if it does not already exist. If there is an -existing value that's not a string or array of strings then an exception is -thrown. - -*Example*: `event.AppendTo("error.message", "invalid file hash");` -|=== diff --git a/docs/en/ingest-management/processors/processor-syntax.asciidoc b/docs/en/ingest-management/processors/processor-syntax.asciidoc deleted file mode 100644 index 696df8b2a..000000000 --- a/docs/en/ingest-management/processors/processor-syntax.asciidoc +++ /dev/null @@ -1,326 +0,0 @@ -[[processor-syntax]] -= Processor syntax - -Specify a list of one or more processors: - -* When configuring processors in the standalone {agent} configuration file, put -this list under the `processors` setting. -* When using the Integrations UI in {kib}, put this list in the **Processors** -field. - -Each processor begins with a dash (-) and includes the processor name, an -optional <<processor-conditions,condition>>, and configuration settings to pass to the -processor: - -[source,yaml] ------ -- <processor_name>: - when: - <condition> - <settings> - -- <processor_name>: - when: - <condition> - <settings> ------ - - -If a <<processor-conditions,condition>> is specified, it must be met in order for the -processor to run. If no condition is specified, the processor always runs. - -To accomplish complex conditional processing, use the if-then-else processor -configuration. This configuration allows you to run multiple processors based on -a single condition. For example: - -[source,yaml] ---- -- if: - <condition> - then: <1> - - <processor_name>: - <settings> - - <processor_name>: - <settings> - ... - else: <2> - - <processor_name>: - <settings> - - <processor_name>: - <settings> ---- -<1> `then` must contain a single processor or a list of processors that will -execute when the condition is `true`. -<2> `else` is optional. It can contain a single processor or a list of -processors that will execute when the condition is `false`. - -[discrete] -[[processor-conditions]] -== Conditions - -Each condition receives a field to compare.
You can specify multiple fields -under the same condition by using `AND` between the fields (for example, -`field1 AND field2`). - -For each field, you can specify a simple field name or a nested map, for example -`dns.question.name`. - -Refer to the {integrations-docs}[integrations documentation] for a list of all -fields created by a specific integration. - -The supported conditions are: - -* <<processor-condition-equals,`equals`>> -* <<processor-condition-contains,`contains`>> -* <<processor-condition-regexp,`regexp`>> -* <<processor-condition-range,`range`>> -* <<processor-condition-network,`network`>> -* <<processor-condition-has_fields,`has_fields`>> -* <<processor-condition-or,`or`>> -* <<processor-condition-and,`and`>> -* <<processor-condition-not,`not`>> - -[discrete] -[[processor-condition-equals]] -=== `equals` - -With the `equals` condition, you can check if a field has a certain value. -The condition accepts only an integer or string value. - -For example, the following condition checks if the response code of the HTTP -transaction is 200: - -[source,yaml] ------- -equals: - http.response.code: 200 ------- - -[discrete] -[[processor-condition-contains]] -=== `contains` - -The `contains` condition checks if a value is part of a field. The field can be -a string or an array of strings. The condition accepts only a string value. - -For example, the following condition checks if an error is part of the -transaction status: - -[source,yaml] ------ -contains: - status: "Specific error" ------ - -[discrete] -[[processor-condition-regexp]] -=== `regexp` - -The `regexp` condition checks the field against a regular expression. The -condition accepts only strings. - -For example, the following condition checks if the process name starts with -`foo`: - -[source,yaml] ----- -regexp: - system.process.name: "^foo.*" ----- - -[discrete] -[[processor-condition-range]] -=== `range` - -The `range` condition checks if the field is in a certain range of values. The -condition supports `lt`, `lte`, `gt` and `gte`. The condition accepts only -integer or float values. - -For example, the following condition checks for failed HTTP transactions by -comparing the `http.response.code` field with 400.
- - -[source,yaml] ------- -range: - http.response.code: - gte: 400 ------- - -This can also be written as: - -[source,yaml] ----- -range: - http.response.code.gte: 400 ----- - -The following condition checks if the CPU usage in percentage has a value -between 0.5 and 0.8. - -[source,yaml] ------- -range: - system.cpu.user.pct.gte: 0.5 - system.cpu.user.pct.lt: 0.8 ------- - -[discrete] -[[processor-condition-network]] -=== `network` - -The `network` condition checks if the field is in a certain IP network range. -Both IPv4 and IPv6 addresses are supported. The network range may be specified -using CIDR notation, like "192.0.2.0/24" or "2001:db8::/32", or by using one of -these named ranges: - -- `loopback` - Matches loopback addresses in the range of `127.0.0.0/8` or - `::1/128`. -- `unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, - and RFC 4291 with the exception of the IPv4 broadcast address - (`255.255.255.255`). This includes private address ranges. -- `multicast` - Matches multicast addresses. -- `interface_local_multicast` - Matches IPv6 interface-local multicast addresses. -- `link_local_unicast` - Matches link-local unicast addresses. -- `link_local_multicast` - Matches link-local multicast addresses. -- `private` - Matches private address ranges defined in RFC 1918 (IPv4) and - RFC 4193 (IPv6). -- `public` - Matches addresses that are not loopback, unspecified, IPv4 - broadcast, link-local unicast, link-local multicast, interface-local - multicast, or private. -- `unspecified` - Matches unspecified addresses (either the IPv4 address - "0.0.0.0" or the IPv6 address "::"). - -The following condition returns true if the `source.ip` value is within the -private address space. - -[source,yaml] ----- -network: - source.ip: private ----- - -This condition returns true if the `destination.ip` value is within the -IPv4 range of `192.168.1.0` - `192.168.1.255`. 
- -[source,yaml] ---- -network: - destination.ip: '192.168.1.0/24' ---- - -And this condition returns true when `destination.ip` is within any of the given -subnets. - -[source,yaml] ---- -network: - destination.ip: ['192.168.1.0/24', '10.0.0.0/8', loopback] ---- - -[discrete] -[[processor-condition-has_fields]] -=== `has_fields` - -The `has_fields` condition checks if all the given fields exist in the -event. The condition accepts a list of string values denoting the field names. - -For example, the following condition checks if the `http.response.code` field -is present in the event. - - -[source,yaml] ------- -has_fields: ['http.response.code'] ------- - - -[discrete] -[[processor-condition-or]] -=== `or` - -The `or` operator receives a list of conditions. - -[source,yaml] -------- -or: - - <condition1> - - <condition2> - - <condition3> - ... - -------- - -For example, to configure the condition -`http.response.code = 304 OR http.response.code = 404`: - -[source,yaml] ------- -or: - - equals: - http.response.code: 304 - - equals: - http.response.code: 404 ------- - -[discrete] -[[processor-condition-and]] -=== `and` - -The `and` operator receives a list of conditions. - -[source,yaml] -------- -and: - - <condition1> - - <condition2> - - <condition3> - ... - -------- - -For example, to configure the condition -`http.response.code = 200 AND status = OK`: - -[source,yaml] ------- -and: - - equals: - http.response.code: 200 - - equals: - status: OK ------- - -To configure a condition like `<condition1> OR <condition2> AND <condition3>`: - -[source,yaml] ------- -or: - - <condition1> - - and: - - <condition2> - - <condition3> ------- - -[discrete] -[[processor-condition-not]] -=== `not` - -The `not` operator receives the condition to negate.
- -[source,yaml] -------- -not: - <condition> -------- - -For example, to configure the condition `NOT status = OK`: - -[source,yaml] ------ -not: - equals: - status: OK ------ diff --git a/docs/en/ingest-management/processors/processor-syslog.asciidoc b/docs/en/ingest-management/processors/processor-syslog.asciidoc deleted file mode 100644 index 3d95dcbe4..000000000 --- a/docs/en/ingest-management/processors/processor-syslog.asciidoc +++ /dev/null @@ -1,156 +0,0 @@ -[[syslog-processor]] -= Syslog - -++++ -syslog -++++ - -The syslog processor parses RFC 3164 and/or RFC 5424 formatted syslog messages -that are stored in a field. The processor itself does not handle receiving syslog -messages from external sources. This is done through an input, such as the TCP -input. Certain integrations, when enabled through configuration, will embed the -syslog processor to process syslog messages, such as Custom TCP Logs and -Custom UDP Logs. - -[discrete] -== Example - -[source,yaml] -------------------------------------------------------------------------------- - - syslog: - field: message -------------------------------------------------------------------------------- - -[source,json] -------------------------------------------------------------------------------- -{ - "message": "<165>1 2022-01-11T22:14:15.003Z mymachine.example.com eventslog 1024 ID47 [exampleSDID@32473 iut=\"3\" eventSource=\"Application\" eventID=\"1011\"][examplePriority@32473 class=\"high\"] this is the message" -} -------------------------------------------------------------------------------- - -Will produce the following output: - -[source,json] -------------------------------------------------------------------------------- -{ - "@timestamp": "2022-01-11T22:14:15.003Z", - "log": { - "syslog": { - "priority": 165, - "facility": { - "code": 20, - "name": "local4" - }, - "severity": { - "code": 5, - "name": "Notice" - }, - "hostname": "mymachine.example.com", - "appname": "eventslog", - "procid": "1024", -
"msgid": "ID47", - "version": 1, - "structured_data": { - "exampleSDID@32473": { - "iut": "3", - "eventSource": "Application", - "eventID": "1011" - }, - "examplePriority@32473": { - "class": "high" - } - } - } - }, - "message": "this is the message" -} -------------------------------------------------------------------------------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| `message` -| Source field containing the syslog message. - -| `format` -| No -| `auto` -| Syslog format to use: `rfc3164` or `rfc5424`. To automatically detect the format from the log entries, set this option to `auto`. - -| `timezone` -| No -| `Local` -| IANA time zone name (for example, `America/New York`) or a fixed time offset (for example, `+0200`) to use when parsing syslog timestamps that do not contain a time zone. Specify `Local` to use the machine's local time zone. - -| `overwrite_keys` -| No -| `true` -| Whether keys that already exist in the event are overwritten by keys from the syslog message. - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing fields. If `true` the processor does not return an error when a specified field does not exist. - -| `ignore_failure` -| No -| `false` -| Whether to ignore all errors produced by the processor. - -| `tag` -| No -| -| An identifier for this processor. Useful for debugging. 
- -|=== - -[discrete] -== Timestamps - -The RFC 3164 format accepts the following forms of timestamps: - -* Local timestamp (`Mmm dd hh:mm:ss`): - ** `Jan 23 14:09:01` -* RFC-3339*: - ** `2003-10-11T22:14:15Z` - ** `2003-10-11T22:14:15.123456Z` - ** `2003-10-11T22:14:15-06:00` - ** `2003-10-11T22:14:15.123456-06:00` - -NOTE: The local timestamp (for example, `Jan 23 14:09:01`) that accompanies an -RFC 3164 message lacks year and time zone information. The time zone will be -enriched using the `timezone` configuration option, and the year will be -enriched using the system's local time (accounting for time zones). Because of -this, it is possible for messages to appear in the future. For example, this -might happen if logs generated on December 31 2021 are ingested on January -1 2022. The logs would be enriched with the year 2022 instead of 2021. - -The RFC 5424 format accepts the following forms of timestamps: - -* RFC-3339: - ** `2003-10-11T22:14:15Z` - ** `2003-10-11T22:14:15.123456Z` - ** `2003-10-11T22:14:15-06:00` - ** `2003-10-11T22:14:15.123456-06:00` - -Formats with an asterisk (*) are a non-standard allowance. - -[discrete] -== Structured Data - -For RFC 5424-formatted logs, if the structured data cannot be parsed according -to RFC standards, the original structured data text will be prepended to the message -field, separated by a space. - diff --git a/docs/en/ingest-management/processors/processor-timestamp.asciidoc b/docs/en/ingest-management/processors/processor-timestamp.asciidoc deleted file mode 100644 index bd63194de..000000000 --- a/docs/en/ingest-management/processors/processor-timestamp.asciidoc +++ /dev/null @@ -1,113 +0,0 @@ -[[timestamp-processor]] -= Timestamp - -++++ -timestamp -++++ - -beta[] - -The `timestamp` processor parses a timestamp from a field. By default the -timestamp processor writes the parsed result to the `@timestamp` field. You can -specify a different field by setting the `target_field` parameter. 
The timestamp
-value is parsed according to the `layouts` parameter. Multiple layouts can be
-specified, and they are tried in order until one successfully parses the
-timestamp field.
-
-NOTE: The timestamp layouts used by this processor are different from the
- formats supported by date processors in Logstash and Elasticsearch Ingest
- Node.
-
-The `layouts` are described using a reference time that is based on this
-specific time:
-
- Mon Jan 2 15:04:05 MST 2006
-
-Since MST is GMT-0700, the reference time is:
-
- 01/02 03:04:05PM '06 -0700
-
-To define your own layout, rewrite the reference time in a format that matches
-the timestamps you expect to parse. For more layout examples and details, see the
-https://godoc.org/time#pkg-constants[Go time package documentation].
-
-If a layout does not contain a year, the current year in the specified
-`timezone` is added to the time value.
-
-[discrete]
-== Example
-
-Here is an example that parses the `start_time` field, writes the result
-to the `@timestamp` field, and then deletes the `start_time` field. When the
-processor is loaded, it immediately validates that the `test` timestamps
-parse with this configuration.
-
-[source,yaml]
-----
- - timestamp:
- field: start_time
- layouts:
- - '2006-01-02T15:04:05Z'
- - '2006-01-02T15:04:05.999Z'
- - '2006-01-02T15:04:05.999-07:00'
- test:
- - '2019-06-22T16:33:51Z'
- - '2019-11-18T04:59:51.123Z'
- - '2020-08-03T07:10:20.123456+02:00'
- - drop_fields:
- fields: [start_time]
-----
-
-[discrete]
-== Configuration settings
-
-// set this attribute for processors that reference event fields
-:works-with-fields:
-include::processors.asciidoc[tag=processor-limitations]
-// reset the attribute to null
-:works-with-fields!:
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `field`
-| Yes
-|
-| Source field containing the time to be parsed.
-
-| `target_field`
-| No
-| `@timestamp`
-| Target field for the parsed time value.
The target value is always written as UTC.
-
-| `layouts`
-| Yes
-|
-| Timestamp layouts that define the expected time value format. In addition to layouts, `UNIX` and `UNIX_MS` are accepted.
-
-| `timezone`
-| No
-| `UTC`
-| IANA time zone name (for example, `America/New_York`) or fixed time offset (for example, `+0200`) to use when parsing times that do not contain a time zone. Specify `Local` to use the machine's local time zone.
-
-| `ignore_missing`
-| No
-| `false`
-| Whether to ignore errors when the source field is missing.
-
-| `ignore_failure`
-| No
-| `false`
-| Whether to ignore all errors produced by the processor.
-
-| `test`
-| No
-|
-| List of timestamps that must parse successfully when loading the processor.
-
-| `id`
-| No
-|
-| Identifier for this processor instance. Useful for debugging.
-|===
diff --git a/docs/en/ingest-management/processors/processor-translate_sid.asciidoc
deleted file mode 100644
index c52f22078..000000000
--- a/docs/en/ingest-management/processors/processor-translate_sid.asciidoc
+++ /dev/null
@@ -1,80 +0,0 @@
-[[translate_sid-processor]]
-= Translate SID
-
-++++
-translate_sid
-++++
-
-The `translate_sid` processor translates a Windows security identifier (SID)
-into an account name. It retrieves the name of the account associated with the
-SID, the first domain on which the SID is found, and the type of account. This
-processor is only available on Windows.
-
-Every account on a network is issued a unique SID when the account is first
-created. Internal processes in Windows refer to an account's SID rather than
-the account's user or group name, and these values sometimes appear in logs.
-
-If the SID is invalid (malformed) or does not map to any account on the local
-system or domain, the processor returns an error unless `ignore_failure` is
-set.
- -[discrete] -== Example - -[source,yaml] ----- - - translate_sid: - field: winlog.event_data.MemberSid - account_name_target: user.name - domain_target: user.domain - ignore_missing: true - ignore_failure: true ----- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - -| `field` -| Yes -| -| Source field containing a Windows security identifier (SID). - -| `account_name_target` -| Yes* -| -| Target field for the account name value. - -| `account_type_target` -| Yes* -| -| Target field for the account type value. - -| `domain_target` -| Yes* -| -| Target field for the domain value. - -| `ignore_missing` -| No -| `false` -| Ignore errors when the source field is missing. - -| `ignore_failure` -| No -| `false` -| Ignore all errors produced by the processor. -|=== - -* At least one of `account_name_target`, `account_type_target`, and -`domain_target` must be configured. - diff --git a/docs/en/ingest-management/processors/processor-truncate_fields.asciidoc b/docs/en/ingest-management/processors/processor-truncate_fields.asciidoc deleted file mode 100644 index 8d97662f8..000000000 --- a/docs/en/ingest-management/processors/processor-truncate_fields.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -[[truncate_fields-processor]] -= Truncate fields - -++++ -truncate_fields -++++ - -The `truncate_fields` processor truncates a field to a given size. If the size -of the field is smaller than the limit, the field is left as is. 
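The difference between a byte limit and a character limit matters for multibyte UTF-8 text: a 5-character string can occupy more than 5 bytes. The following is a hypothetical Python sketch of the semantics described above, not the processor's actual implementation (which ships inside {agent}); the function names are invented for the example:

```python
# Hypothetical illustration of byte-based vs. character-based truncation
# semantics. Not the actual truncate_fields implementation.

def truncate_characters(value: str, max_characters: int) -> str:
    """Keep at most max_characters Unicode characters."""
    return value[:max_characters]

def truncate_bytes(value: str, max_bytes: int) -> str:
    """Keep at most max_bytes of the UTF-8 encoding, dropping any
    multibyte character that would be split at the boundary."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value  # smaller than the limit: field is left as is
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

message = "héllo"  # 5 characters, but 6 bytes in UTF-8 ("é" is 2 bytes)
assert truncate_characters(message, 5) == "héllo"  # fits the character limit
assert truncate_bytes(message, 5) == "héll"        # exceeds the byte limit
```

Running the sketch on a string containing a two-byte character shows why `max_bytes` and `max_characters` can produce different results for the same input.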
-
-
-[discrete]
-== Example
-
-This configuration truncates the field named `message` to five characters:
-
-[source,yaml]
-----
- - truncate_fields:
- fields:
- - message
- max_characters: 5
- fail_on_error: false
- ignore_missing: true
-----
-
-[discrete]
-== Configuration settings
-
-// set this attribute for processors that reference event fields
-:works-with-fields:
-include::processors.asciidoc[tag=processor-limitations]
-// reset the attribute to null
-:works-with-fields!:
-
-[options="header"]
-|===
-| Name | Required | Default | Description
-
-| `fields`
-| Yes
-|
-| List of fields to truncate. You can use the `@metadata.` prefix to truncate values in the event metadata instead of event fields.
-
-| `max_bytes`
-| Yes*
-|
-| Maximum number of bytes in a field. Mutually exclusive with `max_characters`.
-
-| `max_characters`
-| Yes*
-|
-| Maximum number of characters in a field. Mutually exclusive with `max_bytes`.
-
-| `fail_on_error`
-| No
-| `true`
-| If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues even if an error occurs.
-
-| `ignore_missing`
-| No
-| `false`
-| Whether to ignore events that lack the source field. If `false`, processing of the event fails if a field is missing.
-
-|===
-
-* Exactly one of `max_bytes` and `max_characters` must be configured.
diff --git a/docs/en/ingest-management/processors/processor-urldecode.asciidoc
deleted file mode 100644
index 47f7c607e..000000000
--- a/docs/en/ingest-management/processors/processor-urldecode.asciidoc
+++ /dev/null
@@ -1,60 +0,0 @@
-[[urldecode-processor]]
-= URL Decode
-
-++++
-urldecode
-++++
-
-The `urldecode` processor specifies a list of fields to decode from URL-encoded
-format.
-
-[discrete]
-== Example
-
-In this example, `field1` is decoded into `field2`.
- -[source,yaml] -------- - - urldecode: - fields: - - from: "field1" - to: "field2" - ignore_missing: false - fail_on_error: true -------- - -[discrete] -== Configuration settings - -// set this attribute for processors that reference event fields -:works-with-fields: -include::processors.asciidoc[tag=processor-limitations] -// reset the attribute to null -:works-with-fields!: - -[options="header"] -|=== -| Name | Required | Default | Description - - -| `fields` -| Yes -| -a| Contains: - -* `from: "source-field"`, where `from` is the source field name -* `to: "target-field"`, where `to` is the target field name (defaults to the `from` value) - -| `ignore_missing` -| No -| `false` -| Whether to ignore missing keys. If `true`, no error is logged if a key that should be URL-decoded is missing. - -| `fail_on_error` -| No -| `true` -| Whether to fail if an error occurs. If `true` and an error occurs, the URL-decoding of fields is stopped, and the original event is returned. If `false`, decoding continues even if an error occurs during decoding. - -|=== - -See <> for a list of supported conditions. 
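The decoding itself is standard percent-decoding. As a rough illustration of the `from`/`to` semantics documented above, here is a hypothetical Python sketch; the helper name and the use of `urllib.parse.unquote` are assumptions for the example, not the agent's actual code:

```python
# Hypothetical sketch of the urldecode from/to semantics. Not the actual
# processor implementation.
from urllib.parse import unquote

def urldecode_fields(event: dict, fields: list, ignore_missing: bool = False) -> dict:
    """Percent-decode each configured source field into its target field."""
    for mapping in fields:
        source = mapping["from"]
        target = mapping.get("to", source)  # `to` defaults to the `from` field
        if source not in event:
            if ignore_missing:
                continue  # skip missing keys instead of failing
            raise KeyError(f"missing field: {source}")
        event[target] = unquote(event[source])
    return event

event = {"field1": "https%3A%2F%2Fexample.com%2F%3Fq%3Dhello%20world"}
urldecode_fields(event, [{"from": "field1", "to": "field2"}])
assert event["field2"] == "https://example.com/?q=hello world"
```

Note that in the sketch the source field is left untouched; omitting `to` would overwrite the source field in place, mirroring the default described in the table above.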
diff --git a/docs/en/ingest-management/processors/processors-list.asciidoc b/docs/en/ingest-management/processors/processors-list.asciidoc deleted file mode 100644 index feb82035a..000000000 --- a/docs/en/ingest-management/processors/processors-list.asciidoc +++ /dev/null @@ -1,89 +0,0 @@ -include::processor-add_cloud_metadata.asciidoc[leveloffset=+1] - -include::processor-add_cloudfoundry_metadata.asciidoc[leveloffset=+1] - -include::processor-add_docker_metadata.asciidoc[leveloffset=+1] - -include::processor-add_fields.asciidoc[leveloffset=+1] - -include::processor-add_host_metadata.asciidoc[leveloffset=+1] - -include::processor-add_id.asciidoc[leveloffset=+1] - -include::processor-add_kubernetes_metadata.asciidoc[leveloffset=+1] - -include::processor-add_labels.asciidoc[leveloffset=+1] - -include::processor-add_locale.asciidoc[leveloffset=+1] - -include::processor-add_network_direction.asciidoc[leveloffset=+1] - -include::processor-add_nomad_metadata.asciidoc[leveloffset=+1] - -include::processor-add_observer_metadata.asciidoc[leveloffset=+1] - -include::processor-add_process_metadata.asciidoc[leveloffset=+1] - -include::processor-add_tags.asciidoc[leveloffset=+1] - -include::processor-communityid.asciidoc[leveloffset=+1] - -include::processor-convert.asciidoc[leveloffset=+1] - -include::processor-copy_fields.asciidoc[leveloffset=+1] - -include::processor-decode_base64_field.asciidoc[leveloffset=+1] - -include::processor-decode_cef.asciidoc[leveloffset=+1] - -include::processor-decode_csv_fields.asciidoc[leveloffset=+1] - -include::processor-decode_duration.asciidoc[leveloffset=+1] - -include::processor-decode_json_fields.asciidoc[leveloffset=+1] - -include::processor-decode_xml.asciidoc[leveloffset=+1] - -include::processor-decode_xml_wineventlog.asciidoc[leveloffset=+1] - -include::processor-decompress_gzip_field.asciidoc[leveloffset=+1] - -include::processor-detect_mime_type.asciidoc[leveloffset=+1] - -include::processor-dissect.asciidoc[leveloffset=+1] - 
-include::processor-dns.asciidoc[leveloffset=+1] - -include::processor-drop_event.asciidoc[leveloffset=+1] - -include::processor-drop_fields.asciidoc[leveloffset=+1] - -include::processor-extract_array.asciidoc[leveloffset=+1] - -include::processor-fingerprint.asciidoc[leveloffset=+1] - -include::processor-include_fields.asciidoc[leveloffset=+1] - -include::processor-move_fields.asciidoc[leveloffset=+1] - -include::processor-parse_aws_vpc_flow_log.asciidoc[leveloffset=+1] - -include::processor-rate_limit.asciidoc[leveloffset=+1] - -include::processor-registered_domain.asciidoc[leveloffset=+1] - -include::processor-rename.asciidoc[leveloffset=+1] - -include::processor-replace.asciidoc[leveloffset=+1] - -include::processor-script.asciidoc[leveloffset=+1] - -include::processor-syslog.asciidoc[leveloffset=+1] - -include::processor-timestamp.asciidoc[leveloffset=+1] - -include::processor-translate_sid.asciidoc[leveloffset=+1] - -include::processor-truncate_fields.asciidoc[leveloffset=+1] - -include::processor-urldecode.asciidoc[leveloffset=+1] diff --git a/docs/en/ingest-management/processors/processors.asciidoc b/docs/en/ingest-management/processors/processors.asciidoc deleted file mode 100644 index 29c190282..000000000 --- a/docs/en/ingest-management/processors/processors.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[[elastic-agent-processor-configuration]] -= Define processors - -{agent} processors are lightweight processing components that you can use to -parse, filter, transform, and enrich data at the source. For example, you can -use processors to: - -* reduce the number of exported fields -* enhance events with additional metadata -* perform additional processing and decoding -* sanitize data - -Each processor receives an event, applies a defined action to the event, and -returns the event. If you define a list of processors, they are executed in the -order they are defined. - -[source,yaml] ----- -event -> processor 1 -> event1 -> processor 2 -> event2 ... 
----- - -// set this attribute to show the full description here -:works-with-fields: - -// tag::processor-limitations[] - -ifdef::works-with-fields[] -NOTE: {agent} processors execute _before_ ingest pipelines, which means that -your processor configurations cannot refer to fields that are created by ingest -pipelines or {ls}. For more limitations, refer to <> -endif::[] - -ifndef::works-with-fields[] -NOTE: {agent} processors execute _before_ ingest pipelines, which means that -they process the raw event data rather than the final event sent to {es}. For -related limitations, refer to <> -endif::[] - -// end::processor-limitations[] - -// reset the field to null -:works-with-fields!: - -[discrete] -[[where-valid]] -== Where are processors valid? - -The processors described in this section are valid: - -* **Under integration settings in the Integrations UI in {kib}**. For example, -when configuring an Nginx integration, you can define processors for a specific -dataset under **Advanced options**. The processor in this example adds geo -metadata to the Nginx access logs collected by {agent}: -+ -[role="screenshot"] -image::images/add-processor.png[Screen showing how to add a processor to an integration policy] -+ -NOTE: Some integrations do not currently support processors. - -* **Under input configuration settings for standalone {agent}s**. 
For example: -+ -[source,yaml] ----- -inputs: - - type: logfile - use_output: default - data_stream: - namespace: default - streams: - - data_stream: - dataset: nginx.access - type: logs - ignore_older: 72h - paths: - - /var/log/nginx/access.log* - tags: - - nginx-access - exclude_files: - - .gz$ - processors: - - add_host_metadata: - cache.ttl: 5m - geo: - name: nyc-dc1-rack1 - location: '40.7128, -74.0060' - continent_name: North America - country_iso_code: US - region_name: New York - region_iso_code: NY - city_name: New York - - add_locale: null ----- - -You can define processors that apply to a specific input defined in the configuration. -Applying a processor to all the inputs on a global basis is currently not supported. - -[discrete] -[[limitations]] -== What are some limitations of using processors? - -Processors have the following limitations. - -* Cannot enrich events with data from {es} or other custom data -sources. -* Cannot process data after it's been converted to the Elastic Common Schema -(ECS) because the conversion is performed by {es} ingest pipelines. This means -that your processor configuration cannot refer to fields that are created by -ingest pipelines or {ls} because those fields are created _after_ the processor -runs, not before. -* May break integration ingest pipelines in {es} if the user-defined processing -removes or alters fields expected by ingest pipelines. -* If you create new fields via processors, you are responsible for setting up -field mappings in the `*-@custom` component template and making sure the new -mappings are aligned with ECS. - -[discrete] -[[processing-options]] -== What other options are available for processing data? - -The {stack} provides several options for processing data collected by {agent}. -The option you choose depends on what you need to do: - -|=== -| If you need to... | Do this... 
- -| Sanitize or enrich raw data at the source -| Use an {agent} processor - -| Convert data to ECS, normalize field data, or enrich incoming data -| Use {ref}/ingest.html#pipelines-for-fleet-elastic-agent[ingest pipelines] - -| Define or alter the schema at query time -| Use {ref}/runtime.html[runtime fields] - -| Do something else with your data -| Use {logstash-ref}/filter-plugins.html[Logstash plugins] - -|=== - -[discrete] -[[how-different]] -== How are {agent} processors different from {ls} plugins or ingest pipelines? - -Logstash plugins and ingest pipelines both require you to send data to another -system for processing. Processors, on the other hand, allow you to apply -processing logic at the source. This means that you can filter out data you -don't want to send across the connection, and you can spread some of the -processing load across host systems running on edge nodes. - -include::processor-syntax.asciidoc[leveloffset=+1] - -include::processors-list.asciidoc[] diff --git a/docs/en/ingest-management/quick-starts.asciidoc b/docs/en/ingest-management/quick-starts.asciidoc deleted file mode 100644 index 4860de4fd..000000000 --- a/docs/en/ingest-management/quick-starts.asciidoc +++ /dev/null @@ -1,8 +0,0 @@ -[[fleet-elastic-agent-quick-start]] -= Quick starts - -Want to get up and running with {fleet} and {agent} quickly? Read our getting -started guides: - -* {observability-guide}/logs-metrics-get-started.html[Get started with logs and metrics] -* {observability-guide}/ingest-traces.html[Get started with application traces and APM] diff --git a/docs/en/ingest-management/redirects.asciidoc b/docs/en/ingest-management/redirects.asciidoc deleted file mode 100644 index 11cd2f078..000000000 --- a/docs/en/ingest-management/redirects.asciidoc +++ /dev/null @@ -1,21 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. 
- -[role="exclude",id="add-a-fleet-server"] -== Add a {fleet-server} - -Refer to <>. - - -[role="exclude",id="view-elastic-agent-status"] -== View {agent} status - -Refer to <>. - - -[role="exclude",id="elastic-agent-logging"] -== View {agent} logs in {fleet} - -Refer to <>. diff --git a/docs/en/ingest-management/release-notes/release-notes-8.10.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.10.asciidoc deleted file mode 100644 index 9f6362408..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.10.asciidoc +++ /dev/null @@ -1,636 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <> -* <> -* <> -* <> -* <> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.10.4 relnotes - -[[release-notes-8.10.4]] -== {fleet} and {agent} 8.10.4 - -Review important information about the {fleet} and {agent} 8.10.4 release. - -[discrete] -[[breaking-changes-8.10.4]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. Before you upgrade, review the breaking changes, then mitigate the -impact to your application. 
-
-elastic-agent::
-
-[discrete]
-[[breaking-3591]]
-.`elastic-agent-autodiscover` library has been updated to version 0.6.4, disabling metadata for `kubernetes.deployment` and `kubernetes.cronjob` fields.
-[%collapsible]
-====
-*Details* +
-The `elastic-agent-autodiscover` Kubernetes library by default comes with `add_resource_metadata.deployment=false` and `add_resource_metadata.cronjob=false`.
-
-*Impact* +
-Pods created from deployments or cronjobs will not have the extra metadata field for `kubernetes.deployment` or `kubernetes.cronjob`, respectively. This change was made to avoid the memory impact of keeping the feature enabled in big Kubernetes clusters.
-
-For more information, refer to {agent-pull}3591[#3591].
-====
-
-[discrete]
-[[enhancements-8.10.4]]
-=== Enhancements
-
-{agent}::
-* Enable {agent} to upgrade securely in an air-gapped environment where {fleet-server} is the only reachable URI. {agent-pull}3591[#3591] {agent-issue}3863[#3863]
-
-[discrete]
-[[bug-fixes-8.10.4]]
-=== Bug fixes
-
-{fleet}::
-* Fix validation errors in KQL queries. ({kibana-pull}168329[#168329])
-
-// end 8.10.4 relnotes
-
-// begin 8.10.3 relnotes
-
-[[release-notes-8.10.3]]
-== {fleet} and {agent} 8.10.3
-
-Review important information about the {fleet} and {agent} 8.10.3 release.
-
-[discrete]
-[[security-update-8.10.3]]
-=== Security updates
-
-* **Fleet Server Insertion of Sensitive Information into Log File (ESA-2023-20)**
-+
-An issue was discovered in Fleet Server >= v8.10.0 and < v8.10.3 where Agent enrollment tokens are being inserted into the Fleet Server’s log file in plain text.
-+
-These enrollment tokens could allow someone to enroll an agent into an agent policy, and potentially use that to retrieve other secrets in the policy, including for Elasticsearch and third-party services. Alternatively, a threat actor could potentially enroll agents to the clusters and send arbitrary events to Elasticsearch.
-+
-The issue is resolved in 8.10.3.
-+
-For more information, see our related
-https://discuss.elastic.co/t/fleet-server-v8-10-3-security-update/344737[security
-announcement].
-
-[discrete]
-[[known-issues-8.10.3]]
-=== Known issues
-
-IMPORTANT: The <> that prevents successful upgrades in an air-gapped environment for {agent} versions 8.9.0 to 8.10.2 has been resolved in this release. If you're using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
-
-[discrete]
-[[enhancements-8.10.3]]
-=== Enhancements
-
-{agent}::
-* Improve {agent} uninstall on Windows by adding a delay between retries when file removal is blocked by busy files {agent-pull}3431[#3431] {agent-issue}3221[#3221]
-
-[discrete]
-[[bug-fixes-8.10.3]]
-=== Bug fixes
-
-{fleet}::
-* Fix incorrect index template used from the data stream name ({kibana-pull}166941[#166941])
-* Increase package install max timeout limit and add concurrency control to rollovers ({kibana-pull}166775[#166775])
-* Fix bulk action dropdown ({kibana-pull}166475[#166475])
-
-{agent}::
-* Resilient handling of air-gapped PGP checks. {agent} should not fail when a remote PGP key is specified (or the official Elastic fallback PGP key is used) and the remote key is not available {agent-pull}3427[#3427] {agent-pull}3426[#3426] {agent-issue}3368[#3368]
-
-// end 8.10.3 relnotes
-
-// begin 8.10.2 relnotes
-
-[[release-notes-8.10.2]]
-== {fleet} and {agent} 8.10.2
-
-Review important information about the {fleet} and {agent} 8.10.2 release.
-
-[discrete]
-[[known-issues-8.10.2]]
-=== Known issues
-
-[[known-issue-3375-v8102]]
-.PGP key download fails in an air-gapped environment
-[%collapsible]
-====
-
-*Details*
-
-IMPORTANT: If you're using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
-
-Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
-This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.
-
-In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded.
-
-*Impact* +
-
-For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
-
-*Option 1*
-
-If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>.
-Note that you need to enable HTTP proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables to be used exclusively for it.
-
-*Option 2*
-
-Because the upgrade URL is not customizable, you have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that has the file.
-
-The following examples require a server in your air-gapped environment that exposes the key you downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`.
-
-_Example 1: Manual_
-
-Edit the {agent} server hosts file to add the following content:
-
-[source,sh]
-----
- artifacts.elastic.co
-----
-
-The Linux hosts file path is `/etc/hosts`.
-
-The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`.
-
-_Example 2: Puppet_
-
-[source,yaml]
-----
-host { 'elastic-artifacts':
- ensure => 'present'
- comment => 'Workaround for PGP check'
- ip => ''
-}
-----
-
-_Example 3: Ansible_
-
-[source,yaml]
-----
-- name : 'elastic-artifacts'
- hosts : 'all'
- become: 'yes'
-
- tasks:
- - name: 'Add entry to /etc/hosts'
- lineinfile:
- path: '/etc/hosts'
- line: ' artifacts.elastic.co'
-----
-
-====
-
-[discrete]
-[[enhancements-8.10.2]]
-=== Enhancements
-
-{agent}::
-* Updated Go version to 1.20.8.
{agent-pull}3393[#3393]
-
-[discrete]
-[[bug-fixes-8.10.2]]
-=== Bug fixes
-
-{fleet}::
-* Fixed the force delete package API and fixed the validation check to reject the request if the package is used by agents. ({kibana-pull}166623[#166623])
-
-// end 8.10.2 relnotes
-
-// begin 8.10.1 relnotes
-
-[[release-notes-8.10.1]]
-== {fleet} and {agent} 8.10.1
-
-Review important information about the {fleet} and {agent} 8.10.1 release.
-
-[discrete]
-[[known-issues-8.10.1]]
-=== Known issues
-
-[[known-issue-3375-v8101]]
-.PGP key download fails in an air-gapped environment
-[%collapsible]
-====
-
-*Details*
-
-IMPORTANT: If you're using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
-
-Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
-This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.
-
-In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded.
-
-*Impact* +
-
-For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
-
-*Option 1*
-
-If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>.
-Note that you need to enable HTTP proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables to be used exclusively for it.
-
-*Option 2*
-
-Because the upgrade URL is not customizable, you have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that has the file.
-
-The following examples require a server in your air-gapped environment that exposes the key you downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`.
-
-_Example 1: Manual_
-
-Edit the {agent} server hosts file to add the following content:
-
-[source,sh]
-----
- artifacts.elastic.co
-----
-
-The Linux hosts file path is `/etc/hosts`.
-
-The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`.
-
-_Example 2: Puppet_
-
-[source,yaml]
-----
-host { 'elastic-artifacts':
- ensure => 'present'
- comment => 'Workaround for PGP check'
- ip => ''
-}
-----
-
-_Example 3: Ansible_
-
-[source,yaml]
-----
-- name : 'elastic-artifacts'
- hosts : 'all'
- become: 'yes'
-
- tasks:
- - name: 'Add entry to /etc/hosts'
- lineinfile:
- path: '/etc/hosts'
- line: ' artifacts.elastic.co'
-----
-
-====
-
-[discrete]
-[[enhancements-8.10.1]]
-=== Enhancements
-
-{agent}::
-* Improve logging during {agent} upgrades. {agent-pull}3382[#3382]
-
-[discrete]
-[[bug-fixes-8.10.1]]
-=== Bug fixes
-
-{fleet}::
-* Show snapshot version in agent upgrade modal and allow custom values. ({kibana-pull}165978[#165978])
-
-{agent}::
-* Roll back {agent} upgrade if the upgraded agent process crashes immediately. {agent-pull}3166[#3166] {agent-issue}3124[#3124]
-
-
-// end 8.10.1 relnotes
-
-// begin 8.10.0 relnotes
-
-[[release-notes-8.10.0]]
-== {fleet} and {agent} 8.10.0
-
-Review important information about the {fleet} and {agent} 8.10.0 release.
-
-[discrete]
-[[breaking-changes-8.10.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-[discrete]
-[[breaking-6862]]
-.{agent} diagnostics unavailable with {fleet-server} below 8.10.0.
-[%collapsible]
-====
-*Details* +
-The mechanism that {fleet} uses to generate diagnostic bundles has been updated.
To <>, {fleet-server} needs to be at version 8.10.0 or higher.
-
-*Impact* +
-If you need to access a diagnostic bundle for an agent, ensure that {fleet-server} is at the required version.
-
-====
-
-[discrete]
-[[known-issues-8.10.0]]
-=== Known issues
-
-[[known-issue-3375-v8100]]
-.PGP key download fails in an air-gapped environment
-[%collapsible]
-====
-
-*Details*
-
-IMPORTANT: If you're using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
-
-Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
-This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.
-
-In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded.
-
-*Impact* +
-
-For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
-
-*Option 1*
-
-If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>.
-Note that you need to enable HTTP proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables to be used exclusively for it.
-
-*Option 2*
-
-Because the upgrade URL is not customizable, you have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that has the file.
-
-The following examples require a server in your air-gapped environment that exposes the key you downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`.
-
-_Example 1: Manual_
-
-Edit the {agent} server hosts file to add the following content:
-
-[source,sh]
-----
- artifacts.elastic.co
-----
-
-The Linux hosts file path is `/etc/hosts`.
-
-The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`.
-
-_Example 2: Puppet_
-
-[source,yaml]
-----
-host { 'elastic-artifacts':
- ensure => 'present'
- comment => 'Workaround for PGP check'
- ip => ''
-}
-----
-
-_Example 3: Ansible_
-
-[source,yaml]
-----
-- name : 'elastic-artifacts'
- hosts : 'all'
- become: 'yes'
-
- tasks:
- - name: 'Add entry to /etc/hosts'
- lineinfile:
- path: '/etc/hosts'
- line: ' artifacts.elastic.co'
-----
-
-====
-
-[[known-issue-166553]]
-.Filtering Elastic Agents in Kibana generates an "Error fetching agents" message
-[%collapsible]
-====
-
-*Details*
-
-A {kibana-ref}/kuery-query.html[KQL query] in a Fleet search field now returns a `400` error when the query is not valid.
-
-Previously, the search fields would accept any type of query, but with the merge of {kibana-pull}161064[#161064], any KQL sent to {fleet} needs to have a valid field name, otherwise it returns an error.
-
-*Cause* +
-
-Entering an invalid KQL query on one of the {fleet} KQL search fields or through the API produces the error.
-
-Affected search fields in the {fleet} UI:
-
-* Agent list
-* Agent policies
-* Enrollment Keys
-
-Affected endpoints in the <> (these are the endpoints that accept the parameter `ListWithKuery`):
-
-* `GET api/fleet/agents`
-* `GET api/fleet/agent_status`
-* `GET api/fleet/agent_policies`
-* `GET api/fleet/package_policies`
-* `GET api/fleet/enrollment_api_keys`
-
-*Impact* +
-
-To avoid getting the `400` error, make sure your queries are valid.
-
-For instance, entering the query `8.10.0` results in an error. The correct query should be: `local_metadata.agent.version="8.10.0"`.
- -As another example, when viewing the *Agents* tab in *Fleet*, typing a hostname such as `a0c8c88ef2f5` in the search field results in an error. The correct query must use a valid field name, chosen from among the allowed ones, for example `local_metadata.host.hostname: a0c8c88ef2f5`. - -The list of available field names is visible by clicking on any of the search fields. - -==== - - -[discrete] -[[new-features-8.10.0]] -=== New features - -The 8.10.0 release added the following new and notable features. - -{fleet}:: -* Enable agent policy secret storage when all {fleet-server} instances are above version 8.10.0. {kibana-pull}163627[#163627]. -* Kafka integration API. {kibana-pull}159110[#159110]. - -{fleet-server}:: -* Add a new policy token that can be used to enroll {agent} into {fleet-server}. {fleet-server-pull}2654[#2654] -* Add a Kafka output type for agent policies. {fleet-server-pull}2850[#2850] -* Add {fleet-server} support for handling agent policy secrets. {fleet-server-pull}2863[#2863] {fleet-server-issue}2485[#2485] - -{agent}:: -* Report the version from the {agent} package instead of the agent binary to enhance the release process. {agent-pull}2908[#2908] -* Implement tamper protection for {elastic-endpoint} uninstall use cases. {agent-pull}2781[#2781] -* Add component-level diagnostics and CPU profiling. {agent-pull}3118[#3118] -* Improve the upgrade process to use an upgraded version of the Watcher to ensure a successful upgrade. {agent-pull}3140[#3140] {agent-issue}2873[#2873] - -[discrete] -[[enhancements-8.10.0]] -=== Enhancements - -{fleet}:: -* Add support for runtime fields. {kibana-pull}161129[#161129]. - -{fleet-server}:: -* Keep the {fleet-server} service running when {es} is not available. {fleet-server-pull}2693[#2693] {fleet-server-issue}2683[#2683] -* Add APM trace fields to HTTP request logs. {fleet-server-pull}2743[#2743] -* File transfers with integrations now use data streams.
{fleet-server-pull}2741[#2741] -* Use a unique ID for agent action results to ensure accurate counts in the {fleet} UI. {fleet-server-pull}2782[#2782] {fleet-server-issue}2596[#2596] - -{agent}:: -* Redundant calls to `/api/fleet/setup` were removed in favor of {kib}-initiated calls. {agent-pull}2985[#2985] {agent-issue}2910[#2910] -* Updated Go version to 1.20.7. {agent-pull}3177[#3177] -* Add runtime prevention to stop {elastic-defend} from running if {agent} is not installed in the default location. {agent-pull}3114[#3114] -* Add a new flag `complete` to agent metadata to signal that the running instance is synthetics-capable. {agent-pull}3190[#3190] {fleet-server-issue}1754[#1754] -* Add support for setting GOMAXPROCS to limit CPU usage through the agent policy. {agent-pull}3179[#3179] -* Add logging to the restart step of the {agent} upgrade rollback process. {agent-pull}3245[#3245] {agent-issue}3305[#3305] - -[discrete] -[[bug-fixes-8.10.0]] -=== Bug fixes - -{fleet}:: -* Only show agent dashboard links if there is more than one non-server agent and if the dashboards exist. {kibana-pull}164469[#164469]. -* Exclude synthetics from per-policy outputs. {kibana-pull}161949[#161949]. -* Fix the path for hint templates for autodiscover. {kibana-pull}161075[#161075]. - -{agent}:: -* Don't trigger an Indicator of Compromise (IoC) alert on Windows uninstall. {agent-pull}3014[#3014] {agent-issue}2970[#2970] -* Fix credential redaction in diagnostic bundle collection. {agent-pull}3165[#3165] -* Ensure that {agent} upgrades are rolled back even when the upgraded agent crashes immediately and repeatedly. {agent-pull}3220[#3220] {agent-issue}3123[#3123] -* Ensure that {agent} is restarted during rollback. {agent-pull}3268[#3268] -* Fix how the diagnostics command handles the custom path to save the diagnostics. {agent-pull}3340[#3340] {agent-issue}3339[#3339] - -// end 8.10.0 relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template.
Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.11.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.11.asciidoc deleted file mode 100644 index 04ee9632c..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.11.asciidoc +++ /dev/null @@ -1,808 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <> -* <> -* <> -* <> -* <> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.11.4 relnotes - -[[release-notes-8.11.4]] -== {fleet} and {agent} 8.11.4 - -Review important information about {fleet-server} and {agent} for the 8.11.4 release. - -[discrete] -[[security-updates-8.11.4]] -=== Security updates - -{agent}:: -* Updated Go version to 1.20.12. 
{agent-pull}3885[#3885] - -[discrete] -[[known-issues-8.11.4]] -=== Known issues - -[[known-issue-169825-8.11.4]] -.Current stack version is not in the list of {agent} versions in {kib} {fleet} UI -[%collapsible] -==== - -*Details* - -On the {fleet} UI in {kib}: - -* When adding a new {agent}, the user interface shows a previous version instead of the current version. -* When you attempt an upgrade, the modal window shows an earlier version as the latest version. - -*Impact* + - -You can use the following steps as a workaround: - -*When upgrading {agent} currently on versions 8.10.4 or earlier (simpler)* - -. Open the {fleet} UI. Under the *Agents* tab select *Upgrade agent* from the actions menu. The version field in the *Upgrade agent* UI allows you to enter any version. -. Enter `8.11.0` or whichever version you want to upgrade the {agents} to. Do not choose a version later than the version of {kib} or {fleet-server} that you're running. - -*When upgrading {agent} currently on any version (more complex, requires API)* - -. Open {kib} and navigate to *Management -> Dev Tools*. -. Choose one of the API requests below and submit it through the console. Each of the requests uses version `8.11.0` as an example, but this can be changed to any available version. 
-+ -* To upgrade a single {agent} to any version, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents//upgrade -{"version":"8.11.0"} ----- -+ -* To upgrade a set of {agents} based on a known set of agent IDs, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "version":"8.11.0", - "agents":["",""], - "start_time":"2023-11-10T09:41:39.850Z" -} ----- -* To upgrade a set of {agents} running a specific policy, and below a specific version (for example, `8.11.0`), run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id: and fleet-agents.agent.version<", - "version": "8.11.0" -} ----- -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id:uuid1-uuid2-uuid3-uuid4 and fleet-agents.agent.version<8.11.0", - "version": "8.11.0" -} ----- - -TIP: To find the ID for any {agent}, open the **Agents** tab in {fleet} and select **View agent** from the **Actions** menu. The agent ID and other details are shown. - -To learn more about these requests, refer to the <>. - -==== - -// end 8.11.4 relnotes - -// begin 8.11.3 relnotes - -[[release-notes-8.11.3]] -== {fleet} and {agent} 8.11.3 - -Review important information about {fleet-server} and {agent} for the 8.11.3 release. - -[discrete] -[[known-issues-8.11.3]] -=== Known issues - -[[known-issue-169825-8.11.3]] -.Current stack version is not in the list of {agent} versions in {kib} {fleet} UI -[%collapsible] -==== - -*Details* - -On the {fleet} UI in {kib}: - -* When adding a new {agent}, the user interface shows a previous version instead of the current version. -* When you attempt an upgrade, the modal window shows an earlier version as the latest version. - -*Impact* + - -You can use the following steps as a workaround: - -*When upgrading {agent} currently on versions 8.10.4 or earlier (simpler)* - -. Open the {fleet} UI. Under the *Agents* tab select *Upgrade agent* from the actions menu. 
The version field in the *Upgrade agent* UI allows you to enter any version. -. Enter `8.11.0` or whichever version you want to upgrade the {agents} to. Do not choose a version later than the version of {kib} or {fleet-server} that you're running. - -*When upgrading {agent} currently on any version (more complex, requires API)* - -. Open {kib} and navigate to *Management -> Dev Tools*. -. Choose one of the API requests below and submit it through the console. Each of the requests uses version `8.11.0` as an example, but this can be changed to any available version. -+ -* To upgrade a single {agent} to any version, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents//upgrade -{"version":"8.11.0"} ----- -+ -* To upgrade a set of {agents} based on a known set of agent IDs, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "version":"8.11.0", - "agents":["",""], - "start_time":"2023-11-10T09:41:39.850Z" -} ----- -* To upgrade a set of {agents} running a specific policy, and below a specific version (for example, `8.11.0`), run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id: and fleet-agents.agent.version<", - "version": "8.11.0" -} ----- -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id:uuid1-uuid2-uuid3-uuid4 and fleet-agents.agent.version<8.11.0", - "version": "8.11.0" -} ----- - -TIP: To find the ID for any {agent}, open the **Agents** tab in {fleet} and select **View agent** from the **Actions** menu. The agent ID and other details are shown. - -To learn more about these requests, refer to the <>. - -==== - -[discrete] -[[security-updates-8.11.3]] -=== Security updates - -{agent}:: -The 8.11.3 patch release contains a fix for a potential security vulnerability. Please see our link:https://discuss.elastic.co/c/announcements/security-announcements/31[security advisory for more details]. 
- -[discrete] -[[bug-fixes-8.11.3]] -=== Bug fixes - -{fleet}:: -* Fix a 500 error in the {fleet} API when a request for the product versions endpoint throws `ECONNREFUSED`. ({kibana-pull}172850[#172850]) -* Fix {agent} policy timeout to accept only integers. ({kibana-pull}172222[#172222]) - -// end 8.11.3 relnotes - -// begin 8.11.2 relnotes - -[[release-notes-8.11.2]] -== {fleet} and {agent} 8.11.2 - -Review important information about {fleet-server} and {agent} for the 8.11.2 release. - -IMPORTANT: The memory leak <> that affects Windows users running {agent} is resolved in this release. If you're currently on {agent} version 8.11.0 or 8.11.1, we strongly recommend upgrading to 8.11.2 or a later release to avoid the issue. If you're on an earlier version, avoid upgrading to version 8.11.0 or 8.11.1 and update directly to version 8.11.2 or later. - -[discrete] -[[known-issues-8.11.2]] -=== Known issues - -[[known-issue-169826-8.11.2]] -.Triggering {agent} upgrades from {kib} {fleet} UI in an air-gapped environment will fail -[%collapsible] -==== - -*Details* - -When attempting to upgrade an {agent}, {kib} tries to access https://www.elastic.co/api/product_versions. -In an air-gapped environment, this call will be blocked and the upgrade flow will therefore be blocked too. - -Upgrade {kib} to version 8.11.3 to solve the issue. - -==== - -[[known-issue-169825-8.11.2]] -.Current stack version is not in the list of {agent} versions in {kib} {fleet} UI -[%collapsible] -==== - -*Details* - -On the {fleet} UI in {kib}: - -* When adding a new {agent}, the user interface shows a previous version instead of the current version. -* When you attempt an upgrade, the modal window shows an earlier version as the latest version. - -*Impact* + - -You can use the following steps as a workaround: - -*When upgrading {agent} currently on versions 8.10.4 or earlier (simpler)* - -. Open the {fleet} UI. Under the *Agents* tab select *Upgrade agent* from the actions menu. 
The version field in the *Upgrade agent* UI allows you to enter any version. -. Enter `8.11.0` or whichever version you want to upgrade the {agents} to. Do not choose a version later than the version of {kib} or {fleet-server} that you're running. - -*When upgrading {agent} currently on any version (more complex, requires API)* - -. Open {kib} and navigate to *Management -> Dev Tools*. -. Choose one of the API requests below and submit it through the console. Each of the requests uses version `8.11.0` as an example, but this can be changed to any available version. -+ -* To upgrade a single {agent} to any version, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents//upgrade -{"version":"8.11.0"} ----- -+ -* To upgrade a set of {agents} based on a known set of agent IDs, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "version":"8.11.0", - "agents":["",""], - "start_time":"2023-11-10T09:41:39.850Z" -} ----- -* To upgrade a set of {agents} running a specific policy, and below a specific version (for example, `8.11.0`), run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id: and fleet-agents.agent.version<", - "version": "8.11.0" -} ----- -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id:uuid1-uuid2-uuid3-uuid4 and fleet-agents.agent.version<8.11.0", - "version": "8.11.0" -} ----- - -TIP: To find the ID for any {agent}, open the **Agents** tab in {fleet} and select **View agent** from the **Actions** menu. The agent ID and other details are shown. - -To learn more about these requests, refer to the <>. - -==== - -[discrete] -[[enhancements-8.11.2]] -=== Enhancements - -{fleet}:: -* Improve UX for policy secrets. {kibana-pull}171405[#171405] - -{agent}:: -* Add configuration parameters for the Kubernetes `leader_election` provider. {agent-pull}3625[#3625] -* Update NodeJS version bundled with Heartbeat to v18.18.2. 
{agent-pull}3655[#3655] -* Update Go version to 1.20.11. {agent-pull}3748[#3748] - -[discrete] -[[bug-fixes-8.11.2]] -=== Bug fixes - -{fleet}:: -* Support integration secrets in a local package registry with variables `secret: true` and `required: false`. {kibana-pull}172078[#172078] -* Fix agent metrics retrieval on the agent list page, which previously displayed N/A for metrics for users with more than 10 agents. {kibana-pull}172016[#172016] -* Only add `time_series_metric` if TSDB is enabled. {kibana-pull}171712[#171712] -* Fix inability to upgrade agents from version 8.10.4 to version 8.11. {kibana-pull}170974[#170974] - -{agent}:: -* Fix logging calls that have missing arguments. {agent-pull}3679[#3679] -* Fix {fleet}-managed {agent} ignoring the `agent.download.proxy_url` setting after a policy is updated. {agent-pull}3803[#3803] {agent-issue}3560[#3560] -* Properly convert component error fields to YAML in agent diagnostics. {agent-pull}3835[#3835] {agent-issue}2940[#2940] - -// end 8.11.2 relnotes - -// begin 8.11.1 relnotes - -[[release-notes-8.11.1]] -== {fleet} and {agent} 8.11.1 - -Review important information about {fleet-server} and {agent} for the 8.11.1 release. - -IMPORTANT: Due to a memory leak issue, Windows users running {agent} should avoid upgrading to this release and instead wait for the upcoming 8.11.2 release, in which the issue is resolved. If you've already upgraded to version 8.11.0 or 8.11.1, we recommend upgrading to 8.11.2 as soon as it becomes available. See the <> for more detail. - -[discrete] -[[known-issues-8.11.1]] -=== Known issues - -[[known-issue-3712-8.11.1]] -* The <> that could prevent the {agent} or Integrations Server component from booting up within an ECE deployment has been resolved in this release. - -[[known-issue-115-8.11.1]] -.Memory leak when running {agent} in Windows environments with the System integration -[%collapsible] -==== - -*Details* - -A memory leak has been identified in {beats} on Windows.
All {beats} running Elastic Stack version 8.11.0 or 8.11.1 are affected. The leak also affects the {agent} System integration, which is implemented with {beats}. The leak will eventually exhaust all memory on the host system, typically after several days. - -*Impact* + - -This issue has been fixed in version 8.11.2. For a Windows environment, we strongly recommend upgrading directly to 8.11.2 or any later release. - -If you're already running {agent} version 8.11.0 or 8.11.1 on Windows and do not want to upgrade, we recommend that you: - -. Disable the `process` and `process_summary` metrics in your System integration. -. Disable logs and metrics collection. -. Restart {agent}. - -Note that disabling these datasets will prevent the collection of process-related metrics. - -Another workaround is to downgrade {agent} to a version below 8.11.0. Note that this could result in missing or reindexed logs or metrics as the "state" will not be persisted after {agent} is uninstalled and reinstalled. - -For {beats} we currently do not have a workaround apart from upgrading to 8.12.2 or a later release. - -==== - -[[known-issue-169825-8.11.1]] -.Current stack version is not in the list of {agent} versions in {kib} {fleet} UI -[%collapsible] -==== - -*Details* - -On the {fleet} UI in {kib}: - -* When adding a new {agent}, the user interface shows a previous version instead of the current version. -* When attempting to upgrade, the modal window to pick the version shows an earlier version as the latest version. - -*Impact* + - -You can use the following steps as a workaround: - -*When upgrading {agent} currently on versions 8.10.4 or lower (simpler)* - -. Open the {fleet} UI. Under the *Agents* tab select *Upgrade agent* from the actions menu. The version field in the *Upgrade agent* UI allows you to enter any version. -. Enter `8.11.0` or whichever version you want to upgrade the {agents} to.
Do not choose a version above the version of {kib} or {fleet-server} that you're running. - -*When upgrading {agent} currently on any version (more complex, requires API)* - -. Open {kib} and navigate to *Management -> Dev Tools*. -. Choose one of the API requests below and submit it through the console. Each of the requests uses version `8.11.0` as an example, but this can be changed to any available version. -+ -* To upgrade a single {agent} to any version, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents//upgrade -{"version":"8.11.0"} ----- -+ -* To upgrade a set of {agents} based on a known set of agent IDs, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "version":"8.11.0", - "agents":["",""], - "start_time":"2023-11-10T09:41:39.850Z" -} ----- -* To upgrade a set of {agents} running a specific policy, and below a specific version (for example, `8.11.0`), run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id: and fleet-agents.agent.version<", - "version": "8.11.0" -} ----- -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id:uuid1-uuid2-uuid3-uuid4 and fleet-agents.agent.version<8.11.0", - "version": "8.11.0" -} ----- - -TIP: To find the ID for any {agent}, open the **Agents** tab in {fleet} and select **View agent** from the **Actions** menu. The agent ID and other details are shown. - -To learn more about these requests, refer to the <>. - -==== - -[discrete] -[[new-features-8.11.1]] -=== New features - -The 8.11.1 release added the following new and notable features. - -{agent}:: -* Add the dimensions `component.id` and `component.binary` to {agent} and {beats} monitoring output, to support unique entries for the Time Series Database (TSDB) feature.
{agent-pull}3626[#3626] https://github.com/elastic/integrations/issues/7977[#7977] - -[discrete] -[[bug-fixes-8.11.1]] -=== Bug fixes - -{fleet}:: -* Append space ID to security solution tag. ({kibana-pull}170789[#170789]). -* Modify bulk unenroll to include inactive agents. ({kibana-pull}170249[#170249]). - -// end 8.11.1 relnotes - -// begin 8.11.0 relnotes - -[[release-notes-8.11.0]] -== {fleet} and {agent} 8.11.0 - -Review important information about {fleet-server} and {agent} for the 8.11.0 release. - -IMPORTANT: Due to a memory leak issue, Windows users running {agent} should avoid upgrading to this release and instead wait for the upcoming 8.11.2 release, in which the issue is resolved. If you've already upgraded to 8.11.0 or 8.11.1, we recommend upgrading to 8.11.2 as soon as it becomes available. See the <> for more detail. - -[discrete] -[[security-updates-8.11.0]] -=== Security updates - -{agent}:: -* Updated Go version to 1.20.10. {agent-pull}3601[#3601] - -[discrete] -[[breaking-changes-8.11.0]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. Before you upgrade, review the breaking changes, then mitigate the -impact to your application. - -[discrete] -[[breaking-3505]] -.Compression is enabled by default for {agent} {es} outputs -[%collapsible] -==== -*Details* + -The default compression level for {es} outputs is changing from `0` to `1`. - -*Impact* + -On typical workloads this is expected to decrease network data volume by 70-80%, while increasing CPU use by 20-25% and ingestion time by 10%. The previous behavior can be restored by adding the setting `compression_level: 0` to the agent output configuration. -==== - -[discrete] -[[breaking-3593]] -.`elastic-agent-autodiscover` library has been updated to version 0.6.4, disabling metadata for `kubernetes.deployment` and `kubernetes.cronjob` fields.
-[%collapsible] -==== -*Details* + -The `elastic-agent-autodiscover` Kubernetes library by default comes with `add_resource_metadata.deployment=false` and `add_resource_metadata.cronjob=false`. - -*Impact* + -Pods created from deployments or cronjobs will not have the extra metadata field for `kubernetes.deployment` or `kubernetes.cronjob`, respectively. This change was made to avoid the memory impact of keeping the feature enabled in big Kubernetes clusters. -For more information, refer to {agent-pull}3593[#3593]. -==== - -[discrete] -[[known-issues-8.11.0]] -=== Known issues - -[[known-issue-115-8.11.0]] -.Memory leak when running {agent} in Windows environments with the System integration -[%collapsible] -==== - -*Details* - -A memory leak has been identified in {beats} on Windows. All {beats} running Elastic Stack version 8.11.0 or 8.11.1 are affected. The leak also affects the {agent} System integration, which is implemented with {beats}. The leak will eventually exhaust all memory on the host system, typically after several days. - -*Impact* + - -This issue has been fixed in version 8.11.2. For a Windows environment, we strongly recommend upgrading directly to 8.11.2 or any later release. - -If you're already running {agent} version 8.11.0 or 8.11.1 on Windows and do not want to upgrade, we recommend that you: - -. Disable the `process` and `process_summary` metrics in your System integration. -. Disable logs and metrics collection. -. Restart {agent}. - -Note that disabling these datasets will prevent the collection of process-related metrics. - -Another workaround is to downgrade {agent} to a version below 8.11.0. Note that this could result in missing or reindexed logs or metrics as the "state" will not be persisted after {agent} is uninstalled and reinstalled. - -For {beats} we currently do not have a workaround apart from upgrading to 8.12.2 or a later release.
- -==== - -[[known-issue-169825-8.11.0]] -.Current stack version is not in the list of {agent} versions in {kib} {fleet} UI -[%collapsible] -==== - -*Details* - -On the {fleet} UI in {kib}: - -* When adding a new {agent}, the user interface shows a previous version instead of the current version. -* When attempting to upgrade, the modal window to pick the version shows an earlier version as the latest version. - -*Impact* + - -You can use the following steps as a workaround: - -*When upgrading {agent} currently on versions 8.10.4 or lower (simpler)* - -. Open the {fleet} UI. Under the *Agents* tab select *Upgrade agent* from the actions menu. The version field in the *Upgrade agent* UI allows you to enter any version. -. Enter `8.11.0` or whichever version you want to upgrade the {agents} to. Do not choose a version above the version of {kib} or {fleet-server} that you're running. - -*When upgrading {agent} currently on any version (more complex, requires API)* - -. Open {kib} and navigate to *Management -> Dev Tools*. -. Choose one of the API requests below and submit it through the console. Each of the requests uses version `8.11.0` as an example, but this can be changed to any available version.
-+ -* To upgrade a single {agent} to any version, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents//upgrade -{"version":"8.11.0"} ----- -+ -* To upgrade a set of {agents} based on a known set of agent IDs, run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "version":"8.11.0", - "agents":["",""], - "start_time":"2023-11-10T09:41:39.850Z" -} ----- -* To upgrade a set of {agents} running a specific policy, and below a specific version (for example, `8.11.0`), run: -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id: and fleet-agents.agent.version<", - "version": "8.11.0" -} ----- -+ -[source,console] ----- -POST kbn:/api/fleet/agents/bulk_upgrade -{ - "agents": "fleet-agents.policy_id:uuid1-uuid2-uuid3-uuid4 and fleet-agents.agent.version<8.11.0", - "version": "8.11.0" -} ----- - -TIP: To find the ID for any {agent}, open the **Agents** tab in {fleet} and select **View agent** from the **Actions** menu. The agent ID and other details are shown. - -To learn more about these requests, refer to the <>. - -==== - -[discrete] -[[known-issue-3712]] -.Integrations Server / APM unable to boot in specific ECE environments -[%collapsible] -==== -*Details* + -A permissions change in the {agent} Docker container can prevent the {agent} or Integrations Server component from booting up within an ECE deployment. The change affects ECE installations that are deployed with a Linux UID other than `1000`. - -*Impact* + -ECE users with deployments that include APM or Integrations Server should wait for the next patch release, which is planned to include a fix for this problem. -==== - -[discrete] -[[new-features-8.11.0]] -=== New features - -The 8.11.0 release added the following new and notable features. - -{fleet}:: -* Set env variable `ELASTIC_NETINFO:false` in {kib} ({kibana-pull}166156[#166156]). -* Added restart upgrade action ({kibana-pull}166154[#166154]).
-* Adds ability to set a proxy for agent binary source ({kibana-pull}164168[#164168]). -* Adds ability to set a proxy for agent download source ({kibana-pull}164078[#164078]). - -{agent}:: -* Add support for processors in hints-based Kubernetes autodiscover. {agent-pull}3107[#3107] {agent-issue}2959[#2959] -* Print out {agent} installation steps to show progress. {agent-pull}3338[#3338] -* Add colors to {agent} messages printed by the `elastic-agent logs` command based on their level. {agent-pull}3345[#3345] - -[discrete] -[[enhancements-8.11.0]] -=== Enhancements - -{fleet}:: -* Adds sidebar navigation showing headings extracted from the readme ({kibana-pull}167216[#167216]). - -{fleet-server}:: -* Expand APM traces to track coordinator and monitor transactions. Add additional spans across all API endpoints to better track what the server does. Add spans to bulker interactions that link with the queue flush transaction that the bulk action is executed through. {fleet-server-pull}2929[#2929] -* Add an endpoint to serve PGP keys that clients can use when validating upgrades in cases where the embedded PGP key in a client is compromised and the client can't reach the internet. {fleet-server-pull}2977[#2977] {fleet-server-issue}2887[#2887] -* Add ActionLimit and a Gzip writer pool to handle check-in responses, to help prevent OOM errors when updates are issued to many clients. {fleet-server-pull}2994[#2994] -* Send errors in API calls and bulker flushes to APM. {fleet-server-pull}3053[#3053] - -{agent}:: -* Improve {agent} uninstall on Windows by adding a delay between retries when file removal is blocked by busy files. {agent-pull}3431[#3431] {agent-issue}3221[#3221] -* Support the NETINFO variable in Elastic Kubernetes manifests. Setting a new environment variable `ELASTIC_NETINFO=false` globally disables the `netinfo.enabled` parameter of the `add_host_metadata` processor. This disables the indexing of `host.ip` and `host.mac` fields.
{agent-pull}3354[#3354] -* The {agent} uninstall process now finds and kills any running upgrade Watcher process. Uninstalls initiated within 10 minutes of a previous upgrade now work as expected. {agent-pull}3384[#3384] {agent-issue}3371[#3371] -* Fix the Kubernetes `deploy/kubernetes/creator_k8.sh` script to correctly exclude configmaps. {agent-pull}3396[#3396] -* Allow fetching the GPG key used for upgrade package signature verification from {fleet-server}. This enables upgrades using rotated GPG keys in air-gapped environments where {fleet-server} is the only reachable URI. {agent-pull}3543[#3543] {agent-issue}3264[#3264] -* Enable the tamper protection feature flag by default for {agent} version 8.11.0. {agent-pull}3478[#3478] -* Increase the {agent} monitoring metrics interval from 10s to 60s to reduce the default ingestion load and long-term storage requirements. {agent-pull}3578[#3578] - -[discrete] -[[bug-fixes-8.11.0]] -=== Bug fixes - -{fleet}:: -* Vastly improve performance of the {fleet} final pipeline's date formatting logic for `event.ingested` ({kibana-pull}167318[#167318]). - -{fleet-server}:: -* Fix errors produced by the {fleet-server} bulker to be ECS compliant. {fleet-server-pull}3034[#3034] {fleet-server-issue}3033[#3033] - -{agent}:: -* Enable resilient handling of air-gapped PGP checks. {agent} should not fail when a remote PGP key is specified (or the official Elastic fallback PGP key is used) and the remote is not available. {agent-pull}3427[#3427] {agent-pull}3426[#3426] {agent-issue}3368[#3368] -* Prevent a standalone {agent} from being upgraded if an upgrade is already in progress. {agent-pull}3473[#3473] {agent-issue}2706[#2706] -* Fix a bug that affected reporting progress of the {agent} artifact download during an upgrade. {agent-pull}3548[#3548] -* Upgrade `elastic-agent-libs` to v0.6.0 to fix the {agent} Windows service becoming unresponsive. Fixes Windows service timeouts during WMI queries and during service shutdown.
{agent-pull}3632[#3632] {agent-issue}3061[#3061] -* Increase wait period between service restarts on failure to 15s on Windows. {agent-pull}3657[#3657] -* Prevent multiple attempts by {agent} to stop an already stopped service. {agent-pull}3482[#3482] - -// end 8.11.0 relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. 
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-//[discrete]
-//[[enhancements-8.7.x]]
-//=== Enhancements
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-//[discrete]
-//[[bug-fixes-8.7.x]]
-//=== Bug fixes
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-// end 8.7.x relnotes
diff --git a/docs/en/ingest-management/release-notes/release-notes-8.12.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.12.asciidoc
deleted file mode 100644
index 76df286cd..000000000
--- a/docs/en/ingest-management/release-notes/release-notes-8.12.asciidoc
+++ /dev/null
@@ -1,776 +0,0 @@
-// Use these for links to issue and pulls.
-:kibana-issue: https://github.com/elastic/kibana/issues/
-:kibana-pull: https://github.com/elastic/kibana/pull/
-:beats-issue: https://github.com/elastic/beats/issues/
-:beats-pull: https://github.com/elastic/beats/pull/
-:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/
-:agent-issue: https://github.com/elastic/elastic-agent/issues/
-:agent-pull: https://github.com/elastic/elastic-agent/pull/
-:fleet-server-issue: https://github.com/elastic/fleet-server/issues/
-:fleet-server-pull: https://github.com/elastic/fleet-server/pull/
-
-[[release-notes]]
-= Release notes
-
-This section summarizes the changes in each release.
-
-* <>
-* <>
-* <>
-
-Also see:
-
-* {kibana-ref}/release-notes.html[{kib} release notes]
-* {beats-ref}/release-notes.html[{beats} release notes]
-
-// begin 8.12.2 relnotes
-
-[[release-notes-8.12.2]]
-== {fleet} and {agent} 8.12.2
-
-Review important information about {fleet-server} and {agent} for the 8.12.2 release.
-
-[discrete]
-[[bug-fixes-8.12.2]]
-=== Bug fixes
-
-{fleet}::
-* Fix a popover about inactive agents not being dismissible. ({kibana-pull}176929[#176929])
-* Fix the logstash output so that it is link:https://www.rfc-editor.org/rfc/rfc952[RFC-952] compliant.
({kibana-pull}176298[#176298])
-* Fix assets being unintentionally moved to the default space during Fleet setup. ({kibana-pull}176173[#176173])
-* Fix category labels in the integration overview. ({kibana-pull}176141[#176141])
-* Fix the ability to delete agent policies with inactive agents from the UI; the inactive agents need to be unenrolled first. ({kibana-pull}175815[#175815])
-
-{fleet-server}::
-* Fix a bug where agents were stuck in a non-upgradeable state after upgrade. This resolves the <> that affected versions 8.12.0 and 8.12.1. {fleet-server-pull}3264[#3264] {fleet-server-issue}3263[#3263]
-* Fix chunked file delivery so that files are delivered in order. {fleet-server-pull}3283[#3283]
-
-{agent}::
-* On Windows, make sure the {agent} service is stopped before uninstalling. {agent-pull}4224[#4224] {agent-issue}4164[#4164]
-
-* Fix {agent} download settings like proxy_url not being respected when downloading upgrade artifact signature files. {agent-pull}4270[#4270] {agent-issue}4237[#4237]
-
-// end 8.12.2 relnotes
-
-// begin 8.12.1 relnotes
-
-[[release-notes-8.12.1]]
-== {fleet} and {agent} 8.12.1
-
-Review important information about {fleet-server} and {agent} for the 8.12.1 release.
-
-[discrete]
-[[breaking-changes-8.12.1]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-[discrete]
-[[breaking-170270-8.12.1]]
-.Naming collisions with {fleet} custom ingest pipelines
-[%collapsible]
-====
-*Summary* +
-If you were relying on an ingest pipeline of the form `${type}-${integration}@custom` introduced in version 8.12.0 (for example, `traces-apm@custom`, `logs-nginx@custom`, or `metrics-system@custom`), you need to update your pipeline's name to include an `.integration` suffix (for example, `logs-nginx.integration@custom`) to preserve your expected ingestion behavior.
-
-*Details* +
-In version 8.12.0, {fleet} added new custom ingest pipeline names for adding custom processing to integration data streams. These pipeline names used the following patterns:
-
-* `global@custom`
-* `${type}@custom` (for example `traces@custom`)
-* `${type}-${integration}@custom` (for example `traces-apm@custom`)
-* `${type}-${integration}-${dataset}@custom` (pre-existing; for example `traces-apm.rum@custom`)
-
-However, it was discovered in {kibana-issue}175254[#175254] that the `${type}-${integration}@custom` pattern can collide in cases where the `integration` name is _also_ a dataset name. The clearest case of these collisions was in the APM integration's data streams, for example:
-
-* `traces-apm`
-* `traces-apm.rum`
-* `traces-apm.sampled`
-
-Because `traces-apm` is a legitimate data stream defined by the APM integration (see the relevant https://github.com/elastic/integrations/blob/main/packages/apm/data_stream/traces/manifest.yml[manifest.yml] file), it incurred a collision of these custom pipeline names on version 8.12.0. For example:
-
-[source,json]
-----
-// traces-apm
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <---
-    "ignore_missing_pipeline": true
-  }
-}
-----
-
-[source,json]
-----
-// traces-apm.rum
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <---
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm.rum@custom",
-    "ignore_missing_pipeline": true
-  }
-}
-----
-
-Prior to version 8.12.0, the `traces-apm@custom` custom pipeline name was already supported. So, if you had already defined and were using the supported `traces-apm@custom` pipeline, and then upgraded to 8.12.0, you would observe that documents ingested to `traces-apm.rum` and `traces-apm.sampled` would also be processed by your pre-existing `traces-apm@custom` ingest pipeline. This could cause breakages and unexpected pipeline processing errors.
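-
-One way to confirm whether your cluster is affected is to inspect the managed pipeline definitions and look for `pipeline` processors that reference `traces-apm@custom`. This is a sketch; the exact managed pipeline name includes the installed package version, so a wildcard is used here:
-
-[source,console]
-----
-GET _ingest/pipeline/traces-apm.rum*
-----
-
-In the response, check the `processors` array for an entry of the form shown above.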
-
-To correct this in version 8.12.1, {fleet} now appends a suffix to the "integration level" custom ingest pipeline name. The new suffix prevents collisions between dataset and integration names moving forward. For example:
-
-[source,json]
-----
-// traces-apm
-{
-  "pipeline": {
-    "name": "traces-apm.integration@custom", // <--- Integration level pipeline
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <--- Dataset level pipeline
-    "ignore_missing_pipeline": true
-  }
-}
----- 
-
-[source,json]
-----
-// traces-apm.rum
-{
-  "pipeline": {
-    "name": "traces-apm.integration@custom", // <--- Integration level pipeline
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm.rum@custom", // <--- Dataset level pipeline
-    "ignore_missing_pipeline": true
-  }
-}
----- 
-
-So, if you are relying on an integration-level custom ingest pipeline introduced in version 8.12.0, you need to update its name to include the new `.integration` suffix to preserve your existing ingestion behavior.
-
-Refer to the <> documentation for details and examples.
-====
-
-[discrete]
-[[known-issues-8.12.1]]
-=== Known issues
-
-[[known-issue-3263-8121]]
-.Agents upgraded to 8.12.0 are stuck in a non-upgradeable state
-[%collapsible]
-====
-
-*Details*
-
-An issue discovered in {fleet-server} prevents {agents} that have been upgraded to version 8.12.0 from being upgraded again, using the {fleet} UI, to version 8.12.1 or higher.
-
-*Impact* +
-
-As a workaround, we recommend using the {kib} {fleet} API to update any documents in which `upgrade_details` is either `null` or not defined. Note that these steps must be run as a superuser.
-
-[source,"shell"]
-----
- POST _security/role/fleet_superuser
- {
-    "indices": [
-      {
-        "names": [".fleet*",".kibana*"],
-        "privileges": ["all"],
-        "allow_restricted_indices": true
-      }
-    ]
-  }
----- 
-
-[source,"shell"]
-----
-POST _security/user/fleet_superuser
- {
-    "password": "password",
-    "roles": ["superuser", "fleet_superuser"]
-  }
---- -
-
-[source,"shell"]
-----
-curl -sk -XPOST --user fleet_superuser:password -H 'content-type:application/json' \
-  -H'x-elastic-product-origin:fleet' \
-  http://localhost:9200/.fleet-agents/_update_by_query \
-  -d '{
-    "script": {
-      "source": "ctx._source.remove(\"upgrade_details\")",
-      "lang": "painless"
-    },
-    "query": {
-        "bool": {
-            "must_not": {
-                "exists": {
-                    "field": "upgrade_details"
-                }
-            }
-        }
-    }
-}'
----- 
-
-[source,"shell"]
-----
-DELETE _security/user/fleet_superuser
-DELETE _security/role/fleet_superuser
----- 
-
-After running these API requests, wait at least 10 minutes, and then the agents should be upgradeable again.
-
-====
-
-[discrete]
-[[bug-fixes-8.12.1]]
-=== Bug fixes
-
-{fleet}::
-* Fix the display of category labels on the Integration overview page. ({kibana-pull}176141[#176141])
-* Fix conflicting dynamic template mappings for intermediate objects. ({kibana-pull}175970[#175970])
-* Fix reserved keys for the Elasticsearch output YAML box. ({kibana-pull}175901[#175901])
-* Prevent deletion of agent policies with inactive agents from the UI. ({kibana-pull}175815[#175815])
-* Fix incorrect count of agents in bulk actions. ({kibana-pull}175318[#175318])
-* Fix custom integrations not displaying on the Installed integrations page. ({kibana-pull}174804[#174804])
-
-{agent}::
-* On Windows, prevent uninstalling from within the directory where {agent} is installed. {agent-pull}4108[#4108] {agent-issue}3342[#3342]
-
-// end 8.12.1 relnotes
-
-// begin 8.12.0 relnotes
-
-[[release-notes-8.12.0]]
-== {fleet} and {agent} 8.12.0
-
-Review important information about {fleet-server} and {agent} for the 8.12.0 release.
-
-[discrete]
-[[security-updates-8.12.0]]
-=== Security updates
-
-{agent}::
-* Update Go version to 1.20.12. {agent-pull}3885[#3885]
-
-[discrete]
-[[breaking-changes-8.12.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-[discrete]
-[[breaking-170270]]
-.Possible naming collisions with {fleet} custom ingest pipelines
-[%collapsible]
-====
-*Details* +
-Starting in this release, {fleet} <> can be configured to process events at various levels of customization. If you have a custom pipeline already defined that matches the name of a {fleet} custom ingest pipeline, it may be unexpectedly called for other data streams in other integrations. For details and investigation of the issue, refer to {kibana-issue}175254[#175254]. A fix is planned for delivery in the next 8.12 minor release.
-
-**Affected ingest pipelines**
-
-**APM**
-
-* `traces-apm`
-* `traces-apm.rum`
-* `traces-apm.sampled`
-
-For APM, if you had previously <> of the form `traces-apm@custom` to customize the ingestion of documents ingested to the `traces-apm` data stream, then by nature of the new `@custom` hooks introduced in issue {kibana-issue}168019[#168019], the `traces-apm@custom` pipeline will be called as a pipeline processor in both the `traces-apm.rum` and `traces-apm.sampled` ingest pipelines.
See the following for a comparison of the relevant `processors` blocks for each of these pipelines before and after upgrading to 8.12.0:
-
-[source,json]
----- 
-// traces-apm-8.x.x
-{
-  "pipeline": {
-    "name": "traces-apm@custom",
-    "ignore_missing_pipeline": true
-  }
-}
-
-// traces-apm-8.12.0
-{
-  "pipeline": {
-    "name": "global@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <--- Duplicate pipeline entry
-    "ignore_missing_pipeline": true
-  }
-}
---- -
-
-[source,json]
----- 
-// traces-apm.rum-8.x.x
-{
-  "pipeline": {
-    "name": "traces-apm.rum@custom",
-    "ignore_missing_pipeline": true
-  }
-}
-
-// traces-apm.rum-8.12.0
-{
-  "pipeline": {
-    "name": "global@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <--- Collides with `traces-apm@custom` that may be preexisting
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm.rum@custom",
-    "ignore_missing_pipeline": true
-  }
-}
---- -
-
-[source,json]
----- 
-
-// traces-apm.sampled-8.x.x
-{
-  "pipeline": {
-    "name": "traces-apm.sampled@custom",
-    "ignore_missing_pipeline": true
-  }
-}
-
-// traces-apm.sampled-8.12.0
-{
-  "pipeline": {
-    "name": "global@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces@custom",
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm@custom", // <--- Collides with `traces-apm@custom` that may be preexisting
-    "ignore_missing_pipeline": true
-  }
-},
-{
-  "pipeline": {
-    "name": "traces-apm.sampled@custom",
-    "ignore_missing_pipeline": true
-  }
-}
---- -
-
-The immediate workaround to avoid this unwanted behavior is to edit both the
`traces-apm.rum` and `traces-apm.sampled` ingest pipelines to no longer include the `traces-apm@custom` pipeline processor.
-
-**Please note that this is a temporary workaround, and this change will be undone if the APM integration is upgraded or reinstalled.**
-
-**{agent}**
-
-The `elastic_agent` integration is subject to the same type of breaking change as described for APM, above. The following ingest pipelines are impacted:
-
-* `logs-elastic_agent`
-* `logs-elastic_agent.apm_server`
-* `logs-elastic_agent.auditbeat`
-* `logs-elastic_agent.cloud_defend`
-* `logs-elastic_agent.cloudbeat`
-* `logs-elastic_agent.endpoint_security`
-* `logs-elastic_agent.filebeat`
-* `logs-elastic_agent.filebeat_input`
-* `logs-elastic_agent.fleet_server`
-* `logs-elastic_agent.heartbeat`
-* `logs-elastic_agent.metricbeat`
-* `logs-elastic_agent.osquerybeat`
-* `logs-elastic_agent.packetbeat`
-* `logs-elastic_agent.pf_elastic_collector`
-* `logs-elastic_agent.pf_elastic_symbolizer`
-* `logs-elastic_agent.pf_host_agent`
-
-The behavior is similar to what's described for APM above: pipelines such as `logs-elastic_agent.filebeat` will include a `pipeline` processor that calls `logs-elastic_agent@custom`. If you have custom processing logic defined in a `logs-elastic_agent@custom` ingest pipeline, it will be called by all of the pipelines listed above.
-
-The workaround is the same: remove the `logs-elastic_agent@custom` pipeline processor from all of the ingest pipelines listed above.
-
-
-====
-
-[discrete]
-[[known-issues-8.12.0]]
-=== Known issues
-
-[[known-issue-4084]]
-.For new DEB and RPM installations the `elastic-agent enroll` command incorrectly reports failure
-[%collapsible]
-====
-
-*Details*
-
-When you run the <> command for an RPM or DEB {agent} package, a `Restarting agent daemon` message appears in the command output, followed by a `Restart attempt failed` error.
-
-*Impact* +
-
-The error does not mean that the enrollment failed. The enrollment actually succeeded.
You can ignore the `Restart attempt failed` error and continue by running the following commands, after which {agent} should successfully connect to {fleet}: - -[source,console] ----- -sudo systemctl enable elastic-agent -sudo systemctl start elastic-agent ----- - -==== - -[[known-issue-37754]] -.Performance regression in AWS S3 inputs using SQS notification -[%collapsible] -==== - -*Details* - -In 8.12 the default memory queue flush interval was raised from 1 second to 10 seconds. In many configurations this improves performance because it allows the output to batch more events per round trip, which improves efficiency. However, the SQS input has an extra bottleneck that interacts badly with the new value. - -For more details see {beats-issue}37754[#37754]. - -*Impact* + - -If you are using the Elasticsearch output, and your configuration uses a performance preset, switch it to `preset: latency`. If you use no preset or use `preset: custom`, then set `queue.mem.flush.timeout: 1s` in your output configuration. - -If you are not using the Elasticsearch output, set `queue.mem.flush.timeout: 1s` in your output configuration. - -To configure the output parameters for a {fleet}-managed agent, see <>. For a standalone agent, see <>. 
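-
-For example, in a standalone agent policy, the {es} output part of this workaround might look like the following. This is a minimal sketch; the output name, host, and credentials are placeholders:
-
-[source,yaml]
----- 
-outputs:
-  default:
-    type: elasticsearch
-    hosts: ["https://my-cluster.example.com:9200"]
-    preset: latency
---- -
-
-If you use no preset or use `preset: custom`, set `queue.mem.flush.timeout: 1s` on the output instead of `preset: latency`.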
-
-====
-
-[[known-issue-sec8366]]
-.{fleet} setup can fail when there are more than one thousand {agent} policies
-[%collapsible]
-====
-
-*Details*
-
-When you set up {fleet} with a very high volume of {agent} policies, one thousand or more, you may encounter an error similar to the following:
-
-[source,console]
---- -
-[ERROR][plugins.fleet] Unknown error happened while checking Uninstall Tokens validity: 'ResponseError: all shards failed: search_phase_execution_exception
-	Caused by:
-		too_many_nested_clauses: Query contains too many nested clauses; maxClauseCount is set to 5173
---- -
-
-The exact number of {agent} policies required to cause the error depends in part on the size of the {es} cluster, but generally it can happen with volumes above approximately one thousand policies.
-
-*Impact* +
-
-Currently there is no workaround for the issue, but a fix is planned to be included in the next version 8.12 release.
-
-Note that according to our <>, the current recommended maximum number of {agent} policies supported by {fleet} is 500.
-
-====
-
-[[known-issue-3263-8120]]
-.Agents upgraded to 8.12.0 are stuck in a non-upgradeable state
-[%collapsible]
-====
-
-*Details*
-
-An issue discovered in {fleet-server} prevents {agents} that have been upgraded to version 8.12.0 from being upgraded again, using the {fleet} UI, to version 8.12.1 or higher.
-
-This issue is planned to be fixed in versions 8.12.2 and 8.13.0.
-
-*Impact* +
-
-As a workaround, we recommend using the {kib} {fleet} API to update any documents in which `upgrade_details` is either `null` or not defined. Note that these steps must be run as a superuser.
-
-[source,"shell"]
---- -
- POST _security/role/fleet_superuser
- {
-    "indices": [
-      {
-        "names": [".fleet*",".kibana*"],
-        "privileges": ["all"],
-        "allow_restricted_indices": true
-      }
-    ]
-  }
--- --
-
-[source,"shell"]
---- -
-POST _security/user/fleet_superuser
- {
-    "password": "password",
-    "roles": ["superuser", "fleet_superuser"]
-  }
--- --
-
-[source,"shell"]
---- -
-curl -sk -XPOST --user fleet_superuser:password -H 'content-type:application/json' \
-  -H'x-elastic-product-origin:fleet' \
-  http://localhost:9200/.fleet-agents/_update_by_query \
-  -d '{
-    "script": {
-      "source": "ctx._source.remove(\"upgrade_details\")",
-      "lang": "painless"
-    },
-    "query": {
-        "bool": {
-            "must_not": {
-                "exists": {
-                    "field": "upgrade_details"
-                }
-            }
-        }
-    }
-}'
--- --
-
-[source,"shell"]
---- -
-DELETE _security/user/fleet_superuser
-DELETE _security/role/fleet_superuser
--- --
-
-After running these API requests, wait at least 10 minutes, and then the agents should be upgradeable again.
-====
-
-[[known-issue-3939]]
-.Remote {es} output does not support {elastic-defend} response actions
-[%collapsible]
-====
-
-*Details*
-
-Support for a <> was introduced in this release to enable {agents} to send integration or monitoring data to a remote {es} cluster. A bug has been found that causes {elastic-defend} response actions to stop working when a remote {es} output is configured for an agent.
-
-*Impact* +
-
-This bug is currently being investigated and is expected to be resolved in an upcoming release.
-
-====
-
-
-[discrete]
-[[new-features-8.12.0]]
-=== New features
-
-The 8.12.0 release added the following new and notable features.
-
-{fleet}::
-* Add {agent} upgrade states and display each agent's progress through the upgrade process. See <> for details. ({kibana-pull}167539[#167539])
-* Add support for preconfigured output secrets. ({kibana-pull}172041[#172041])
-* Add support for pipelines to process events at various levels of customization.
({kibana-pull}170270[#170270]) -* Add UI components to create and edit output secrets. ({kibana-pull}169429[#169429]) -* Add support for remote ES output. ({kibana-pull}169252[#169252]) -* Add the ability to specify secrets in outputs. ({kibana-pull}169221[#169221]) -* Add an integrations configs tab to display input templates. ({kibana-pull}168827[#168827]) -* Add a {kib} task to publish Agent metrics. ({kibana-pull}168435[#168435]) - -{agent}:: -* Add a "preset" field to {es} output configurations that applies a set of configuration overrides based on a desired performance priority. {beats-pull}37259[#37259] {agent-pull}3879[#3879] {agent-issue}3797[#3797] -* Send the current agent upgrade details to {fleet-server} as part of the check-in API's request body. {agent-pull}3528[#3528] {agent-issue}3119[#3119] -* Add new fields for retryable upgrade steps to upgrade details metadata. {agent-pull}3845[#3845] {agent-issue}3818[#3818] -* Improve the upgrade watcher to no longer require root access. {agent-pull}3622[#3622] -* Enable hints autodiscovery for {agent} so that the host for a container in a Kubernetes pod no longer needs to be specified manually. {agent-pull}3575[#3575] -{agent-issue}1453[#1453] -* Enable hints autodiscovery for {agent} so that a configuration can be defined through annotations for specific containers inside a pod. {agent-pull}3416[#3416] -{agent-issue}3161[#3161] -* Support flattened `data_stream.*` fields in an {agent} input configuration. {agent-pull}3465[#3465] {agent-issue}3191[#3191] - -[discrete] -[[enhancements-8.12.0]] -=== Enhancements - -{fleet}:: -* Add support for Elasticsearch output performance presets. ({kibana-pull}172359[#172359]) -* Add a new `keep_monitoring_alive` flag to agent policies. ({kibana-pull}168865[#168865]) -* Add support for additional types for dynamic mappings. ({kibana-pull}168842[#168842]) -* Use default component templates from Elasticsearch. 
({kibana-pull}163731[#163731])
-
-{agent}::
-* Use shorter timeouts for diagnostic requests unless CPU diagnostics are requested. {agent-pull}3794[#3794] {agent-issue}3197[#3197]
-* Add configuration parameters for the Kubernetes `leader_election` provider. {agent-pull}3625[#3625]
-* Remove duplicated tags that may be specified during an agent enrollment. {agent-pull}3740[#3740] {agent-issue}858[#858]
-* Include upgrade details in an agent diagnostics bundle {agent-pull}3624[#3624] and in the `elastic-agent status` command output. {agent-pull}3615[#3615] {agent-issue}3119[#3119]
-* Start and stop the monitoring server based on the monitoring configuration. {agent-pull}3584[#3584] {agent-issue}2734[#2734]
-* Copy files concurrently to reduce the time taken to install and upgrade {agent} on systems running SSDs. {agent-pull}3212[#3212]
-* Update `elastic-agent-libs` from version 0.7.2 to 0.7.3. {agent-pull}4000[#4000]
-
-[discrete]
-[[bug-fixes-8.12.0]]
-=== Bug fixes
-
-{fleet}::
-* Allow agent upgrades if the patch version is higher than {kib}. ({kibana-pull}173167[#173167])
-* Fix secrets with dot-separated variable names. ({kibana-pull}173115[#173115])
-* Fix endpoint privilege management endpoints returning errors. ({kibana-pull}171722[#171722])
-* Fix expiration time for immediate bulk upgrades being too short. ({kibana-pull}170879[#170879])
-* Fix incorrect overwrite of `logs-*` and `metrics-*` data views on every integration install. ({kibana-pull}170188[#170188])
-* Create intermediate objects when using dynamic mappings. ({kibana-pull}169981[#169981])
-
-{agent}::
-* Preserve build metadata in upgrade version strings. {agent-pull}3824[#3824] {agent-issue}3813[#3813]
-* Create a custom `MarshalYAML()` method to properly handle error fields in agent diagnostics. {agent-pull}3835[#3835] {agent-issue}2940[#2940]
-* Fix the {agent} ignoring the `agent.download.proxy_url` setting during a policy update.
{agent-pull}3803[#3803] {agent-issue}3560[#3560] -* Only try to download an upgrade locally if the `file://` prefix is specified for the source URI. {agent-pull}3682[#3682] -* Fix logging calls that have missing arguments. {agent-pull}3679[#3679] -* Update NodeJS version bundled with Heartbeat to v18.18.2. {agent-pull}3655[#3655] -* Use a third-party library to track progress during install and uninstall operations. {agent-pull}3623[#3623] {agent-issue}3607[#3607] -* Enable the {agent} container to run on Azure Container Instances. {agent-pull}3778[#3778] {agent-issue}3711[#3711] -* When a scheduled upgrade expires, set the upgrade state to failed. {agent-pull}3902[#3902] {agent-issue}3817[#3817] -* Update `elastic-agent-autodiscover` to version 0.6.6 and fix default metadata configuration. {agent-pull}3938[#3938] - -// end 8.12.0 relnotes - - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. 
-//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.13.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.13.asciidoc deleted file mode 100644 index 5060a98c3..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.13.asciidoc +++ /dev/null @@ -1,551 +0,0 @@ -// Use these for links to issue and pulls. 
-:kibana-issue: https://github.com/elastic/kibana/issues/
-:kibana-pull: https://github.com/elastic/kibana/pull/
-:beats-issue: https://github.com/elastic/beats/issues/
-:beats-pull: https://github.com/elastic/beats/pull/
-:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/
-:agent-issue: https://github.com/elastic/elastic-agent/issues/
-:agent-pull: https://github.com/elastic/elastic-agent/pull/
-:fleet-server-issue: https://github.com/elastic/fleet-server/issues/
-:fleet-server-pull: https://github.com/elastic/fleet-server/pull/
-
-[[release-notes]]
-= Release notes
-
-This section summarizes the changes in each release.
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-Also see:
-
-* {kibana-ref}/release-notes.html[{kib} release notes]
-* {beats-ref}/release-notes.html[{beats} release notes]
-
-// begin 8.13.4 relnotes
-
-[[release-notes-8.13.4]]
-== {fleet} and {agent} 8.13.4
-
-There are no bug fixes for {fleet} or {agent} in this release.
-
-[discrete]
-[[known-issues-8.13.4]]
-=== Known issues
-
-[[known-issue-174855-8.13.4]]
-.ECS fields are not included in the `index.query.default_field` in {agent} integrations
-[%collapsible]
-====
-*Details*
-
-Due to changes introduced to support the `ecs@mappings` component template (see link:https://github.com/elastic/kibana/pull/174855[elastic/kibana/pull/174855]), {fleet} no longer includes ECS fields in the integrations' `index.query.default_field`. Not including ECS fields in the `index.query.default_field` setting may affect integrations that rely on fieldless queries (when no field is specified for a query).
-
-If you run a query without specifying a field, the query will not return results for ECS fields.
-
-*Impact* +
-
-In version 8.14.0 and later, {fleet} sets `index.query.default_field` to `*`, so fieldless queries will work as expected. We recommend users of {fleet} upgrade to 8.14 when that release becomes available.
-
-If you are running 8.13.x and unable to upgrade to 8.14.0, you can follow the workarounds described in the link:https://support.elastic.co/knowledge/bbdbeb57[knowledge article].
-====
-
-// end 8.13.4 relnotes
-
-// begin 8.13.3 relnotes
-
-[[release-notes-8.13.3]]
-== {fleet} and {agent} 8.13.3
-
-Review important information about {fleet-server} and {agent} for the 8.13.3 release.
-
-[discrete]
-[[security-updates-8.13.3]]
-=== Security updates
-
-{agent}::
-* Update Go version to 1.21.9. {agent-pull}4508[#4508]
-
-[discrete]
-[[known-issues-8.13.3]]
-=== Known issues
-
-[[known-issue-174855-8.13.3]]
-.ECS fields are not included in the `index.query.default_field` in {agent} integrations
-[%collapsible]
-====
-*Details*
-
-Due to changes introduced to support the `ecs@mappings` component template (see link:https://github.com/elastic/kibana/pull/174855[elastic/kibana/pull/174855]), {fleet} no longer includes ECS fields in the integrations' `index.query.default_field`. Not including ECS fields in the `index.query.default_field` setting may affect integrations that rely on fieldless queries (when no field is specified for a query).
-
-If you run a query without specifying a field, the query will not return results for ECS fields.
-
-*Impact* +
-
-In version 8.14.0 and later, {fleet} sets `index.query.default_field` to `*`, so fieldless queries will work as expected. We recommend users of {fleet} upgrade to 8.14 when that release becomes available.
-
-If you are running 8.13.x and unable to upgrade to 8.14.0, you can follow the workarounds described in the link:https://support.elastic.co/knowledge/bbdbeb57[knowledge article].
-====
-
-[discrete]
-[[bug-fixes-8.13.3]]
-=== Bug fixes
-
-{fleet}::
-* Fix managed agent policy preconfiguration update. ({kibana-pull}181624[#181624])
-* Use lowercase dataset name in template and pipeline names. ({kibana-pull}180887[#180887])
-* Fix KQL/kuery for getting {fleet-server} agent count.
({kibana-pull}180650[#180650]) - -{agent}:: -* Fix unconditionally logging all 400 HTTP responses from {fleet-server} as API compatibility errors. {agent-pull}4481[#4481] {agent-issue}4477[#4477] -* Allow the `elastic-agent upgrade` command to succeed when the agent restarts before it has completely acknowledged the upgrade. {agent-pull}4519[#4519] {agent-issue}3890[#3890] -* Only allow installing Elastic Defend on Windows kernels newer than 6, that is, Windows 10 / -Server 2016 and newer. -// end 8.13.3 relnotes - -// begin 8.13.2 relnotes - -[[release-notes-8.13.2]] -== {fleet} and {agent} 8.13.2 - -Review important information about {fleet-server} and {agent} for the 8.13.2 release. - -[discrete] -[[known-issues-8.13.2]] -=== Known issues - -[[known-issue-241-8.13.2]] -.Beats MSI binaries do not support directories with a trailing slash -[%collapsible] -==== - -*Details* - -Due to changes introduced to support customizing an MSI install folder (see link:https://github.com/elastic/elastic-stack-installers/pull/209[#209]), Beats MSI binaries, which are currently in beta, will not properly handle directories that end in a slash. This defect may affect many deployments using the {beats} MSI binaries. - -*Impact* + - -This issue has been link:https://github.com/elastic/elastic-stack-installers/pull/264[resolved] in version 8.14.0 and later releases. We recommend that users of the {beats} MSI upgrade to 8.14 when that release becomes available. - -==== - -[[known-issue-174855-8.13.2]] -.ECS fields are not included in the `index.query.default_field` in {agent} integrations -[%collapsible] -==== -*Details* - -Due to changes introduced to support the ecs@mappings component template (see link:https://github.com/elastic/kibana/pull/174855[elastic/kibana/pull/174855]), {fleet} no longer includes ECS fields in the integrations' `index.query.default_field`. 
Not including ECS fields in the `index.query.default_field` setting may affect integrations that rely on fieldless queries (when no field is specified for a query). - -If you run a query without specifying a field, the query will not return results for ECS fields. - -*Impact* + - -In version 8.14.0 and later, {fleet} sets `index.query.default_field` to `*`, so fieldless queries will work as expected. We recommend users of {fleet} upgrade to 8.14 when that release becomes available. - -If you are running 8.13.x and unable to upgrade to 8.14.0, you can follow the workarounds described in this link:https://support.elastic.co/knowledge/bbdbeb57[knowledge article]. -==== - -[discrete] -[[bug-fixes-8.13.2]] -=== Bug fixes - -{fleet}:: -* Fixes having to wait ten minutes after agent upgrade if agent cleared watching state ({kibana-pull}179917[#179917]). -* Fixes using the latest available version in the K8s manifest instead of the latest compatible version ({kibana-pull}179662[#179662]). -* Fixes a step in the add agent instructions where a query to get all agents was unnecessary ({kibana-pull}179603[#179603]). - -// end 8.13.2 relnotes - -// begin 8.13.1 relnotes - -[[release-notes-8.13.1]] -== {fleet} and {agent} 8.13.1 - -Review important information about {fleet-server} and {agent} for the 8.13.1 release. - -[discrete] -[[known-issues-8.13.1]] -=== Known issues - -[[known-issue-241-8.13.1]] -.Beats MSI binaries do not support directories with a trailing slash -[%collapsible] -==== - -*Details* - -Due to changes introduced to support customizing an MSI install folder (see link:https://github.com/elastic/elastic-stack-installers/pull/209[#209]), Beats MSI binaries, which are currently in beta, will not properly handle directories that end in a slash. This defect may affect many deployments using the {beats} MSI binaries. - -*Impact* + - -This issue has been link:https://github.com/elastic/elastic-stack-installers/pull/264[resolved] in version 8.14.0 and later releases. 
We recommend that users of the {beats} MSI upgrade to 8.14 when that release becomes available. - -==== - -[[known-issue-174855-8.13.1]] -.ECS fields are not included in the `index.query.default_field` in {agent} integrations -[%collapsible] -==== -*Details* - -Due to changes introduced to support the ecs@mappings component template (see link:https://github.com/elastic/kibana/pull/174855[elastic/kibana/pull/174855]), {fleet} no longer includes ECS fields in the integrations' `index.query.default_field`. Not including ECS fields in the `index.query.default_field` setting may affect integrations that rely on fieldless queries (when no field is specified for a query). - -If you run a query without specifying a field, the query will not return results for ECS fields. - -*Impact* + - -In version 8.14.0 and later, {fleet} sets `index.query.default_field` to `*`, so fieldless queries will work as expected. We recommend users of {fleet} upgrade to 8.14 when that release becomes available. - -If you are running 8.13.x and unable to upgrade to 8.14.0, you can follow the workarounds described in this link:https://support.elastic.co/knowledge/bbdbeb57[knowledge article]. -==== - -[discrete] -[[enhancements-8.13.1]] -=== Enhancements - -{fleet}:: -* Remove `index.query.default_field` setting from managed component template settings. ({kibana-pull}178020[#178020]) - -[discrete] -[[bug-fixes-8.13.1]] -=== Bug fixes - -{fleet}:: -* Use index exists check in fleet-metrics-task. ({kibana-pull}179404[#179404]) - -// end 8.13.1 relnotes - -// begin 8.13.0 relnotes - -[[release-notes-8.13.0]] -== {fleet} and {agent} 8.13.0 - -Review important information about {fleet-server} and {agent} for the 8.13.0 release. - -[discrete] -[[security-updates-8.13.0]] -=== Security updates - -{agent}:: -* Update Go version to 1.21.8. {agent-pull}4221[#4221] - -[discrete] -[[breaking-changes-8.13.0]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. 
Before you upgrade, review the breaking changes, then mitigate the -impact to your application. - -// copied from Kibana release notes: https://github.com/elastic/kibana/pull/179216 -[discrete] -[[breaking-176879]] -.Removes conditional topics for Kafka outputs -[%collapsible] -==== -*Details* + -The Kafka output no longer supports conditional topics while the final syntax is evaluated ahead of Kafka output GA. For more information, refer to ({kibana-pull}176879[#176879]). -==== - -// copied from Kibana release notes: https://github.com/elastic/kibana/pull/179216 -[discrete] -[[breaking-176443]] -.Most Fleet-installed integrations are now read-only and labelled with a *Managed* tag in the Kibana UI -[%collapsible] -==== -*Details* + -Integration content installed by {fleet} is no longer editable. This content is tagged with *Managed* in the {kib} UI, and is Elastic managed. This content cannot be edited or deleted; however, managed visualizations, dashboards, and saved searches can be cloned. The clones can be customized. -When cloning a dashboard, the cloned panels become entirely independent copies that are unlinked from the original configurations and dependencies. -For managed content relating to specific visualization editors such as Lens, TSVB, and Maps, the clones retain the original reference configurations. The same applies to editing any saved searches in a managed visualization. -For more information, refer to ({kibana-pull}172393[#172393]). -==== - -// copied from Beats release notes: https://github.com/elastic/beats/pull/37795 -[discrete] -[[breaking-37795]] -.The behavior of `queue.mem.flush.min_events` has been simplified. -[%collapsible] -==== -*Details* + -The behavior of `queue.mem.flush.min_events` has been simplified. It now serves as a simple maximum on the size of all event batches. There are no longer performance implications in its relationship to `bulk_max_size`. - -For more information, refer to ({beats-pull}37795[#37795]). 
-==== - -[discrete] -[[notable-changes-8.13.0]] -=== Notable changes - -The following are notable, non-breaking updates to be aware of: - -* Changes to features that are in Technical Preview. -* Changes to log formats. -* Changes to non-public APIs. -* Behaviour changes that repair critical bugs. - -{fleet}:: -* Adds reference to `ecs@mappings` for each index template ({kibana-pull}174855[#174855]). - -[discrete] -[[known-issues-8.13.0]] -=== Known issues - -[[known-issue-241-8.13.0]] -.Beats MSI binaries do not support directories with a trailing slash -[%collapsible] -==== - -*Details* - -Due to changes introduced to support customizing an MSI install folder (see link:https://github.com/elastic/elastic-stack-installers/pull/209[#209]), Beats MSI binaries, which are currently in beta, will not properly handle directories that end in a slash. This defect may affect many deployments using the {beats} MSI binaries. - -*Impact* + - -This issue has been link:https://github.com/elastic/elastic-stack-installers/pull/264[resolved] in version 8.14.0 and later releases. We recommend that users of the {beats} MSI upgrade to 8.14 when that release becomes available. - -==== - -[[known-issue-174855-8.13.0]] -.ECS fields are not included in the `index.query.default_field` in {agent} integrations -[%collapsible] -==== -*Details* - -Due to changes introduced to support the ecs@mappings component template (see link:https://github.com/elastic/kibana/pull/174855[elastic/kibana/pull/174855]), {fleet} no longer includes ECS fields in the integrations' `index.query.default_field`. Not including ECS fields in the `index.query.default_field` setting may affect integrations that rely on fieldless queries (when no field is specified for a query). - -If you run a query without specifying a field, the query will not return results for ECS fields. - -*Impact* + - -In version 8.14.0 and later, {fleet} sets `index.query.default_field` to `*`, so fieldless queries will work as expected. 
We recommend users of {fleet} upgrade to 8.14 when that release becomes available. - -If you are running 8.13.x and unable to upgrade to 8.14.0, you can follow the workarounds described in this link:https://support.elastic.co/knowledge/bbdbeb57[knowledge article]. -==== - -[discrete] -[[new-features-8.13.0]] -=== New features - -The 8.13.0 release added the following new and notable features. - -{fleet}:: -* Adds support for the `subobjects` setting on the object type mapping ({kibana-pull}171826[#171826]). - -{fleet-server}:: -* Add support for storing output secrets in a new `secrets` block. {fleet-server-pull}3061[3061] {fleet-server-issue}2966[2966] -* Add support for the remote {es} output type in {fleet-server}. {fleet-server-pull}3051[3051] -* Report the health state of remote {es} outputs to the `logs-fleet_server.output_health-default` data stream. {fleet-server-pull}3127[3127] {fleet-server-issue}3116[3116] -* Add a `policy_debounce_time` configuration to add a forced delay to the policy index monitor when it successfully gathers new documents. {fleet-server-pull}3234[3234] - -{agent}:: -* Log a summary of each policy configuration change received from {fleet}. {agent-pull}4050[#4050] {agent-issue}3406[#3406] -* Add the full version number to the installation directory name. {agent-pull}4193[#4193] {agent-issue}2579[#2579] -* Ignore Kubernetes node and namespace update events that do not change pod metadata. {agent-pull}4226[#4226] {beats-issue}37338[#37338] -* Add the new ETW input mapping to the Filebeat specification so that it's available in {agent}. {agent-pull}4037[#4037] {beats-pull}36915[#36915] -* Add the new WebSocket input mapping to the Filebeat specification so that it's available in {agent}. {agent-pull}4242[#4242] {beats-pull}37774[#37774] -* Create the `.installed` marker earlier on in the install process, allowing the use of `elastic-agent uninstall` to clean up if the install fails. 
{agent-pull}4172[#4172] {agent-issue}4051[#4051] -* Add a postrm script to {agent} DEB and RPM packages. {agent-pull}4334[#4334] {agent-issue}3784[#3784] {agent-issue}4267[#4267] -* The Kubernetes secrets provider has been improved to update a Kubernetes secret when the secret value changes. {agent-pull}4371[#4371] {agent-issue}4168[#4168] -* Upgrade link:https://github.com/elastic/elastic-agent-system-metrics[elastic-agent-system-metrics] to version 0.9.2. {agent-pull}4383[#4383] -* Allow users to configure the number of output workers (for outputs that support workers) with either `worker` or `workers`. {beats-pull}38257[38257] - -[discrete] -[[enhancements-8.13.0]] -=== Enhancements - -{fleet}:: -* Adds `skipRateLimitCheck` flag to the Upgrade API and Bulk_upgrade API ({kibana-pull}176923[#176923]). -* Makes data stream rollover lazy ({kibana-pull}176565[#176565]). -* Stops creating the `{type}-{dataset}@custom` component template during package installation ({kibana-pull}175469[#175469]). -* Adds the `xpack.fleet.isAirGapped` flag ({kibana-pull}174214[#174214]). -* Add a warning when downloading the new version during an agent upgrade fails ({kibana-pull}173844[#173844]). -* Adds a message explaining why an agent is not upgradeable ({kibana-pull}173253[#173253]). -* Makes logs-* and metrics-* data views available across all spaces ({kibana-pull}172991[#172991]). -* Adds flag for pre-release to templates/inputs endpoint ({kibana-pull}174471[#174471]). -* Adds concurrency control to Fleet data stream API handler ({kibana-pull}174087[#174087]). -* Adds a handlebar helper to percent encode a given string ({kibana-pull}173119[#173119]). - -{fleet-server}:: -* Relax version checks in snapshot builds to support automated testing during minor release updates. {fleet-server-pull}3039[3039] {fleet-server-issue}2960[2960] -* Add top level keys for policy definition into {fleet-server} OpenAPI specification. 
{fleet-server-pull}3048[3048] -* Define the `action.data` and `ack` event schemas. {fleet-server-pull}3060[3060] -* Add additional transaction labels with {es} error details to requests. {fleet-server-pull}3124[3124] {fleet-server-issue}3098[3098] -* Calls with unauthorized API keys now return a `401` error. {fleet-server-pull}3135[3135] {fleet-server-issue}2861[2861] -* Use the Shutdown method with a timeout to gracefully halt HTTP servers. {fleet-server-pull}3165[3165] {fleet-server-issue}2902[2902] -* Replace the policy and action limiters with a unified checkin limiter. {fleet-server-pull}3255[3255] {fleet-server-issue}2254[3254] -* Change the response code for {es} call failures to `503`. {fleet-server-pull}3235[3235] {fleet-server-issue}2852[2852] - -{agent}:: -* Move the control socket path to always be inside of the top level of the {agent} installation directory. {agent-pull}3909[#3909] {agent-issue}3840[#3840] -* Add mTLS flags to {agent} install and enroll commands to enable use of certificates for communication in on-prem proxy setups. {agent-pull}4007[#4007] -* Improve error handling by adding error descriptors to the `inspect` command and config methods. {agent-pull}4074[#4074] -* Add an `agent.providers.initial_default` configuration flag to disable providers by default. {agent-pull}4166[#4166] {agent-issue}4145[#4145] -* Add environment variable bindings so that {fleet-server} and {agents} started in container mode can specify mTLS variables. {agent-pull}4261[#4261] - -[discrete] -[[bug-fixes-8.13.0]] -=== Bug fixes - -{fleet}:: -* Fixes a bug where secret values were not deleted on output type change ({kibana-pull}178964[#178964]). -* Fixes formatting for some integrations on the overview page ({kibana-pull}178937[#178937]). -* Fixes the name of {es} output workers configuration key ({kibana-pull}178329[#178329]). -* Fixes clean up of the `.fleet-policies` entries when deleting an agent policy. ({kibana-pull}178276[#178276]). 
-* Fixes only showing remote {es} output health status if later than last updated time ({kibana-pull}177685[#177685]). -* Fixes status summary when `showUpgradeable` is selected ({kibana-pull}177618[#177618]). -* Fixes issue of agent sometimes not getting inputs using a new agent policy with system integration ({kibana-pull}177594[#177594]). -* Fixes the activity flyout keeping the scroll state on rerender ({kibana-pull}177029[#177029]). -* Fixes inactive popover tour not resetting ({kibana-pull}176929[#176929]). -* Fixes `isPackageVersionOrLaterInstalled` to check for installed package ({kibana-pull}176532[#176532]). -* Removes pre-release exception for Synthetics package ({kibana-pull}176249[#176249]). -* Fixes output validation when creating package policy ({kibana-pull}175985[#175985]). -* Fixes allowing an agent to upgrade to a newer patch version than fleet-server ({kibana-pull}175775[#175775]). -* Fixes asset creation during custom integration installation ({kibana-pull}174869[#174869]). -* Fixes cascading agent policy's namespace to package policies ({kibana-pull}174776[#174776]). - -{fleet-server}:: -* Add missing `Elastic-Api-Version` and `X-Request-Id` headers to the {fleet-server} OpenAPI specification. {fleet-server-pull}3044[3044] -* Replace all secret references in input objects. {fleet-server-pull}3086[3086] {fleet-server-issue}3083[3083] -* Deprecate the redundant `fleet.agent.logging.level` attribute. {fleet-server-pull}3195[3195] {fleet-server-issue}3126[3126] -* Add validation to make sure that status and message are present in the checkin API request body. {fleet-server-pull}3233[3233] {fleet-server-issue}2420[2420] -* Fix a bug where agents were stuck in non-upgradeable state after an upgrade. {fleet-server-pull}3264[3264] {fleet-server-issue}3263[3263] -* Fix chunked file delivery so that files are delivered in order. 
{fleet-server-pull}3283[#3283] -* Fix a bug where the self monitor stops output health reporting if the output configuration is not acknowledged by agents. {fleet-server-pull}3335[#3335] {fleet-server-issue}3334[3334] - -{agent}:: -* Fix component control protocol to allow checkin to be chunked across multiple messages. Fixes errors related to the gRPC max message size being exceeded. {agent-pull}3884[#3884] {agent-issue}2460[#2460] -* Fix the creation of directories when unpacking tar.gz packages. {agent-pull}4100[#4100] {agent-issue}4093[#4093] -* Set a timeout of 1 minute for the FQDN lookup function. {agent-pull}4147[#4147] -* Increase timeout for file removal during {agent} uninstall. {agent-pull}4310[#4310] {agent-issue}4164[#4164] - -// end 8.13.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[notable-changes-8.13.0]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. 
-//* Behaviour changes that repair critical bugs. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.14.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.14.asciidoc deleted file mode 100644 index 06a5a66a7..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.14.asciidoc +++ /dev/null @@ -1,365 +0,0 @@ -// Use these for links to issue and pulls. 
-:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <> -* <> -* <> -* <> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.14.3 relnotes - -[[release-notes-8.14.3]] -== {fleet} and {agent} 8.14.3 - -Review important information about the {fleet} and {agent} 8.14.3 release. - -[discrete] -[[bug-fixes-8.14.3]] -=== Bug fixes - -{agent}:: -* Update the `elastic-agent install -f` command to use paths associated with the installed agent binary instead of paths associated with the agent running the install command. {agent-pull}4965[#4965] {agent-issue}4506[#4506] -* Allow the {agent} container to work with a read-only filesystem. {agent-pull}4995[#4995] -* Upgrade NodeJS to LTS version 18.20.3. {agent-pull}5010[#5010] -* Fix {agent} to account for the timeout period when waiting for {fleet-server} to start. {agent-pull}5034[#5034] {agent-issue}5033[#5033] - -// end 8.14.3 relnotes - - -// begin 8.14.2 relnotes - -[[release-notes-8.14.2]] -== {fleet} and {agent} 8.14.2 - -Review important information about the {fleet} and {agent} 8.14.2 release. - -[discrete] -[[notable-changes-8.14.2]] -=== Notable changes - -The following are notable, non-breaking updates to be aware of: - -{fleet}:: -* The `health_check` API endpoint has been updated to accept host IDs. 
An `id` string should be used in place of the `host` field, which is now deprecated. ({kibana-pull}185014[#185014]) - -[discrete] -[[new-features-8.14.2]] -=== New features - -The 8.14.2 release added the following new and notable features. - -{agent}:: -* The following processors are now available to users running {agent} in `otel` mode: -** link:https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor[Resource Detection Processor] {agent-pull}4811[#4811] -** link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.102.0/processor/k8sattributesprocessor[Kubernetes Attributes Processor] {agent-pull}4893[#4893] -** link:https://github.com/elastic/opentelemetry-collector-components/tree/processor/elasticinframetricsprocessor/v0.1.0/processor/elasticinframetricsprocessor[Elastic Infra Metrics Processor] {agent-pull}4968[#4968] -* An `otelcol` shortcut has been added for the `elastic-agent otel` command. {agent-pull}4816[#4816] {agent-issue}4661[#4661] -* An `agent.monitoring.metrics_period` setting has been added, enabling you to control the sampling period of {agent} monitoring metrics according to your needs. {agent-pull}4961[#4961] - -[discrete] -[[enhancements-8.14.2]] -=== Enhancements - -{agent}:: -* Add link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_kustomize/[kustomize] templates using default manifests for Kubernetes onboarding. {agent-pull}4754[#4754] {agent-issue}4657[#4657] -* Capture early errors on Windows in the Application eventlog. {agent-pull}4846[#4846] {agent-issue}4627[#4627] - -[discrete] -[[bug-fixes-8.14.2]] -=== Bug fixes - -{agent}:: -* Fix an issue where `log_writer.go` could panic when logging an empty line. {agent-pull}4910[#4910] {agent-issue}4907[#4907] -* Increase the removal timeout period to 60 seconds when uninstalling {agent}. 
{agent-pull}4921[#4921] {agent-issue}4164[#4164] - -// end 8.14.2 relnotes - -// begin 8.14.1 relnotes - -[[release-notes-8.14.1]] -== {fleet} and {agent} 8.14.1 - -Review important information about the {fleet} and {agent} 8.14.1 release. - -[discrete] -[[security-updates-8.14.1]] -=== Security updates - -{fleet-server}:: -* Update {fleet-server} Go version to 1.21.11. {fleet-server-pull}3607[#3607] - -[discrete] -[[new-features-8.14.1]] -=== New features - -The 8.14.1 release added the following new and notable features. - -{agent}:: -* For users of {agent} running as an OpenTelemetry Collector, the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter[`elasticsearchexporter`] and the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor[`filterprocessor`] are now available to configure in agent pipelines. {agent-pull}4707[#4707] {agent-pull}4708[#4708] - -[discrete] -[[enhancements-8.14.1]] -=== Enhancements - -{agent}:: -* The more reliable snapshot API is now used in place of the artifact API for snapshot downloads, for both {agent} upgrades to a snapshot and for artifact fetching used in testing. {agent-pull}4693[#4693] {agent-issue}4458[#4458] -* The {agent} diagnostics bundle now contains an `agent-info.yaml` file, which provides information on the running agent, including its agent ID, whether or not it's a snapshot, any headers, and if it's running in `unprivileged` mode. {agent-pull}4725[#4725] {agent-issue}4439[#4439] - -[discrete] -[[bug-fixes-8.14.1]] -=== Bug fixes - -{fleet}:: -* The "restart upgrade" button for single agent upgrades is now enabled as expected when the upgrade has been pending for over 2 hours. {kibana-pull}184586[#184586] - -{agent}:: -* Make {agent} delayed enrollment try indefinitely until the agent is able to enroll successfully into {fleet}. 
{agent-pull}4727[#4727] {agent-issue}4716[#4716] -* Fix delayed enrollment to work in unprivileged mode on Windows. {agent-pull}4779[#4779] {agent-issue}4678[#4678] - -// end 8.14.1 relnotes - -// begin 8.14.0 relnotes - -[[release-notes-8.14.0]] -== {fleet} and {agent} 8.14.0 - -Review important information about the {fleet} and {agent} 8.14.0 release. - -[discrete] -[[security-updates-8.14.0]] -=== Security updates - -{fleet-server}:: -* Update {fleet-server} Go version to 1.21.10. {fleet-server-pull}3528[#3528] - -{agent}:: -* Update {agent} Go version to 1.21.10. {agent-pull}4718[#4718] -* Update all `opentelemetry-collector-contrib` packages. {agent-pull}4572[#4572] - -[discrete] -[[new-features-8.14.0]] -=== New features - -The 8.14.0 release added the following new and notable features. - -{fleet}:: -* (Technical preview) Kibana administrators can now assign granular subfeature privileges for {fleet}, {agents}, agent policies, and settings to user roles. ({kibana-pull}179889[#179889]). -* The `index.mapping.total_fields.limit` field on integration index templates is now set to 1000 by default instead of 10000. If an integration data stream includes more than 500 fields, the limit will be increased to 10000. ({kibana-pull}178398[#178398]) -* `index_template.mappings.subobjects: false` is now the default for custom integration data streams to avoid subobject and scalar mapping conflicts. ({kibana-pull}178397[#178397]) -* Fleet no longer sets `index.query.default_field` on integration component templates, favoring the Elasticsearch default value of `index.query.default_field: *`. This allows queries without a field specified to be run against all integration fields by default. ({kibana-pull}178020[#178020]) -* Allow managed content installed by {fleet} to be deleted. Note: this content will be recreated when an integration is upgraded or reinstalled. 
({kibana-pull}179113[#179113]) - -{agent}:: -* The Kubernetes secrets provider has been improved to update a Kubernetes secret when the secret value changes. {agent-pull}4371[#4371] {agent-issue}4168[#4168] -* The OpenTelemetry link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor[filterprocessor] is now available to users running {agent} in `otel` mode. {agent-pull}4708[#4708] -* The OpenTelemetry link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter[elasticsearchexporter] is now available to users running {agent} in `otel` mode. {agent-pull}4707[#4707] - -[discrete] -[[enhancements-8.14.0]] -=== Enhancements - -{fleet}:: -* Add `time_series_dimension: true` to dynamic field mappings defined in integrations with `dimension: true`. ({kibana-pull}180023[#180023]) -* Allow additional CPU metrics to be collected when requesting diagnostics from an agent. ({kibana-pull}179819[#179819]) -* Add new "advanced settings" section to agent policy settings page sourced from configuration. ({kibana-pull}179795[#179795]) -* Add an Elastic Defend advanced policy option for pruning capability arrays. ({kibana-pull}179766[#179766]) -* The "agent activity" flyout now includes several new features: ({kibana-pull}179161[#179161]) -** A "review errors" button now appears above the agent listing table when new activity events are loaded that include errors. Clicking the button will open the activity flyout with these errors shown. -** Agent activity now supports pagination. Click the "show more" button at the bottom of the list to load additional activity events. -** Agent activity from a given date can now be loaded by clicking the "Go to date" button and selecting a date. -* Surface `unhealthy_reason` in agent metrics that indicates which component (input/output/other) is causing an agent to be considered unhealthy. 
({kibana-pull}178605[#178605]) -* Add a warning which is displayed when trying to upgrade agent to version > max {fleet-server} version. ({kibana-pull}178079[#178079]) - -{fleet-server}:: -* When running in `agent` mode, {fleet-server} will use the APMConfig settings of the expected input if it's set over the settings in `inputs[0].server.instrumentation`. This should make it easier for managed agents to inject APM configuration data. {fleet-server-pull}3277[#3277] {fleet-server-issue}2868[#2868] -* Allow specification in the {fleet-server} settings for whether or not a diagnostics bundle should contain additional CPU metrics. {fleet-server-pull}3333[#3333] {agent-issue}3491[#3491] -* Allow {fleet} to set the trace level for logging. {fleet-server-pull}3350[#3350] - -{agent}:: -* The CPU and memory usage of the internal monitoring {beats} is now included in the agent CPU and memory usage calculations in {fleet}. {agent-pull}4326[#4326] {agent-issue}4082[#4082] -* Add the optional CPU profile collection to the {fleet} diagnostics action handler. {agent-pull}4394[#4394] {agent-issue}3491[#3491] -* Enable `--unprivileged` on Mac OS, allowing {agent} to run as an unprivileged user. {agent-pull}4362[#4362] {agent-issue}3867[#3867] -* Make the `enroll` command more stable by handling temporary server errors. {agent-pull}4523[#4523] {agent-issue}4513[#4513] -* Reduce the overall download and on-disk size of {agent}. {agent-pull}4516[#4516] {agent-issue}3364[#3364] -** Linux: -43% reduction from 1800MB to 1018MB compared to 8.13.4 when extracted -** MacOS: -44% reduction from 1100MB to 619MB compared to 8.13.4 when extracted -** Windows: -43% reduction from 891MB to 504MB compared to 8.13.4 when extracted -* Remove `cloud-defend` from Linux `.tar.gz` archives; it now appears only in Docker images where it is required. {agent-pull}4584[#4584] -* Reduce the disk usage of {agent} self-monitoring logs shipped to {fleet} by 16% by dropping "Non-zero metrics..." 
logs automatically. {agent-pull}4633[#4633] {agent-issue}4252[#4252] - -[discrete] -[[bug-fixes-8.14.0]] -=== Bug fixes - -{fleet}:: -* Add validation to dataset field in input packages to disallow special characters. ({kibana-pull}182925[#182925]) -* Fix rollback input package install on failure. ({kibana-pull}182665[#182665]) -* Fix cloudflare template error. ({kibana-pull}182645[#182645]) -* Fix displaying `Config` and `API reference` tabs if they are not needed. ({kibana-pull}182518[#182518]) -* Allow upgrading an agent to a newer version when that agent is also a {fleet-server}. ({kibana-pull}181575[#181575]) -* Fix flattened inputs in the configuration tab. ({kibana-pull}181155[#181155]) -* Add callout when editing an output about plain text secrets being re-saved to secret storage. ({kibana-pull}180334[#180334]) -* Remove unnecessary field definitions for custom integrations. ({kibana-pull}178293[#178293]) -* Fix secrets UI inputs in forms when secrets storage is disabled server side. ({kibana-pull}178045[#178045]) -* Fix not being able to preview or download files with special characters. ({kibana-pull}176822[#176822]) -* Fix overly strict KQL validation being applied in search boxes. ({kibana-pull}176806[#176806]) - -{fleet-server}:: -* Respond with a `429` error, instead of a misleading `401 unauthorized response`, when an Elasticsearch API key authentication returns a `429` error. {fleet-server-pull}3278[#3278] -* Add an `unhealthy_reason` value (`input`/`output`/`other`) to {fleet-server} metrics published regularly in agent documents. {agent-pull}3338[#3338] -* Update endpoints to return a `400` status code instead of `500` for bad requests. {fleet-server-pull}3407[#3407] {fleet-server-issue}3110[#3110] - -{agent}:: -* Use `IgnoreCommas` in default configuration options to correctly parse functions used as part of variable substitutions. {agent-pull}4436[#4436] -* Stop logging all `400` errors as {fleet-server} API incompatibility errors.
{agent-pull}4481[#4481] {agent-issue}4477[#4477] -* Fix failing upgrade command when the gRPC server connection is interrupted. {agent-pull}4519[#4519] {agent-issue}3890[#3890] -* Fix an issue where the `kubernetes_leaderelection` provider would not try to reacquire the lease once lost. {agent-pull}4542[#4542] {agent-issue}4543[#4543] -* Always select the more recent watcher during the {agent} upgrade/downgrade process. {agent-pull}4491[#4491] {agent-issue}4072[#4072] -* Reduce the disk usage of {agent} self-monitoring metrics shipped to {fleet} by 13% by dropping the {beats} `state` metricset. {agent-pull}4579[#4579] {agent-issue}4153[#4153] - -// end 8.14.0 relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[notable-changes-8.13.0]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. -//* Behaviour changes that repair critical bugs. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.15.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.15.asciidoc deleted file mode 100644 index 94f3248f5..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.15.asciidoc +++ /dev/null @@ -1,527 +0,0 @@ -// Use these for links to issue and pulls. 
-:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.15.5>> -* <<release-notes-8.15.4>> -* <<release-notes-8.15.3>> -* <<release-notes-8.15.2>> -* <<release-notes-8.15.1>> -* <<release-notes-8.15.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.15.5 relnotes - -[[release-notes-8.15.5]] -== {fleet} and {agent} 8.15.5 - -[discrete] -[[enhancements-8.15.5]] -=== Enhancements - -{agent}:: -* Emit Pod data only for running Pods in the Kubernetes provider. {agent-pull}6011[#6011] {agent-issue}5835[#5835] {agent-issue}5991[#5991] - -// end 8.15.5 relnotes - -// begin 8.15.4 relnotes - -[[release-notes-8.15.4]] -== {fleet} and {agent} 8.15.4 - -[discrete] -[[bug-fixes-8.15.4]] -=== Bug fixes - -{agent}:: -* Improve the upgrade experience by assuming that {agent} is installed, allowing it to use the proper control socket path to communicate with the running {agent}. {agent-pull}5879[#5879] - -// end 8.15.4 relnotes - -// begin 8.15.3 relnotes - -[[release-notes-8.15.3]] -== {fleet} and {agent} 8.15.3 - -Review important information about the {fleet} and {agent} 8.15.3 release. - -[discrete] -[[known-issues-8.15.3]] -=== Known issues - -[[known-issue-issue-41355-8.15.3]] -.The memory usage of {beats} based integrations is not correctly limited by the number of events actively in the memory queue, but rather the maximum size of the memory queue regardless of usage.
-[%collapsible] -==== - -*Details* - -In 8.15, events in the memory queue are not freed when they are acknowledged (as intended), but only when they are overwritten by later events in the queue buffer. This means for example if a configuration has a queue size of 5000, but the input data is low-volume and only 100 events are active at once, then the queue will gradually store more events until reaching 5000 in memory at once, then start replacing those with new events. - -See {beats} issue link:https://github.com/elastic/beats/issues/41355[#41355]. - -*Impact* + - -Memory usage may be higher than in previous releases depending on the throughput of {agent}. A fix is planned for 8.15.4. - -- The worst memory increase is for low-throughput configs with large queues. -- For users whose queues were already sized proportionate to their throughput, memory use is increased but only marginally. -- Affected users can mitigate the higher memory usage by lowering their queue size. - -==== - -[discrete] -[[security-updates-8.15.3]] -=== Security updates - -{agent}:: -* Update Go version to 1.22.8. {agent-pull}5718[#5718] - -[discrete] -[[enhancements-8.15.3]] - -=== Enhancements - -{agent}:: -* Adjust the default memory requests and limits for {agent} when it runs in a Kubernetes cluster. {agent-pull}5614[#5614] {agent-issue}5613[#5613] {agent-issue}4729[#4729] -* Use a metadata watcher for ReplicaSets in the K8s provider to collect only the name and OwnerReferences, which are used to connect Pods to Deployments and DaemonSets. {agent-pull}5699[#5699] {agent-issue}5623[#5623] - -[discrete] -[[bug-fixes-8.15.3]] -=== Bug fixes - -{agent}:: -* Add `pprof` endpoints to the monitoring server if they're enabled in the {agent} configuration. {agent-pull}5562[#5562] -* Stop the `elastic-agent inspect` command from printing the output configuration twice. 
{agent-pull}5692[#5692] {agent-issue}4471[#4471] - -// end 8.15.3 relnotes - -// begin 8.15.2 relnotes - -[[release-notes-8.15.2]] -== {fleet} and {agent} 8.15.2 - -Review important information about the {fleet} and {agent} 8.15.2 release. - -[discrete] -[[known-issues-8.15.2]] -=== Known issues - -[[known-issue-issue-41355-8.15.2]] -.The memory usage of {beats} based integrations is not correctly limited by the number of events actively in the memory queue, but rather the maximum size of the memory queue regardless of usage. -[%collapsible] -==== - -*Details* - -In 8.15, events in the memory queue are not freed when they are acknowledged (as intended), but only when they are overwritten by later events in the queue buffer. This means for example if a configuration has a queue size of 5000, but the input data is low-volume and only 100 events are active at once, then the queue will gradually store more events until reaching 5000 in memory at once, then start replacing those with new events. - -See {beats} issue link:https://github.com/elastic/beats/issues/41355[#41355]. - -*Impact* + - -Memory usage may be higher than in previous releases depending on the throughput of {agent}. A fix is planned for 8.15.4. - -- The worst memory increase is for low-throughput configs with large queues. -- For users whose queues were already sized proportionate to their throughput, memory use is increased but only marginally. -- Affected users can mitigate the higher memory usage by lowering their queue size. - -==== - -[discrete] -[[enhancements-8.15.2]] -=== Enhancements - -{fleet}:: -* Bump the maximum supported package spec version to 3.2. ({kibana-pull}193574[#193574]) - -[discrete] -[[bug-fixes-8.15.2]] -=== Bug fixes - -{fleet}:: -* Prevent extra `agent_status` call with empty `policyId`, resulting in incorrect agent count on the `Edit Integration` page. ({kibana-pull}192549[#192549]) -* Set the correct title for the `Restart upgrade` agent modal.
({kibana-pull}192536[#192536]) - -{agent}:: -* Add the `health_check` extension to the `otel.yml` file bundled with the {agent} package, in order to prevent installation of the `open-telemetry/opentelemetry-collector` Helm chart from failing. {agent-pull}5369[#5369] {agent-issue}5092[#5092] -* Fix a bug that prevented the {beats} {es} output from recovering from an interrupted connection. {beats-pull}40769[#40796] {beats-issue}40705[#40705] -* Prevent the {agent} from crashing when self unenrolling due to too many authentication failures against {fleet-server}. {agent-pull}5438[#5438] {agent-issue}5434[#5434] -* Set the default log level when an {agent} policy is changed and a log level is not set in the policy. {agent-pull}5452[#5452] {agent-issue}5451[#5451] - -{fleet-server}:: -* Enable missing warnings for configuration options that have been deprecated throughout the 8.x lifecycle. {fleet-server-pull}3901[#3901] - -// end 8.15.2 relnotes - -// begin 8.15.1 relnotes - -[[release-notes-8.15.1]] -== {fleet} and {agent} 8.15.1 - -Review important information about the {fleet} and {agent} 8.15.1 release. - -[discrete] -[[bug-fixes-8.15.1]] -=== Bug fixes - -{fleet}:: -* Remove duplicative retries from client-side requests to APIs that depend on EPR ({kibana-pull}190722[#190722]). -* Add mappings for properties of nested objects that were previously omitted ({kibana-pull}191730[#191730]). - -{agent}:: -* Fix the Debian packaging to properly copy the `state.enc` and `state.yml` files to the new version of the {agent}. {agent-pull}5260[#5260] {agent-issue}5101[#5101] -* Switch from wall clock to monotonic clocks for component check-in calculation. {agent-pull}5284[#5284] {agent-issue}5277[#5277] -* For a failed installation, return a `nil` error instead of `syscall.Errno(0)`, which indicates a successful operation on Windows.
{agent-pull}5317[#5317] {agent-issue}4496[#4496] - -[discrete] -[[known-issues-8.15.1]] -=== Known issues - -[[known-issue-issue-40705]] -.{beats} based integrations stop publishing data after a network error unless restarted. -[%collapsible] -==== - -*Details* - -A bugfix merged for 8.15.1 can cause repeated `Get \"https://${ELASTICSEARCH_HOST}:443\": context canceled` errors -after a transient network error (for example DNS failure) that prevent {agent} integrations based on {beats} from publishing data. -{agent} must be restarted for publishing to continue. - -See {beats} issue link:https://github.com/elastic/beats/issues/40705[#40705] for details. - -*Impact* + - -Avoid upgrading to 8.15.1. - -==== - -[[known-issue-issue-191730]] -.Fleet configures additional properties in some nested objects in index templates of integrations. -[%collapsible] -==== - -*Details* - -A bugfix intended to be released in 8.16.0 was also included in 8.15.1. It fixes -an actual issue where some mappings were not being generated, but this also -includes additional mappings when installing some integrations in 8.15.1 that -were not included when using 8.15.0. - -*Impact* + - -Users may notice that some index templates include additional mappings for the -same package versions. - -==== - -[[known-issue-issue-41355-8.15.1]] -.The memory usage of {beats} based integrations is not correctly limited by the number of events actively in the memory queue, but rather the maximum size of the memory queue regardless of usage. -[%collapsible] -==== - -*Details* - -In 8.15, events in the memory queue are not freed when they are acknowledged (as intended), but only when they are overwritten by later events in the queue buffer. This means for example if a configuration has a queue size of 5000, but the input data is low-volume and only 100 events are active at once, then the queue will gradually store more events until reaching 5000 in memory at once, then start replacing those with new events. 
- -See {beats} issue link:https://github.com/elastic/beats/issues/41355[#41355]. - -*Impact* + - -Memory usage may be higher than in previous releases depending on the throughput of {agent}. A fix is planned for 8.15.4. - -- The worst memory increase is for low-throughput configs with large queues. -- For users whose queues were already sized proportionate to their throughput, memory use is increased but only marginally. -- Affected users can mitigate the higher memory usage by lowering their queue size. - -==== - -// end 8.15.1 relnotes - -// begin 8.15.0 relnotes - -[[release-notes-8.15.0]] -== {fleet} and {agent} 8.15.0 - -Review important information about the {fleet} and {agent} 8.15.0 release. - -[discrete] -[[security-updates-8.15.0]] -=== Security updates - -{fleet-server}:: -* Update {fleet-server} Go version to 1.22.5. {fleet-server-pull}3681[#3681] - - -[discrete] -[[known-issues-8.15.0]] -=== Known issues - -[[known-issue-issue-40608]] -.Azure EventHub input for {agent} fails to start on Windows -[%collapsible] -==== - -*Details* - -The Azure EventHub input fails to start on {agent} version 8.15 running on Windows. -The {agent} status will be reported as unhealthy. -See {beats} issue link:https://github.com/elastic/beats/issues/40608[#40608] for details. - -*Impact* + - -If you're using {agent} on Windows with any integration which makes use of the Azure EventHub input, we recommend not upgrading {agent} to version 8.15.0 and instead waiting for a later release. A fix is planned for version 8.15.1. - -==== - -[[known-issue-issue-41355]] -.The memory usage of {beats} based integrations is not correctly limited by the number of events actively in the memory queue, but rather the maximum size of the memory queue regardless of usage. -[%collapsible] -==== - -*Details* - -In 8.15, events in the memory queue are not freed when they are acknowledged (as intended), but only when they are overwritten by later events in the queue buffer. 
This means for example if a configuration has a queue size of 5000, but the input data is low-volume and only 100 events are active at once, then the queue will gradually store more events until reaching 5000 in memory at once, then start replacing those with new events. - -See {beats} issue link:https://github.com/elastic/beats/issues/41355[#41355]. - -*Impact* + - -Memory usage may be higher than in previous releases depending on the throughput of {agent}. A fix is planned for 8.15.4. - -- The worst memory increase is for low-throughput configs with large queues. -- For users whose queues were already sized proportionate to their throughput, memory use is increased but only marginally. -- Affected users can mitigate the higher memory usage by lowering their queue size. - -==== - -[discrete] -[[new-features-8.15.0]] -=== New features - -The 8.15.0 release added the following new and notable features. - -{fleet-server}:: -* When {fleet-server} runs in `elastic-agent` mode, it's now able to use the enrollment configuration options in `output.elasticsearch.bootstrap` from its policy, instead of overwriting the matching keys in `output.elasticsearch`. {fleet-server-pull}3506[#3506] {fleet-server-issue}3464[#3464] -* As part of making {fleet} space aware, {fleet-server} now adds a `namespaces` property to created `.fleet-*` documents. {fleet-server-pull}3535[#3535] {fleet-server-issue}3505[#3505] - -{agent}:: -* Enable {agent} to monitor and report usage metrics for {elastic-endpoint}. {agent-pull}4789[#4789] {agent-issue}4083[#4083] -* Add the AWS Asset Inventory input to Cloudbeat. {agent-pull}4804[#4804] -* Unhide the `--unprivileged` option for the `elastic-agent install` command and mark the usage of the flag as being in a `beta` technical preview state.
{agent-pull}4914[#4914] -* To ensure that {agent} starts correctly when run in a container, ensure that the `statePath` set by the container command generates a Unix socket path that is smaller than 108 characters. {agent-pull}4909[#4909] -* Enable {agent} to receive an event logger configuration through {fleet}. {agent-pull}4932[#4932] {agent-issue}4874[#4874] - -[discrete] -[[enhancements-8.15.0]] -=== Enhancements - -{fleet}:: -* Use API key for standalone agent onboarding. ({kibana-pull}187133[#187133]) -//* Add action for upgrading all agents on a policy. ({kibana-pull}186827[#186827]) -//* Change agent policies in edit package policy page. ({kibana-pull}186084[#186084]) -//* Create shared package policy. ({kibana-pull}185916[#185916]) -* Make {fleet} & Integrations layouts full width. ({kibana-pull}186056[#186056]) -* Add support for setting `add_fields` processors on all agents under an agent policy. ({kibana-pull}184693[#184693]) -//* Introduce `policy_ids` in package policy SO ({kibana-pull}184636[#184636]) -* Add force flag to delete `agent_policies` API. ({kibana-pull}184419[#184419]) -* Surface option to delete diagnostics files. ({kibana-pull}183690[#183690]) -* Add data tags to agent policy APIs. ({kibana-pull}183563[#183563]) -* Allow resetting the log level for agents >= 8.15.0. ({kibana-pull}183434[#183434]) -* Add support for mappings with `store: true`. ({kibana-pull}183390[#183390]) -* Add a warning when integrations that require root access are used with unprivileged agents. ({kibana-pull}183283[#183283]) -* Add unprivileged vs privileged agent count to Fleet UI. ({kibana-pull}183077[#183077]) -* Show all integration assets on detail page. ({kibana-pull}182180[#182180]) -* Add overrides to package policies update endpoint. ({kibana-pull}181453[#181453]) -* Enable `agent.monitoring.http` settings on agent policy UI. ({kibana-pull}180922[#180922]) -* Share Modal redesign, clean up, and tests.
({kibana-pull}180406[#180406]) -* Add a UI for custom integration creation with AI. ({kibana-pull}186304[#186304]) - -{fleet-server}:: -* {agent} diagnostic bundles now provide additional TLS information for {fleet-server}. {fleet-server-pull}3587[#3587] - -{agent}:: -//* Support setting {agent} log level from a {fleet} policy. {agent-pull}3090[#3090] {agent-issue}2851[#2851] -// On hold based on conversation with Shaunak -* Add commands to switch between {agent} `unprivileged` and `privileged` modes. {agent-pull}4621[#4621] {agent-issue}2790[#2790] -* Implement reading and applying TLS configuration for a {fleet} client using the CA, certificate, and key included in a {fleet} policy. {agent-pull}4770[#4770] {agent-issue}2247[#2247] {agent-issue}2248[#2248] -* Add {filebeat} benchmark input to {agent}. {agent-issue}4849[#4849] -* Add a `conn` param and a `conn-skip` flag to the {agent} diagnostics command. {agent-pull}4946[#4946] {agent-issue}4880[#4880] -* Add the ability for a variable to not be expanded and replaced in {agent} inputs. {agent-pull}5035[#5035] {agent-issue}2177[#2177] -* Inject the `proxy_url` value into {endpoint}'s {es} output configuration, and {endpoint} or {apm}'s {fleet} configuration if the attribute is missing and either the `HTTPS_PROXY` or `HTTP_PROXY` environment variable is set. {agent-pull}5044[#5044] {agent-issue}2602[#2602] - -[discrete] -[[bug-fixes-8.15.0]] -=== Bug fixes - -{fleet}:: -* Fix navigating back to Agent policy integration list. ({kibana-pull}189165[#189165]) -* Fix copying an agent policy not bumping the revision. ({kibana-pull}188935[#188935]) -* Force field `enabled=false` on inputs that have all their streams disabled. ({kibana-pull}188919[#188919]) -* Fill in empty values for `constant_keyword` fields from existing mappings. ({kibana-pull}188145[#188145]) -* Fix the enrollment token table showing an empty last page. ({kibana-pull}188049[#188049]) -* Separate `showInactive` from unenrolled status filter.
({kibana-pull}187960[#187960]) -* Fix a missing policy filter in the {fleet-server} check to enable secrets. ({kibana-pull}187935[#187935]) -* Allow preconfigured agent policy only with name and ID. ({kibana-pull}187542[#187542]) -* Show warning callout in configs tab when an error occurs. ({kibana-pull}187487[#187487]) -* Enable rollover in custom integrations install when getting `mapper_exception` error. ({kibana-pull}186991[#186991]) -* Add concurrency limit to EPM bulk install API and fix duplicate installations. ({kibana-pull}185900[#185900]) -* Include inactive agents in agent policy agent count. ({kibana-pull}184517[#184517]) -* Fix KQL filtering. ({kibana-pull}183757[#183757]) -* Prevent concurrent runs of Fleet setup. ({kibana-pull}183636[#183636]) - -{fleet-server}:: -* Support receiving the download rate sent by {agent} in string format. {fleet-server-pull}3677[#3677] {fleet-server-issue}3446[#3446] - -{agent}:: -* When {agent} starts, wait for Watcher to start before releasing resources associated with it. {agent-pull}4834[#4834] {agent-issue}2190[#2190] -* For the Kubernetes provider, fix the namespace filter on watchers started by a pod and service eventer. {agent-pull}4975[#4975] -* Adjust the {agent} `container` subcommand to write the `container-paths.yml` configuration into the `STATE_PATH` on startup. {agent-pull}4995[#4995] -* Apply setting capabilities to the correct binary. {agent-pull}5070[#5070] -* Reduce {agent} image size by setting capabilities in the builder Docker image instead of the final image. {agent-pull}5073[#5073] -* Fix an issue where installation can fail on Windows systems when the user doesn't have a home directory. {agent-pull}5118[#5118] {agent-issue}5019[#5019] - -// end 8.15.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info.
- -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[notable-changes-8.13.0]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. -//* Behaviour changes that repair critical bugs. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.16.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.16.asciidoc deleted file mode 100644 index 9ae0a60ac..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.16.asciidoc +++ /dev/null @@ -1,492 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.16.4>> -* <<release-notes-8.16.3>> -* <<release-notes-8.16.2>> -* <<release-notes-8.16.1>> -* <<release-notes-8.16.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.16.4 relnotes - -[[release-notes-8.16.4]] -== {fleet} and {agent} 8.16.4 - -Review important information about the {fleet} and {agent} 8.16.4 release. - -[discrete] -[[security-updates-8.16.4]] -=== Security updates - -{agent}:: -* Upgrade NodeJS to LTS v18.20.6. {agent-pull}6641[#6641] - -[discrete] -[[bug-fixes-8.16.4]] -=== Bug fixes - -{agent}:: -* Emit vars even if provider data is empty from the start. {agent-pull}6598[#6598] -* Redact secrets within complex nested paths.
{agent-pull}6710[#6710] -* Improve the CLI output message when `elastic-agent uninstall` runs after the agent has previously been unenrolled. {agent-pull}6735[#6735] - -// end 8.16.4 relnotes - -// begin 8.16.3 relnotes - -[[release-notes-8.16.3]] -== {fleet} and {agent} 8.16.3 - -Review important information about the {fleet} and {agent} 8.16.3 release. - -[discrete] -[[bug-fixes-8.16.3]] -=== Bug fixes - -{fleet}:: -* Fix an issue that prevented {agent} tags from being displayed when the agent list is filtered. ({kibana-pull}205163[#205163]) - -// end 8.16.3 relnotes - -// begin 8.16.2 relnotes - -[[release-notes-8.16.2]] -== {fleet} and {agent} 8.16.2 - -Review important information about the {fleet} and {agent} 8.16.2 release. - -[discrete] -[[known-issues-8.16.2]] -=== Known issues - -[discrete] -[[known-issue-6213-8-16-2]] -.An {agent} with the Defend integration may report an Orphaned status and will not be able to be issued an upgrade action through {fleet}. -[%collapsible] -==== -*Details* + -A known issue in the {agent} may prevent it from being targeted with an upgrade action for a future release. -This may occur if the Defend integration is used and the agent is stopped on a running instance for too long. -An agent may be stopped as part of an upgrade process. - -*Impact* + -A bug fix is present in the 8.16.3 and 8.17.1 releases of {fleet} that will prevent this from occurring.
- -If you have agents that are affected, the workaround is as follows: -[source,shell] ----- -# Get a token to issue an update_by_query request: -curl -XPOST --user elastic:${SUPERUSER_PASS} -H 'x-elastic-product-origin:fleet' -H'content-type:application/json' "https://${ELASTICSEARCH_HOST}/_security/service/elastic/fleet-server/credential/token/fix-unenrolled" - -# Issue an update_by_query request that targets affected agents: -curl -XPOST -H "Authorization: Bearer ${TOKEN}" -H 'x-elastic-product-origin:fleet' -H 'content-type:application/json' "https://${ELASTICSEARCH_HOST}/.fleet-agents/_update_by_query" -d '{"query": {"bool": {"must": [{ "exists": { "field": "unenrolled_at" } }],"must_not": [{ "term": { "active": "false" } }]}},"script": {"source": "ctx._source.unenrolled_at = null;","lang": "painless"}}' ----- -==== - -[discrete] -[[enhancements-8.16.2]] -=== Enhancements - -In this release we've introduced an image based on the hardened link:https://github.com/wolfi-dev/[Wolfi] image to provide additional security to our self-managed customers and to improve our supply chain security posture. Wolfi-based images require Docker version 20.10.10 or higher. - -{agent}:: -* Perform check for an external package manager only at startup. {agent-pull}6178[#6178] {agent-issue}5835[#5835] {agent-issue}5991[#5991] -* Remove some unnecessary copies when generating component configuration. {agent-pull}6184[#6184] {agent-issue}5835[#5835] {agent-issue}5991[#5991] -* Use xxHash instead of sha256 for hashing AST nodes when generating component configuration. {agent-pull}6192[#6192] {agent-issue}5835[#5835] {agent-issue}5991[#5991] -* Cache conditional sections when applying variables to component configuration. {agent-pull}6229[#6229] {agent-issue}5835[#5835] {agent-issue}5991[#5991] - -// end 8.16.2 relnotes - -// begin 8.16.1 relnotes - -[[release-notes-8.16.1]] -== {fleet} and {agent} 8.16.1 - -Review important information about the {fleet} and {agent} 8.16.1 release.
- -[discrete] -[[known-issues-8.16.1]] -=== Known issues - -[discrete] -[[known-issue-6213-8-16-1]] -.An {agent} with the Defend integration may report an Orphaned status and will not be able to be issued an upgrade action through {fleet}. -[%collapsible] -==== -*Details* + -A known issue in the {agent} may prevent it from being targeted with an upgrade action for a future release. -This may occur if the Defend integration is used and the agent is stopped on a running instance for too long. -An agent may be stopped as part of an upgrade process. - -*Impact* + -A bug fix is present in the 8.16.3 and 8.17.1 releases of {fleet} that will prevent this from occurring. - -If you have agents that are affected, the workaround is as follows: -[source,shell] ----- -# Get a token to issue an update_by_query request: -curl -XPOST --user elastic:${SUPERUSER_PASS} -H 'x-elastic-product-origin:fleet' -H'content-type:application/json' "https://${ELASTICSEARCH_HOST}/_security/service/elastic/fleet-server/credential/token/fix-unenrolled" - -# Issue an update_by_query request that targets affected agents: -curl -XPOST -H "Authorization: Bearer ${TOKEN}" -H 'x-elastic-product-origin:fleet' -H 'content-type:application/json' "https://${ELASTICSEARCH_HOST}/.fleet-agents/_update_by_query" -d '{"query": {"bool": {"must": [{ "exists": { "field": "unenrolled_at" } }],"must_not": [{ "term": { "active": "false" } }]}},"script": {"source": "ctx._source.unenrolled_at = null;","lang": "painless"}}' ----- -==== - -[discrete] -[[bug-fixes-8.16.1]] -=== Bug fixes - -{agent}:: -* During an {agent} upgrade, resolve paths to a proper value assuming that the upgrading agent is installed. {agent-pull}5879[#5879] {agent-issue}5872[#5872] -* Trim spaces in the user input accepted by the `cli.confirm` function. This allows users to enter spaces around the `yes/no` inputs in CLI confirmation prompts.
{agent-pull}5909[#5909]
-* Skip calling the `notifyFleetAuditUninstall` function to notify {fleet} on Windows during {agent} uninstall, to significantly reduce the likelihood of an exception being thrown. {agent-pull}6065[#6065] {agent-issue}5952[#5952]
-
-// end 8.16.1 relnotes
-
-// begin 8.16.0 relnotes
-
-[[release-notes-8.16.0]]
-== {fleet} and {agent} 8.16.0
-
-Review important information about the {fleet} and {agent} 8.16.0 release.
-
-[discrete]
-[[security-updates-8.16.0]]
-=== Security updates
-
-{fleet-server}::
-* Update {fleet-server} Go version to 1.23.1. {fleet-server-pull}3924[#3924]
-
-[discrete]
-[[breaking-changes-8.16.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-{agent}::
-* When using the System integration, uppercase characters in `host.hostname` are converted to lowercase in {agent} output. This can result in duplicated host entries appearing in {kib}. {beats-issue}39993[#39993]
-
-[discrete]
-[[known-issues-8.16.0]]
-=== Known issues
-
-[[known-issue-191661]]
-.{fleet} UI listing shows "No agent found"
-[%collapsible]
-====
-
-*Details*
-
-In the {fleet} UI in {kib}, the {agents} list might show "No agent found" with a toast message "Error fetching agents" or "Agent policy ... not found".
-
-This error can happen if the {agents} being searched and listed in the UI are using an {agent} policy which doesn't exist.
-
-*Impact* +
-
-As a workaround for the issue, you can upgrade your {stack} to version 8.16.1. The issue has been resolved by {kib} link:https://github.com/elastic/kibana/pull/199325[#199325].
-
-====
-
-[[known-issue-5952]]
-.{agent} throws an exception when uninstalling on Windows
-[%collapsible]
-====
-
-*Details*
-
-{fleet}-managed {agent} sometimes throws an exception when uninstalling on Microsoft Windows systems.
- -For example: - -[source,shell] ----- -C:\>"C:\Program Files\Elastic\Agent\elastic-agent.exe" uninstall -Elastic Agent will be uninstalled from your system at C:\Program Files\Elastic\Agent. Do you want to continue? [Y/n]:y -[====] Attempting to notify Fleet of uninstall [37s] unexpected fault address 0x18000473ef1 -fatal error: fault -[signal 0xc0000005 code=0x1 addr=0x18000473ef1 pc=0x9f3004] - -goroutine 1 gp=0xc00007c000 m=5 mp=0xc000116008 [running]: -runtime.throw({0x207a4ba?, 0xa2d986?}) - runtime/panic.go:1023 +0x65 fp=0xc000067588 sp=0xc000067558 pc=0xcf8c5 -runtime.sigpanic() - runtime/signal_windows.go:414 +0xd0 fp=0xc0000675d0 sp=0xc000067588 pc=0xe6a10 -(...) - github.com/elastic/elastic-agent/internal/pkg/agent/errors/generators.go:23 -github.com/elastic/elastic-agent/internal/pkg/fleetapi.(*AuditUnenrollCmd).Execute(0xc00073f998, {0x4, 0x23cf148}, 0x0) - github.com/elastic/elastic-agent/internal/pkg/fleetapi/audit_unenroll_cmd.go:74 +0x324 fp=0xc000067738 sp=0xc0000675d0 pc=0x9f3004 -runtime: g 1: unexpected return pc for github.com/elastic/elastic-agent/internal/pkg/fleetapi.(*AuditUnenrollCmd).Execute called from 0xc0006817a0 -stack: frame={sp:0xc0000675d0, fp:0xc000067738} stack=[0xc000064000,0xc000068000) -0x000000c0000674d0: 0x000000c000067508 0x00000000000d14af -0x000000c0000674e0: 0x00000000023c9c90 0x0000000000000001 -0x000000c0000674f0: 0x0000000000000001 0x000000c00006756b -(...) ----- - -For other examples, refer to {agent} link:https://github.com/elastic/elastic-agent/issues/5952#issuecomment-2475044465[issue #5952]. - -This problem occurs when {agent} notifies {fleet} to audit the uninstall process. - -*Impact* + - -As a workaround, we recommend trying again to uninstall the agent. - -==== - -[discrete] -[[known-issue-6213-8-16-0]] -.An {agent} with the Defend integration may report an Orphaned status and will not be able to be issued an upgrade action through {fleet}. 
-[%collapsible]
-====
-*Details* +
-A known issue in the {agent} may prevent it from being targeted with an upgrade action for a future release.
-This may occur if the Defend integration is used and the agent is stopped on a running instance for too long.
-An agent may be stopped as part of an upgrade process.
-
-*Impact* +
-A bug fix is present in the 8.16.3 and 8.17.1 releases of {fleet} that will prevent this from occurring.
-
-If you have agents that are affected, the workaround is as follows:
-[source,shell]
-----
-# Get a Token to issue an update_by_query request:
-curl -XPOST --user elastic:${SUPERUSER_PASS} -H 'x-elastic-product-origin:fleet' -H'content-type:application/json' "https://${ELASTICSEARCH_HOST}/_security/service/elastic/fleet-server/credential/token/fix-unenrolled"
-
-# Issue an update_by_query request that targets affected agents:
-curl -XPOST -H 'Authorization: Bearer ${TOKEN}' -H 'x-elastic-product-origin:fleet' -H 'content-type:application/json' "https://${ELASTICSEARCH_HOST}/.fleet-agents/_update_by_query" -d '{"query": {"bool": {"must": [{ "exists": { "field": "unenrolled_at" } }],"must_not": [{ "term": { "active": "false" } }]}},"script": {"source": "ctx._source.unenrolled_at = null;","lang": "painless"}}'
-----
-====
-
-[discrete]
-[[known-issue-206131]]
-.Integration output fails when using default output
-[%collapsible]
-====
-*Details* +
-Beginning in version 8.16.0, you can specify an output per integration policy. However, setting the integration output to the default creates an invalid output name.
-
-*Impact* +
-As a workaround, you can create a clone of the default output and then set it as the output for an integration policy. Refer to issue link:https://github.com/elastic/kibana/issues/206131[#206131] for details and status.
-
-====
-
-[discrete]
-[[new-features-8.16.0]]
-=== New features
-
-The 8.16.0 release added the following new and notable features.
-
-{fleet}::
-* Add support for content-only packages in integrations UI.
{kibana-pull}195831[#195831]
-* Add advanced agent monitoring options for HTTP endpoint and diagnostics. {kibana-pull}193361[#193361]
-* Add support for periodic unenrollment of inactive agents. Once an {agent} transitions to an `inactive` state and a configurable timeout has expired, the agent will be unenrolled. {kibana-pull}189861[#189861]
-* Add support for dynamic topics to the Kafka output. This allows the Kafka output to write to a topic which is dynamically set in an event field. {kibana-pull}192720[#192720]
-* Add support for GeoIP processor databases in Ingest Pipelines. {kibana-pull}190830[#190830]
-* Add support for reusable/shareable integration policies. This feature allows you to create integration policies that can be shared with multiple {agent} policies, thereby reducing the number of integration policies that you need to actively manage. {kibana-pull}187153[#187153]
-* Add support for integration-level outputs. This feature enables you to send integration data to a specific output, overwriting the output defined in the {agent} policy. {kibana-pull}189125[#189125]
-
-
-{fleet-server}::
-* Add the `/api/fleet/agents/:id/audit/unenroll` API that an {agent} or Endpoint process may use to report to {fleet} that an agent was uninstalled or unenrolled. {fleet-server-pull}3818[#3818] {agent-issue}484[#484]
-* Add a `secret_paths` attribute to the policy data sent to agents. This attribute is a list of keys that {fleet-server} has replaced with a reference to a secret value. {fleet-server-pull}3908[#3908] {fleet-server-issue}3657[#3657]
-
-{agent}::
-* Uninstalling a {fleet}-managed {agent} instance will now do a best-effort attempt to notify {fleet-server} of the agent removal so the agent status appears correctly in the {fleet} UI (related to {fleet-server-pull}3818[#3818] above). {agent-pull}5302[#5302] {agent-issue}484[#484]
-* Introduce a Helm Chart for deploying {agent} in Kubernetes.
{agent-pull}5331[#5331] {agent-issue}3847[#3847] -* Remove support for the experimental shippers feature. {agent-pull}5308[#5308] {agent-issue}4547[#4547] -* Add the GCP Asset Inventory input to Cloudbeat. {agent-pull}5422[#5422] -* Add support for passphrase protected mTLS client certificate key during install/enroll. {agent-pull}5494[#5494] {agent-issue}5489[#5489] -* Elastic Defend now accepts a passphrase protected client certificate key for mTLS. {agent-pull}5542[#5542] {agent-issue}5490[#5490] -* Add a Kustomize template to enable hints-based autodiscovery by default when deploying standalone {agent} in a Kubernetes cluster. This also removes `root` privileges from the init container. {agent-pull}5643[#5643] - -[discrete] -[[enhancements-8.16.0]] -=== Enhancements - -{fleet}:: -* Update maximum supported package version. {kibana-pull}196551[#196551] -* Add additional columns to {agent} Logs UI. {kibana-pull}192262[#192262] -* Show `+build` versions for {agent} upgrades. {kibana-pull}192171[#192171] -* Add format parameter to `agent_policies` APIs. {kibana-pull}191811[#191811] -* Add toggles for `agent.monitoring.http.enabled` and `agent.monitoring.http.buffer.enabled` to agent policy advanced settings. {kibana-pull}190984[#190984] -* Support integration policies without agent policy references (aka orphaned integration policies). {kibana-pull}190649[#190649] -* Allow `traces` to be added to the `monitoring_enabled` array in Agent policies. {kibana-pull}189908[#189908] -* Add setup technology selector to the Add Integration page. {kibana-pull}189612[#189612] - -{fleet-server}:: -* Alter the checkin API to remove attributes set by the audit or unenroll API (follow-up to {fleet-server-pull}3818[#3818] above). {fleet-server-pull}3827[#3827] {agent-issue}484[#484] -* Enable warnings for configuration options that have been deprecated throughout the 8.x lifecycle. 
{fleet-server-pull}3901[#3901] - -{agent}:: -* Re-enable support for Elastic Defend on Windows Server 2012 and 2012 R2. {agent-pull}5429[#5429] -* Include the correct Elastic License 2.0 file in build artifacts and packages. {agent-pull}5464[#5464] -* Add the `pprofextension` to the {agent} OTel collector. {agent-pull}5556[#5556] -* Update the base container image from Ubuntu 20.04 to Ubuntu 24.04. {agent-pull}5644[#5644] {agent-issue}5501[#5501] -* Redact values from the `elastic-agent inspect` command output for any keys in the `secret_paths` array. {agent-pull}5621[#5621] -* Redact secret paths in files written in {agent} diagnostics bundles. {agent-pull}5745[#5745] -* Update the versions of OpenTelemetry Collector components from v0.111.0/v1.17.0 to v0.112.0/v1.18.0. {agent-pull}5838[#5838] - -[discrete] -[[bug-fixes-8.16.0]] -=== Bug fixes - -{fleet}:: -* Revert "Fix client-side validation for agent policy timeout fields". {kibana-pull}194338[#194338] -* Add proxy arguments to install snippets. {kibana-pull}193922[#193922] -* Rollover if dimension mappings changed in dynamic templates. {kibana-pull}192098[#192098] - -{fleet-server}:: -* Fix the error handling when {fleet-server} attempts to authenticate with {es}. {fleet-server-pull}3935[#3935] {fleet-server-issue}3929[#3929] -* Fix an issue that caused {fleet-server} to report a `500` error on {agent} check-in because the agent has upgrade details but the referenced action ID is not found. {fleet-server-pull}3991[#3991] - -{agent}:: -* Fix {agent} crashing when self unenrolling due to too many authentication failures against {fleet-server}. {agent-pull}5438[#5438] {agent-issue}5434[#5434] -* Change the deprecated `maintainer` label in Dockerfile to use the `org.opencontainers.image.authors` label instead. {agent-pull}5527[#5527] - -// end 8.16.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. 
- -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[notable-changes-8.13.0]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. -//* Behaviour changes that repair critical bugs. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. 
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-//[discrete]
-//[[enhancements-8.7.x]]
-//=== Enhancements
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-//[discrete]
-//[[bug-fixes-8.7.x]]
-//=== Bug fixes
-
-//{fleet}::
-//* add info
-
-//{agent}::
-//* add info
-
-// end 8.7.x relnotes
diff --git a/docs/en/ingest-management/release-notes/release-notes-8.17.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.17.asciidoc
deleted file mode 100644
index a54177d8f..000000000
--- a/docs/en/ingest-management/release-notes/release-notes-8.17.asciidoc
+++ /dev/null
@@ -1,348 +0,0 @@
-// Use these for links to issues and pulls.
-:kibana-issue: https://github.com/elastic/kibana/issues/
-:kibana-pull: https://github.com/elastic/kibana/pull/
-:beats-issue: https://github.com/elastic/beats/issues/
-:beats-pull: https://github.com/elastic/beats/pull/
-:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/
-:agent-issue: https://github.com/elastic/elastic-agent/issues/
-:agent-pull: https://github.com/elastic/elastic-agent/pull/
-:fleet-server-issue: https://github.com/elastic/fleet-server/issues/
-:fleet-server-pull: https://github.com/elastic/fleet-server/pull/
-
-[[release-notes]]
-= Release notes
-
-This section summarizes the changes in each release.
-
-* <<release-notes-8.17.2>>
-* <<release-notes-8.17.1>>
-* <<release-notes-8.17.0>>
-
-Also see:
-
-* {kibana-ref}/release-notes.html[{kib} release notes]
-* {beats-ref}/release-notes.html[{beats} release notes]
-
-// begin 8.17.2 relnotes
-
-[[release-notes-8.17.2]]
-== {fleet} and {agent} 8.17.2
-
-Review important information about the {fleet} and {agent} 8.17.2 release.
-
-[discrete]
-[[security-updates-8.17.2]]
-=== Security updates
-
-{fleet-server}::
-* Upgrade `golang.org/x/net` to v0.34.0 and `golang.org/x/crypto` to v0.32.0. {fleet-server-pull}4405[#4405]
-
-
-[discrete]
-[[enhancements-8.17.2]]
-=== Enhancements
-
-{agent}::
-* Upgrade NodeJS for Heartbeat to LTS v18.20.6.
{agent-pull}6641[#6641]
-
-[discrete]
-[[bug-fixes-8.17.2]]
-=== Bug fixes
-
-{agent}::
-* Emit variables even if provider data is empty from the start. {agent-pull}6598[#6598]
-
-// end 8.17.2 relnotes
-
-// begin 8.17.1 relnotes
-
-[[release-notes-8.17.1]]
-== {fleet} and {agent} 8.17.1
-
-Review important information about the {fleet} and {agent} 8.17.1 release.
-
-[discrete]
-[[breaking-changes-8.17.1]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-{agent}::
-* {agent} Docker images for {ecloud} have been reverted from Ubuntu 24.04 back to Ubuntu 20.04. This ensures compatibility with {ece}, support for new Wolfi-based images, and GNU C Library (glibc) compatibility. {agent-pull}6393[#6393]
-
-[discrete]
-[[known-issues-8.17.1]]
-=== Known issues
-
-[[known-issue-1671]]
-.{kib} out of memory crashes on 1 GB {ecloud} {kib} instances using {elastic-sec} view
-[%collapsible]
-====
-
-*Details*
-
-{ecloud} deployments that use the smallest available {kib} instance size of 1 GB may crash due to out of memory errors when the Security UI is loaded.
-
-*Impact* +
-
-The root cause is inefficient memory allocation, and this is exacerbated when the prebuilt security rules package is installed on the initial load of the {elastic-sec} UI.
-
-As a workaround, you can upgrade your deployment to 8.17.1, in which this issue has been resolved by https://github.com/elastic/kibana/pull/208869[#208869] and https://github.com/elastic/kibana/pull/208475[#208475].
-
-====
-
-[discrete]
-[[new-features-8.17.1]]
-=== New features
-
-The 8.17.1 release added the following new and notable features.
-
-* Add the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/loadbalancingexporter[Otel loadbalancing exporter] to {agent}.
{agent-pull}6315[#6315]
-
-[discrete]
-[[enhancements-8.17.1]]
-=== Enhancements
-
-{agent}::
-
-* Respond with an error message if the user running the `enroll` command doesn't match the owner of the {agent} program files. {agent-pull}6144[#6144] {agent-issue}4889[#4889]
-* Implement the `MarshalJSON` method on the `component.Component` struct, so that when the component model is logged, the output does not contain any secrets that might be part of the component model. {agent-pull}6329[#6329] {agent-issue}5675[#5675]
-
-[discrete]
-[[bug-fixes-8.17.1]]
-=== Bug fixes
-
-{fleet-server}::
-* Do not set the `unenrolled_at` attribute when the audit/unenroll API is called. {fleet-server-pull}4221[#4221] {agent-issue}6213[#6213]
-* Remove the PGP endpoint auth requirement so that air-gapped {agents} can retrieve a PGP key from {fleet-server}. {fleet-server-pull}4256[#4256] {fleet-server-issue}4255[#4255]
-
-{agent}::
-* During uninstall, call the audit or unenroll API before components are stopped, if {agent} is running a {fleet-server} instance. {agent-pull}6085[#6085] {agent-issue}5752[#5752]
-* Update OTel components to v0.115.0. {agent-pull}6391[#6391]
-* Restore the `cloud-defend` binary which was accidentally removed in version 8.17.0. {agent-pull}6470[#6470] {agent-issue}6469[#6469]
-
-// end 8.17.1 relnotes
-
-// begin 8.17.0 relnotes
-
-[[release-notes-8.17.0]]
-== {fleet} and {agent} 8.17.0
-
-Review important information about the {fleet} and {agent} 8.17.0 release.
-
-[discrete]
-[[breaking-changes-8.17.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-{agent}::
-* {agent} is now compiled using Debian 11 and linked against glibc 2.31 instead of 2.19. Drops support for Debian 10.
{agent-pull}5847[#5847]
-
-[discrete]
-[[known-issues-8.17.0]]
-=== Known issues
-
-[discrete]
-[[known-issue-6213-8-17-0]]
-.An {agent} with the Defend integration may report an Orphaned status and will not be able to be issued an upgrade action through {fleet}.
-[%collapsible]
-====
-*Details* +
-A known issue in the {agent} may prevent it from being targeted with an upgrade action for a future release.
-This may occur if the Defend integration is used and the agent is stopped on a running instance for too long.
-An agent may be stopped as part of an upgrade process.
-
-*Impact* +
-A bug fix is present in the 8.17.1 release of {fleet} that will prevent this from occurring.
-
-If you have agents that are affected, the workaround is as follows:
-[source,shell]
-----
-# Get a Token to issue an update_by_query request:
-curl -XPOST --user elastic:${SUPERUSER_PASS} -H 'x-elastic-product-origin:fleet' -H'content-type:application/json' "https://${ELASTICSEARCH_HOST}/_security/service/elastic/fleet-server/credential/token/fix-unenrolled"
-
-# Issue an update_by_query request that targets affected agents:
-curl -XPOST -H 'Authorization: Bearer ${TOKEN}' -H 'x-elastic-product-origin:fleet' -H 'content-type:application/json' "https://${ELASTICSEARCH_HOST}/.fleet-agents/_update_by_query" -d '{"query": {"bool": {"must": [{ "exists": { "field": "unenrolled_at" } }],"must_not": [{ "term": { "active": "false" } }]}},"script": {"source": "ctx._source.unenrolled_at = null;","lang": "painless"}}'
-----
-====
-
-[discrete]
-[[new-features-8.17.0]]
-=== New features
-
-The 8.17.0 release added the following new and notable features.
-
-{fleet}::
-* Expose advanced file logging configuration in the UI. {kibana-pull}200274[#200274]
-
-{agent}::
-* Add support for running as a pre-existing user when installing in unprivileged mode, with technical preview support for pre-existing Windows Active Directory users.
{agent-pull}5988[#5988] {agent-issue}4585[#4585] -* Add a new custom link:https://github.com/elastic/integrations/tree/main/packages/filestream[Filestream logs integration]. This will enable migration from the custom log integration which is based on a log input that is planned for deprecation. https://github.com/elastic/integrations/pull/11332[#11332]. - -[discrete] -[[enhancements-8.17.0]] -=== Enhancements - -{fleet}:: -* Filter the integrations/packages list shown in the UI depending on the `policy_templates_behavior` field. {kibana-pull}200605[#200605] -* Add a `@custom` component template to integrations index template's `composed_of` array. {kibana-pull}192731[#192731] - -{fleet-server}:: -* Update `elastic-agent-libs` to version `0.14.0`. {fleet-server-pull}4042[#4042] - -{agent}:: -* Enable persistence in the configuration provided with our OTel Collector distribution. {agent-pull}5549[#5549] -* Restrict using the CLI to upgrade for {fleet}-managed {agents}. {agent-pull}5864[#5864] {agent-issue}4890[#4890] -* Add `os_family`, `os_platform` and `os_version` to the {agent} host provider, enabling differentiating Linux distributions. This is required to support Debian 12 and other distributions that are moving away from traditional log files in favour of Journald. {agent-pull}5941[#5941] https://github.com/elastic/integrations/issues/10797[10797] https://github.com/elastic/integrations/pull/11618[11618] -* Emit Pod data only for Pods that are running in the Kubernetes provider. {agent-pull}6011[#6011] {agent-issue}5835[#5835] {agent-issue}5991[#5991] -* Remove {endpoint-sec} from Linux container images. {endpoint-sec} cannot run in containers since it has a systemd dependency. {agent-pull}6016[#6016] {agent-issue}5495[#5495] -* Update OTel components to v0.114.0. {agent-pull}6113[#6113] -* Redact common secrets like API keys and passwords in the output from `elastic-agent inspect` command. 
{agent-pull}6224[#6224] - -[discrete] -[[bug-fixes-8.17.0]] -=== Bug fixes - -{fleet}:: -* Allow creation of an integration policy with no agent policies. {kibana-pull}201051[#201051] - -{fleet-server}:: -* Fix `server.address` field which was appearing empty in HTTP request logs. {fleet-server-pull}4142[#4142] -* Remove a race condition that may occur when remote ES outputs are used. {fleet-server-pull}4171[#4171] {fleet-server-pull}4170[#4170] - -{agent}:: -* Make redaction of common keys in diagnostics case insensitive. {agent-pull}6109[#6109] - - -// end 8.17.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[notable-changes-8.13.0]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. -//* Behaviour changes that repair critical bugs. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.18.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.18.asciidoc deleted file mode 100644 index 88a8b7b00..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.18.asciidoc +++ /dev/null @@ -1,229 +0,0 @@ -// Use these for links to issue and pulls. 
-
-:kibana-issue: https://github.com/elastic/kibana/issues/
-:kibana-pull: https://github.com/elastic/kibana/pull/
-:beats-issue: https://github.com/elastic/beats/issues/
-:beats-pull: https://github.com/elastic/beats/pull/
-:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/
-:agent-issue: https://github.com/elastic/elastic-agent/issues/
-:agent-pull: https://github.com/elastic/elastic-agent/pull/
-:fleet-server-issue: https://github.com/elastic/fleet-server/issues/
-:fleet-server-pull: https://github.com/elastic/fleet-server/pull/
-
-[[release-notes]]
-= Release notes
-
-This section summarizes the changes in each release.
-
-* <<release-notes-8.18.0>>
-
-Also see:
-
-* {kibana-ref}/release-notes.html[{kib} release notes]
-* {beats-ref}/release-notes.html[{beats} release notes]
-
-// begin 8.18.0 relnotes
-
-[[release-notes-8.18.0]]
-== {fleet} and {agent} 8.18.0
-
-Review important information about the {fleet} and {agent} 8.18.0 release.
-
-[discrete]
-[[new-features-8.18.0]]
-=== New features
-
-The 8.18.0 release added the following new and notable features.
-
-{fleet}::
-* Add next steps and actions to the agentless integrations flyout. ({kibana-pull}203824[#203824])
-* Add support for selecting columns when exporting agents to CSV. ({kibana-pull}203103[#203103])
-* Add status tracking for agentless integrations. ({kibana-pull}199567[#199567])
-
-{agent}::
-* Add the ability to run the Elastic Distribution of OTel Collector at the same time as other inputs. This feature is in technical preview. {agent-pull}5767[#5767] {agent-issue}5796[#5796]
-* Add a sample configuration to be used when deploying the link:https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-kube-stack[OpenTelemetry Kube Stack Helm Chart]. {agent-pull}5822[#5822]
-* Add the GeoIP OpenTelemetry processor to {agent}.
{agent-pull}6134[#6134]
-* Add the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/routingconnector[OpenTelemetry routing connector] to the Elastic Distribution of OTel Collector. {agent-pull}6210[#6210]
-* Add support for the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/loadbalancingexporter[OTel loadbalancing exporter] to {agent}. {agent-pull}6315[#6315]
-* Add a new Kubernetes deployment of the Elastic Distribution of OTel Collector named "gateway", to simplify the daemonset collector configuration and unify managed/self-managed scenarios. {agent-pull}6444[#6444]
-* Add the `components` command for {agent} in OTel mode, to list the supported components that the Elastic Distribution of OTel Collector includes. {agent-pull}6539[#6539]
-* Add the `receivercreator` and `k8sobserver` components to the Elastic Distribution of OTel Collector to help cover autodiscovery scenarios in Kubernetes. {agent-pull}6561[#6561]
-* Add the `kafkaexporter` and `kafkareceiver` components to the Elastic Distribution of OTel Collector to help prepare support for a Kafka output. {agent-pull}6593[#6593] {agent-issue}6562[#6562]
-* Add the `nopreceiver` to the Elastic Distribution of OTel Collector. {agent-pull}6603[#6603]
-* Change the default gRPC port to 0 when {agent} is run in a container. {agent-pull}6585[#6585]
-
-[discrete]
-[[enhancements-8.18.0]]
-=== Enhancements
-
-{fleet}::
-* Enable sub-feature privileges for {fleet}. ({kibana-pull}203182[#203182])
-
-{fleet-server}::
-* Validate user pbkdf2 settings for FIPS compliance. {fleet-server-pull}4542[#4542]
-* Update {fleet-server} Go version to 1.24.0. {fleet-server-pull}4543[#4543]
-
-
-{agent}::
-* Re-enable the OTel subcommand on Windows. {agent-pull}6068[#6068] {agent-issue}4976[#4976] {agent-issue}5710[#5710]
-* Update the {agent} to only run composable providers if they are referenced in the agent policy.
{agent-pull}6169[#6169] {agent-issue}3609[#3609] {agent-issue}4648[#4648]
-* Add a flag to skip {fleet} audit or unenroll when uninstalling {agent}. {agent-pull}6206[#6206] {agent-issue}5757[#5757]
-* Embed hints-based inputs in the {agent} Kubernetes container image. {agent-pull}6381[#6381] {agent-issue}5661[#5661]
-* Add an error to the Windows Application Event Log if the `install`, `uninstall`, or `enroll` commands fail. {agent-pull}6410[#6410] {agent-issue}6338[#6338]
-* Add a logger to print the status and code when an {agent} enrollment call to {fleet} fails. {agent-pull}6477[#6477] {agent-issue}6287[#6287]
-* Update {agent} Go version to 1.24.0. {agent-pull}6932[#6932]
-* Update OTel components to v0.120.x. {agent-pull}7443[#7443]
-
-[discrete]
-[[bug-fixes-8.18.0]]
-=== Bug fixes
-
-{fleet}::
-* Support `is_default` on integration deployment modes. ({kibana-pull}208284[#208284])
-* Fix a UI error caused when an agent becomes orphaned. ({kibana-pull}207746[#207746])
-* Restrict non-local {es} output types for agentless integrations and policies. ({kibana-pull}207296[#207296])
-* Fix API code to prevent bulk actions from timing out. ({kibana-pull}205735[#205735])
-* Fix generation of dynamic mapping for objects with specific subfields. ({kibana-pull}204104[#204104])
-* Fix logic to ensure that agents are only considered stuck in updating when an upgrade fails. ({kibana-pull}202126[#202126])
-
-{fleet-server}::
-* Return a 429 error when the {fleet-server} connection limit is reached instead of silently closing connections. {fleet-server-pull}4402[#4402]
-
-{agent}::
-* Prevent installation of {elastic-defend} in emulated environments, where it's not supported. {agent-pull}6095[#6095] {agent-issue}6082[#6082]
-* Re-enable notifying {fleet} when {agent} is uninstalled on Windows. {agent-pull}6257[#6257] {agent-issue}5952[#5952]
-* Log a warning on same version upgrade attempts and prevent the agent from reporting a failed upgrade state.
{agent-pull}6273[#6273] {agent-issue}6186[#6186] -* Add retries for requesting download verifiers when upgrading an agent. {agent-pull}6276[#6276] {agent-issue}5163[#5163] -* Replace `list` with `items` from `kibanaFetchToken` as `list` is deprecated in the API response and will be removed. {agent-pull}6437[#6437] {agent-issue}6023[#6023] -* Restore `cloud-defend` as an expected binary after it was accidentally removed from containers in 8.17.0 and later versions. {agent-pull}6470[#6470] {agent-issue}6469[#6469] -* Restore the `maintainer` label for container images rather than the default inherited from a base image. {agent-pull}6512[#6512] -* Fix enrollment for containerized {agent} when the enrollment token changes or the agent is unenrolled. {agent-pull}6568[#6568] {agent-issue}3586[#3586] -* Change how Windows process handles are obtained when assigning sub-processes to Job objects. {agent-pull}6825[#6825] - -// end 8.18.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server].
-//==== - -//[discrete] -//[[notable-changes-8.7.x]] -//=== Notable changes - -//The following are notable, non-breaking updates to be aware of: - -//* Changes to features that are in Technical Preview. -//* Changes to log formats. -//* Changes to non-public APIs. -//* Behavior changes that repair critical bugs. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release adds the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.3.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.3.asciidoc deleted file mode 100644 index 50abf8e4b..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.3.asciidoc +++ /dev/null @@ -1,300 +0,0 @@ -// Use these for links to issue and pulls.
-:kib-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.3.3>> -* <<release-notes-8.3.2>> -* <<release-notes-8.3.1>> -* <<release-notes-8.3.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.3.3 relnotes - -[[release-notes-8.3.3]] -== {fleet} and {agent} 8.3.3 - -Review important information about the {fleet} and {agent} 8.3.3 release. - -[discrete] -[[known-issues-8.3.3]] -=== Known issues - -[[known-issue-803]] -.Enroll command fails with "no such file or directory" error on DEB and RPM -[%collapsible] -==== - -*Details* - -An error in a post-install script in version 8.3.3 prevents DEB and RPM -distributions from enrolling. - -*Impact* + - -To resolve this problem, run the following command.
Replace the elastic-agent -data path with the correct path for your system: - -[source,sh] ----- -sudo unlink /usr/share/elastic-agent/bin/elastic-agent -sudo ln -s /var/lib/elastic-agent/data/elastic-agent-/elastic-agent /usr/share/elastic-agent/bin/elastic-agent ----- - -==== - -[discrete] -[[bug-fixes-8.3.3]] -=== Bug fixes - -{fleet}:: -* Pass `start_time` to actions when the maintenance window for rolling upgrades -is set to immediately {kibana-pull}136384[#136384] -* Allow agent bulk actions without specific license restrictions -{kibana-pull}136334[#136334] -* Add reinstall button to integration settings page {kibana-pull}135590[#135590] - -{agent}:: -* Change default value of VerificationMode from empty string to `full` -{agent-issue}184[#184] {agent-libs-pull}59[#59] -* Add filemod times to contents of diagnostics collect command {agent-pull}570[#570] -* Allow colon (`:`) characters in dynamic variables {agent-issue}624[#624] -{agent-pull}680[#680] -* Allow dash (`-`) characters in variable names in EQL expressions -{agent-issue}709[#709] {agent-pull}710[#710] -* Allow slash (`/`) characters in variable names in EQL and transpiler -{agent-issue}715[#715] {agent-pull}718[#718] -* Fix problem with {agent} incorrectly creating a {filebeat} `redis` input when -a policy contains a {packetbeat} `redis` input {agent-issue}427[#427] -{agent-pull}700[#700] - -// end 8.3.3 relnotes - -// begin 8.3.2 relnotes - -[[release-notes-8.3.2]] -== {fleet} and {agent} 8.3.2 - -Review important information about the {fleet} and {agent} 8.3.2 release. - -[discrete] -[[bug-fixes-8.3.2]] -=== Bug fixes - -{fleet}:: -* Keep all agents selected in query selection mode {kibana-pull}135530[#135530] - -{agent}:: -* No bug fixes for this release - -// end 8.3.2 relnotes - -// begin 8.3.1 relnotes - -[[release-notes-8.3.1]] -== {fleet} and {agent} 8.3.1 - -Review important information about the {fleet} and {agent} 8.3.1 release.
- -[discrete] -[[bug-fixes-8.3.1]] -=== Bug fixes - -{fleet}:: -* Fixes dropping select all {kibana-pull}135124[#135124] -* Improves bulk actions for more than 10k agents {kibana-pull}134565[#134565] - -{agent}:: -* No bug fixes for this release - -// end 8.3.1 relnotes - -// begin 8.3.0 relnotes - -[[release-notes-8.3.0]] -== {fleet} and {agent} 8.3.0 - -Review important information about the {fleet} and {agent} 8.3.0 release. - -[discrete] -[[new-features-8.3.0]] -=== New features - -The 8.3.0 release adds the following new and notable features. - -{fleet}:: -* Changes to agent upgrade modal to allow for rolling upgrades {kibana-pull}132421[#132421] - -{agent}:: -* Adds ability to <> during {agent} -installation and enrollment {agent-issue}149[#149] {agent-pull}336[#336] -* Adds support for Cloudbeat {agent-pull}179[#179] -* Adds support for Kubernetes cronjobs {agent-pull}279[#279] -* Increases the download artifact timeout to 10 minutes and adds log download -statistics {agent-pull}308[#308] -* Saves the agent configuration and the state encrypted on the disk -{agent-issue}535[#535] {agent-pull}398[#398] -* Supports scheduled actions and cancellation of pending actions -{agent-issue}393[#393] {agent-pull}419[#419] - -[discrete] -[[enhancements-8.3.0]] -=== Enhancements - -{fleet}:: -* Moves integration labels below title and normalizes styling {kibana-pull}134360[#134360] -* Adds First Integration Multi Page Steps Flow MVP (cloud only) {kibana-pull}132809[#132809] -* Optimizes package installation performance, phase 2 {kibana-pull}131627[#131627] -* Adds APM instrumentation for package install process {kibana-pull}131223[#131223] -* Adds "Label" column + filter to Agent list table {kibana-pull}131070[#131070] -* Adds `cache-control` headers to key `/epm` endpoints in Fleet API {kibana-pull}130921[#130921] -* Optimizes package installation performance, phase 1 {kibana-pull}130906[#130906] -* Adds experimental features (feature flags) config to {fleet} plugin 
{kibana-pull}130253[#130253] -* Adds redesigned {fleet-server} flyout {kibana-pull}127786[#127786] - -{agent}:: -* Bumps `node.js` version for Heartbeat/synthetics to 16.15.0 -{agent-pull}446[#446] -* Adds extra k8s resources in `clusterRole` to better filter objects in -dashboards and visualizations {agent-pull}424[#424] -* Collects Endpoint Security logs on the `elastic-agent diagnostics collect` -command {agent-issue}105[#105] {agent-pull}242[#242] - -[discrete] -[[bug-fixes-8.3.0]] -=== Bug fixes - -{fleet}:: -* Bulk reassign kuery optimize {kibana-pull}134673[#134673] -* Fixes flickering tabs layout in add agent flyout {kibana-pull}133769[#133769] -* Adds $ProgressPreference to Windows install command in flyout {kibana-pull}133756[#133756] -* Fixes sorting by size on data streams table {kibana-pull}132833[#132833] - -{agent}:: -* {agent} now logs stdout and stderr of applications run as processes {agent-issue}88[#88] - -// end 8.3.x relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.3.x relnotes - -//[[release-notes-8.3.x]] -//== {fleet} and {agent} 8.3.x - -//Review important information about the {fleet} and {agent} 8.3.x release. - -//[discrete] -//[[security-updates-8.3.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.3.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. 
-//==== - -//[discrete] -//[[known-issues-8.3.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.3.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.3.x, and will be removed in -//8.3.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.3.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.3.x]] -//=== New features - -//The 8.3.x release adds the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.3.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.3.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.3.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.4.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.4.asciidoc deleted file mode 100644 index b0bdd57b9..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.4.asciidoc +++ /dev/null @@ -1,350 +0,0 @@ -// Use these for links to issue and pulls. -:kib-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:beats-issue: https://github.com/elastic/beats/issues/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. 
- -* <<release-notes-8.4.3>> -* <<release-notes-8.4.2>> -* <<release-notes-8.4.1>> -* <<release-notes-8.4.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.4.3 relnotes - -[[release-notes-8.4.3]] -== {fleet} and {agent} 8.4.3 - -Review important information about the {fleet} and {agent} 8.4.3 release. - -[discrete] -[[bug-fixes-8.4.3]] -=== Bug fixes - -{fleet}:: -* {fleet-server} no longer silently discards user-specified values for cache and -server limits when defaults are loaded {fleet-server-issue}1841[#1841] -{fleet-server-pull}1912[#1912] - -{agent}:: -* Use at least warning level for all status logs {agent-pull}1218[#1218] -* Fix unintended reset of source URI when downloading components {agent-pull}1252[#1252] -* Create separate status reporter for local-only events so that degraded -{fleet} check-ins no longer affect health on successful {fleet} check-ins {agent-issue}1157[#1157] {agent-pull}1285[#1285] -* Add success log message after previous check-in failures {agent-pull}1327[#1327] -* Fix Unix socket connection errors when using diagnostics command {agent-pull}2201[#2201] - -[discrete] -[[enhancements-8.4.3]] -=== Enhancements - -{fleet}:: -* No enhancements for this release - -{agent}:: -* Improve logging during upgrades {agent-pull}1287[#1287] - -// end 8.4.3 relnotes - -// begin 8.4.2 relnotes - -[[release-notes-8.4.2]] -== {fleet} and {agent} 8.4.2 - -Review important information about the {fleet} and {agent} 8.4.2 release. - -[discrete] -[[bug-fixes-8.4.2]] -=== Bug fixes - -{fleet}:: -* Apply fixes for package policy upgrade API with multiple IDs {kibana-pull}140069[#140069] -* Improve performance for many integration policies {kibana-pull}139648[#139648] - -{agent}:: -* No bug fixes for this release - -// end 8.4.2 relnotes - -// begin 8.4.1 relnotes - -[[release-notes-8.4.1]] -== {fleet} and {agent} 8.4.1 - -Review important information about the {fleet} and {agent} 8.4.1 release.
- -[discrete] -[[known-issues-8.4.1]] -=== Known issues - -// tag::credentials-error[] -.Credentials error prevents {agent} from ingesting AWS logs -[%collapsible] -==== - -*Details* - -{agent}s configured to use the AWS integration may return an error similar to the following: - -[source,shell] ----- -sqs ReceiveMessage failed: operation error SQS: ReceiveMessage, https response -error StatusCode: 403, RequestID: cb57783a-505f-5099-9160-23b8eea8ddbb, -api error SignatureDoesNotMatch: Credential should be scoped to a valid region. ----- - -This error was introduced by a breaking change in the AWS library. - -*Impact* + - -{agent} is unable to ingest AWS logs and some metrics. To resolve this error: - -* If you are using the default domain `amazonaws.com`, upgrade the AWS -integration package to version 1.23.4 to apply the temporary fix added in -https://github.com/elastic/integrations/pull/4103[PR #4103]. If this does not -solve your problem, set the AWS region (either from an environment variable, -credentials or instance profile) where {agent} is running. -* Otherwise, wait to upgrade until the permanent fix added by -https://github.com/elastic/beats/pull/32921[PR #32921] is available in an -upcoming stack release. - -==== -// end::credentials-error[] - -[discrete] -[[bug-fixes-8.4.1]] -=== Bug fixes - -There are no bug fixes for {fleet} or {agent} in this release. - -// end 8.4.1 relnotes - -// begin 8.4.0 relnotes - -[[release-notes-8.4.0]] -== {fleet} and {agent} 8.4.0 - -Review important information about the {fleet} and {agent} 8.4.0 release. - -[discrete] -[[breaking-changes-8.4.0]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. Before you upgrade, review the breaking changes, then mitigate the -impact to your application. 
- -[discrete] -[[breaking-135669]] -.xpack.fleet.agents.* are uneditable in the UI when defined in kibana.yml -[%collapsible] -==== -*Details* + -When you configure `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.agents.elasticsearch.hosts` in kibana.yml, you are unable to update the fields in the Fleet UI. -For more information, refer to {kibana-pull}135669[#135669]. - -*Impact* + -To configure `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.agents.elasticsearch.hosts` in the Fleet UI, avoid configuring the settings in kibana.yml. -==== - -[discrete] -[[known-issues-8.4.0]] -=== Known issues - -include::release-notes-8.4.asciidoc[tag=credentials-error] - -[discrete] -[[new-features-8.4.0]] -=== New features - -The 8.4.0 release adds the following new and notable features. - -{fleet}:: -* Allow user to force install an unverified package {kibana-pull}136108[#136108] -* Add tag rename and delete feature {kibana-pull}135712[#135712] -* Add UI to bulk update agent tags {kibana-pull}135646[#135646] -* Add API to bulk update agent tags {kibana-pull}135520[#135520] -* Add UI to add and remove agent tags {kibana-pull}135320[#135320] -* Support sorting agent list {kibana-pull}135218[#135218] -* Promote Logstash output support to GA {kibana-pull}135028[#135028] -* Create new API to manage `download_source` setting for agent binary downloads -{kibana-pull}134889[#134889] - -{agent}:: -* Add `@metadata.input_id` and `@metadata.stream_id` when applying the inject -stream processor {agent-pull}527[#527] -* Improve {agent} status reporting: add a liveness endpoint, allow the -fleet-gateway component to report degraded state, and add the status update time -and messages to the status output {agent-issue}390[#390] {agent-pull}569[#569] -* Redact sensitive information collected by the -`elastic-agent diagnostics collect` command {agent-issue}241[#241] -{agent-pull}566[#566] - -[discrete] -[[enhancements-8.4.0]] -=== Enhancements - -{fleet}:: -* Remove Kubernetes package
granularity {kibana-pull}136622[#136622] -* Align {agent} manifests with the elastic-agent repo and add comments {kibana-pull}136394[#136394] -* Configure source URI in global settings and in agent policy settings {kibana-pull}136263[#136263] -* Add Kubernetes in platforms selection list and update managed agent installation steps {kibana-pull}136109[#136109] -* Enable user to write custom ingest pipelines for {fleet}-installed datastreams {kibana-pull}134578[#134578] -* Update manifests for agent on Kubernetes with new permissions {kibana-pull}133495[#133495] -* Add support for a textarea type in integrations {kibana-pull}133070[#133070] - -{agent}:: -There are no enhancements beyond the new features added in this release - -[discrete] -[[bug-fixes-8.4.0]] -=== Bug fixes - -{fleet}:: -* Use point in time for agent status query to provide accurate reporting -{kibana-pull}135816[#135816] - -{agent}:: -* Change default value of VerificationMode from empty string to `full` -{agent-issue}184[#184] {agent-libs-pull}59[#59] -* Add filemod times to contents of diagnostics collect command {agent-pull}570[#570] -* Allow colon (:) characters in dynamic variables {agent-issue}624[#624] {agent-pull}680[#680] -* Allow dash (`-`) characters in variable names in EQL expressions -{agent-issue}709[#709] {agent-pull}710[#710] -* Allow slash (`/`) characters in variable names in EQL and transpiler -{agent-issue}715[#715] {agent-pull}718[#718] -* Fix problem with {agent} incorrectly creating a {filebeat} `redis` input when -a policy contains a {packetbeat} `redis` input {agent-issue}427[#427] -{agent-pull}700[#700] -* Fix data duplication for standalone {agent} on Kubernetes using the default -manifest {beats-issue}31512[#31512] {agent-pull}742[#742] -* {agent} upgrades now clean up unneeded artifacts {agent-issue}693[#693] -{agent-issue}694[#694] {agent-pull}752[#752] -* Fix a panic caused by a race condition when installing the {agent} -{agent-issue}806[#806] {agent-pull}823[#823] - 
-// end 8.4.0 relnotes - - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.4.x relnotes - -//[[release-notes-8.4.x]] -//== {fleet} and {agent} 8.4.x - -//Review important information about the {fleet} and {agent} 8.4.x release. - -//[discrete] -//[[security-updates-8.4.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.4.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.4.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.4.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.4.x, and will be removed in -//8.4.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.4.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.4.x]] -//=== New features - -//The 8.4.x release adds the following new and notable features. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.4.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.4.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.4.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.5.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.5.asciidoc deleted file mode 100644 index 2dc76bc29..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.5.asciidoc +++ /dev/null @@ -1,342 +0,0 @@ -// Use these for links to issue and pulls. -:kib-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.5.2>> -* <<release-notes-8.5.1>> -* <<release-notes-8.5.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.5.2 relnotes - -[[release-notes-8.5.2]] -== {fleet} and {agent} 8.5.2 - -Review important information about the {fleet} and {agent} 8.5.2 release.
- -[discrete] -[[bug-fixes-8.5.2]] -=== Bug fixes - -{fleet}:: -* Fix known issue with adding {fleet-server} integration on Windows by always using POSIX paths for zip files {kib-issue}144880[#144880] {kibana-pull}144899[#144899] - -{fleet-server}:: -* Add `active: true` filter to enrollment key queries to allow {fleet-server} to handle cases where there are more than 10 inactive keys associated with a policy {fleet-server-issue}2029[#2029] {fleet-server-pull}2044[#2044] - -{agent}:: -* No bug fixes in this release - -// end 8.5.2 relnotes - -// begin 8.5.1 relnotes - -[[release-notes-8.5.1]] -== {fleet} and {agent} 8.5.1 - -Review important information about the {fleet} and {agent} 8.5.1 release. - -[discrete] -[[known-issues-8.5.1]] -=== Known issues - -[[known-issue-144880]] -.Unable to add {fleet-server} integration on Windows -[%collapsible] -==== - -*Details* - -We discovered a high-severity issue in version 8.5.1 that only affects Windows -users in self-managed environments. When you attempt to add a {fleet-server}, -{kib} is unable to add the {fleet-server} integration, and the {fleet-server} -policies are created without the necessary integration. For more information, -see {kib-issue}144880[issue #144880]. - -*Impact* + - -This issue will be resolved in version 8.5.2. We advise Windows users not to -upgrade to version 8.5.1. -==== - -[[known-issue-2383]] -.Offline {agent}s fail to unenroll after timeout has expired -[%collapsible] -==== - -*Details* - -A {fleet-server-issue}2091[known issue] in {fleet-server} 8.5.1 prevents -offline agents from being automatically unenrolled after the unenrollment -timeout expires. - -*Impact* + - -Offline agents will be displayed in the {fleet} Agents list until you explicitly -force <> them. You can do this through the -{fleet} UI or by using the API. - -To use the API: - -. Find the agent's ID. Go to *{fleet} > Agents* and click the agent to see its -details. Copy the Agent ID. - -.
In a terminal window, run: -+ -[source,shell] ----- -curl -u <username>:<password> --request POST \ - --url <kibana_url>/api/fleet/agents/<agent_id>/unenroll \ - --header 'content-type: application/json' \ - --header 'kbn-xsrf: xx' \ - --data-raw '{"force":true,"revoke":true}' \ - --compressed ----- -+ -Where `<agent_id>` is the ID you copied in the previous step. - -==== - -[discrete] -[[enhancements-8.5.1]] -=== Enhancements - -{agent}:: -* Improve shutdown logs {agent-pull}1618[#1618] - -[discrete] -[[bug-fixes-8.5.1]] -=== Bug fixes - -{fleet}:: -* Make asset tags space-aware {kibana-pull}144066[#144066] - -{fleet-server}:: -* No bug fixes for this release - -{agent}:: -* Fix: Windows Agent left unhealthy after removing Endpoint integration {agent-pull}1286[#1286] -* Fix how multiple {fleet-server} hosts are handled {agent-pull}1329[#1329] -* Beats will now attempt to recover if a lock file has not been removed {beats-pull}33169[#33169] - -// end 8.5.1 relnotes - -// begin 8.5.0 relnotes - -[[release-notes-8.5.0]] -== {fleet} and {agent} 8.5.0 - -Review important information about the {fleet} and {agent} 8.5.0 release. - -[discrete] -[[breaking-changes-8.5.0]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. Before you upgrade, review the breaking changes, then mitigate the -impact to your application. - -[discrete] -[[breaking-PR1709]] -.{fleet-server} and {agent} now reject certificates signed with SHA-1 -[%collapsible] -==== -*Details* + -With the upgrade to Go 1.18, {fleet-server} now rejects certificates signed with -SHA-1. For more information, refer to the Go 1.18 -https://tip.golang.org/doc/go1.18#sha1[release notes]. - -*Impact* + -Do not sign certificates with SHA-1. If you are using old certificates signed -with SHA-1, update them now. -==== - -[discrete] -[[new-features-8.5.0]] -=== New features - -The 8.5.0 release adds the following new and notable features.
- -{fleet}:: -* Add agent activity flyout {kibana-pull}140510[#140510] -* Add a new event toggle to capture terminal output in endpoint {kibana-pull}139421[#139421] -* Make batch actions asynchronous {kibana-pull}138870[#138870] -* Add ability to tag integration assets {kibana-pull}137184[#137184] -* Add support for input-only packages {kibana-pull}140035[#140035] - -{fleet-server}:: -* Log redacted config when config updates {fleet-server-issue}1626[#1626] {fleet-server-pull}1671[#1671] - -{agent}:: -* Add `lumberjack` input type to the {filebeat} spec {agent-pull}959[#959] -* Add support for hints-based autodiscovery in Kubernetes provider {agent-pull}698[#698] -* Improve logging during upgrades {agent-pull}1287[#1287] - -[discrete] -[[enhancements-8.5.0]] -=== Enhancements - -{fleet}:: -* Add toggle for experimental synthetic `_source` support in {fleet} data streams {kibana-pull}140132[#140132] -* Enhance the package policy API to create or update a package policy API with a simplified way to define inputs {kibana-pull}139420[#139420] -* Support new subscription and license fields {kibana-pull}137799[#137799] - -{agent}:: -* Improve logging of {fleet} check-in errors and only report the local state as degraded after two consecutive failed check-ins {agent-issue}1154[#1154] {agent-pull}1477[#1477] - -[discrete] -[[bug-fixes-8.5.0]] -=== Bug fixes - -{fleet}:: -* Refresh search results when clearing category filter {kibana-pull}142853[#142853] -* Respect `default_field: false` when generating index settings {kibana-pull}142277[#142277] -* Fix repeated debug logs when bundled package directory does not exist {kibana-pull}141660[#141660] - -{fleet-server}:: -* Fix a race condition between the unenroller goroutine and the main -goroutine for the coordinator monitor {fleet-server-issue}1738[#1738] -* Remove events from agent check-in body {fleet-server-issue}1774[#1774] -* Improve authc debug logging {fleet-server-pull}1870[#1870] -* Add error detail to catch-all 
HTTP error response {fleet-server-pull}1854[#1854] -* Fix issue where errors were ignored when written to {es} {fleet-server-pull}1896[#1896] -* Update `apikey.cache_hit` log field name to match convention {fleet-server-pull}1900[#1900] -* Custom server limits are no longer ignored when default limits are loaded {fleet-server-issue}1841[#1841] {fleet-server-pull}1912[#1912] -* Use separate rate limiters for internal and external API listeners to prevent {fleet-server} from shutting down under load {fleet-server-issue}1859[#1859] {fleet-server-pull}1904[#1904] -* Fix `fleet.migration.total` log key overlap {fleet-server-pull}1951[#1951] - -{agent}:: -* Fix a panic caused by a race condition when installing the {agent} {agent-issue}806[#806] {agent-pull}823[#823] -* Use the {agent} configuration directory as the root of the `inputs.d` folder {agent-issue}663[#663] {agent-pull}840[#840] -* Fix unintended reset of source URI when downloading components {agent-pull}1252[#1252] -* Create separate status reporter for local-only events so that degraded {fleet} check-ins no longer affect health of successful {fleet} check-ins {agent-issue}1157[#1157] {agent-pull}1285[#1285] -* Add success log message after previous check-in failures {agent-pull}1327[#1327] -* Fix `docker` provider `add_fields` processors {agent-pull}1420[#1420] -* Fix admin permission check on localized Windows {agent-pull}1552[#1552] - -// end 8.5.0 relnotes - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.5.x relnotes - -//[[release-notes-8.5.x]] -//== {fleet} and {agent} 8.5.x - -//Review important information about the {fleet} and {agent} 8.5.x release.
- -//[discrete] -//[[security-updates-8.5.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.5.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.5.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.5.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.5.x, and will be removed in -//8.5.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.5.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.5.x]] -//=== New features - -//The 8.5.x release adds the following new and notable features. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.5.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.5.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.5.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.6.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.6.asciidoc deleted file mode 100644 index a6b7a2866..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.6.asciidoc +++ /dev/null @@ -1,682 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <> -* <> -* <> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.6.2 relnotes - -[[release-notes-8.6.2]] -== {fleet} and {agent} 8.6.2 - -Review important information about the {fleet} and {agent} 8.6.2 release. - -[discrete] -[[known-issues-8.6.2]] -=== Known issues - -[discrete] -[[known-issue-issue-2066-8-6-2]] -.Osquery live query results can take up to five minutes to show up in {kib}. -[%collapsible] -==== -*Details* + -A known issue in {agent} may prevent live query results from being available -in the {kib} UI even though the results have been successfully sent to {es}. 
-For more information, refer to {agent-issue}2066[#2066].
-
-*Impact* +
-Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.
-====
-
-[[known-issue-2170-8-6-2]]
-.Adding a {fleet-server} integration to an agent results in panic if the agent was not bootstrapped with a {fleet-server}.
-[%collapsible]
-====
-
-*Details*
-
-A panic occurs because the {agent} does not have a `fleet.server` in the `fleet.enc`
-configuration file. When this happens, the agent fails with a message like:
-
-[source,shell]
-----
-panic: runtime error: invalid memory address or nil pointer dereference
-[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x557b8eeafc1d]
-goroutine 86 [running]:
-github.com/elastic/elastic-agent/internal/pkg/agent/application.FleetServerComponentModifier.func1({0xc000652f00, 0xa, 0x10}, 0x557b8fa8eb92?)
-...
-----
-
-For more information, refer to {agent-issue}2170[#2170].
-
-*Impact* +
-
-To work around this problem, uninstall the {agent} and install it again with
-{fleet-server} enabled during the bootstrap process.
-====
-
-[[known-issue-issue-2103-8.6.2]]
-.Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.
-[%collapsible]
-====
-*Details* +
-This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command.
-This could be either a Terminal or any custom package that a user has built to distribute {agent}.
-
-For more information, refer to {agent-issue}2103[#2103].
-
-*Impact* +
-{agent} will fail to install and produce the message: "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted".
-Ensure that the application used to install {agent} (for example, the Terminal or custom package) has Full Disk Access before running `sudo ./elastic-agent install`.
-====
-
-[[known-issue-issue-2343-8.6.2]]
-.{agent} upgrades scheduled for a future time do not run.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running.
-For more information, refer to {agent-issue}2343[#2343].
-
-*Impact* +
-{kib} may show an agent as being stuck with the `Updating` status.
-If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.
-====
-
-[[known-issue-issue-2303-8.6.2]]
-.{fleet} ignores custom `server.*` attributes provided through integration settings.
-[%collapsible]
-====
-*Details* +
-{fleet} will ignore any custom `server.*` attributes provided through the custom configurations YAML block of the integration.
-For more information, refer to {fleet-server-issue}2303[#2303].
-
-*Impact* +
-Custom YAML settings are silently ignored by {fleet}.
-Settings with input blocks, such as Max agents, are still effective.
-====
-
-[[known-issue-issue-2249-8.6.2]]
-.{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-[%collapsible]
-====
-*Details* +
-{agent} will not load `fleet.yml` information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-This is due to the unencrypted config stored in `fleet.yml` not being migrated correctly to the encrypted config store `fleet.enc`.
-For a managed agent, the symptom manifests as the inability to connect to {fleet} after upgrade with the following log:
-
-[source,shell]
-----
-Error: fleet configuration is invalid: empty access token
-...
-----
-For more information, refer to {agent-issue}2249[#2249].
-
-*Impact* +
-{agent} loses the ability to connect back to {fleet} after upgrade.
-A fix for this issue is available in version 8.7.0 and higher. For version 8.6, a re-enroll is necessary for the agent to connect back to {fleet}.
-====
-
-[discrete]
-[[enhancements-8.6.2]]
-=== Enhancements
-
-{fleet}::
-* Adds the ability to run agent policy schema in batches during {fleet} setup.
-Also adds `xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize` config
-{kibana-pull}150688[#150688]
-
-[discrete]
-[[bug-fixes-8.6.2]]
-=== Bug fixes
-
-{fleet}::
-* Fix max 20 installed integrations returned from Fleet API {kibana-pull}150780[#150780]
-* Fix updates available when beta integrations are off {kibana-pull}149515[#149515] {kibana-pull}149486[#149486]
-
-{fleet-server}::
-* Prevent {fleet-server} from crashing by allowing the Warn log level to be
-specified as "warning" or "warn" {fleet-server-issue}2328[#2328] {fleet-server-pull}2331[#2331]
-
-{agent}::
-* Ignore {fleet} connectivity state when considering whether an upgrade should be rolled back. Avoids unnecessary upgrade failures due to transient network errors {agent-pull}2239[#2239]
-* Preserve persistent process state between upgrades. The {filebeat} registry is now correctly preserved during {agent} upgrades. {agent-issue}2136[#2136] {agent-pull}2207[#2207]
-* Enable nodejs engine validation when bundling synthetics
-{agent-issue}2249[#2249] {agent-pull}2256[#2256] {agent-pull}2225[#2225]
-* Guarantee that services are stopped before they are started. Fixes occasional upgrade failures when Elastic Defend is installed {agent-pull}2226[#2226]
-* Fix an issue where inputs in {beats} started by {agent} can be incorrectly disabled. This primarily occurs when changing the log level. {agent-issue}2232[#2232] {beats-pull}34504[#34504]
-
-// end 8.6.2 relnotes
-
-[[release-notes-8.6.1]]
-== {fleet} and {agent} 8.6.1
-
-Review important information about the {fleet} and {agent} 8.6.1 release.
- -[discrete] -[[known-issues-8.6.1]] -=== Known issues - -[discrete] -[[known-issue-issue-2066-8-6-1]] -.Osquery live query results can take up to five minutes to show up in {kib}. -[%collapsible] -==== -*Details* + -A known issue in {agent} may prevent live query results from being available -in the {kib} UI even though the results have been successfully sent to {es}. -For more information, refer to {agent-issue}2066[#2066]. - -*Impact* + -Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes. -==== - -[[known-issue-2170]] -.Adding a {fleet-server} integration to an agent results in panic if the agent was not bootstrapped with a {fleet-server}. -[%collapsible] -==== - -*Details* - -A panic occurs because the {agent} does not have a `fleet.server` in the `fleet.enc` -configuration file. When this happens, the agent fails with a message like: - -[source,shell] ----- -panic: runtime error: invalid memory address or nil pointer dereference -[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x557b8eeafc1d] -goroutine 86 [running]: -github.com/elastic/elastic-agent/internal/pkg/agent/application.FleetServerComponentModifier.func1({0xc000652f00, 0xa, 0x10}, 0x557b8fa8eb92?) -... ----- - -For more information, refer to {agent-issue}2170[#2170]. - -*Impact* + - -To work around this problem, uninstall the {agent} and install it again with -{fleet-server} enabled during the bootstrap process. -==== - -[[known-issue-2232-8-6-1]] -.Changing the {agent} log level can incorrectly disable inputs in supervised {beats}. -[%collapsible] -==== - -*Details* - -Data collection may be disabled when the {agent} log level is changed. Avoid changing the {agent} log level. - -Upgrade to 8.6.2 to fix the problem. For more information, refer to {agent-issue}2232[#2232]. -==== - -[[known-issue-2328-8-6-1]] -.{fleet-server} will crash when configured to use the Warning log level. 
-[%collapsible] -==== - -*Details* - -{fleet-server} will crash when configured to use the Warning log level. Do not use the Warning log level. -Affected {fleet-server} instances must be reinstalled to fix the problem. - -Upgrade to 8.6.2 to fix the problem. For more information, refer to {fleet-server-issue}2328[#2328]. -==== - -[[known-issue-issue-unsigned-8.6.1]] -.The initial release of {agent} 8.6.1 for MacOS contained an unsigned elastic-agent executable. On MacOS Ventura, upgrading from 8.6.1 to any other version will fail and can disable Elastic Defend protections. -[%collapsible] -==== -*Details* + -The initial release of {agent} version 8.6.1 for MacOS contained an unsigned elastic-agent executable and a correctly signed endpoint-security -executable. The endpoint-security executable implements the endpoint protection functionality of the Elastic Defend integration. - -New functionality in MacOS Gatekeeper in MacOS Ventura prevents the unsigned elastic-agent executable from modifying the installation of the signed endpoint-security executable causing upgrades from affected 8.6.1 versions to fail. The failed upgrade can leave {agent} in an unhealthy -state with the endpoint-security executable disabled. Note that MacOS Gatekeeper implements a signature cache, such that the upgrade is only likely to fail on MacOS Ventura machines that have been rebooted since the first upgrade to version 8.6.1. - -As of February 27th 2023 the {agent} 8.6.1 artifacts for MacOS have been updated with a correctly signed elastic-agent executable. To verify that -the signature of the elastic-agent executable is correct, run the command below and ensure that Elasticsearch, Inc appears in the Authority field. - -[source,shell] ----- -tar xvfz elastic-agent-8.6.1-darwin-aarch64.tar.gz -cd elastic-agent-8.6.1-darwin-aarch64 -codesign -dvvvv ./elastic-agent -... 
-Signature size=9068
-Authority=Developer ID Application: Elasticsearch, Inc (2BT3HPN62Z)
-Authority=Developer ID Certification Authority
-Authority=Apple Root CA
-Timestamp=Feb 24, 2023 at 4:33:02 AM
-...
-----
-
-*Impact* +
-Any {agent} deployed to MacOS Ventura that was upgraded to version 8.6.1 prior to February 27th 2023 must be reinstalled using a version with correctly signed executables.
-Upgrades to any other version will fail and lead to broken functionality, including disabling the protections from Elastic Defend.
-
-The specific steps to follow to correct this problem are:
-
-1. Download a version of {agent} with correctly signed executables.
-2. Unenroll the affected agents, either from the command line or the Fleet UI. A new agent ID will be generated when reinstalling.
-3. Run the `elastic-agent uninstall` command to remove the incorrectly signed version of {agent}.
-4. From the directory containing the new, correctly signed {agent} artifacts, run the `elastic-agent install` command. The agent may be reenrolled at install time or separately with the `elastic-agent enroll` command.
-
-====
-
-[[known-issue-issue-2103-8.6.1]]
-.Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.
-[%collapsible]
-====
-*Details* +
-This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command.
-This could be either a Terminal or any custom package that a user has built to distribute {agent}.
-
-For more information, refer to {agent-issue}2103[#2103].
-
-*Impact* +
-{agent} will fail to install and produce the message: "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted".
-Ensure that the application used to install {agent} (for example, the Terminal or custom package) has Full Disk Access before running `sudo ./elastic-agent install`.
-====
-
-[[known-issue-issue-2343-8.6.1]]
-.{agent} upgrades scheduled for a future time do not run.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running.
-For more information, refer to {agent-issue}2343[#2343].
-
-*Impact* +
-{kib} may show an agent as being stuck with the `Updating` status.
-If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.
-====
-
-[[known-issue-issue-2303-8.6.1]]
-.{fleet} ignores custom `server.*` attributes provided through integration settings.
-[%collapsible]
-====
-*Details* +
-{fleet} will ignore any custom `server.*` attributes provided through the custom configurations YAML block of the integration.
-For more information, refer to {fleet-server-issue}2303[#2303].
-
-*Impact* +
-Custom YAML settings are silently ignored by {fleet}.
-Settings with input blocks, such as Max agents, are still effective.
-====
-
-[[known-issue-issue-2249-8.6.1]]
-.{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-[%collapsible]
-====
-*Details* +
-{agent} will not load `fleet.yml` information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-This is due to the unencrypted config stored in `fleet.yml` not being migrated correctly to the encrypted config store `fleet.enc`.
-For a managed agent, the symptom manifests as the inability to connect to {fleet} after upgrade with the following log:
-
-[source,shell]
-----
-Error: fleet configuration is invalid: empty access token
-...
-----
-For more information, refer to {agent-issue}2249[#2249].
-
-*Impact* +
-{agent} loses the ability to connect back to {fleet} after upgrade.
-A fix for this issue is available in version 8.7.0 and higher. For version 8.6, a re-enroll is necessary for the agent to connect back to {fleet}.
-====
-
-[discrete]
-[[bug-fixes-8.6.1]]
-=== Bug fixes
-
-{fleet}::
-* Fix missing policy ID in installation URL for cloud integrations {kibana-pull}149243[#149243]
-* Fix package installation APIs to install packages without a version {kibana-pull}149193[#149193]
-* Fix issue where the latest GA version could not be installed if there was a newer prerelease version in the registry
-{kibana-pull}149133[#149133] {kibana-pull}149104[#149104]
-
-{fleet-server}::
-* Update the `.fleet-agent` index when acknowledging policy changes when {ls}
-is the configured output. Fixes agents always showing as Updating when using the
-{ls} output {fleet-server-pull}2119[#2119]
-
-{agent}::
-* Fix issue where {beats} started by {agent} may fail with an `output unit has no config` error {agent-pull}2138[#2138] {agent-issue}2086[#2086]
-* Restore the ability to set custom HTTP headers at enrollment time. Fixes traffic filters in Integrations Server cloud deployments {agent-pull}2158[#2158] {beats-issue}32993[#32993]
-* Make it easier to filter agent logs from the combined agent log file {agent-pull}2044[#2044] {agent-issue}1810[#1810]
-
-// end 8.6.1 relnotes
-
-// begin 8.6.0 relnotes
-
-[[release-notes-8.6.0]]
-== {fleet} and {agent} 8.6.0
-
-Review important information about the {fleet} and {agent} 8.6.0 release.
-
-[discrete]
-[[breaking-changes-8.6.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-[discrete]
-[[breaking-1994]]
-.Each input in an agent policy must have a unique ID
-[%collapsible]
-====
-*Details* +
-Each input in an agent policy must have a unique ID, like `id: my-unique-input-id`.
-This change only affects standalone agents. Unique IDs are automatically generated in
-agent policies managed by {fleet}. For more information, refer to
-{agent-pull}1994[#1994].
-
-*Impact* +
-Make sure that your standalone agent policies have a unique ID.
-====
-
-[discrete]
-[[breaking-1140]]
-.Diagnostics `--pprof` argument has been removed; `pprof` information is now always provided
-[%collapsible]
-====
-*Details* +
-The `diagnostics` command gathers diagnostic information about the {agent} and
-each component/unit it runs. Starting in 8.6.0, the `--pprof`
-argument is no longer available because `pprof` information is now always
-provided. For more information, refer to {agent-pull}1140[#1140].
-
-*Impact* +
-Remove the `--pprof` argument from any scripts or commands you use.
-====
-
-[discrete]
-[[breaking-1398]]
-.Kubernetes pod labels used in autodiscover conditions are no longer dedoted
-[%collapsible]
-====
-*Details* +
-Kubernetes pod labels used in autodiscover conditions are not dedoted anymore. This means that
-`.` characters are no longer replaced with `_` in labels like `app.kubernetes.io/component=controller`.
-This follows the same approach as Kubernetes annotations. For more information, refer to <>.
-
-*Impact* +
-Any template used for a standalone {agent} or an installed integration that makes use
-of dedoted Kubernetes labels inside conditions has to be updated.
-====
-
-[discrete]
-[[known-issues-8.6.0]]
-=== Known issues
-
-[discrete]
-[[known-issue-issue-2066]]
-.Osquery live query results can take up to five minutes to show up in {kib}.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may prevent live query results from being available
-in the {kib} UI even though the results have been successfully sent to {es}.
-For more information, refer to {agent-issue}2066[#2066].
-
-*Impact* +
-Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.
-====
-
-[[known-issue-issue-2103-8.6.0]]
-.Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.
-[%collapsible]
-====
-*Details* +
-This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command.
-This could be either a Terminal or any custom package that a user has built to distribute {agent}.
-
-For more information, refer to {agent-issue}2103[#2103].
-
-*Impact* +
-{agent} will fail to install and produce the message: "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted".
-Ensure that the application used to install {agent} (for example, the Terminal or custom package) has Full Disk Access before running `sudo ./elastic-agent install`.
-====
-
-
-[[known-issue-issue-2086]]
-.Beats started by {agent} may fail with an `output unit has no config` error.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may lead to Beat processes being started without a
-valid output. To correct the problem, trigger a restart of {agent}
-or the affected Beats. For Beats managed by {agent}, you can trigger a restart by changing the
-{agent} log level or the output section of the {agent} policy.
-For more information, refer to {agent-issue}2086[#2086].
-
-*Impact* +
-{agent} will appear unhealthy and the affected Beats will not be able to write
-event data to {es} or Logstash.
-====
-
-[[known-issue-issue-2343-8.6.0]]
-.{agent} upgrades scheduled for a future time do not run.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running.
-For more information, refer to {agent-issue}2343[#2343].
-
-*Impact* +
-{kib} may show an agent as being stuck with the `Updating` status.
-If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.
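
As an illustration, a non-upgrade action such as a log level change can be sent through the {fleet} API in {kib}. The host, credentials, agent ID, and exact request shape below are placeholders and assumptions for this sketch, not a supported recipe; check the {fleet} API reference for your version:

[source,shell]
----
# Hypothetical values: replace the {kib} host, credentials, and agent ID.
KIBANA_URL="https://localhost:5601"
AGENT_ID="00000000-0000-0000-0000-000000000000"

# A SETTINGS action (log level change) is a non-upgrade action that can
# prompt an agent stuck in the Updating status to check in again.
curl -s -X POST "$KIBANA_URL/api/fleet/agents/$AGENT_ID/actions" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic:changeme \
  -d '{"action": {"type": "SETTINGS", "data": {"log_level": "debug"}}}'
----

Changing the policy or the log level from the {fleet} UI has the same effect.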
-====
-
-[[known-issue-issue-2303-8.6.0]]
-.{fleet} ignores custom `server.*` attributes provided through integration settings.
-[%collapsible]
-====
-*Details* +
-{fleet} will ignore any custom `server.*` attributes provided through the custom configurations YAML block of the integration.
-For more information, refer to {fleet-server-issue}2303[#2303].
-
-*Impact* +
-Custom YAML settings are silently ignored by {fleet}.
-Settings with input blocks, such as Max agents, are still effective.
-====
-
-[[known-issue-issue-2249-8.6.0]]
-.{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-[%collapsible]
-====
-*Details* +
-{agent} will not load `fleet.yml` information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier.
-This is due to the unencrypted config stored in `fleet.yml` not being migrated correctly to the encrypted config store `fleet.enc`.
-For a managed agent, the symptom manifests as the inability to connect to {fleet} after upgrade with the following log:
-
-[source,shell]
-----
-Error: fleet configuration is invalid: empty access token
-...
-----
-For more information, refer to {agent-issue}2249[#2249].
-
-*Impact* +
-{agent} loses the ability to connect back to {fleet} after upgrade.
-A fix for this issue is available in version 8.7.0 and higher. For version 8.6, a re-enroll is necessary for the agent to connect back to {fleet}.
-====
-
-[discrete]
-[[new-features-8.6.0]]
-=== New features
-
-The 8.6.0 release adds the following new and notable features.
- -{fleet}:: -* Differentiate kubernetes integration multipage experience {kibana-pull}145224[#145224] -* Add prerelease toggle to Integrations list {kibana-pull}143853[#143853] -* Add link to allow users to skip multistep add integration workflow {kibana-pull}143279[#143279] - -{agent}:: -* Upgrade Node to version 18.12.0 {agent-pull}1657[#1657] -* Add experimental support for running the elastic-agent-shipper {agent-pull}1527[#1527] {agent-issue}219[#219] -* Collect logs from sub-processes via stdout and stderr and write them to a single, unified Elastic Agent log file {agent-pull}1702[#1702] {agent-issue}221[#221] -* Remove inputs when all streams are removed {agent-pull}1869[#1869] {agent-issue}1868[#1868] -* No longer restart {agent} on log level change {agent-pull}1914[#1914] {agent-issue}1896[#1896] -* Add `inspect components` command to inspect the computed components/units model of the current configuration (for example, `elastic-agent inspect components`) {agent-pull}1701[#1701] {agent-issue}836[#836] -* Add support for the Common Expression Language (CEL) {filebeat} input type {agent-pull}1719[#1719] -* Only support {es} as an output for the beta synthetics integration {agent-pull}1491[#1491] -* New control protocol between the {agent} and its subprocesses enables per integration health reporting and simplifies new input development {agent-issue}836[#836] {agent-pull}1701[#1701] -* All binaries for every supported integration are now bundled in the {agent} by default {agent-issue}836[#836] {agent-pull}126[#126] - -[discrete] -[[enhancements-8.6.0]] -=== Enhancements - -{fleet}:: -* Add `?full` option to get package info endpoint to return all package fields {kibana-pull}144343[#144343] - -{agent}:: -* Health Status: {agent} now indicates detailed status information for each sub-process and input type {fleet-server-pull}1747[#1747] {agent-issue}100[#100] -* Change internal directory structure: add a components directory to contain binaries and associated 
artifacts, and remove the downloads directory {agent-issue}836[#836] {agent-pull}1701[#1701] - -[discrete] -[[bug-fixes-8.6.0]] -=== Bug fixes - -{fleet}:: -* Only show {fleet}-managed data streams on data streams list page {kibana-pull}143300[#143300] -* Fix synchronization bug in {fleet-server} that can lead to {es} being flooded by requests to `/.fleet-actions/_fleet/_fleet_search` {fleet-server-pull}2205[#2205]. - -{agent}:: -* {agent} now uses the locally bound port (8221) when running {fleet-server} instead of the external port (8220) {agent-pull}1867[#1867] -// end 8.6.0 relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.6.x relnotes - -//[[release-notes-8.6.x]] -//== {fleet} and {agent} 8.6.x - -//Review important information about the {fleet} and {agent} 8.6.x release. - -//[discrete] -//[[security-updates-8.6.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.6.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.6.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.6.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.6.x, and will be removed in -//8.6.x. 
Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.6.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.6.x]] -//=== New features - -//The 8.6.x release adds the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.6.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.6.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.6.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.7.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.7.asciidoc deleted file mode 100644 index 56ba76bd2..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.7.asciidoc +++ /dev/null @@ -1,333 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <> -* <> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.7.1 relnotes - -[[release-notes-8.7.1]] -== {fleet} and {agent} 8.7.1 - -Review important information about the {fleet} and {agent} 8.7.1 release. 
-
-[discrete]
-[[new-features-8.7.1]]
-=== New features
-
-The 8.7.1 release adds the following new and notable features.
-
-{agent}::
-* Add support for feature flags, starting with one to toggle fully qualified domain name (FQDN) reporting in
- events generated by {agent} components, via the `host.name` field {agent-pull}2218[#2218] {agent-issue}2185[#2185]
-
-[discrete]
-[[enhancements-8.7.1]]
-=== Enhancements
-
-{fleet}::
-* The agent policy `Host name format` selector is now enabled by default {kibana-pull}154563[#154563]
-
-{agent}::
-* Diagnostics now preserve the Elastic Agent upgrade watcher logs, allowing upgrades that were rolled back to be debugged {agent-pull}2518[#2518] {agent-issue}2262[#2262]
-
-[discrete]
-[[bug-fixes-8.7.1]]
-=== Bug fixes
-
-{fleet}::
-* Fixes an issue where the Advanced options toggle in the policy editor was always shown {kibana-pull}154612[#154612]
-* Fixes an issue where the warning icon could not be displayed in 8.7 {kibana-pull}154119[#154119]
-* Adds updates to output logic {kibana-pull}153226[#153226]
-
-{agent}::
-* Fix parsing of paths from the `container-paths.yml` file used internally when `elastic-agent container` is run {agent-pull}2340[#2340]
-* Fix action acknowledgements taking up to 5 minutes.
This also fixes Osquery live query results taking up to five minutes to show up in {kib} {agent-pull}2406[#2406] {agent-issue}2410[#2410]
-* Fixes a bug where `logging.level` settings, coming either from the Fleet UI or a config file, were not being respected {agent-pull}2456[#2456] {agent-issue}2450[#2450]
-* Fixes a bug that caused an empty proxy from a Fleet-managed {agent} policy to override the proxy set by `--proxy-url` {agent-pull}2468[#2468] {agent-issue}2304[#2304] {agent-issue}2447[#2447]
-* Ensure that the `/usr/local/bin` directory exists on MacOS during {agent} installation {agent-pull}2490[#2490] {agent-issue}2487[#2487]
-* Fixes a bug that caused the lumberjack input type to be missing from the Filebeat `filebeat.spec.yaml` file, which is required by the `barracuda_cloudgen_firewall` integration {agent-pull}2511[#2511]
-* Fixes a bug that prevented sub-directories from being created under the `logs/` path in diagnostics ZIP files {agent-pull}2523[#2523]
-* Make a best effort to copy the run directory on upgrades to avoid unnecessary failures. Fixes intermittent upgrade failures when osquery is running. {agent-pull}2448[#2448] {agent-issue}2433[#2433]
-
-// end 8.7.1 relnotes
-
-// begin 8.7.0 relnotes
-
-[[release-notes-8.7.0]]
-== {fleet} and {agent} 8.7.0
-
-Review important information about the {fleet} and {agent} 8.7.0 release.
-
-[discrete]
-[[breaking-changes-8.7.0]]
-=== Breaking changes
-
-Breaking changes can prevent your application from optimal operation and
-performance. Before you upgrade, review the breaking changes, then mitigate the
-impact to your application.
-
-[discrete]
-[[breaking-147616]]
-.Remove the current_upgrades endpoint
-[%collapsible]
-====
-*Details* +
-The `api/fleet/current_upgrades` endpoint has been removed. For more information, refer to {kibana-pull}147616[#147616].
-
-*Impact* +
-When you upgrade to 8.7.0, use the `/action_status` endpoint instead.
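
For example, a script that previously polled the removed endpoint can query action status instead. The host, credentials, and exact path below are placeholders and assumptions for illustration; check the {fleet} API reference for your version:

[source,shell]
----
# Before 8.7.0 (endpoint removed in 8.7.0):
#   curl -s -u elastic:changeme "https://localhost:5601/api/fleet/current_upgrades"

# 8.7.0 and later: query action status instead (placeholder host and credentials).
curl -s -u elastic:changeme \
  -H "kbn-xsrf: true" \
  "https://localhost:5601/api/fleet/agents/action_status"
----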
-====
-
-[discrete]
-[[breaking-147199]]
-.Remove the preconfiguration API route
-[%collapsible]
-====
-*Details* +
-The `/api/fleet/setup/preconfiguration` API, which was released as generally available in error, has been removed. For more information, refer to {kibana-pull}147199[#147199].
-
-*Impact* +
-Do not use `/api/fleet/setup/preconfiguration`. To manage preconfigured agent policies, use `kibana.yml`. For more information, check link:https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html#_preconfiguration_settings_for_advanced_use_cases[Preconfigured settings].
-====
-
-[discrete]
-[[known-issues-8.7.0]]
-=== Known issues
-
-[discrete]
-[[known-issue-issue-2066-8-6-2-2]]
-.Osquery live query results can take up to five minutes to show up in {kib}.
-[%collapsible]
-====
-*Details* +
-A known issue in {agent} may prevent live query results from being available
-in the {kib} UI even though the results have been successfully sent to {es}.
-For more information, refer to {agent-issue}2066[#2066].
-
-*Impact* +
-Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.
-====
-
-[[known-issue-2170-8-6-2-2]]
-.Adding a {fleet-server} integration to an agent results in panic if the agent was not bootstrapped with a {fleet-server}.
-[%collapsible]
-====
-
-*Details*
-
-A panic occurs because the {agent} does not have a `fleet.server` in the `fleet.enc`
-configuration file. When this happens, the agent fails with a message like:
-
-[source,shell]
-----
-panic: runtime error: invalid memory address or nil pointer dereference
-[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x557b8eeafc1d]
-goroutine 86 [running]:
-github.com/elastic/elastic-agent/internal/pkg/agent/application.FleetServerComponentModifier.func1({0xc000652f00, 0xa, 0x10}, 0x557b8fa8eb92?)
-...
-----
-
-For more information, refer to {agent-issue}2170[#2170].
- -*Impact* + - -To work around this problem, uninstall the {agent} and install it again with -{fleet-server} enabled during the bootstrap process. -==== - -[[known-issue-2433-8-6-2-2]] -.{agent}s running Osquery can become stuck when upgrading to 8.7.0. -[%collapsible] -==== - -*Details* + -{agent}s that have the Osquery Manager integration installed can get stuck in an "Updating" state. -For more information, refer to {agent-issue}2433[#2433]. - -*Impact* + -Users can do any of the following to work around the issue: - -* Wait for the 8.7.1 release to upgrade {agent}s to the 8.7.x line. -* Remove the Osquery Manager integration before upgrading. After the {agent} has upgraded to 8.7.0, add the Osquery Manager integration back to the {agent}. -* If you encounter this issue and {agent}s are stuck in the "Updating" phase, remove the Osquery Manager integration, upgrade the {agent}, and then add it back. - -NOTE: You may need to use the {agent} upgrade API in this scenario instead of the UI. -==== - -[discrete] -[[new-features-8.7.0]] -=== New features - -The 8.7.0 release adds the following new and notable features.
- -{fleet}:: -* Add `getStatusSummary` query parameter to `GET /api/fleet/agents` API {kibana-pull}149963[#149963] -* Enable diagnostics feature flag and change query for files to use `upload_id` {kibana-pull}149575[#149575] -* Add experimental toggles for doc-value-only {kibana-pull}149131[#149131] -* Display agent metrics, CPU and memory in the agent list table and agent details page {kibana-pull}149119[#149119] -* Add rollout period to upgrade action {kibana-pull}148240[#148240] -* Add per-policy inactivity timeout and use runtime fields for agent status {kibana-pull}147552[#147552] -* Show dataset combo box for input packages {kibana-pull}147015[#147015] -* Implement a new UI form to support an {agent} shipper in {fleet} {kibana-pull}145755[#145755] - -{agent}:: -* Add the Entity Analytics {filebeat} input to {agent} {agent-pull}2196[#2196] -* Add the ability for the {agent} to receive a diagnostics action {agent-pull}1703[#1703] {agent-issue}1883[#1883] - -[discrete] -[[enhancements-8.7.0]] -=== Enhancements - -{agent}:: -* Enhance {agent} monitoring configuration to support {filebeat} `/inputs` endpoint {agent-pull}2171[#2171] {beats-issue}33953[#33953] -* Render {agent} configuration when running `elastic-agent inspect --variables-wait` {agent-pull}2297[#2297] {agent-issue}2206[#2206] - -[discrete] -[[bug-fixes-8.7.0]] -=== Bug fixes - -{fleet-server}:: -* Accept raw errors as a fallback to detailed error type. This fixes a bug where enrollment or other operations would fail when an error was returned by Elastic as a raw string instead of in JSON format.
{fleet-server-pull}2079[#2079] - -{agent}:: -* Correctly migrate unencrypted {fleet} configuration when upgrading from versions prior to 8.3 {agent-pull}2256[#2256] {agent-issue}2249[#2249] -* Restore support for memcached/metrics inputs {agent-pull}2298[#2298] {agent-issue}2293[#2293] -* Fix the log message emitted by the Upgrade Watcher when it detects a crash {agent-pull}2320[#2320] -* Fix incorrect, temporary reporting of an agent as unhealthy during installation {agent-pull}2325[#2325] {agent-issue}2272[#2272] -* Correct the permissions of the `state/data/tmp` and `state/data/logs` folders when {agent} is run as a container {agent-pull}2330[#2330] {agent-issue}2315[#2315] -* Add a timer to periodically check for scheduled actions {agent-pull}2344[#2344] {agent-issue}2343[#2343] -* Fix a bug that caused {agent} to not start monitoring new Kubernetes pods until it was restarted {agent-pull}2349[#2349] {agent-issue}2269[#2269] -* Fix possible causes of deadlocks when {agent} shuts down {agent-pull}2352[#2352] {agent-issue}2310[#2310] -* Fix permission issue on macOS Ventura and above when enrolling as part of the installation {agent-pull}2314[#2314] {agent-issue}2103[#2103] - -// end 8.7.0 relnotes - - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application.
- -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release adds the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.8.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.8.asciidoc deleted file mode 100644 index 986228ffc..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.8.asciidoc +++ /dev/null @@ -1,392 +0,0 @@ -// Use these for links to issue and pulls. 
-:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.8.2>> -* <<release-notes-8.8.1>> -* <<release-notes-8.8.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.8.2 relnotes - -[[release-notes-8.8.2]] -== {fleet} and {agent} 8.8.2 - -Review important information about the {fleet} and {agent} 8.8.2 release. - -[discrete] -[[security-updates-8.8.2]] -=== Security updates - -{agent}:: -* Updated Go version to 1.19.10. {agent-pull}2846[#2846] - -[discrete] -[[enhancements-8.8.2]] -=== Enhancements - -* Log start and stop operations from service runtime at `INFO` rather than `DEBUG` level. {agent-pull}2879[#2879] {agent-issue}2864[#2864] - -[discrete] -[[bug-fixes-8.8.2]] -=== Bug fixes - -{fleet}:: -* Fixes usage of AsyncLocalStorage for audit log. {kibana-pull}159807[#159807] -* Fixes an issue with returning the output API key. {kibana-pull}159179[#159179] - -{agent}:: -* Explicitly specify timeout units as seconds in the Endpoint spec file. {agent-pull}2870[#2870] {agent-issue}2863[#2863] -* Fix logs collection in diagnostics when {agent} is running on Kubernetes. {agent-pull}2905[#2905] {agent-issue}2899[#2899] -* The <> that caused an {elastic-defend} and {agent} CPU spike when connectivity to Elasticsearch and/or Logstash is lost has been resolved in this release.
- -// end 8.8.2 relnotes - -// begin 8.8.1 relnotes - -[[release-notes-8.8.1]] -== {fleet} and {agent} 8.8.1 - -Review important information about the {fleet} and {agent} 8.8.1 release. - -[discrete] -[[known-issues-8.8.1]] -=== Known issues - -[[known-issue-sdh-endpoint-316-v881]] -.{elastic-defend} and {agent} CPU spike when connectivity to Elasticsearch and/or Logstash is lost. -[%collapsible] -==== - -*Details* - -When the output server ({es} or {ls}) is unreachable, versions 8.8.0 & 8.8.1 of {elastic-defend} (or {elastic-endpoint}) and {agent} may enter a state where they repeatedly communicate with each other indefinitely. This manifests as both processes consuming dramatically more CPU, constantly. - -Versions 8.8.0 & 8.8.1 are affected on all operating systems. {agent} does not manifest the behavior unless the {elastic-defend} integration is enabled. - -*Impact* + - -This issue was resolved in version 8.8.2. If you are using {agent} with the {elastic-defend} integration, please update to 8.8.2 or later. - -==== - -[discrete] -[[enhancements-8.8.1]] -=== Enhancements - -{fleet}:: -* Add {agent} UI instructions for Universal Profile. {kibana-pull}158936[#158936] - -{fleet-server}:: -* Add {fleet} configuration file to {agent} diagnostics bundle. {fleet-server-pull}2632[#2632] {fleet-server-issue}2623[#2623] - -[discrete] -[[bug-fixes-8.8.1]] -=== Bug fixes - -{fleet}:: -* Include hidden data streams in package upgrade. {kibana-pull}158654[#158654] - -{agent}:: -* Fix potential communication issue when a running component would lose connection to the {agent} and be unable to re-connect because of a concurrently updated component model. {agent-pull}2729[#2729] {agent-pull}2691[#2691] - -* Retry download step during upgrade process. {agent-pull}2776[#2776] - -// end 8.8.1 relnotes - -// begin 8.8.0 relnotes - -[[release-notes-8.8.0]] -== {fleet} and {agent} 8.8.0 - -Review important information about the {fleet} and {agent} 8.8.0 release.
- -[discrete] -[[known-issues-8.8.0]] -=== Known issues - -[[known-issue-issue-upgrade-20230608]] -.{agent} upgrade process can sometimes stall. -[%collapsible] -==== - -*Details* + -{agent} upgrades can sometimes stall without returning an error message, and without the agent upgrade process restarting automatically. - -*Impact* + -In this situation the agent returns from `Updating` to a `Healthy` state, but without the new version having been installed. To address this, you can trigger a new upgrade manually. - -This issue is specific to version 8.8.0 and is resolved in version 8.8.1. -==== - -[[known-issue-issue-2749]] -.{agent} can fail when file paths generated to represent Unix sockets exceed 103 characters. -[%collapsible] -==== - -*Details* + -When an internally generated file path exceeds this length it is truncated using a hash, and the newly constructed path might not be accessible to the agent. - -To identify the problem, check the output of `elastic-agent status --output=yaml` or the `state.yaml` file in a diagnostics bundle for output like the following: - -[source,console] ----- -- id: kubernetes/metrics-60f88f50-c873-11ed-9baf-09fb5640c56a - state: - state: 4 - message: 'Failed: pid ''3770789'' exited with code ''1''' - units: - ? unittype: 1 - unitid: kubernetes/metrics-60f88f50-c873-11ed-9baf-09fb5640c56a - : state: 4 - message: 'Failed: pid ''3770789'' exited with code ''1''' - ? unittype: 0 - unitid: kubernetes/metrics-60f88f50-c873-11ed-9baf-09fb5640c56a-kubernetes/metrics-kubelet-0d1f291d-9b2e-4f44-a0dc-82ebee865799 - : state: 4 - message: 'Failed: pid ''3770789'' exited with code ''1''' - ? 
unittype: 0 - unitid: kubernetes/metrics-60f88f50-c873-11ed-9baf-09fb5640c56a-kubernetes/metrics-kube-proxy-0d1f291d-9b2e-4f44-a0dc-82ebee865799 - : state: 4 - message: 'Failed: pid ''3770789'' exited with code ''1''' - features_idx: 0 - version_info: - name: "" - version: "" ----- - -This is accompanied by an error message in the logs: - -[source,console] ----- -logs/elastic-agent-20230530-23.ndjson:{"log.level":"error","@timestamp":"2023-05-30T11:42:46.776Z","message":"Exiting: could not start the HTTP server for the API: listen unix /tmp/elastic-agent/6dd26cab2bb93d6254d75a9ef22c5fb5d3c5ffbd8866f26288d86d2f672d2ae6.sock: bind: no such file or directory","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-60f88f50-c873-11ed-9baf-08ec5473d24b","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-60e22e52-d872-12dc-4adf-09fb5242c26b"},"log.origin":{"file.line":1142,"file.name":"instance/beat.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"} ----- - -*Impact* + - -This issue is being investigated. Until it's resolved, as a workaround you can reduce the length of the agent output name until the problem stops occurring. -==== - -[[known-issue-sdh-endpoint-316-v880]] -.{elastic-defend} and {agent} CPU spike when connectivity to Elasticsearch and/or Logstash is lost. -[%collapsible] -==== - -*Details* - -When the output server ({es} or {ls}) is unreachable, versions 8.8.0 & 8.8.1 of {elastic-defend} (or {elastic-endpoint}) and {agent} may enter a state where they repeatedly communicate with each other indefinitely. This manifests as both processes consuming dramatically more CPU, constantly. - -Versions 8.8.0 & 8.8.1 are affected on all operating systems. {agent} does not manifest the behavior unless the {elastic-defend} integration is enabled. - -*Impact* + - -This issue was resolved in version 8.8.2. 
If you are using {agent} with the {elastic-defend} integration, please update to 8.8.2 or later. - -==== - -[discrete] -[[new-features-8.8.0]] -=== New features - -The 8.8.0 release adds the following new and notable features. - -{fleet}:: -* Added audit logging for core CRUD operations {kibana-pull}152118[#152118] -* Added modal to display versions changelog {kibana-pull}152082[#152082] - -{fleet-server}:: -* Documented how to run {fleet-server} locally {fleet-server-pull}2212[#2212] {fleet-server-issue}1423[#1423] -* {fleet-server} now supports file uploads for a limited subset of integrations {fleet-server-pull}1902[#1902] -* Extended the {fleet-server} actions schema to support signed actions passing to the agent as a part of the agent tamper protection. {fleet-server-pull}2353[#2353] -* {fleet-server} can now be run in stand-alone mode without needing to check into {kib} {fleet-server-pull}2359[#2359] {fleet-server-issue}2351[#2351] -* Added support for gathering secret values from files {fleet-server-pull}2459[#2459] -* Added action APM metadata to help debug agent actions {fleet-server-pull}2472[#2472] - -{agent}:: -* Added a specification file for the link:https://www.elastic.co/observability/universal-profiling[Universal Profiling Symbolizer] {agent-pull}2401[#2401] -* Added a specification file for the link:https://www.elastic.co/observability/universal-profiling[Universal Profiling Collector] {agent-pull}2407[#2407] -* Added support to specify the {fleet-server} `service_token` through a file specified with `service_token_file` {agent-pull}2424[#2424] - -[discrete] -[[enhancements-8.8.0]] -=== Enhancements - -{fleet}:: -* Added overview dashboards in {fleet} {kibana-pull}154914[#154914] -* Added raw status to Agent details UI {kibana-pull}154826[#154826] -* Added support for `dynamic_namespace` and `dynamic_dataset` {kibana-pull}154732[#154732] -* Added the ability to show pipelines and mappings editor for input packages {kibana-pull}154077[#154077] -* 
Added placeholder to integration select field {kibana-pull}153927[#153927] -* Added the ability to show integration subcategories {kibana-pull}153591[#153591] -* Made the create and update package policy APIs return a 409 conflict when names are not unique {kibana-pull}153533[#153533] -* Added the ability to display policy changes in Agent activity {kibana-pull}153237[#153237] -* Added the ability to display errors in Agent activity with link to Logs {kibana-pull}152583[#152583] -* Added support for select type in integrations {kibana-pull}152550[#152550] -* Added the ability to make spaces plugin optional {kibana-pull}152115[#152115] -* Added proxy ssl key and certificate to agent policy {kibana-pull}152005[#152005] -* Added `_meta` field `has_experimental_data_stream_indexing_features` {kibana-pull}151853[#151853] -* Added the ability to create templates and pipelines when updating package of a single package policy from type integration to input {kibana-pull}150199[#150199] -* Added user's secondary authorization to Transforms {kibana-pull}154665[#154665] -* Added support for the Cloud Defend application to {agent} {fleet-server-pull}2477[#2477] -* Disabled signature validation in {agent} so that only {endpoint-sec} validates policies and actions {fleet-server-pull}2562[#2562] - -{fleet-server}:: -* Replaced upgrade expiration and `minimum_execution_duration` with `rollout_duration_seconds` {fleet-server-pull}2243[#2243] -* Added a `poll_timeout` attribute to check-in requests that the client can use to inform {fleet-server} of how long the client will hold the polling connection open {fleet-server-pull}2491[#2491] {fleet-server-issue}2337[#2337] -* Added a `memory_limit` configuration setting to help prevent OOM errors {fleet-server-pull}2514[#2514] - -{agent}:: -* Make download of {agent} upgrade artifacts asynchronous during Fleet-managed upgrade and increase the download timeout to 2 hours {agent-pull}2205[#2205] {agent-issue}1706[#1706] -* 
Make the language used in CLI commands more consistent {fleet-server-pull}2496[#2496] - -[discrete] -[[bug-fixes-8.8.0]] -=== Bug fixes - -{fleet}:: -* Fixes package license check to use new `conditions.elastic.subscription` field {kibana-pull}154831[#154831] -* Fixes the OpenAPI spec from `/agent/upload` to `/agent/uploads` for Agent uploads API {kibana-pull}151722[#151722] - -{fleet-server}:: -* Filter out unused `UPDATE_TAGS` and `FORCE_UNENROLL` actions from being delivered to {agent} {fleet-server-pull}2200[#2200] -* Ignore the `unenroll_timeout` field on agent policies as it has been replaced by a configurable inactivity timeout {fleet-server-pull}2096[#2096] {fleet-server-issue}2063[#2063] -* Fixed {fleet-server} discarding duplicate `server` keys input when creating configuration from a policy {fleet-server-pull}2354[#2354] {fleet-server-issue}2303[#2303] -* {fleet-server} will no longer restart subsystems like API listeners and the {es} client when the log level changes {fleet-server-pull}2454[#2454] {fleet-server-issue}2453[#2453] - -{agent}:: -* Fixed the formatting of system metricsets in the example {agent} configuration file {agent-pull}2338[#2338] -* Fixed the parsing of paths from the `container-paths.yml` file {agent-pull}2340[#2340] -* Added a check to ensure that {agent} was bootstrapped with the `--fleet-server-*` options {agent-pull}2505[#2505] {agent-issue}2170[#2170] -* Fixed an issue where inspect and diagnostics didn't include the local {agent} configuration {agent-pull}2529[#2529] {agent-issue}2390[#2390] -* Fixed a bug that caused heap profiles captured in the agent diagnostics to be unusable {agent-pull}2549[#2549] {agent-issue}2530[#2530] -* Fix an issue that occurs when specifying a `FLEET_SERVER_SERVICE_TOKEN_PATH` with the agent running in a Docker container where both the token value and path are passed in the enroll section of the agent setup {agent-pull}2576[#2576] - -// end 8.8.0 relnotes - - - - - -// ---------------------
-//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application. - -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release Added the following new and notable features. 
- -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/release-notes/release-notes-8.9.asciidoc b/docs/en/ingest-management/release-notes/release-notes-8.9.asciidoc deleted file mode 100644 index f0a6958d9..000000000 --- a/docs/en/ingest-management/release-notes/release-notes-8.9.asciidoc +++ /dev/null @@ -1,523 +0,0 @@ -// Use these for links to issue and pulls. -:kibana-issue: https://github.com/elastic/kibana/issues/ -:kibana-pull: https://github.com/elastic/kibana/pull/ -:beats-issue: https://github.com/elastic/beats/issues/ -:beats-pull: https://github.com/elastic/beats/pull/ -:agent-libs-pull: https://github.com/elastic/elastic-agent-libs/pull/ -:agent-issue: https://github.com/elastic/elastic-agent/issues/ -:agent-pull: https://github.com/elastic/elastic-agent/pull/ -:fleet-server-issue: https://github.com/elastic/fleet-server/issues/ -:fleet-server-pull: https://github.com/elastic/fleet-server/pull/ - -[[release-notes]] -= Release notes - -This section summarizes the changes in each release. - -* <<release-notes-8.9.2>> -* <<release-notes-8.9.1>> -* <<release-notes-8.9.0>> - -Also see: - -* {kibana-ref}/release-notes.html[{kib} release notes] -* {beats-ref}/release-notes.html[{beats} release notes] - -// begin 8.9.2 relnotes - -[[release-notes-8.9.2]] -== {fleet} and {agent} 8.9.2 - -Review important information about the {fleet} and {agent} 8.9.2 release. - -[discrete] -[[known-issues-8.9.2]] -=== Known issues - -[[known-issue-3375-v892]] -.PGP key download fails in an air-gapped environment -[%collapsible] -==== - -*Details* - -IMPORTANT: If you're using an air-gapped environment, we recommend installing version 8.10.3 or any higher version, to avoid being unable to upgrade.
- -Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent. -This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has. - -In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded. - -*Impact* + - -For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available. - -*Option 1* - -If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>. -Please note that you need to enable HTTP Proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables to be used exclusively for it. - -*Option 2* - -As the upgrade URL is not customizable, we have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that will have the file. - -The following examples require a server in your air-gapped environment that will expose the key you will have downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`. - -_Example 1: Manual_ - -Edit the {agent} server hosts file to add the following content: - -[source,sh] ----- - artifacts.elastic.co ----- - -The Linux hosts file path is `/etc/hosts`. - -The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`.
- -_Example 2: Puppet_ - -[source,puppet] ----- -host { 'elastic-artifacts': - ensure => 'present', - comment => 'Workaround for PGP check', - ip => '', -} ----- - -_Example 3: Ansible_ - -[source,yaml] ----- -- name: 'elastic-artifacts' - hosts: 'all' - become: 'yes' - - tasks: - - name: 'Add entry to /etc/hosts' - lineinfile: - path: '/etc/hosts' - line: ' artifacts.elastic.co' ----- - -==== - -[discrete] -[[enhancements-8.9.2]] -=== Enhancements - -{fleet}:: -* Adds the configuration setting `xpack.fleet.packageVerification.gpgKeyPath` as an environment variable in the {kib} container. {kibana-pull}163783[#163783]. - -{agent}:: -* Adds logging to the restart step of the {agent} upgrade rollback process. {agent-pull}3245[#3245] - -[discrete] -[[bug-fixes-8.9.2]] -=== Bug fixes - -{agent}:: -* Correctly identify retryable errors when attempting to uninstall on Windows. {agent-pull}3317[#3317] - -// end 8.9.2 relnotes - -// begin 8.9.1 relnotes - -[[release-notes-8.9.1]] -== {fleet} and {agent} 8.9.1 - -Review important information about the {fleet} and {agent} 8.9.1 release. - -[discrete] -[[security-updates-8.9.1]] -=== Security updates - -{agent}:: -* Updated Go version to 1.19.12. {agent-pull}3186[#3186] - -[discrete] -[[known-issues-8.9.1]] -=== Known issues - -[[known-issue-3375-v891]] -.PGP key download fails in an air-gapped environment -[%collapsible] -==== - -*Details* - -IMPORTANT: If you're using an air-gapped environment, we recommend waiting for this issue to be resolved before installing 8.9.x or any higher version, to avoid being unable to upgrade. - -Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent. -This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.
- -In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded. - -*Impact* + - -For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available. - -*Option 1* - -If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>. -Please note that you need to enable HTTP Proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables to be used exclusively for it. - -*Option 2* - -As the upgrade URL is not customizable, we have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that will have the file. - -The following examples require a server in your air-gapped environment that will expose the key you will have downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`. - -_Example 1: Manual_ - -Edit the {agent} server hosts file to add the following content: - -[source,sh] ----- - artifacts.elastic.co ----- - -The Linux hosts file path is `/etc/hosts`. - -The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`. - -_Example 2: Puppet_ - -[source,puppet] ----- -host { 'elastic-artifacts': - ensure => 'present', - comment => 'Workaround for PGP check', - ip => '', -} ----- - -_Example 3: Ansible_ - -[source,yaml] ----- -- name: 'elastic-artifacts' - hosts: 'all' - become: 'yes' - - tasks: - - name: 'Add entry to /etc/hosts' - lineinfile: - path: '/etc/hosts' - line: ' artifacts.elastic.co' ----- - -==== - -[discrete] -[[bug-fixes-8.9.1]] -=== Bug fixes - -{fleet}:: -* Fixes a query error on the Agents list in the UI. ({kibana-pull}162816[#162816]) -* Remove a duplicate path being pushed to the package archive.
({kibana-pull}162724[#162724]) - -{agent}:: -* Improve two unclear error messages in the Upgrade Watcher {agent-pull}3093[#3093] -* Add default UTC timezone to synthetics agent Docker images to prevent navigation errors {agent-pull}3160[#3160] {beats-issue}36117[#36117] - -// end 8.9.1 relnotes - -// begin 8.9.0 relnotes - -[[release-notes-8.9.0]] -== {fleet} and {agent} 8.9.0 - -Review important information about the {fleet} and {agent} 8.9.0 release. - -[discrete] -[[security-updates-8.9.0]] -=== Security updates - -{fleet-server}:: -* Use a verified base image for building Fleet Server binaries. {fleet-server-pull}2339[#2339] - -[discrete] -[[known-issues-8.9.0]] -=== Known issues - -[[known-issue-3375]] -.PGP key download fails in an air-gapped environment -[%collapsible] -==== - -*Details* - -IMPORTANT: If you're using an air-gapped environment, we recommend waiting for this issue to be resolved before installing 8.9.x or any higher version, to avoid being unable to upgrade. - -Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent. -This process has a backup mechanism that will use the key coming from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has. - -In an air-gapped environment, the agent won't be able to download the remote key and therefore cannot be upgraded. - -*Impact* + - -For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available. - -*Option 1* - -If an HTTP proxy is available to be used by the {agents} in your {fleet}, add the proxy settings using environment variables as explained in <>. -Please note that you need to enable HTTP Proxy usage for `artifacts.elastic.co` to bypass this problem, so you can craft the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables to be used exclusively for it.
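A rough sketch of Option 1 follows — the proxy address and the `NO_PROXY` host names below are placeholders, and the exact way to pass environment variables depends on how {agent} is started (systemd unit, container, and so on):

```shell
# Hypothetical sketch: proxy only the PGP key download through an internal
# proxy that can reach artifacts.elastic.co. All host names are placeholders.
export HTTP_PROXY="http://proxy.internal.example:3128"
export HTTPS_PROXY="http://proxy.internal.example:3128"
# List every destination that must NOT be proxied (for example your
# Fleet Server and Elasticsearch hosts), so that in effect only
# artifacts.elastic.co goes through the proxy.
export NO_PROXY="localhost,127.0.0.1,fleet.internal.example,es.internal.example"
echo "HTTPS_PROXY=${HTTPS_PROXY}"
```

In practice these variables would be set in the agent process's environment rather than an interactive shell.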
- -*Option 2* - -As the upgrade URL is not customizable, we have to "trick" the system by pointing `https://artifacts.elastic.co/` to another host that will have the file. - -The following examples require a server in your air-gapped environment that will expose the key you will have downloaded from `https://artifacts.elastic.co/GPG-KEY-elastic-agent`. - -_Example 1: Manual_ - -Edit the {agent} server hosts file to add the following content: - -[source,sh] ----- - artifacts.elastic.co ----- - -The Linux hosts file path is `/etc/hosts`. - -The Windows hosts file path is `C:\Windows\System32\drivers\etc\hosts`. - -_Example 2: Puppet_ - -[source,puppet] ----- -host { 'elastic-artifacts': - ensure => 'present', - comment => 'Workaround for PGP check', - ip => '', -} ----- - -_Example 3: Ansible_ - -[source,yaml] ----- -- name: 'elastic-artifacts' - hosts: 'all' - become: 'yes' - - tasks: - - name: 'Add entry to /etc/hosts' - lineinfile: - path: '/etc/hosts' - line: ' artifacts.elastic.co' ----- - -==== - -[discrete] -[[breaking-changes-8.9.0]] -=== Breaking changes - -Breaking changes can prevent your application from optimal operation and -performance. Before you upgrade, review the breaking changes, then mitigate the -impact to your application. - -[discrete] -[[breaking-2890]] -.Status command has been changed. -[%collapsible] -==== -*Details* + -The {agent} `status` command has been changed so that the default human output now uses a list format and summarized output. - -*Impact* + -Full human output can be obtained with the new `full` option. -For more information, refer to {agent-pull}2890[#2890]. -==== - -[discrete] -[[breaking-2531]] -.API default error code is now 500. -[%collapsible] -==== -*Details* + -Previously, when {fleet-server} encountered an unexpected error it resulted in a `Bad Request` response. - -*Impact* + -Now, any unexpected error returns an `Internal Server Error` response while keeping most of the current behavior -unchanged.
On expected failure paths (for example, Agent Inactive, Missing Agent ID, Missing Auth Header), a `Bad Request` response is returned. For more information, refer to {fleet-server-pull}2531[#2531]. -==== - -[discrete] -[[breaking-ecs-hostname]] -.`host.name` field changed to ECS lowercase format. -[%collapsible] -==== -*Details* + -In {agent} output the `host.name` field has been changed to lowercase to match Elastic Common Schema (ECS) guidelines. The agent name is also reported in lowercase (`AGENT-name` becomes `agent-name`). - -*Impact* + -After upgrading {agent} to version 8.9.0 or higher, any case-sensitive searches may result in false-positive alerts. For example, a case-sensitive search based on the upper-case `AGENT-name` could result in an alert such as `system.load.1 reported no data in the last 5m for AGENT-name`. After upgrading, you may need to manually clear alerts and adjust some searches to match the new `host.name` format. - -==== - -[discrete] -[[new-features-8.9.0]] -=== New features - -The 8.9.0 release added the following new and notable features. - -{fleet}:: -* Adds CloudFormation install method to CSPM. {kibana-pull}159994[#159994] -* Adds flags to give permissions to write to any dataset and namespace. {kibana-pull}157897[#157897] -* Disables Agent ID verification for Observability projects. {kibana-pull}157400[#157400] -* Sets up `ignore_malformed` in {fleet}. {kibana-pull}157184[#157184] - -{fleet-server}:: -* A new `elastic-api` version header is added, allowing versioning of the {fleet-server} APIs. {fleet-server-pull}2677[#2677] -* Support delivery of user-uploaded files to integrations. {fleet-server-pull}2666[#2666] - -{agent}:: -* Add the logs subcommand to the agent CLI. {agent-pull}2752[#2745] {agent-issue}114[#114] -* Support upgrading to specific snapshots by specifying the build ID. {agent-pull}2752[#2752] - -[discrete] -[[enhancements-8.9.0]] -=== Enhancements - -{fleet}:: -* Adds agent integration health reporting in the Fleet UI.
{kibana-pull}158826[#158826] - -{fleet-server}:: -* Expose Prometheus metrics on the metrics listener (when enabled). Ship Prometheus metrics with apm.Tracer when the tracer is enabled. {fleet-server-pull}2610[#2610] - - -{agent}:: -* Add additional elements to support the Universal Profiling integration. {agent-pull}2881[#2881] - -[discrete] -[[bug-fixes-8.9.0]] -=== Bug fixes - -{fleet}:: -* Fixes a bug that prevented `index.mapping` settings from being propagated into component templates from default settings. {kibana-pull}157289[#157289] - -{fleet-server}:: -* Fixes a bug during {agent} upgrades where `action_seq_no` was overwritten with 0 if the `ackToken` was not provided. {fleet-server-pull}2582[#2582] -* Fixes an issue that caused {fleet-server} to go offline after reboot. {fleet-server-pull}2697[#2697] {fleet-server-pull}2431[#2431] - -{agent}:: -* Change monitoring socket to use a hash of the ID instead of the actual ID. {agent-pull}2912[#2912] -* Fix the drop processor for monitoring component logs to use the `component.id` instead of the dataset. {agent-pull}2982[#2982] {agent-issue}2388[#2388] -* Update Node version to 18.16.0. {agent-pull}2696[#2696] - -// end 8.9.0 relnotes - - -// --------------------- -//TEMPLATE -//Use the following text as a template. Remember to replace the version info. - -// begin 8.7.x relnotes - -//[[release-notes-8.7.x]] -//== {fleet} and {agent} 8.7.x - -//Review important information about the {fleet} and {agent} 8.7.x release. - -//[discrete] -//[[security-updates-8.7.x]] -//=== Security updates - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[breaking-changes-8.7.x]] -//=== Breaking changes - -//Breaking changes can prevent your application from optimal operation and -//performance. Before you upgrade, review the breaking changes, then mitigate the -//impact to your application.
- -//[discrete] -//[[breaking-PR#]] -//.Short description -//[%collapsible] -//==== -//*Details* + -// For more information, refer to {kibana-pull}PR[#PR]. - -//*Impact* + -// For more information, refer to {fleet-guide}/fleet-server.html[Fleet Server]. -//==== - -//[discrete] -//[[known-issues-8.7.x]] -//=== Known issues - -//[[known-issue-issue#]] -//.Short description -//[%collapsible] -//==== - -//*Details* - -// - -//*Impact* + - -// - -//==== - -//[discrete] -//[[deprecations-8.7.x]] -//=== Deprecations - -//The following functionality is deprecated in 8.7.x, and will be removed in -//8.7.x. Deprecated functionality does not have an immediate impact on your -//application, but we strongly recommend you make the necessary updates after you -//upgrade to 8.7.x. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[new-features-8.7.x]] -//=== New features - -//The 8.7.x release added the following new and notable features. - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[enhancements-8.7.x]] -//=== Enhancements - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -//[discrete] -//[[bug-fixes-8.7.x]] -//=== Bug fixes - -//{fleet}:: -//* add info - -//{agent}:: -//* add info - -// end 8.7.x relnotes diff --git a/docs/en/ingest-management/security/certificates-rotation.asciidoc b/docs/en/ingest-management/security/certificates-rotation.asciidoc deleted file mode 100644 index 04d786fd3..000000000 --- a/docs/en/ingest-management/security/certificates-rotation.asciidoc +++ /dev/null @@ -1,200 +0,0 @@ -[[certificates-rotation]] -= Rotate SSL/TLS CA certificates - -In some scenarios you may want to rotate your configured certificate authorities (CAs), for instance if your chosen CAs are due to expire.
Refer to the following steps to rotate certificates between connected components: - -* <> -* <> -* <> - -[discrete] -[[certificates-rotation-agent-fs]] -== Rotating a {fleet-server} CA - -{agent} communicates with {fleet-server} to receive policies and to check for updates. There are two methods to rotate a CA certificate on {fleet-server} for connections from {agent}. The first method requires {agent} to re-enroll with {fleet-server} one or more times. The second method avoids re-enrollment and requires overwriting the existing CA file with a new certificate. - -**Option 1: To renew an expiring CA certificate on {fleet-server} with {agent} re-enrollments** - -Using this method, the {agent} with an old or expiring CA configured will be re-enrolled with {fleet-server} using a new CA. - -. Update the {agent} with the new {fleet-server} CA: -+ -The {agent} should already have a CA configured. Use the <> command to re-enroll the agent with an updated, comma-separated set of CAs to use. -+ -[source,shell] ----- -elastic-agent enroll \ - --url= \ - --enrollment-token= \ - ... \ - --certificate-authorities ----- -+ -A new agent enrollment will cause a new agent to appear in {fleet}. This may be considered disruptive; however, the old agent entry will transition to an offline state. A new agent enrollment is required in order for the {fleet-server} configuration to be modified to accept multiple certificate authorities. -+ -At this point, all TLS connections are still relying on the original CA that was provided (`original_CA`) in order to authenticate {fleet-server} certificates. - -. Rotate the certificates on {fleet-server}: -+ -This procedure will reissue new certificates based on the new CA. Re-enroll {fleet-server} with all the new certificates: -+ -[source,shell] ----- -elastic-agent enroll ... - --url= \ - --enrollment-token= \ - ...
\ --fleet-server-cert --certificate-authorities ----- -+ -This will cause the TLS connections to the {agents} to reset and will load the relevant new CA and certificates to the {fleet-server} configuration. - -. The {agents} will automatically establish new TLS connections as part of their normal operation: -+ -The new CA (`new_CA`) on the agent installed in Step 1 will be used to authenticate the certificates used by {fleet-server}. -+ -Note that if the original CA (`original_CA`) was compromised, then it may need to be removed from the agent's CA list. To achieve this, you need to enroll the agent again: -+ -[source,shell] ----- -elastic-agent enroll ... - --url= \ - --enrollment-token= \ - ... \ - --certificate-authorities ----- - -**Option 2: To renew an expiring CA certificate on {fleet-server} without {agent} re-enrollments** - -Option 1 results in multiple {agent} enrollments. Another option to avoid multiple enrollments is to overwrite the CA files with the new CA or certificate. This method uses a single file with multiple CAs that can be replaced. - -To use this option it is assumed that: - -* {agent}s have already been enrolled using a file that contains the Certificate Authority: -+ -[source,shell] ----- -elastic-agent enroll ... - --url= \ - --enrollment-token= \ - ... \ - --certificate-authorities= ----- - -* The {agent} running {fleet-server} has already been enrolled with the following secure connection options, where each option points to files that contain the certificates and keys: -+ -[source,shell] ----- -elastic-agent enroll ... - --url= \ - --enrollment-token= \ - ... \ - --certificate-authorities= \ - --fleet-server-cert= \ - --fleet-server-cert-key= ----- - -To update the {agent} and {fleet-server} configurations: - -. Update the configuration with the new CA by changing the content of `CA.pem` to include the new CA. -+ -[source,shell] ----- -cat new_ca.pem >> CA.pem ----- - -. Restart the {agents}. Note that this is not a re-enrollment.
Restarting will force the {agents} to reload the CAs. -+ -[source,shell] ----- -elastic-agent restart ----- - -. For the {agent} that is running {fleet-server}, overwrite the original `certificate`, `certificate-key`, and the `certificate-authority` with the new ones to use. -+ -[source,shell] ----- -cat new-cert.pem > cert.pem -cat new-key.pem > key.pem -cat new_ca.pem > CA.pem ----- - -. Restart the {agent} that is running {fleet-server}. -+ -[source,shell] ----- -elastic-agent restart ----- - -. If the original certificate needs to be removed from the {agents}, overwrite the `CA.pem` with the new CA only: -+ -[source,shell] ----- -cat new_ca.pem > CA.pem ----- - -. Finally, restart the {agents} again. -+ -[source,shell] ----- -elastic-agent restart ----- - -[discrete] -[[certificates-rotation-fs-es]] -== Rotating an {es} CA for connections from {fleet-server} - - - -{fleet-server} communicates with {es} to send status information to {fleet} about {agent}s and to retrieve updated policies to ship out to all {agent}s enrolled in a given policy. If you have {fleet-server} <>, you may wish to rotate your configured CA certificate, for instance if the certificate is due to expire. - -To rotate a CA certificate on {es} for connections from {fleet-server}: - -. Update the {fleet-server} with the new {es} CA: -+ -The {agent} running {fleet-server} should already have a CA configured. Use the <> command to re-enroll the agent running {fleet-server} with an updated, comma-separated set of CAs to use. -+ -[source,shell] ----- -elastic-agent enroll \ - --fleet-server-es= \ - --fleet-server-service-token= \ - ... \ - --fleet-server-es-ca ----- -+ -A new agent enrollment will cause two {fleet-server} agents to appear in {fleet}. This may be considered disruptive; however, the old agent entry will transition to an offline state. A new agent enrollment is required in order for the {fleet-server} configuration to be modified to accept multiple certificate authorities.
-+ -At this point, all TLS connections are still relying on the original CA that was provided (`original_ES_CA`) in order to authenticate {es} certificates. Re-enrolling the {fleet-server} will cause the agents going through that {fleet-server} to also reset their TLS connections, but the connections will be re-established as required. - -. Rotate the certificates on {es}. -+ -{es} will use new certificates based on the new {es} CA. Since the {fleet-server} has the original and the new {es} CAs in a chain, it will accept original and new certificates from {es}. -+ -Note that if the original {es} CA (`original_ES_CA`) was compromised, then it may need to be removed from the {fleet-server}'s CA list. To achieve this, you need to enroll the {fleet-server} agent again (if re-enrollment is a concern, use a file to hold the certificates and certificate-authority): -+ -[source,shell] ----- -elastic-agent enroll \ - --fleet-server-es= \ - --fleet-server-service-token= \ - ... \ - --fleet-server-es-ca ----- - -[discrete] -[[certificates-rotation-agent-es]] -== Rotating an {es} CA for connections from {agent} - -Using configuration information from a policy delivered by {fleet-server}, {agent} collects data and sends it to {es}. - -To rotate a CA certificate on {es} for connections from {agent}: - -. In {fleet}, open the **Settings** tab. -. In the **Outputs** section, click the edit button for the {es} output that requires a certificate rotation. -. In the **Elasticsearch CA trusted fingerprint** field, add the new trusted fingerprint to use. This is the SHA-256 fingerprint (hash) of the certificate authority used to self-sign {es} certificates. This fingerprint will be used to verify self-signed certificates presented by {es}. -+ -If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally.
-+ -[role="screenshot"] -image::images/certificate-rotation-agent-es.png[Screen capture of the Edit Output UI: Elasticsearch CA trusted fingerprint] diff --git a/docs/en/ingest-management/security/certificates.asciidoc b/docs/en/ingest-management/security/certificates.asciidoc deleted file mode 100644 index 56dde3b56..000000000 --- a/docs/en/ingest-management/security/certificates.asciidoc +++ /dev/null @@ -1,334 +0,0 @@ -[[secure-connections]] -= Configure SSL/TLS for self-managed {fleet-server}s - -If you're running a self-managed cluster, configure Transport Layer Security -(TLS) to encrypt traffic between {agent}s, {fleet-server}, and other components -in the {stack}. - -For the install settings specific to mutual TLS, as opposed to one-way TLS, refer to <>. - -For a summary of flow by which TLS is established between components using -either one-way or mutual TLS, refer to <>. - -TIP: Our {ess-product}[hosted {ess}] on {ecloud} provides secure, encrypted -connections out of the box! - -[discrete] -[[prereqs]] -== Prerequisites - -Configure security and generate certificates for the {stack}. For more -information about securing the {stack}, refer to -{ref}/configuring-stack-security.html[Configure security for the {stack}]. - -[IMPORTANT] -==== -{agent}s require a PEM-formatted CA certificate to send encrypted data to {es}. -If you followed the steps in {ref}/configuring-stack-security.html[Configure -security for the {stack}], your certificate will be in a p12 file. To convert -it, use OpenSSL: - -[source,shell] ----- -openssl pkcs12 -in path.p12 -out cert.crt -clcerts -nokeys -openssl pkcs12 -in path.p12 -out private.key -nocerts -nodes ----- - -Key passwords are not currently supported. -==== - -IMPORTANT: When you run {agent} with the {elastic-defend} integration, the link:https://en.wikipedia.org/wiki/X.509[TLS certificates] used to connect to {fleet-server} and {es} need to be generated using link:https://en.wikipedia.org/wiki/RSA_(cryptosystem)[RSA]. 
For a full list of available algorithms to use when configuring TLS or mTLS, see <>. These settings are available for both standalone and {fleet}-managed {agent}. - -[discrete] -[[generate-fleet-server-certs]] -== Generate a custom certificate and private key for {fleet-server} - -This section describes how to use the `certutil` tool provided by {es}, but you -can use whatever process you typically use to generate PEM-formatted -certificates. - -. Generate a certificate authority (CA). Skip this step if you want to use an -existing CA. -+ --- -[source,shell] ----- -./bin/elasticsearch-certutil ca --pem ----- - -This command creates a zip file that contains the CA certificate and key you'll -use to sign the {fleet-server} certificate. Extract the zip file: - -image::images/ca.png[Screen capture of a folder called ca that contains two files: ca.crt and ca.key] - -Store the files in a secure location. --- - -. Use the certificate authority to generate certificates for {fleet-server}. -For example: -+ --- -[source,shell] ----- -./bin/elasticsearch-certutil cert \ - --name fleet-server \ - --ca-cert /path/to/ca/ca.crt \ - --ca-key /path/to/ca/ca.key \ - --dns your.host.name.here \ - --ip 192.0.2.1 \ - --pem ----- - -Where `dns` and `ip` specify the name and IP address of the {fleet-server}. Run -this command for each {fleet-server} you plan to deploy. - -This command creates a zip file that includes a `.crt` and `.key` -file. Extract the zip file: - -image::images/fleet-server-certs.png[Screen capture of a folder called fleet-server that contains two files: fleet-server.crt and fleet-server.key] - -Store the files in a secure location. You'll need these files later to encrypt -traffic between {agent}s and {fleet-server}. --- - -[discrete] -== Encrypt traffic between {agent}s, {fleet-server}, and {es} - -{fleet-server} needs a CA certificate or the CA fingerprint to connect securely to {es}. 
It also -needs to expose a {fleet-server} certificate so other {agent}s can connect to it -securely. - -For the steps in this section, imagine you have the following files: - -[cols=2*] -|=== - -|`ca.crt` -|The CA certificate to use to connect to {fleet-server}. This is the -CA used to <> -for {fleet-server}. - -|`fleet-server.crt` -|The certificate you generated for {fleet-server}. - -|`fleet-server.key` -|The private key you generated for {fleet-server}. - -If the `fleet-server.key` file is encrypted with a passphrase, the passphrase will need to be specified through a file. - -|`elasticsearch-ca.crt` -|The CA certificate to use to connect to {es}. This is the CA used to generate -certs for {es} (see <>). - -Note that the CA certificate's SHA-256 fingerprint (hash) may be used instead of the `elasticsearch-ca.crt` file for securing connections to {es}. - - -|=== - -To encrypt traffic between {agent}s, {fleet-server}, and {es}: - -. Configure {fleet} settings. These settings are applied to all {fleet}-managed -{agent}s. - -. In {kib}, open the main menu, then click *Management > {fleet} > Settings*. - -.. Under *{fleet-server} hosts*, specify the URLs {agent}s will use to connect to -{fleet-server}. For example, https://192.0.2.1:8220, where 192.0.2.1 is the host -IP where you will install {fleet-server}. -+ -TIP: For host settings, use the `https` protocol. DNS-based names are also -allowed. - -.. Under *Outputs*, search for the default output, then click the *Edit* icon in -the *Action* column. - -.. In the *Hosts* field, specify the {es} URLs where -{agent}s will send data. For example, https://192.0.2.0:9200. - -.. Specify either a CA certificate or CA fingerprint to connect securely to -{es}: - -// lint ignore elasticsearch -* If you have a valid HEX-encoded SHA-256 CA trusted fingerprint from the root CA, -specify it in the *Elasticsearch CA trusted fingerprint* field. To learn more, refer to the -{ref}/configuring-stack-security.html[{es} security documentation].
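If you go the fingerprint route, the HEX-encoded SHA-256 fingerprint can be derived from the CA certificate with OpenSSL. The sketch below creates a throwaway self-signed CA purely so the command has something to run against; in practice you would point `-in` at your real `elasticsearch-ca.crt`:

```shell
# Demo only: create a throwaway CA certificate to fingerprint.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout demo-ca.key -out demo-ca.crt

# Compute the SHA-256 fingerprint and strip the colons to get the
# plain HEX string for the Kibana field.
fingerprint=$(openssl x509 -fingerprint -sha256 -noout -in demo-ca.crt \
  | cut -d= -f2 | tr -d ':')
echo "$fingerprint"
```

The colon-stripped 64-character HEX form is what is typically pasted into the fingerprint field.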
- -* Otherwise, under *Advanced YAML configuration*, set -`ssl.certificate_authorities` and specify the CA certificate to use to connect -to {es}. You can specify a list of file paths (if the files are available), or -embed a certificate directly in the YAML configuration. If you specify file -paths, the certificates must be available on the hosts running the {agent}s. -+ -File path example: -+ --- -[source,yaml] ----- -ssl.certificate_authorities: ["/path/to/your/elasticsearch-ca.crt"] <1> ----- -<1> The path to the CA certificate on the {agent} host. - -Pasted certificate example: - -[source,yaml] ----- -ssl: - certificate_authorities: - - | - -----BEGIN CERTIFICATE----- - MIIDSjCCAjKgAwIBAgIVAKlphSqJclcni3P83gVsirxzuDuwMA0GCSqGSIb3DQEB - CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu - ZXJhdGVkIENBMB4XDTIxMDYxNzAxMzIyOVoXDTI0MDYxNjAxMzIyOVowNDEyMDAG - A1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5lcmF0ZWQgQ0Ew - ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOFgtVri7Msy2iR33nLrVO - /M/6IyF72kFXup1E67TzetI22avOxNlq+HZTpZoWGV1I4RgxiQeN12FLuxxhd9nm - rxfZEqpuIjvo6fvU9ifC03WjXg1opgdEb6JqH93RHKw0PYimxhQfFcwrKxFseHUx - DeUNQgHkMQhDZgIfNgr9H/1X6qSU4h4LemyobKY3HDKY6pGsuBzsF4iOCtIitE9p - sagiWR21l1gW/lNaEW2ICKhJXbaqbE/pis45/yyPI4Q1Jd1VqZv744ejnZJnpAx9 - mYSE5RqssMeV6Wlmu1xWljOPeerOVIKUfHY38y8GZwk7TNYAMajratG2dj+v9eAV - AgMBAAGjUzBRMB0GA1UdDgQWBBSCNCjkb66eVsIaa+AouwUsxU4b6zAfBgNVHSME - GDAWgBSCNCjkb66eVsIaa+AouwUsxU4b6zAPBgNVHRMBAf8EBTADAQH/MA0GCSqG - SIb3DQEBCwUAA4IBAQBVSbRObxPwYFk0nqF+THQDG/JfpAP/R6g+tagFIBkATLTu - zeZ6oJggWNSfgcBviTpXc6i1AT3V3iqzq9KZ5rfm9ckeJmjBd9gAcyqaeF/YpWEb - ZAtbxfgPLI3jK+Sn8S9fI/4djEUl6F/kARpq5ljYHt9BKlBDyL2sHymQcrDC3pTZ - hEOM4cDbyKHgt/rjcNhPRn/q8g3dDhBdzjlNzaCNH/kmqWpot9AwmhhfPTcf1VRc - gxdg0CTQvQvuceEvIYYYVGh/cIsIhV2AyiNBzV5jJw5ztQoVyWvdqn3B1YpMP8oK - +nadUcactH4gbsX+oXRULNC7Cdd9bp2G7sQc+aZm - -----END CERTIFICATE----- ----- --- - -. Install an {agent} as a {fleet-server} on the host and configure it to use TLS: - -.. 
If you don't already have a {fleet-server} service token, click the *Agents* -tab in {fleet} and follow the instructions to generate the service token now. -+ -TIP: The in-product installation steps are incomplete. Before running the -`install` command, add the settings shown in the next step. - -.. From the directory where you extracted {fleet-server}, run the `install` -command and specify the certificates to use. -+ --- -The following command installs {agent} as a service, enrolls it in the -{fleet-server} policy, and starts the service. - -NOTE: If you're using DEB or RPM, or already have the {agent} installed, use the -`enroll` command along with the following options, then start the service as -described in <>. - -[source,shell] ----- -sudo ./elastic-agent install \ - --url=https://192.0.2.1:8220 \ - --fleet-server-es=https://192.0.2.0:9200 \ - --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \ - --fleet-server-policy=fleet-server-policy \ - --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \ - --certificate-authorities=/path/to/ca.crt \ - --fleet-server-cert=/path/to/fleet-server.crt \ - --fleet-server-cert-key=/path/to/fleet-server.key \ - --fleet-server-port=8220 \ - --elastic-agent-cert=/tmp/fleet-server.crt \ - --elastic-agent-cert-key=/tmp/fleet-server.key \ - --elastic-agent-cert-key-passphrase=/tmp/fleet-server/passphrase-file \ - --fleet-server-es-cert=/tmp/fleet-server.crt \ - --fleet-server-es-cert-key=/tmp/fleet-server.key \ - --fleet-server-client-auth=required ----- - -Where: - -`url`:: -{fleet-server} URL. -`fleet-server-es`:: -{es} URL -`fleet-server-service-token`:: -Service token to use to communicate with {es}. -`fleet-server-policy`:: -The specific policy that {fleet-server} will use. -`fleet-server-es-ca`:: -CA certificate that the current {fleet-server} uses to connect to {es}. 
-`certificate-authorities`:: -List of paths to PEM-encoded CA certificate files that should be trusted -for the other {agents} to connect to this {fleet-server}. -`fleet-server-cert`:: -The path for the PEM-encoded certificate (or certificate chain) -which is associated with the `fleet-server-cert-key` to expose this {fleet-server} HTTPS endpoint -to the other {agents}. -`fleet-server-cert-key`:: -Private key to use to expose this {fleet-server} HTTPS endpoint -to the other {agents}. - -`elastic-agent-cert`:: -The certificate to use as the client certificate for {agent}'s connections to {fleet-server}. -`elastic-agent-cert-key`:: -The path to the private key to use for {agent}'s connections to {fleet-server}. -`elastic-agent-cert-key-passphrase`:: -The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}. -The file must only contain the characters of the passphrase, no newline or extra non-printing characters. -This option is only used if the `elastic-agent-cert-key` is encrypted and requires a passphrase to use. -`fleet-server-es-cert`:: -The path to the client certificate that {fleet-server} will use when connecting to {es}. -`fleet-server-es-cert-key`:: -The path to the private key that {fleet-server} will use when connecting to {es}. -`fleet-server-client-auth`:: -One of `none`, `optional`, or `required`. Defaults to `none`. {fleet-server}'s client_authentication option for client mTLS connections. If `optional` or `required` is specified, client certificates are verified using CAs specified in the `--certificate-authorities` flag. - -Note that an optional passphrase for the private key may additionally be specified with: - -`fleet-server-cert-key-passphrase`:: -Passphrase file used to decrypt {fleet-server}'s private key. - -.What happens if you enroll {fleet-server} without specifying certificates?
-**** - -If the certificates are managed by your organization and installed at the system -level, they will be used to encrypt traffic between {agent}s, {fleet-server}, -and {es}. - -If system-level certificates don't exist, {fleet-server} automatically generates -self-signed certificates. Traffic between {fleet-server} and {agent}s over -HTTPS is encrypted, but the certificate chain cannot be verified. Any {agent}s -enrolling in {fleet-server} will need to pass the `--insecure` flag to -acknowledge that the certificate chain is not verified. - -Allowing {fleet-server} to generate self-signed certificates is useful to get -things running for development, but not recommended in a production environment. -**** --- - -. Install your {agent}s and enroll them in {fleet}. -+ --- -{agent}s connecting to a secured {fleet-server} need to pass in the CA -certificate used by the {fleet-server}. The CA certificate used by {es} is -already specified in the agent policy because it's set under {fleet} settings in -{kib}. You do not need to pass it on the command line. - -The following command installs {agent} as a service, enrolls it -in the agent policy associated with the specified token, and starts the service. - -[source,shell] ----- -sudo elastic-agent install --url=https://192.0.2.1:8220 \ - --enrollment-token= \ - --certificate-authorities=/path/to/ca.crt ----- - -Where: - -`url`:: -{fleet-server} URL to use to enroll the {agent} into {fleet}. -`enrollment-token`:: -The enrollment token for the policy that will be applied to the {agent}. -`certificate-authorities`:: -CA certificate to use to connect to {fleet-server}. This is the -CA used to <> -for {fleet-server}. - -// lint ignore elastic-agent -Don't have an enrollment token? On the *Agents* tab in {fleet}, click *Add agent*. -Under *Enroll and start the Elastic Agent*, follow the in-product installation steps, making sure -that you add the `--certificate-authorities` option before you run the command. 
--- diff --git a/docs/en/ingest-management/security/enrollment-tokens.asciidoc b/docs/en/ingest-management/security/enrollment-tokens.asciidoc deleted file mode 100644 index f9c41d1b3..000000000 --- a/docs/en/ingest-management/security/enrollment-tokens.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[[fleet-enrollment-tokens]] -= {fleet} enrollment tokens - -A {fleet} enrollment token (referred to as an `enrollment API key` in the {fleet} API documentation) -is an {es} API key that you use to enroll one or more {agent}s in {fleet}. -The enrollment token enrolls the {agent} in a specific -agent policy that defines the data to be collected by the agent. You can -use the token as many times as required. It will remain valid until you revoke -it. - -The enrollment token is used for the initial communication between {agent} and -{fleet-server}. After the initial connection request from the {agent}, -the {fleet-server} passes two API keys to the {agent}: - -* An output API key -+ -This API key is used to send data to {es}. It has the minimal permissions needed -to ingest all the data specified by the agent policy. If the API key is invalid, -the {agent} stops ingesting data into {es}. - -* A communication API key -+ -This API key is used to communicate with the {fleet-server}. It has only the -permissions needed to communicate with the {fleet-server}. If the API key is -invalid, {fleet-server} stops communicating with the {agent}. - -[discrete] -[[create-fleet-enrollment-tokens]] -== Create enrollment tokens - -Create enrollment tokens and use them to enroll {agent}s in specific policies. - -TIP: When you use the {fleet} UI to add an agent or create a new policy, {fleet} -creates an enrollment token for you automatically. - -To create an enrollment token: - -. In {kib}, go to **Management -> {fleet} -> Enrollment tokens**. - -. Click **Create enrollment token**. Name your token and select an agent policy. 
-+ -Note that the token name you specify must be unique so as to avoid conflict with any existing API keys. -+ -[role="screenshot"] -image::images/create-token.png[Enrollment tokens tab in {fleet}] - -. Click **Create enrollment token**. - -. In the list of tokens, click the **Show token** icon to see the token secret. -+ -[role="screenshot"] -image::images/show-token.png[Enrollment tokens tab with Show token icon highlighted] - -All {agent}s enrolled through this token will use the selected policy unless you -assign or enroll them in a different policy. - -To learn how to install {agent}s and enroll them in {fleet}, refer to -<>. - -TIP: You can use the {fleet} API to get a list of enrollment tokens. For more -information, refer to <>. - -[discrete] -[[revoke-fleet-enrollment-tokens]] -== Revoke enrollment tokens - -You can revoke an enrollment token that you no longer wish to use to enroll {agents} in an agent policy in {fleet}. -Revoking an enrollment token essentially invalidates the API key used by agents to communicate with {fleet-server}. - -To revoke an enrollment token: - -. In {fleet}, click **Enrollment tokens**. - -. Find the token you want to revoke in the list and click the **Revoke token** -icon. -+ -[role="screenshot"] -image::images/revoke-token.png[Enrollment tokens tab with Revoke token highlighted] - -. Click **Revoke enrollment token**. You can no longer use this token to enroll -{agent}s. However, the currently enrolled agents will continue to function. -+ -To re-enroll your {agent}s, use an active enrollment token. - -Note that when an enrollment token is revoked it is not immediately deleted. -Deletion occurs automatically after the duration specified in the {es} -{ref}/security-settings.html#api-key-service-settings-delete-retention-period[`xpack.security.authc.api_key.delete.retention_period`] setting has expired (see {ref}/security-api-invalidate-api-key.html[Invalidate API key API] for details). 
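Until that retention period elapses, revoked tokens still show up in API listings. As a sketch of what that looks like, here is a hypothetical, heavily abbreviated response body from `GET /api/fleet/enrollment_api_keys` (field names and values invented for illustration, apart from the `active` flag the docs describe), with a quick way to pick out the revoked entries:

```shell
# Hypothetical abbreviated response saved to a file; a real call would
# query the Kibana Fleet API endpoint shown above.
cat > resp.json <<'EOF'
{"items": [
  {"id": "tok-1", "name": "current-token", "active": true},
  {"id": "tok-2", "name": "revoked-token", "active": false}
]}
EOF

# Revoked-but-not-yet-deleted enrollment tokens are the entries
# reported with "active": false.
grep '"active": false' resp.json
```

A JSON-aware tool such as `jq` would be the more robust choice in practice; plain `grep` is used here only to keep the sketch dependency-free.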
- -Until the enrollment token has been deleted: - -* The token name may not be re-used when you <>. -* The token continues to be visible in the {fleet} UI. -* The token continues to be returned by a `GET /api/fleet/enrollment_api_keys` API request. -Revoked enrollment tokens are identified as `"active": false`. diff --git a/docs/en/ingest-management/security/fleet-roles-and-privileges.asciidoc b/docs/en/ingest-management/security/fleet-roles-and-privileges.asciidoc deleted file mode 100644 index dd1b460fb..000000000 --- a/docs/en/ingest-management/security/fleet-roles-and-privileges.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[fleet-roles-and-privileges]] -= Required roles and privileges - -Beginning with {stack} version 8.1, you no longer require the built-in `elastic` superuser credentials to use {fleet} and Integrations. - -Assigning the {kib} feature privileges `Fleet` and `Integrations` grants access to these features: - -`all`:: Grants full read-write access. -`read`:: Grants read-only access. -`none`:: No access is granted. - -Take advantage of these privilege settings by: - -* <> -* <> - -[discrete] -[[fleet-roles-and-privileges-built-in]] -== Built-in roles - -{es} comes with built-in roles that include default privileges. - -`editor`:: -The built-in `editor` role grants the following privileges, supporting full read-write access to {fleet} and Integrations: - -* {Fleet}: `all` -* Integrations: `all` - -`viewer`:: -The built-in `viewer` role grants the following privileges, supporting read-only access to {fleet} and Integrations: - -* {Fleet}: `read` -* Integrations: `read` - -You can also create a new role that can be assigned to a user, in order to grant more specific levels of access to {fleet} and Integrations. - -[discrete] -[[fleet-roles-and-privileges-create]] -== Create a role for {fleet} - -To create a new role with access to {fleet} and Integrations: - -. In {kib}, go to **Management -> Stack Management**. -. 
In the **Security** section, select **Roles**.
-. Select **Create role**.
-. Specify a name for the role.
-. Leave the {es} settings at their defaults, or refer to {ref}/security-privileges.html[Security privileges] for descriptions of the available settings.
-. In the {kib} section, select **Assign to space**.
-. In the **Spaces** menu, select **All Spaces**. Since many Integrations assets are shared across spaces, users need the {kib} privileges in all spaces.
-. Expand the **Management** section.
-. Choose the access level that you'd like the role to have with respect to {fleet} and integrations:
-
-.. To grant the role full access to use and manage {fleet} and integrations, set both the **Fleet** and **Integrations** privileges to `All`.
-+
-[role="screenshot"]
-image::images/kibana-fleet-privileges-all.png[Kibana privileges flyout showing Fleet and Integrations set to All]
-
-.. Similarly, to create a read-only user for {fleet} and Integrations, set both the **Fleet** and **Integrations** privileges to `Read`.
-+
-[role="screenshot"]
-image::images/kibana-fleet-privileges-read.png[Kibana privileges flyout showing Fleet and Integrations set to Read]
-
-Once you've created a new role, you can assign it to any {es} user. You can edit the role at any time by returning to the **Roles** page in {kib}.
diff --git a/docs/en/ingest-management/security/generate-certificates.asciidoc b/docs/en/ingest-management/security/generate-certificates.asciidoc
deleted file mode 100644
index 3e0835f54..000000000
--- a/docs/en/ingest-management/security/generate-certificates.asciidoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[[secure]]
-= Secure {agent} connections
-
-++++
-Secure connections
-++++
-
-Some connections may require you to generate certificates and configure SSL/TLS.
-
-* <>
-* <>
-* <>
\ No newline at end of file
diff --git a/docs/en/ingest-management/security/images/certificate-rotation-agent-es.png b/docs/en/ingest-management/security/images/certificate-rotation-agent-es.png
deleted file mode 100644
index 9a7ff1f1a..000000000
Binary files a/docs/en/ingest-management/security/images/certificate-rotation-agent-es.png and /dev/null differ
diff --git a/docs/en/ingest-management/security/logstash-certificates.asciidoc b/docs/en/ingest-management/security/logstash-certificates.asciidoc
deleted file mode 100644
index 92d59bfbe..000000000
--- a/docs/en/ingest-management/security/logstash-certificates.asciidoc
+++ /dev/null
@@ -1,273 +0,0 @@
-[[secure-logstash-connections]]
-= Configure SSL/TLS for the {ls} output
-
-To send data from {agent} to {ls} securely, you need to configure Transport
-Layer Security (TLS). Using TLS ensures that your {agent}s send encrypted data
-to trusted {ls} servers, and that your {ls} servers receive data from trusted
-{agent} clients.
-
-[discrete]
-[[secure-logstash-prereqs]]
-== Prerequisites
-
-* Make sure your https://www.elastic.co/subscriptions[subscription level]
-supports output to {ls}.
-
-* On Windows, add port 8220 for {fleet-server} and 5044 for {ls} to the
-inbound port rules in Windows Advanced Firewall.
-
-* If you are connecting to a self-managed {es} cluster, you need the CA
-certificate that was used to sign the certificates for the HTTP layer of the
-{es} cluster. For more information, refer to the
-{ref}/configuring-stack-security.html[{es} security docs].
-
-[discrete]
-[[generate-logstash-certs]]
-== Generate custom certificates and private keys
-
-You can use whatever process you typically use to generate PEM-formatted
-certificates. The examples shown here use the `certutil` tool provided by {es}.
-
-TIP: The `certutil` tool is not available on {ecloud}, but you can still use it
-to generate certificates for {agent} to {ls} connections.
Just -https://www.elastic.co/downloads/elasticsearch[download an {es} package], -extract it to a local directory, and run the `elasticsearch-certutil` command. -There's no need to start {es}! - -. Generate a certificate authority (CA). Skip this step if you want to use an -existing CA. -+ --- -[source,shell] ----- -./bin/elasticsearch-certutil ca --pem ----- - -This command creates a zip file that contains the CA certificate and key you'll -use to sign the certificates. Extract the zip file: - -image::images/ca-certs.png[Screen capture of a folder called ca that contains two files: ca.crt and ca.key] --- - -. Generate a client SSL certificate signed by your CA. For example: -+ --- -[source,shell] ----- -./bin/elasticsearch-certutil cert \ - --name client \ - --ca-cert /path/to/ca/ca.crt \ - --ca-key /path/to/ca/ca.key \ - --pem ----- - -Extract the zip file: - -image::images/client-certs.png[Screen capture of a folder called client that contains two files: client.crt and client.key] --- - -. Generate a {ls} SSL certificate signed by your CA. For example: -+ --- -[source,shell] ----- -./bin/elasticsearch-certutil cert \ - --name logstash \ - --ca-cert /path/to/ca/ca.crt \ - --ca-key /path/to/ca/ca.key \ - --dns your.host.name.here \ - --ip 192.0.2.1 \ - --pem ----- - - -Extract the zip file: - -image::images/logstash-certs.png[Screen capture of a folder called logstash that contains two files: logstash.crt and logstash.key] --- - -. Convert the {ls} key to pkcs8. For example, on Linux run: -+ -[source,shell] ----- -openssl pkcs8 -inform PEM -in logstash.key -topk8 -nocrypt -outform PEM -out logstash.pkcs8.key ----- - -Store these files in a secure location. - -[discrete] -[[configure-ls-ssl]] -== Configure the {ls} pipeline - -TIP: If you've already created the {ls} `elastic-agent-pipeline.conf` pipeline -and added it to `pipelines.yml`, skip to the example configurations and modify -your pipeline configuration as needed. 
-
-In your {ls} configuration directory, open the `pipelines.yml` file and
-add the following configuration. Replace the path to your file.
-
-[source,yaml]
-----
-- pipeline.id: elastic-agent-pipeline
-  path.config: "/etc/path/to/elastic-agent-pipeline.conf"
-----
-
-In the `elastic-agent-pipeline.conf` file, add the pipeline configuration. Note
-that the configuration needed for {ess} on {ecloud} is different from
-self-managed {es} clusters. If you copied the configuration shown in {fleet},
-adjust it as needed.
-
-{ess} example:
-
-[source,text]
-----
-input {
-  elastic_agent {
-    port => 5044
-    ssl_enabled => true
-    ssl_certificate_authorities => ["/path/to/ca.crt"]
-    ssl_certificate => "/path/to/logstash.crt"
-    ssl_key => "/path/to/logstash.pkcs8.key"
-    ssl_client_authentication => "required"
-  }
-}
-
-output {
-  elasticsearch {
-    cloud_id => "xxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx=" <1>
-    api_key => "xxxx:xxxx" <2>
-    data_stream => true
-    ssl => true <3>
-  }
-}
-----
-<1> Use the `cloud_id` shown on your deployment page in {ecloud}.
-<2> In {fleet}, you can generate this API key when you add a {ls} output.
-<3> {ess} uses standard publicly trusted certificates, so there's no need to
-specify other SSL settings here.
-
-Self-managed {es} cluster example:
-
-[source,text]
-----
-input {
-  elastic_agent {
-    port => 5044
-    ssl_enabled => true
-    ssl_certificate_authorities => ["/path/to/ca.crt"]
-    ssl_certificate => "/path/to/logstash.crt"
-    ssl_key => "/path/to/logstash.pkcs8.key"
-    ssl_client_authentication => "required"
-  }
-}
-
-output {
-  elasticsearch {
-    hosts => "https://xxxx:9200"
-    api_key => "xxxx:xxxx"
-    data_stream => true
-    ssl => true
-    cacert => "/path/to/http_ca.crt" <1>
-  }
-}
-----
-<1> Use the certificate that was generated for {es}.
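-
-Before starting {ls} with the new pipeline, you can optionally check the
-pipeline file for syntax errors. The following is a sketch, not part of the
-required procedure; it assumes you run it from the {ls} installation directory
-and that the pipeline path matches the one in your `pipelines.yml`:
-
-[source,shell]
-----
-bin/logstash -f /etc/path/to/elastic-agent-pipeline.conf --config.test_and_exit
-----
-
-If the configuration is valid, {ls} reports a successful configuration
-validation result and exits.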
-
-To learn more about the {ls} configuration, refer to:
-
-* {logstash-ref}/plugins-inputs-elastic_agent.html[{agent} input plugin]
-* {logstash-ref}/plugins-outputs-elasticsearch.html[{es} output plugin]
-* {logstash-ref}/ls-security.html[Secure your connection to {es}]
-
-When you're done configuring the pipeline, restart {ls}:
-
-[source,shell]
-----
-bin/logstash
-----
-
-[discrete]
-[[add-ls-output]]
-== Add a {ls} output to {fleet}
-
-This section describes how to add a {ls} output and configure SSL settings
-in {fleet}. If you're running {agent} standalone, refer to the
-<> configuration docs.
-
-// lint disable logstash
-. In {kib}, go to *{fleet} > Settings*.
-
-. Under *Outputs*, click *Add output*. If you've been following the {ls} steps
-in {fleet}, you might already be on this page.
-
-. Specify a name for the output.
-
-. For *Type*, select *Logstash*.
-
-. Under *Logstash hosts*, specify the host and port your agents will use to
-connect to {ls}. Use the format `host:port`.
-
-. In the *Server SSL certificate authorities* field, paste in the entire
-contents of the `ca.crt` file you <>.
-
-. In the *Client SSL certificate* field, paste in the entire contents of the
-`client.crt` file you generated earlier.
-
-. In the *Client SSL certificate key* field, paste in the entire contents of the
-`client.key` file you generated earlier.
-
-[role="screenshot"]
-image::images/add-logstash-output.png[Screen capture of the {ls} output settings in {fleet}]
-// lint enable logstash
-
-When you're done, save and apply the settings.
-
-[discrete]
-[[use-ls-output]]
-== Select the {ls} output in an agent policy
-
-{ls} is now listening for events from {agent}, but events are not streaming into
-{es} yet. You need to select the {ls} output in an agent policy. You can edit
-an existing policy or create a new one:
-
-. 
In {kib}, go to *{fleet} > Agent policies* and either create a new agent policy
-or click an existing policy to edit it:
-+
-* To change the output settings in a new policy, click *Create agent policy*
-and expand *Advanced options*.
-* To change the output settings in an existing policy, click the policy to edit
-it, then click *Settings*.
-
-. Set *Output for integrations* and (optionally) *Output for agent monitoring*
-to use the {ls} output you created earlier. You might need to scroll down to see
-these options.
-+
-[role="screenshot"]
-image::images/agent-output-settings.png[Screen capture showing the {ls} output policy selected in an agent policy]
-
-. Save your changes.
-
-Any {agent}s enrolled in the agent policy will begin sending data to {es} via
-{ls}. If you don't have any installed {agent}s enrolled in the agent policy, do
-that now.
-
-There might be a slight delay while the {agent}s update to the new policy
-and connect to {ls} over a secure connection.
-
-[discrete]
-[[test-ls-connection]]
-== Test the connection
-
-To make sure {ls} is sending data, run the following command from the host where
-{ls} is running:
-
-[source,shell]
-----
-curl -XGET localhost:9600/_node/stats/events
-----
-
-The request should return stats on the number of events in and out. If these
-values are 0, check the {agent} logs for problems.
-
-When data is streaming to {es}, go to *{observability}* and click
-*Metrics* to view metrics about your system.
diff --git a/docs/en/ingest-management/security/mutual-tls.asciidoc b/docs/en/ingest-management/security/mutual-tls.asciidoc
deleted file mode 100644
index acd276a5a..000000000
--- a/docs/en/ingest-management/security/mutual-tls.asciidoc
+++ /dev/null
@@ -1,249 +0,0 @@
-[[mutual-tls]]
-= {agent} deployment models with mutual TLS
-
-Mutual Transport Layer Security (mTLS) provides a higher level of security and trust compared to one-way TLS, where only the server presents a certificate.
It ensures not only that the server is who it claims to be, but also that the client is authenticated. This is particularly valuable in scenarios where both parties need to establish trust and validate each other's identities, such as in secure API communication, web services, or remote authentication.
-
-For a summary of the flow by which TLS is established between components, using either one-way or mutual TLS, refer to <>.
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[discrete]
-[[mutual-tls-overview]]
-== Overview
-
-With mutual TLS, the following authentication and certificate verification occurs:
-
-* **Client Authentication**: The client presents its digital certificate to the server during the TLS handshake. This certificate is issued by a trusted Certificate Authority (CA) and contains the client's public key.
-* **Server Authentication**: The server also presents its digital certificate to the client, proving its identity and sharing its public key. The server's certificate is also issued by a trusted CA.
-* **Certificate Verification**: Both the client and server verify each other's certificates by checking the digital signatures against the CAs' public keys (note that the client and server need not use the same CA).
-
-{fleet}-managed {agent} has two main connections to ensure correct operations:
-
-* Connectivity to {fleet-server} (the control plane, to check in, download policies, and similar).
-* Connectivity to an output (the data plane, such as {es} or {ls}).
-
-To bootstrap, {agent} must initially establish a secure connection to the {fleet-server}, which can reside on-premises or in {ecloud}. This connectivity verification process ensures the agent's authenticity. Once verified, the agent receives the policy configuration.
This policy download equips the agent with the knowledge of the other components it needs to engage with. For instance, it gains insights into the output destinations it should write data to. - -//If mutual TLS (mTLS) is a requirement, {agent} must first establish an mTLS connection with {fleet-server}, with both client and server exchanging certificates and validating one another. Once the policy configuration is in place, it possesses the necessary details to establish an mTLS connection with the specific output it's configured to use. In the case of {fleet}-managed {agents}, certificates and certificate authorities essential for client-server authentication are configured through the {fleet} application in the {kib} user interface. As previously mentioned, the initial step involves establishing connectivity between {agent} and the {fleet-server}, allowing the subsequent configuration to take effect. - -//To facilitate the bootstrapping process and enable {agent} to establish an mTLS connection with {fleet-server}, all certificates and certificate authorities are configured using command-line parameters during the agent installation. Once the mTLS connection between {agent} and the {fleet-server} is established, the policy configuration enables the establishment of the mTLS connection between {agent} and the designated output as well. - -When mTLS is required, the secure setup between {agent}, {fleet}, and {fleet-server} is configured through the following steps: - -. mTLS is enabled. -. The initial mTLS connection between {agent} and {fleet-server} is configured when {agent} is enrolled, using the parameters passed through the `elastic-agent install` or `elastic-agent enroll` command. -. Once enrollment has completed, {agent} downloads the initial {agent} policy from {fleet-server}. -.. 
If the {agent} policy contains mTLS configuration settings, those settings will take precedence over those used during enrollment: this includes both the mTLS settings used for connectivity between {agent} and {fleet-server} (and the {fleet} application in {kib}, for {fleet}-managed {agent}), and the settings used between {agent} and its specified output.
-.. If the {agent} policy does not contain any TLS, mTLS, or proxy configuration settings, these settings will remain as they were specified when {agent} enrolled. Note that the initial TLS, mTLS, or proxy configuration settings cannot be removed through the {agent} policy; they can only be updated.
-
-IMPORTANT: When you run {agent} with the {elastic-defend} integration, the link:https://en.wikipedia.org/wiki/X.509[TLS certificates] used to connect to {fleet-server} and {es} need to be generated using link:https://en.wikipedia.org/wiki/RSA_(cryptosystem)[RSA]. For a full list of available algorithms to use when configuring TLS or mTLS, see <>. These settings are available for both standalone and {fleet}-managed {agent}.
-
-[discrete]
-[[mutual-tls-on-premise]]
-== On-premises deployments
-
-image::images/mutual-tls-on-prem.png[Diagram of mutual TLS on premise deployment model]
-
-Refer to the steps in <>. To configure mutual TLS, include the following additional parameters when you install {agent} and {fleet-server}.
-
-[discrete]
-=== {agent} settings
-During on-premises {agent} installation, use the following options:
-
-[cols="1,1"]
-|===
-|`--certificate-authorities`
-|List of CA certificates that are trusted when {fleet-server} connects to {agent}
-
-|`--elastic-agent-cert`
-|{agent} certificate to present to {fleet-server} during authentication
-
-|`--elastic-agent-cert-key`
-|{agent} certificate key to present to {fleet-server}
-
-|`--elastic-agent-cert-key-passphrase`
-|The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}
-|===
-
-[discrete]
-=== {fleet-server} settings
-During on-premises {fleet-server} installation, {fleet-server} authenticates with {es} and {agents}. You can use the following CLI options to facilitate these secure connections:
-
-[cols="1,1"]
-|===
-|`--fleet-server-es-ca`
-|CA to use for the {es} connection
-
-|`--fleet-server-es-cert`
-|{fleet-server} certificate to present to {es}
-
-|`--fleet-server-es-cert-key`
-|{fleet-server} certificate key to present to {es}
-
-|`--certificate-authorities`
-|List of CA certificates that are trusted when {agent} connects to {fleet-server} and when {fleet-server} validates the {agent} identity.
-
-|`--fleet-server-cert`
-|{fleet-server} certificate to present to {agents} during authentication
-
-|`--fleet-server-cert-key`
-|{fleet-server}'s private certificate key used to decrypt the certificate
-|===
-
-[discrete]
-=== {fleet} settings
-
-In {kib}, navigate to {fleet}, open the **Settings** tab, and choose the **Output** that you'd like to configure.
-In the **Advanced YAML configuration**, add the following settings:
-
-[cols="1,1"]
-|===
-|`ssl.certificate_authorities`
-|List of CA certificates that are trusted when {fleet-server} connects to {agent}
-
-|`ssl.certificate`
-|This certificate will be passed down to all the agents that have this output configured in their policy.
This certificate is used by the agent when establishing mTLS to the output.
-
-You can either supply the full certificate, in which case all agents get the same certificate, or point to a local directory on the agent where the certificate resides, if the certificates are to be unique per agent.
-
-|`ssl.key`
-|This certificate key will be passed down to all the agents that have this output configured in their policy. The certificate key is used to decrypt the SSL certificate.
-
-|===
-
-[IMPORTANT]
-====
-Note the following when you specify these SSL settings:
-
-* The certificate authority, certificate, and certificate key need to be specified as a path to a local file. You cannot specify a directory.
-* You can define multiple CAs or paths to CAs.
-* Only one certificate and certificate key can be defined.
-====
-
-In the *Advanced YAML configuration*, these settings should be added in the following format:
-
-[source,shell]
-----
-ssl.certificate_authorities:
-  - /path/to/ca
-ssl.certificate: /path/to/cert
-ssl.key: /path/to/cert_key
-----
-
-image::images/mutual-tls-onprem-advanced-yaml.png[Screen capture of output advanced yaml settings]
-
-[discrete]
-[[mutual-tls-cloud]]
-== {fleet-server} on {ecloud}
-
-In this deployment model, all traffic entering {ecloud} has its TLS connection terminated at the {ecloud} boundary. Since this termination is not handled on a per-tenant basis, a client-specific certificate cannot be used at this point.
-
-image::images/mutual-tls-cloud.png[Diagram of mutual TLS on cloud deployment model]
-
-We currently don't support mTLS in this deployment model. An alternative deployment model, shown below, lets you deploy your own secure proxy where TLS connections are terminated.
-
-[discrete]
-[[mutual-tls-cloud-proxy]]
-== {fleet-server} on {ecloud} using a proxy
-
-In this scenario, where you have access to the proxy, you can configure mTLS between the agent and your proxy.
-
-image::images/mutual-tls-cloud-proxy.png[Diagram of mutual TLS on cloud deployment model with a proxy]
-
-[discrete]
-=== {agent} settings
-During on-premises {agent} installation, use the following options:
-
-[cols="1,1"]
-|===
-|`--certificate-authorities`
-|List of CA certificates that are trusted when {agent} connects to {fleet-server} or to the proxy between {agent} and {fleet-server}
-
-|`--elastic-agent-cert`
-|{agent} certificate to present during authentication to {fleet-server} or to the proxy between {agent} and {fleet-server}
-
-|`--elastic-agent-cert-key`
-|{agent}'s private certificate key used to decrypt the certificate
-
-|`--elastic-agent-cert-key-passphrase`
-|The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}
-|===
-
-[discrete]
-[[mutual-tls-on-premise-hosted-es]]
-== {fleet-server} on-premises and Hosted Elasticsearch Service
-
-In some scenarios you may want to deploy {fleet-server} on your own premises. In this case, you're able to provide your own certificates and certificate authority to enable mTLS between {fleet-server} and {agent}.
-
-However, as with the <> use case, the data plane TLS connections terminate at the {ecloud} boundary. {ecloud} is a multi-tenanted service and therefore can't provide per-user certificates.
-
-image::images/mutual-tls-fs-onprem.png[Diagram of mutual TLS with Fleet Server on premise and hosted Elasticsearch Service deployment model]
-
-Similar to the {fleet-server} on {ecloud} use case, a secure proxy can be placed in such an environment to terminate the TLS connections and satisfy the mTLS requirements.
-
-image::images/mutual-tls-fs-onprem-proxy.png[Diagram of mutual TLS with Fleet Server on premise and hosted Elasticsearch Service deployment model with a proxy]
-
-[discrete]
-=== {agent} settings
-During on-premises {agent} installation, use the following options, similar to <>:
-
-[cols="1,1"]
-|===
-|`--certificate-authorities`
-|List of CA certificates that are trusted when {agent} connects to {fleet-server}
-
-|`--elastic-agent-cert`
-|{agent} certificate to present to {fleet-server} during authentication
-
-|`--elastic-agent-cert-key`
-|{agent}'s private certificate key used to decrypt the certificate
-
-|`--elastic-agent-cert-key-passphrase`
-|The path to the file that contains the passphrase for the mutual TLS private key that {agent} will use to connect to {fleet-server}
-|===
-
-[discrete]
-=== {fleet-server} settings
-During on-premises {fleet-server} installation, use the following options so that {fleet-server} can authenticate itself to the agent and also to the secure proxy server:
-
-[cols="1,1"]
-|===
-|`--fleet-server-es-ca`
-|CA to use for the {es} connection, via secure proxy. This CA is used to authenticate the TLS connection from a secure proxy
-
-|`--certificate-authorities`
-|List of CA certificates that are trusted when {agent} connects to {fleet-server}
-
-|`--fleet-server-cert`
-|{fleet-server} certificate to present to {agents} during authentication
-
-|`--fleet-server-cert-key`
-|{fleet-server}'s private certificate key used to decrypt the certificate
-|===
-
-[discrete]
-=== {fleet} settings
-
-This is the same as what's described for <>. The main difference is that you need to use certificates that are accepted by the secure proxy, as the mTLS is set up between the agent and the secure proxy.
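-
-For illustration, the agent-side options above can be combined into a single
-enrollment command. This is a sketch only, not a definitive procedure; the URL,
-enrollment token, and certificate paths are placeholders that you must replace
-with values for your own environment:
-
-[source,shell]
-----
-elastic-agent install --url=https://your-proxy.example.com:8220 \
-  --certificate-authorities=/path/to/ca.crt \
-  --elastic-agent-cert=/path/to/agent.crt \
-  --elastic-agent-cert-key=/path/to/agent.key \
-  --elastic-agent-cert-key-passphrase=/path/to/passphrase-file \
-  --enrollment-token=ENROLLMENT-TOKEN
-----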
diff --git a/docs/en/ingest-management/security/tls-overview.asciidoc b/docs/en/ingest-management/security/tls-overview.asciidoc deleted file mode 100644 index c058cfbdc..000000000 --- a/docs/en/ingest-management/security/tls-overview.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[tls-overview]] -= One-way and mutual TLS certifications flow - -This page provides an overview of the relationship between the various certificates and certificate authorities (CAs) that you configure for {fleet-server} and {agent}, using the `elastic-agent install` TLS command options. - -* <> -* <> - -[discrete] -[[one-way-tls-connection]] -== Simple one-way TLS connection - -The following `elastic-agent install` command configures a {fleet-server} with the required certificates and certificate authorities to enable one-way TLS connections between the components involved: - -[source,shell] ----- -elastic-agent install --url=https://your-fleet-server.elastic.co:443 \ ---certificate-authorities=/path/to/fleet-ca \ ---fleet-server-es=https://es.elastic.com:443 \ ---fleet-server-es-ca=/path/to/es-ca \ ---fleet-server-cert=/path/to/fleet-cert \ ---fleet-server-cert-key=/path/to/fleet-cert-key \ ---fleet-server-service-token=FLEET-SERVER-SERVICE-TOKEN \ ---fleet-server-policy=FLEET-SERVER-POLICY-ID \ ---fleet-server-port=8220 ----- - -{agent} is configured with `fleet-ca` as the certificate authority that it needs to validate certificates from {fleet-server}. - -During the TLS connection setup, {fleet-server} presents its certificate `fleet-cert` to the agent and the agent (as a client) uses `fleet-ca` to validate the presented certificate. - -image::images/tls-overview-oneway-fs-agent.png[Diagram of one-way TLS connection between Fleet Server and Elastic Agent] - -{fleet-server} also establishes a secure connection to an {es} cluster. In this case, {fleet-server} is configured with the certificate authority from the {es} `es-ca`. 
{es} presents its certificate, `es-cert`, and {fleet-server} validates the presented certificate using the certificate authority `es-ca`.
-
-image::images/tls-overview-oneway-fs-es.png[Diagram of one-way TLS connection between Fleet Server and Elasticsearch]
-
-[discrete]
-=== Relationship between components in a one-way TLS connection
-
-image::images/tls-overview-oneway-all.jpg[Diagram of one-way TLS connection between components]
-
-[discrete]
-[[mutual-tls-connection]]
-== Mutual TLS connection
-
-The following `elastic-agent install` command configures a {fleet-server} with the required certificates and certificate authorities to enable mutual TLS connections between the components involved:
-
-[source,shell]
-----
-elastic-agent install --url=https://your-fleet-server.elastic.co:443 \
---certificate-authorities=/path/to/fleet-ca,/path/to/agent-ca \
---elastic-agent-cert=/path/to/agent-cert \
---elastic-agent-cert-key=/path/to/agent-cert-key \
---elastic-agent-cert-key-passphrase=/path/to/agent-cert-key-passphrase \
---fleet-server-es=https://es.elastic.com:443 \
---fleet-server-es-ca=/path/to/es-ca \
---fleet-server-es-cert=/path/to/fleet-es-cert \
---fleet-server-es-cert-key=/path/to/fleet-es-cert-key \
---fleet-server-cert=/path/to/fleet-cert \
---fleet-server-cert-key=/path/to/fleet-cert-key \
---fleet-server-client-auth=required \
---fleet-server-service-token=FLEET-SERVER-SERVICE-TOKEN \
---fleet-server-policy=FLEET-SERVER-POLICY-ID \
---fleet-server-port=8220
-----
-
-As with the <>, {agent} is configured with `fleet-ca` as the certificate authority that it needs to validate certificates from the {fleet-server}. {fleet-server} presents its certificate `fleet-cert` to the agent and the agent (as a client) uses `fleet-ca` to validate the presented certificate.
-
-To establish a mutual TLS connection, the agent presents its certificate, `agent-cert`, and {fleet-server} validates this certificate using the `agent-ca` that it has stored in memory.
-
-image::images/tls-overview-mutual-fs-agent.png[Diagram of mutual TLS connection between Fleet Server and Elastic Agent]
-
-{fleet-server} can also establish a mutual TLS connection to the {es} cluster. In this case, {fleet-server} is configured with the certificate authority from the {es} `es-ca` and uses this to validate the certificate `es-cert` presented to it by {es}.
-
-image::images/tls-overview-mutual-fs-es.png[Diagram of mutual TLS connection between Fleet Server and Elasticsearch]
-
-Note that you can also configure mutual TLS for {fleet-server} and {agent} <>.
-
-[discrete]
-=== Relationship between components in a mutual TLS connection
-
-image::images/tls-overview-mutual-all.jpg[Diagram of mutual TLS connection between components]
\ No newline at end of file
diff --git a/docs/en/ingest-management/serverless-restrictions.asciidoc b/docs/en/ingest-management/serverless-restrictions.asciidoc
deleted file mode 100644
index aa3ea5b88..000000000
--- a/docs/en/ingest-management/serverless-restrictions.asciidoc
+++ /dev/null
@@ -1,43 +0,0 @@
-[[fleet-agent-serverless-restrictions]]
-= {fleet} and {agent} restrictions for {serverless-full}
-
-++++
-Restrictions for {serverless-full}
-++++
-
-[discrete]
-[[elastic-agent-serverless-restrictions]]
-== {agent}
-
-If you are using {agent} with link:{serverless-docs}[{serverless-full}], note these differences from use with {ess} and self-managed {es}:
-
-* The number of {agents} that may be connected to an {serverless-full} project is limited to 10,000.
-* The minimum version of {agent} supported for use with {serverless-full} is 8.11.0.
-
-[[outputs-serverless-restrictions]]
-**Outputs**
-
-* On {serverless-short}, you can configure new {es} outputs to use a proxy, with the restriction that the output URL is fixed. Any new {es} outputs must use the default {es} host URL.
- - -[discrete] -[[fleet-serverless-restrictions]] -== {fleet} - -The path to get to the {fleet} application in {kib} differs across projects: - -* In {ess} deployments, navigate to **Management > Fleet**. -* In {serverless-short} {observability} projects, navigate to **Project settings > Fleet**. -* In {serverless-short} Security projects, navigate to **Assets > Fleet**. - -[discrete] -[[fleet-server-serverless-restrictions]] -== {fleet-server} - -Note the following restrictions with using {fleet-server} on {serverless-short}: - -* On-premises {fleet-server} is not currently available for use in a {serverless-short} environment. -We recommend using the hosted {fleet-server} that is included and configured automatically in {serverless-short} {observability} and Security projects. - -* On {serverless-short}, you can configure {fleet-server} to use a proxy, with the restriction that the {fleet-server} host URL is fixed. Any new {fleet-server} hosts must use the default {fleet-server} host URL. - diff --git a/docs/en/ingest-management/standalone-note.asciidoc b/docs/en/ingest-management/standalone-note.asciidoc deleted file mode 100644 index 882548e83..000000000 --- a/docs/en/ingest-management/standalone-note.asciidoc +++ /dev/null @@ -1,3 +0,0 @@ -NOTE: Running {agent} in standalone mode is an advanced use case. The -documentation is incomplete and not yet mature. When possible, we recommend -using {fleet}-managed agents instead of standalone mode. diff --git a/docs/en/ingest-management/tab-widgets/add-fleet-server/content.asciidoc b/docs/en/ingest-management/tab-widgets/add-fleet-server/content.asciidoc deleted file mode 100644 index 142af0863..000000000 --- a/docs/en/ingest-management/tab-widgets/add-fleet-server/content.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -// tag::ess[] - -{ecloud} runs a hosted version of {integrations-server} that includes -{fleet-server}. No extra setup is required unless you want to scale your -deployment. 
- -To confirm that an {integrations-server} is available in your deployment: - -. In {fleet}, open the **Agents** tab. -. Look for the **{ecloud} agent policy**. This policy is -managed by {ecloud}, and contains a {fleet-server} integration and an Elastic -APM integration. You cannot modify the policy. Confirm that the agent status is -**Healthy**. - -[TIP] -==== -Don't see the agent? Make sure your deployment includes an -{integrations-server} instance. This instance is required to use {fleet}. - -[role="screenshot"] -image::images/integrations-server-hosted-container.png[Hosted {integrations-server}] -==== - -// end::ess[] - -// tag::self-managed[] - -To deploy a self-managed {fleet-server}, you install an {agent} and enroll it in -an agent policy containing the {fleet-server} integration. - -NOTE: You can install only a single {agent} per host, which means you cannot run -{fleet-server} and another {agent} on the same host unless you deploy a -containerized {fleet-server}. - -. In {fleet}, open the **Settings** tab. For more information -about these settings, see {fleet-guide}/fleet-settings.html[{fleet} settings]. -// lint ignore fleet-server -. Under **Fleet Server hosts**, click **Edit hosts** and specify one or more host -URLs your {agent}s will use to connect to {fleet-server}. For example, -`https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will -install {fleet-server}. Save and apply your settings. -+ -TIP: If the **Edit hosts** option is grayed out, {fleet-server} hosts -are configured outside of {fleet}. For more information, refer to -{kibana-ref}/fleet-settings-kb.html[{fleet} settings in {kib}]. - -. In the **{es} hosts** field, specify the {es} URLs where {agent}s will send data. -For example, `https://192.0.2.0:9200`. Skip this step if you've started the -{stack} with security enabled (you cannot change this setting because it's -managed outside of {fleet}). - -. Save and apply the settings. - -. 
Click the **Agents** tab and follow the in-product instructions to add a -{fleet} server: -+ -[role="screenshot"] -image::images/add-fleet-server.png[In-product instructions for adding a {fleet-server}] - -**Notes:** - -* Choose **Quick Start** if you want {fleet} to generate a -{fleet-server} policy and enrollment token for you. The {fleet-server} policy -will include a {fleet-server} integration plus a system integration for -monitoring {agent}. This option generates self-signed certificates and is not -recommended for production use cases. -* Choose **Advanced** if you want to either: -** Use your own {fleet-server} policy. You can create a new {fleet-server} -policy or select an existing one. Alternatively you can -{fleet-guide}/create-a-policy-no-ui.html[create a {fleet-server} policy without using the UI], -and select the policy here. -** Use your own TLS certificates to encrypt traffic between {agent}s and -{fleet-server}. To learn how to generate certs, refer to -{fleet-guide}/secure-connections.html[Configure SSL/TLS for self-managed {fleet-server}s]. -* It's recommended you generate a unique service token for each -{fleet-server}. For other ways to generate service tokens, see -{ref}/service-tokens-command.html[`elasticsearch-service-tokens`]. -* If you are providing your own certificates: -** Before running the `install` command, make sure you replace the values in -angle brackets. -** Note that the URL specified by `--url` must match the DNS name used to -generate the certificate specified by `--fleet-server-cert`. -* The `install` command installs the {agent} as a managed service and enrolls it -in a {fleet-server} policy. For more {fleet-server} commands, see -{fleet-guide}/elastic-agent-cmd-options.html[{agent} command reference]. - -If installation is successful, you'll see confirmation that {fleet-server} -connected. Click **Continue enrolling Elastic Agent** to begin enrolling your -agents in {fleet-server}. 
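Before enrolling agents, it can help to confirm that the {fleet-server} you just set up actually answers on its port. A minimal sketch — the host URL is an assumption, substitute your own; {fleet-server} serves a status endpoint on its listening port (8220 by default):

```shell
# Sketch: check that Fleet Server answers on its status endpoint before
# enrolling agents. FLEET_HOST is an assumption -- substitute your own URL.
FLEET_HOST="${FLEET_HOST:-https://192.0.2.1:8220}"
if curl -fsk --connect-timeout 5 "$FLEET_HOST/api/status" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="not reachable"
fi
echo "Fleet Server is $STATUS at $FLEET_HOST"
```

`-k` skips certificate verification, which matches the self-signed Quick Start certificates only; drop it when you supply your own TLS certificates.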
- -NOTE: If you're unable to add a {fleet}-managed agent, click the **Agents** tab -and confirm that the agent running {fleet-server} is healthy. - -// end::self-managed[] diff --git a/docs/en/ingest-management/tab-widgets/add-fleet-server/widget.asciidoc b/docs/en/ingest-management/tab-widgets/add-fleet-server/widget.asciidoc deleted file mode 100644 index e917bc9ef..000000000 --- a/docs/en/ingest-management/tab-widgets/add-fleet-server/widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::content.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/code.asciidoc b/docs/en/ingest-management/tab-widgets/code.asciidoc deleted file mode 100644 index 61b18b001..000000000 --- a/docs/en/ingest-management/tab-widgets/code.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -// Defining styles and script here for simplicity. -++++ - - - -++++ diff --git a/docs/en/ingest-management/tab-widgets/download-widget.asciidoc b/docs/en/ingest-management/tab-widgets/download-widget.asciidoc deleted file mode 100644 index 6753a8b8c..000000000 --- a/docs/en/ingest-management/tab-widgets/download-widget.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -++++ -
-++++ - -include::download.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/download.asciidoc b/docs/en/ingest-management/tab-widgets/download.asciidoc deleted file mode 100644 index c023fd032..000000000 --- a/docs/en/ingest-management/tab-widgets/download.asciidoc +++ /dev/null @@ -1,109 +0,0 @@ -// tag::deb[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {agent} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -[IMPORTANT] -==== -* To simplify upgrading to future versions of {agent}, we recommended -that you use the tarball distribution instead of the RPM distribution. -* You can install {agent} in an `unprivileged` mode that does not require `root` privileges. Refer to {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges] for details. -==== - -["source","sh",subs="attributes"] ----- -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-amd64.deb -sudo dpkg -i elastic-agent-{version}-amd64.deb ----- - -endif::[] -// end::deb[] - -// tag::rpm[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {agent} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -[IMPORTANT] -==== -* To simplify upgrading to future versions of {agent}, we recommended -that you use the tarball distribution instead of the RPM distribution. -* You can install {agent} in an `unprivileged` mode that does not require `root` privileges. Refer to {fleet-guide}/elastic-agent-unprivileged.html[Run {agent} without administrative privileges] for details. -==== - -["source","sh",subs="attributes"] ----- -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-x86_64.rpm -sudo rpm -vi elastic-agent-{version}-x86_64.rpm ----- -endif::[] -// end::rpm[] - -// tag::mac[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {agent} has not yet been released. 
- -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----- -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-darwin-x86_64.tar.gz -tar xzvf elastic-agent-{version}-darwin-x86_64.tar.gz ----- - -endif::[] -// end::mac[] - -// tag::linux[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {agent} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----- -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-linux-x86_64.tar.gz -tar xzvf elastic-agent-{version}-linux-x86_64.tar.gz ----- - -endif::[] -// end::linux[] - -// tag::win[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {agent} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","powershell",subs="attributes"] ----- -# PowerShell 5.0+ -wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-windows-x86_64.zip -OutFile elastic-agent-{version}-windows-x86_64.zip -Expand-Archive .\elastic-agent-{version}-windows-x86_64.zip ----- -Or manually: - -. Download the {agent} Windows zip file from the -https://www.elastic.co/downloads/beats/elastic-agent[download page]. - -. Extract the contents of the zip file. - -endif::[] -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/enroll-widget.asciidoc b/docs/en/ingest-management/tab-widgets/enroll-widget.asciidoc deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/en/ingest-management/tab-widgets/event-logging-widget.asciidoc b/docs/en/ingest-management/tab-widgets/event-logging-widget.asciidoc deleted file mode 100644 index 3655aa465..000000000 --- a/docs/en/ingest-management/tab-widgets/event-logging-widget.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -++++ -
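The per-platform `curl` commands above all follow one URL pattern. A sketch that builds the tarball artifact URL from version and platform variables — the version shown is a placeholder, and the companion `.sha512` checksum file that Elastic publishes alongside each artifact can be used to verify the download:

```shell
# Sketch: build the tarball download URL from the pattern used above.
# VERSION and PLATFORM are placeholders -- substitute your target values
# (tarball platforms: linux-x86_64, darwin-x86_64, windows-x86_64).
VERSION="8.14.0"
PLATFORM="linux-x86_64"
URL="https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-${VERSION}-${PLATFORM}.tar.gz"
echo "$URL"
# After downloading, verify against the published checksum file:
#   curl -L -O "$URL" && curl -L -O "$URL.sha512" \
#     && sha512sum -c "$(basename "$URL").sha512"
```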
-++++ - -include::event-logging.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/event-logging.asciidoc b/docs/en/ingest-management/tab-widgets/event-logging.asciidoc deleted file mode 100644 index 7ec2ddb09..000000000 --- a/docs/en/ingest-management/tab-widgets/event-logging.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -// tag::mac[] - -`/Library/Elastic/Agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson` - -// end::mac[] - -// tag::linux[] - -`/opt/Elastic/Agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson` - -// end::linux[] - -// tag::win[] - -`C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\events\elastic-agent-event-log*.ndjson` - -// end::win[] - -// tag::deb[] - -`/var/lib/elastic-agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson` - -// end::deb[] - -// tag::rpm[] - -`/var/lib/elastic-agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson` - -// end::rpm[] diff --git a/docs/en/ingest-management/tab-widgets/install-layout-widget.asciidoc b/docs/en/ingest-management/tab-widgets/install-layout-widget.asciidoc deleted file mode 100644 index 66bb30d40..000000000 --- a/docs/en/ingest-management/tab-widgets/install-layout-widget.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -++++ -
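The event log paths listed above all share the `elastic-agent-*/logs/events` layout, and rotation produces multiple numbered files. A sketch that finds the newest event log on a Linux tarball install (the base path is assumed from the list above; adjust for your platform):

```shell
# Sketch: locate the newest event log file (Linux tarball layout assumed).
LOG_GLOB="/opt/Elastic/Agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson"
latest=$(ls -t $LOG_GLOB 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  echo "newest event log: $latest"
else
  echo "no event logs found under /opt/Elastic/Agent"
fi
```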
-++++ - -include::install-layout.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/install-layout.asciidoc b/docs/en/ingest-management/tab-widgets/install-layout.asciidoc deleted file mode 100644 index d7dad85d1..000000000 --- a/docs/en/ingest-management/tab-widgets/install-layout.asciidoc +++ /dev/null @@ -1,81 +0,0 @@ -// tag::mac[] - -// lint disable -`/Library/Elastic/Agent/*`:: -{agent} program files -`/Library/Elastic/Agent/elastic-agent.yml`:: -Main {agent} configuration -`/Library/Elastic/Agent/fleet.enc`:: -Main {agent} {fleet} encrypted configuration - -`/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`:: -Log files for {agent} and {beats} shippers footnote:lognumbering[Logs file names end with a date (`YYYYMMDD`) and optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation.] -`/usr/bin/elastic-agent`:: - -Shell wrapper installed into PATH - -You can install {agent} in a custom base path other than `/Library`. When installing {agent} with the `./elastic-agent install` -command, use the `--base-path` CLI option to specify the custom base path. -// end::mac[] - -// tag::linux[] - -`/opt/Elastic/Agent/*`:: -{agent} program files -`/opt/Elastic/Agent/elastic-agent.yml`:: -Main {agent} configuration -`/opt/Elastic/Agent/fleet.enc`:: -Main {agent} {fleet} encrypted configuration -`/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`:: -Log files for {agent} and {beats} shippers footnote:lognumbering[] -`/usr/bin/elastic-agent`:: -Shell wrapper installed into PATH - -You can install {agent} in a custom base path other than `/opt`. When installing {agent} with the `./elastic-agent install` -command, use the `--base-path` CLI option to specify the custom base path. 
-// end::linux[] - -// tag::win[] - -`C:\Program Files\Elastic\Agent*`:: -{agent} program files -`C:\Program Files\Elastic\Agent\elastic-agent.yml`:: -Main {agent} configuration -`C:\Program Files\Elastic\Agent\fleet.enc`:: -Main {agent} {fleet} encrypted configuration -`C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson`:: -Log files for {agent} and {beats} shippers footnote:lognumbering[] - -You can install {agent} in a custom base path other than `C:\Program Files`. When installing {agent} with the `.\elastic-agent.exe install` -command, use the `--base-path` CLI option to specify the custom base path. -// end::win[] - -// tag::deb[] - -`/usr/share/elastic-agent/*`:: -{agent} program files -`/etc/elastic-agent/elastic-agent.yml`:: -Main {agent} configuration -`/etc/elastic-agent/fleet.enc`:: -Main {agent} {fleet} encrypted configuration -`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`:: -Log files for {agent} and {beats} shippers footnote:lognumbering[] -`/usr/bin/elastic-agent`:: -Shell wrapper installed into PATH - -// end::deb[] - -// tag::rpm[] - -`/usr/share/elastic-agent/*`:: -{agent} program files -`/etc/elastic-agent/elastic-agent.yml`:: -Main {agent} configuration -`/etc/elastic-agent/fleet.enc`:: -Main {agent} {fleet} encrypted configuration -`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`:: -Log files for {agent} and {beats} shippers footnote:lognumbering[] -`/usr/bin/elastic-agent`:: -Shell wrapper installed into PATH - -// end::rpm[] diff --git a/docs/en/ingest-management/tab-widgets/install-widget.asciidoc b/docs/en/ingest-management/tab-widgets/install-widget.asciidoc deleted file mode 100644 index 40707c561..000000000 --- a/docs/en/ingest-management/tab-widgets/install-widget.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -++++ -
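On Linux, the layout tables above boil down to two base directories: `/opt/Elastic/Agent` for tarball installs and `/usr/share/elastic-agent` for DEB/RPM installs. A sketch that reports which one is present:

```shell
# Sketch: report which of the Linux install layouts described above exists
# on this host (tarball vs. DEB/RPM package).
LAYOUT="none"
[ -d /opt/Elastic/Agent ] && LAYOUT="tarball (/opt/Elastic/Agent)"
[ -d /usr/share/elastic-agent ] && LAYOUT="deb/rpm (/usr/share/elastic-agent)"
echo "detected layout: $LAYOUT"
```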
-++++ - -include::install.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/install.asciidoc b/docs/en/ingest-management/tab-widgets/install.asciidoc deleted file mode 100644 index abce9b096..000000000 --- a/docs/en/ingest-management/tab-widgets/install.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -// tag::deb[] - -// tag::install-tip[] -TIP: You must run this command as the root user because some -integrations require root privileges to collect sensitive data. - -// end::install-tip[] - -["source","sh",subs="attributes"] ----- -sudo elastic-agent enroll --url= --enrollment-token= <1> -sudo systemctl enable elastic-agent <2> -sudo systemctl start elastic-agent ----- -<1> `fleet_server_url` is the host and IP where {fleet-server} is running, and -`enrollment_token` is the enrollment token acquired from {fleet}. Not sure where -{fleet-server} is running? Look at the -{fleet-guide}/fleet-settings.html[{fleet} settings] in {kib}. -<2> The DEB package includes a service unit for Linux systems with systemd. On -these systems, you can manage {agent} by using the usual systemd commands. If -you don't have systemd, run `sudo service elastic-agent start`. - -// end::deb[] - -// tag::rpm[] - -include::install.asciidoc[tag=install-tip] - -["source","sh",subs="attributes"] ----- -sudo elastic-agent enroll --url= --enrollment-token= <1> -sudo systemctl enable elastic-agent <2> -sudo systemctl start elastic-agent ----- -<1> `fleet_server_url` is the host and IP where {fleet-server} is running, and -`enrollment_token` is the enrollment token acquired from {fleet}. -<2> The RPM package includes a service unit for Linux systems with systemd. On -these systems, you can manage {agent} by using the usual systemd commands. If -you don't have systemd, run `sudo service elastic-agent start`. 
- -// end::rpm[] - -// tag::mac[] - -include::install.asciidoc[tag=install-tip] - -["source","sh",subs="attributes"] ----- -cd elastic-agent-{version}-darwin-x86_64 -sudo ./elastic-agent install --url= --enrollment-token= <1> <2> ----- -<1> `fleet_server_url` is the host and IP where {fleet-server} is running, and -`enrollment_token` is the enrollment token acquired from {fleet}. - -// end::mac[] - -// tag::linux[] - -include::install.asciidoc[tag=install-tip] - -["source","sh",subs="attributes"] ----- -cd elastic-agent-{version}-linux-x86_64 -sudo ./elastic-agent install --url= --enrollment-token= <1> <2> <3> ----- -<1> `fleet_server_url` is the host and IP where {fleet-server} is running, and -`enrollment_token` is the enrollment token acquired from {fleet}. -<2> This command requires a system and service manager like systemd. - -// end::linux[] - -// tag::win[] -Open a PowerShell prompt as an Administrator (right-click the PowerShell icon -and select **Run As Administrator**). - -From the PowerShell prompt, change to the directory where you installed {agent}, -and run: - -[source,shell] ----- -.\elastic-agent.exe install --url= --enrollment-token= <1> <2> ----- -<1> `fleet_server_url` is the host and IP where {fleet-server} is running, and -`enrollment_token` is the enrollment token acquired from {fleet}. - -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/logging-widget.asciidoc b/docs/en/ingest-management/tab-widgets/logging-widget.asciidoc deleted file mode 100644 index 2fd2fcb60..000000000 --- a/docs/en/ingest-management/tab-widgets/logging-widget.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -++++ -
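The install commands above fail with a confusing error when the URL or token placeholder is left empty. A small defensive sketch — the `FLEET_URL` and `ENROLL_TOKEN` variable names are hypothetical, not part of the product:

```shell
# Sketch (hypothetical wrapper): refuse to run the install command until both
# the Fleet Server URL and the enrollment token have been filled in.
FLEET_URL="${FLEET_URL:-}"        # e.g. https://192.0.2.1:8220
ENROLL_TOKEN="${ENROLL_TOKEN:-}"  # copied from Fleet > Enrollment tokens
if [ -z "$FLEET_URL" ] || [ -z "$ENROLL_TOKEN" ]; then
  echo "error: set FLEET_URL and ENROLL_TOKEN before enrolling" >&2
else
  sudo ./elastic-agent install --url="$FLEET_URL" --enrollment-token="$ENROLL_TOKEN"
fi
```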
-++++ - -include::logging.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/logging.asciidoc b/docs/en/ingest-management/tab-widgets/logging.asciidoc deleted file mode 100644 index be7548c01..000000000 --- a/docs/en/ingest-management/tab-widgets/logging.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -// tag::mac[] - -**/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson** - -// end::mac[] - -// tag::linux[] - -**/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson** - -// end::linux[] - -// tag::win[] - -**C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson** - -// end::win[] - -// tag::deb[] - -**/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson** - -// end::deb[] - -// tag::rpm[] - -**/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson** - -// end::rpm[] \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/prereq-widget.asciidoc b/docs/en/ingest-management/tab-widgets/prereq-widget.asciidoc deleted file mode 100644 index b70ed7a73..000000000 --- a/docs/en/ingest-management/tab-widgets/prereq-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
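The log files above are newline-delimited JSON, one event per line, so they can be summarized with standard tools. A sketch that counts entries per `log.level` (the field name follows ECS JSON logging; the sample data here is invented so the pipeline runs anywhere — point it at a real `elastic-agent.ndjson` path from the list above instead):

```shell
# Sketch: count entries per log.level in an agent ndjson log. A tiny sample
# file is generated so the pipeline is runnable anywhere.
LOG="sample-agent-log.ndjson"
printf '%s\n' \
  '{"log.level":"info","message":"starting"}' \
  '{"log.level":"error","message":"connection refused"}' \
  '{"log.level":"info","message":"running"}' > "$LOG"
SUMMARY=$(grep -o '"log.level":"[a-z]*"' "$LOG" | sort | uniq -c | sort -rn)
echo "$SUMMARY"
rm -f "$LOG"
```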
-++++ - -include::prereq.asciidoc[tag=cloud] - -++++ -
-++++ diff --git a/docs/en/ingest-management/tab-widgets/prereq.asciidoc b/docs/en/ingest-management/tab-widgets/prereq.asciidoc deleted file mode 100644 index 0848aa609..000000000 --- a/docs/en/ingest-management/tab-widgets/prereq.asciidoc +++ /dev/null @@ -1,67 +0,0 @@ -// tag::cloud[] -* {ess} deployment that includes an {integrations-server} (included by -default in every {ess} deployment). {ess-leadin-short} - -* {kib} user with `All` privileges on {fleet} and {integrations}. Since many -Integrations assets are shared across spaces, users need the {kib} privileges in -all spaces. -// end::cloud[] - -// tag::self-managed[] - -* {es} cluster and {kib} (version {minor-version}) with a basic license or -higher. {stack-ref}/installing-elastic-stack.html[Learn how to install the -{stack} on your own hardware]. - -* Secure, encrypted connection between {kib} and {es}. For more information, -see {ref}/configuring-stack-security.html[Start the {stack} with security enabled]. - -* Internet connection for {kib} to download integration packages from the -{package-registry}. Make sure the {kib} server can connect to -`https://epr.elastic.co` on port `443`. If your environment has network traffic -restrictions, there are ways to work around this requirement. -See {fleet-guide}/air-gapped.html[Air-gapped environments] for more information. - -* {kib} user with `All` privileges on {fleet} and {integrations}. Since many -Integrations assets are shared across spaces, users need the {kib} privileges in -all spaces. - -* In the {es} configuration, the -{ref}/security-settings.html#api-key-service-settings[built-in API key -service] must be enabled. -(`xpack.security.authc.api_key.enabled: true`) - -* In the {kib} configuration, the saved objects encryption key -must be set. {fleet} requires this setting in order to save API keys and encrypt -them in {kib}. 
You can either set `xpack.encryptedSavedObjects.encryptionKey` to -an alphanumeric value of at least 32 characters, or run the -{kibana-ref}/kibana-encryption-keys.html[`kibana-encryption-keys` command] to -generate the key. - -//TO DO: We need to test these recommendations to see which are still valid -//when users run security by default. I suspect the setup is easier than we -//are conveying here. - -**Example security settings** - -For testing purposes, you can use the following settings to get started quickly, -but make sure you properly secure the {stack} before sending real data. - -elasticsearch.yml example: - -[source,yaml] ----- -xpack.security.enabled: true -xpack.security.authc.api_key.enabled: true ----- - -kibana.yml example: - -[source,yaml] ----- -elasticsearch.username: "kibana_system" <1> -xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters" ----- -<1> The password should be stored in the {kib} keystore as described in the -{ref}/security-minimal-setup.html[{es} security documentation]. 
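The `kibana-encryption-keys generate` command above is the official route. If you just need an alphanumeric value of at least 32 characters for testing, a portable stand-in sketch:

```shell
# Sketch: generate a 32-character hex value suitable for
# xpack.encryptedSavedObjects.encryptionKey (a stand-in for the official
# kibana-encryption-keys command, for quick testing only).
KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "xpack.encryptedSavedObjects.encryptionKey: \"$KEY\""
```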
-// end::self-managed[] diff --git a/docs/en/ingest-management/tab-widgets/remove-endpoint-files/content.asciidoc b/docs/en/ingest-management/tab-widgets/remove-endpoint-files/content.asciidoc deleted file mode 100644 index 6d293f00b..000000000 --- a/docs/en/ingest-management/tab-widgets/remove-endpoint-files/content.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -// tag::mac[] - -[source,shell] ----------------------------------- -cd /tmp -cp /Library/Elastic/Endpoint/elastic-endpoint elastic-endpoint -sudo ./elastic-endpoint uninstall -rm elastic-endpoint ----------------------------------- -// end::mac[] - -// tag::linux[] -[source,shell] ----------------------------------- -cd /tmp -cp /opt/Elastic/Endpoint/elastic-endpoint elastic-endpoint -sudo ./elastic-endpoint uninstall -rm elastic-endpoint ----------------------------------- -// end::linux[] - -// tag::win[] -[source,shell] ----------------------------------- -cd %TEMP% -copy "c:\Program Files\Elastic\Endpoint\elastic-endpoint.exe" elastic-endpoint.exe -.\elastic-endpoint.exe uninstall -del .\elastic-endpoint.exe ----------------------------------- -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/remove-endpoint-files/widget.asciidoc b/docs/en/ingest-management/tab-widgets/remove-endpoint-files/widget.asciidoc deleted file mode 100644 index 40a479dc8..000000000 --- a/docs/en/ingest-management/tab-widgets/remove-endpoint-files/widget.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -++++ -
-++++ - -include::content.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/run-agent-image/content.asciidoc b/docs/en/ingest-management/tab-widgets/run-agent-image/content.asciidoc deleted file mode 100644 index 5be787635..000000000 --- a/docs/en/ingest-management/tab-widgets/run-agent-image/content.asciidoc +++ /dev/null @@ -1,44 +0,0 @@ -// tag::cloud[] - -["source","sh",subs="attributes"] ----- -docker run \ - --env FLEET_ENROLL=1 \ <1> - --env FLEET_URL= \ <2> - --env FLEET_ENROLLMENT_TOKEN= \ <3> - --rm docker.elastic.co/elastic-agent/elastic-agent:{version} <4> ----- - -<1> Set to 1 to enroll the {agent} into {fleet-server}. -<2> URL to enroll the {fleet-server} into. You can find it in {kib}. Select *Management -> {fleet} -> Fleet Settings*, and copy the {fleet-server} host URL. -<3> The token to use for enrollment. Close the flyout panel and select *Enrollment tokens*. Find the Agent policy you want to enroll {agent} into, and display and copy the secret token. To learn how to create a policy, refer to <>. -<4> If you want to run *elastic-agent-complete* image, replace `elastic-agent` to `elastic-agent-complete`. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to {observability-guide}/uptime-set-up.html[Synthetics {fleet} Quickstart] for more information. - -Refer to <> for all available options. 
- -// end::cloud[] - -// tag::self-managed[] -If you're running a self-managed cluster and want to run your own {fleet-server}, run the following command, which will spin up both {agent} and {fleet-server} in a container: - -["source","sh",subs="attributes"] ----- -docker run \ - --env FLEET_SERVER_ENABLE=true \ <1> - --env FLEET_SERVER_ELASTICSEARCH_HOST= \ <2> - --env FLEET_SERVER_SERVICE_TOKEN= \ <3> - --env FLEET_SERVER_POLICY_ID= \ <4> - -p 8220:8220 \ <5> - --rm docker.elastic.co/elastic-agent/elastic-agent:{version} <6> ----- - -<1> Set to 1 to bootstrap Fleet Server on this Elastic Agent. -<2> Your cluster's {es} host URL. -<3> The {fleet} service token. <> if you don't have one already. -<4> ID of the {fleet-server} policy. We recommend only having one fleet-server policy. To learn how to create a policy, refer to <>. -<5> publish container port 8220 to host. -<6> If you want to run the *elastic-agent-complete* image, replace `elastic-agent` with `elastic-agent-complete`. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to {observability-guide}/uptime-set-up.html[Synthetics {fleet} Quickstart] for more information. - -Refer to <> for all available options. - -// end::self-managed[] diff --git a/docs/en/ingest-management/tab-widgets/run-agent-image/widget.asciidoc b/docs/en/ingest-management/tab-widgets/run-agent-image/widget.asciidoc deleted file mode 100644 index 0d70934ba..000000000 --- a/docs/en/ingest-management/tab-widgets/run-agent-image/widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::content.asciidoc[tag=cloud] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/run-standalone-widget.asciidoc b/docs/en/ingest-management/tab-widgets/run-standalone-widget.asciidoc deleted file mode 100644 index cd73b3900..000000000 --- a/docs/en/ingest-management/tab-widgets/run-standalone-widget.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -++++ -
-++++ - -include::run-standalone.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/run-standalone.asciidoc b/docs/en/ingest-management/tab-widgets/run-standalone.asciidoc deleted file mode 100644 index 4d11b322f..000000000 --- a/docs/en/ingest-management/tab-widgets/run-standalone.asciidoc +++ /dev/null @@ -1,65 +0,0 @@ -// tag::deb[] - -include::install.asciidoc[tag=install-tip] - -[source,shell] ----- -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent ----- -<1> The DEB package includes a service unit for Linux systems with systemd. On -these systems, you can manage {agent} by using the usual systemd commands. If -you don't have systemd, run `sudo service elastic-agent start`. - -// end::deb[] - -// tag::rpm[] - -include::install.asciidoc[tag=install-tip] - -[source,shell] ----- -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent ----- -<1> The RPM package includes a service unit for Linux systems with systemd. On -these systems, you can manage {agent} by using the usual systemd commands. If -you don't have systemd, run `sudo service elastic-agent start`. - -// end::rpm[] - -// tag::mac[] - -include::install.asciidoc[tag=install-tip] - -[source,shell] ----- -sudo ./elastic-agent install ----- - -// end::mac[] - -// tag::linux[] - -include::install.asciidoc[tag=install-tip] - -[source,shell] ----- -sudo ./elastic-agent install ----- - -// end::linux[] - -// tag::win[] -Open a PowerShell prompt as an Administrator (right-click the PowerShell icon -and select **Run As Administrator**). 
- -From the PowerShell prompt, change to the directory where you installed {agent}, -and run: - -[source,shell] ----- -.\elastic-agent.exe install ----- - -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/start-widget.asciidoc b/docs/en/ingest-management/tab-widgets/start-widget.asciidoc deleted file mode 100644 index 1033eabae..000000000 --- a/docs/en/ingest-management/tab-widgets/start-widget.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -++++ -
-++++ - -include::start.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/start.asciidoc b/docs/en/ingest-management/tab-widgets/start.asciidoc deleted file mode 100644 index 56023aa2f..000000000 --- a/docs/en/ingest-management/tab-widgets/start.asciidoc +++ /dev/null @@ -1,57 +0,0 @@ -// tag::deb[] - -The DEB package includes a service unit for Linux systems with systemd. On these -systems, you can manage {agent} by using the usual systemd commands. - -// tag::start-command[] -Use `systemctl` to start the agent: - -[source,shell] ----- -sudo systemctl start elastic-agent ----- - -Otherwise, use: - -[source,shell] ----- -sudo service elastic-agent start ----- -// end::start-command[] - -// end::deb[] - -// tag::rpm[] -The RPM package includes a service unit for Linux systems with systemd. On these -systems, you can manage {agent} by using the usual systemd commands. - -include::start.asciidoc[tag=start-command] - -// end::rpm[] - -// tag::mac[] - -[source,shell] ----- -sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist ----- - -// end::mac[] - -// tag::linux[] - -[source,shell] ----- -sudo service elastic-agent start ----- - -// end::linux[] - -// tag::win[] - -[source,shell] ----- -Start-Service Elastic Agent ----- - -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/stop-widget.asciidoc b/docs/en/ingest-management/tab-widgets/stop-widget.asciidoc deleted file mode 100644 index 605bafecc..000000000 --- a/docs/en/ingest-management/tab-widgets/stop-widget.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -++++ -
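The DEB and RPM instructions above prefer `systemctl` and fall back to `service` on hosts without systemd. A sketch that picks the right command automatically (dry run — it prints the command rather than executing it):

```shell
# Sketch: choose the right start command depending on whether systemd is
# present (dry run -- prints instead of executing).
if command -v systemctl >/dev/null 2>&1; then
  START_CMD="sudo systemctl start elastic-agent"
else
  START_CMD="sudo service elastic-agent start"
fi
echo "would run: $START_CMD"
```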
-++++ - -include::stop.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/stop.asciidoc b/docs/en/ingest-management/tab-widgets/stop.asciidoc deleted file mode 100644 index 0eecb5939..000000000 --- a/docs/en/ingest-management/tab-widgets/stop.asciidoc +++ /dev/null @@ -1,68 +0,0 @@ -// tag::deb[] - -The DEB package includes a service unit for Linux systems with systemd. On these -systems, you can manage {agent} by using the usual systemd commands. - -// tag::stop-command[] -Use `systemctl` to stop the agent: - -[source,shell] ----- -sudo systemctl stop elastic-agent ----- - -Otherwise, use: - -[source,shell] ----- -sudo service elastic-agent stop ----- - -NOTE: {agent} will restart automatically if the system is rebooted. - -// end::stop-command[] - -// end::deb[] - -// tag::rpm[] -The RPM package includes a service unit for Linux systems with systemd. On these -systems, you can manage {agent} by using the usual systemd commands. - -include::stop.asciidoc[tag=stop-command] - -// end::rpm[] - -// tag::mac[] - -[source,shell] ----- -sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist ----- - -NOTE: {agent} will restart automatically if the system is rebooted. - -// end::mac[] - -// tag::linux[] -[source,shell] ----- -sudo service elastic-agent stop ----- - -NOTE: {agent} will restart automatically if the system is rebooted. - -// end::linux[] - -// tag::win[] - -[source,shell] ----- -Stop-Service Elastic Agent ----- - -If necessary, use Task Manager on Windows to stop {agent}. This will kill the -`elastic-agent` process and any sub-processes it created (such as {beats}). - -NOTE: {agent} will restart automatically if the system is rebooted. 
- -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/uninstall-widget.asciidoc b/docs/en/ingest-management/tab-widgets/uninstall-widget.asciidoc deleted file mode 100644 index 5da2f3868..000000000 --- a/docs/en/ingest-management/tab-widgets/uninstall-widget.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -++++ -
-++++ - -include::uninstall.asciidoc[tag=mac] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/uninstall.asciidoc b/docs/en/ingest-management/tab-widgets/uninstall.asciidoc deleted file mode 100644 index beb89ada6..000000000 --- a/docs/en/ingest-management/tab-widgets/uninstall.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -// tag::uninstall-tip[] -TIP: You must run this command as the root user. - -// end::uninstall-tip[] - -// tag::mac[] - -include::uninstall.asciidoc[tag=uninstall-tip] - -[source,shell] ----- -sudo /Library/Elastic/Agent/elastic-agent uninstall ----- - -// end::mac[] - -// tag::linux[] - -include::uninstall.asciidoc[tag=uninstall-tip] - -[source,shell] ----- -sudo /opt/Elastic/Agent/elastic-agent uninstall ----- - -// end::linux[] - -// tag::win[] -Open a PowerShell prompt as an Administrator (right-click the PowerShell icon -and select **Run As Administrator**). - -From the PowerShell prompt, run: - -[source,shell] ----- -C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall ----- - -// end::win[] diff --git a/docs/en/ingest-management/tab-widgets/upgrade-fleet-widget.asciidoc b/docs/en/ingest-management/tab-widgets/upgrade-fleet-widget.asciidoc deleted file mode 100644 index 1eabb2131..000000000 --- a/docs/en/ingest-management/tab-widgets/upgrade-fleet-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
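After running the uninstall command, a quick check that the agent binary is really gone can confirm success. A sketch covering the macOS and Linux tarball paths shown earlier (package installs use different paths):

```shell
# Sketch: confirm no agent binary remains at the tarball install paths
# described earlier (macOS and Linux; DEB/RPM layouts differ).
REMAINING=""
for p in /Library/Elastic/Agent/elastic-agent /opt/Elastic/Agent/elastic-agent; do
  [ -x "$p" ] && REMAINING="$REMAINING $p"
done
if [ -n "$REMAINING" ]; then
  MSG="agent still present:$REMAINING"
else
  MSG="uninstall complete: no agent binary found"
fi
echo "$MSG"
```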
-
- - -
-
-++++ - -include::upgrade-fleet.asciidoc[tag=ess] - -++++ -
- -
-++++ \ No newline at end of file diff --git a/docs/en/ingest-management/tab-widgets/upgrade-fleet.asciidoc b/docs/en/ingest-management/tab-widgets/upgrade-fleet.asciidoc deleted file mode 100644 index 6f815ec30..000000000 --- a/docs/en/ingest-management/tab-widgets/upgrade-fleet.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -// tag::ess[] - -{ecloud} runs a hosted version of {fleet-server}. - -To confirm that {fleet-server} is available in your deployment: - -// lint ignore fleet -. Log in to {kib} and go to *Management > Fleet*, and click the *Agents* tab. -. The following message is displayed. Please note that your {agent}s have now -been unenrolled and have stopped sending data. -+ -Select *Do not show this message again*, and then click *Close and get started*. -+ -[role="screenshot"] -image::images/fleet-server-prompt.png[Add {fleet-server} prompt] -+ -If your previous deployment did not include an APM node, you are prompted to enable *APM & Fleet*, -required for using {fleet-server}. -+ -Click *Edit deployment* and add an *{integrations-server}* node to your deployment. -+ -[role="screenshot"] -image::images/apm-fleet-prompt.png[Add {integrations-server} node prompt] -+ -. To confirm that {fleet-server} is available in your deployment, click the *Agents* tab. -. Under *Policies*, select the *{ecloud} agent policy* to confirm that {fleet-server} -is listed. This policy is managed by {ecloud} and can not be modified. -+ -You are now able to install and enroll subsequent {agent}s with {fleet-server}. - -// end::ess[] - -// tag::self-managed[] - -To upgrade to {fleet-server}: - -. Log in to {kib} and go to *Management > {fleet}*. -. Click the *Agents* tab. -. The following message is displayed. Please note that your {agent}s have now -been unenrolled and have stopped sending data. -+ -Select *Do not show this message again*, and then click *Close and get started*. -+ -[role="screenshot"] -image::images/fleet-server-prompt-managed.png[Add {fleet-server} prompt] - -. 
Click *{fleet} settings*, and in the *{fleet-server} hosts* field, enter the -URLs {agent}s will use to connect to {fleet-server}. For example, -`https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will -install {fleet-server}. - -. In the *{es} hosts* field, specify the {es} URLs where {agent}s will send data. -For example, `https://192.0.2.0:9200`. - -. Save and apply the settings. - -. Click the *Agents* tab and follow the in-product instructions to add a -{fleet-server}. -+ -Start with downloading the {agent} artifact of your choice to the host -to run the {fleet-server}: -+ -[role="screenshot"] -image::images/add-fleet-server.png[In-product instructions for adding a {fleet-server}] -+ -The `install` command installs the {agent} as a managed service and enrolls it -in a {fleet-server} policy. The example below is for Linux and macOS hosts (Windows usage -will be similar): -+ -[source,shell] ----- -sudo ./elastic-agent install --fleet-server-es=http://localhost:9200 \ ---fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg6dzEta0JDTmZUZGlDTjlwRmNVTjNVQQ ----- -+ -IMPORTANT: For DEB or RPM artifacts, use the `enroll` command instead of `install`. -The DEB and RPM packages already install the {agent} files to disk, -so the `install` command is incompatible with them. -+ -When the installation is successful, you'll see the {fleet-server} {agent} on the -*Agents* tab in *{fleet}*. -+ -You now need to install and enroll your {agent}s. -// end::self-managed[] \ No newline at end of file diff --git a/docs/en/ingest-management/troubleshooting/faq.asciidoc b/docs/en/ingest-management/troubleshooting/faq.asciidoc deleted file mode 100644 index db0543df7..000000000 --- a/docs/en/ingest-management/troubleshooting/faq.asciidoc +++ /dev/null @@ -1,285 +0,0 @@ -[id="fleet-faq",titleabbrev="FAQ"] -= Frequently asked questions - -We have collected the most frequently asked questions here.
If your question -isn't answered here, contact us in the {forum}[discuss forum]. Your feedback -is very valuable to us. - -Also read <>. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[discrete] -[[enrolled-agent-not-showing-up]] -== Why doesn't my enrolled agent show up in the {fleet} app? - -If {agent} was successfully enrolled, but doesn't show up in the *Agents* list, -it might not be started. Make sure the `elastic-agent` process is running on -the host. If it's not running, use the <> -command to start it. The most common way to deploy an {agent} is by using -the `install` command. This command starts the {agent} for you. - -[discrete] -[[where-are-the-agent-logs]] -== Where does {agent} store logs after startup? - -The location of {agent} logs varies by platform. In general, {agent} stores -logs under the `data` directory where {agent} was started. For example, on -macOS, you'll find logs for the running {agent} under path similar to: - -`/Library/Elastic/Agent/data/elastic-agent-08e204/logs/elastic-agent-20220105.ndjson` - -You'll find logs for the {beats} shippers, such as {metricbeat}, under paths -like: - -`/Library/Elastic/Agent/data/elastic-agent-08e204/logs/default/metricbeat-20220105.ndjson` - -If the log path does not exist, {agent} was unable to start {metricbeat}, which -is a higher level problem to triage. Usually you can see these logs in the -{fleet} UI, unless there are problems severe enough that the {agent} or its -related processes cannot send data to {es}. - -See <> to find out the exact paths for each platform. - -[discrete] -[[what-is-my-agent-config]] -== What policy is the {agent} running? - -To find the policy file, inspect the `elastic-agent.yml` file in the -directory where {agent} is running. Not sure where the agent is running? See -<>. 
- -If the agent is running in {fleet} mode, this file contains the following -setting: - -[source,yaml] ----- -fleet: - enabled: true ----- - -The `state.yml` file (located under `data/elastic-agent-*`) contains the -entire, unencrypted policy. - -* To see the {es} location, look at the `hosts` setting under `outputs`. For -example: -+ --- -[source,yaml] ----- -outputs: - default: - api_key: Aq-mPpcBDA7TmnriKCSD:Np6NAleNQ1mMpgN_JPYazw - hosts: - - https://3m63533c175a4036b3d8bbe7bd462fa3.us-east-1.aws.found.io:443 - type: elasticsearch ----- - -This file also shows the version of all packages used by the current -policy. --- - -* To see the {agent} version, run: -+ -[source,shell] ----- -elastic-agent version ----- - - -[discrete] -[[where-is-the-data-agent-is-sending]] -== Why can't I see the data {agent} is sending? - -If {elastic-agent} is set up and running, but you don't see data in {kib}: - - - -. Go to **Management > {dev-tools-app}** in {kib}, and in the Console, search your -index for data. For example: -+ -[source,console] ----- -GET metrics-*/_search ----- -+ -Or if you prefer, go to the **Discover** app. - -. Look at the data that {elastic-agent} has sent and see if the `host.name` -field contains your host machine name. - -If you don't see data for your host, it's possible that the data is blocked -in the network, or that a firewall or security problem is preventing the {agent} -from sending the data. - -Although it's redundant to install stand-alone {metricbeat}, you might want to -try installing it to see if it's able to send data successfully to {es}. For -more information, see -{metricbeat-ref}/metricbeat-installation-configuration.html[{metricbeat} quick start]. - -If {metricbeat} is able to send data to {es}, there is possibly a bug or -problem with {agent}, and you should report it. - -[discrete] -[[i-deleted-my-agent]] -== How do I restore an {agent} that I deleted from {fleet}? - -It's okay, we've got your back! The data is still in {es}.
To add {agent} -to {fleet} again, <>, re-enroll it on the host, then -run {agent}. - -[discrete] -[[i-rebooted-my-host]] -== How do I restart {agent} after rebooting my host? - -{agent} should restart automatically when you reboot your host. If it doesn't, -you can start it manually by running: - -[source,shell] ----- -elastic-agent run ----- - -If the process is already running, you can restart it by running: - -[source,shell] ----- -elastic-agent restart ----- - -[discrete] -[[does-agent-download-packages]] -== Does {agent} or {kib} download integration packages? - -{agent} does not download integration packages. When you add an integration in -{fleet}, {kib} connects to the {package-registry} at `epr.elastic.co`, -downloads the integration package, and stores its assets in {es}. This means -that you no longer have to run a manual setup command to load integrations as -you did previously with {beats} modules. - -By default, {kib} requires an internet connection to download integration -packages from the {package-registry}. If network restrictions prevent -{kib} from reaching the public {package-registry}, you can use a proxy -server or host your own {package-registry}. To learn more, refer to -<>. - -[discrete] -[[does-agent-download-anything-from-internet]] -== Does {agent} download anything from the Internet? - -* In version 7.10 and later, a fully capable artifact can be installed with no -connection to the Elastic download site. However, if it is in use, the -{elastic-defend} process is instructed to attempt to download -newer released versions of the integration-specific artifacts it uses. Some of -those are, for example, the malware model, trusted applications artifact, -exceptions list artifact, and others. {elastic-endpoint} will continue to -protect the host even if it's unable to download updates. However, it won't -receive updates to protections until {agent} is upgraded to a new version. 
-For more information, refer to the -{security-guide}/index.html[{elastic-sec} documentation]. - -* {agent} requires internet access to download artifacts for binary upgrades. -In air-gapped environments, you can host your own artifact registry. For -more information, refer to <>. - -[discrete] -[[do-i-need-to-setup-elastic-agent]] -== Do I need to set up the {beats} managed by {agent}? - -You might have noticed that {agent} runs {beats} under the hood. But note that -the {beats} managed by {agent} are set up and run differently from standalone -{beats}. - -For example, standalone {beats} use modules and require you to run a setup -command on the host to load assets, such as ingest pipelines and dashboards. In -contrast, {beats} managed by {agent} use integration packages that {kib} -downloads from the {package-registry} at `epr.elastic.co`. This means that -{agent} does not need extra privileges to set up assets because -{fleet} manages the assets. - -[discrete] -[[what-is-the-endpoint-package]] -== What is the {elastic-defend} integration in {fleet}? - -The {elastic-defend} integration provides protection on your {agent} -controlled host. The integration monitors your host for security-related events, -allowing for investigation of security data through the {security-app} in {kib}. -The {elastic-defend} integration is managed by {agent} in the -same way as other integrations. Try it out! For more information, refer to the -{security-guide}/index.html[{elastic-sec} documentation]. - -[discrete] -[[how-are-security-to-agent-communications-secured]] -== How are communications secured between {elastic-sec} and {agent}? - -{elastic-sec} connects to the agent over loopback TLS on port 6788. -{elastic-sec} validates that the agent has root (Linux and macOS) or SYSTEM -(Windows) permissions. - -[discrete] -[[how-are-secrets-secured]] -== How are secrets secured when entered into integration policies or agent policies in {kib}? 
- -Credentials that you provide for an agent or integration policy are stored in -{es}. They can be read by any user who has read permissions to the `.fleet-*` -and `.kibana*` indices in {es}. By default these are the superuser, -`fleet-server` service account tokens, and the `kibana_system` user. These -secrets are also included in agent policies and shared with agents via {fleet} -through TLS. When you use the {agent} installer and enroll agents in {fleet}, -the policies are stored on the host file system and, by default, can only be -read by root. - -[discrete] -[[which-es-kibana-ports-are-needed]] -== Which {es} and {kib} ports need to be accessible? - -The policy generated by {fleet} already contains the correct {es} address -and port for your setup. If you run everything locally, the address is -`127.0.0.1:9200`. If you use our -{ess-product}[hosted {ess}] on {ecloud}, -you can copy the {es} endpoint URL from the overview page of your deployment. -If you're not running in {ecloud}, make sure the {kib} and {es} HTTPS ports -are both accessible; by default these are `5601` and `9200` respectively. - -[discrete] -[[how-do-i-reinstall-a-missing-dashboard-asset]] -== If I delete an integration dashboard asset from {kib}, how do I get it back? - -To reinstall the assets for a specific integration, you can use the {fleet} UI. -For more information, see <>. - -Alternatively, you can use the {fleet} API using the package name and version. -This needs to be run against the {kib} API and not the {es} API to be -successful. To reinstall package assets, execute the following call with the -`force` parameter in the body: - -[source,sh] ----- -POST api/fleet/epm/packages/[package name]/[package version] -{ "force": true } ----- -// KIBANA - -So, for example, to reinstall the system v1.0.0 package, POST: - -[source,sh] ----- -POST api/fleet/epm/packages/system/1.0.0 -{ "force": true } ----- -// KIBANA - -The package version is shown in the Integrations view in {kib}. 
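For readers who prefer a copy-pasteable command, the Kibana API call above can also be issued with `curl`. This is a hedged sketch, not part of the original docs: the Kibana URL, the `elastic:changeme` credentials, and the `system`/`1.0.0` package values are all placeholders to substitute for your own deployment. The `kbn-xsrf` header is required on Kibana API calls.

```shell
# Placeholder values -- substitute your own Kibana URL, credentials,
# package name, and package version before running.
KIBANA_URL="http://localhost:5601"
AUTH="elastic:changeme"
PACKAGE="system"
VERSION="1.0.0"
ENDPOINT="$KIBANA_URL/api/fleet/epm/packages/$PACKAGE/$VERSION"

# kbn-xsrf is required on every Kibana API call; "force": true tells
# Fleet to reinstall the package assets even if they already exist.
curl -s -u "$AUTH" -X POST "$ENDPOINT" \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{"force": true}' \
  || echo "request failed: check that $KIBANA_URL is reachable"
```

A successful call returns a JSON list of the assets that were (re)installed; remember that this must target the Kibana API, not Elasticsearch.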
diff --git a/docs/en/ingest-management/troubleshooting/images/collect-agent-diagnostics.png b/docs/en/ingest-management/troubleshooting/images/collect-agent-diagnostics.png deleted file mode 100644 index 4e9647881..000000000 Binary files a/docs/en/ingest-management/troubleshooting/images/collect-agent-diagnostics.png and /dev/null differ diff --git a/docs/en/ingest-management/troubleshooting/troubleshooting-intro.asciidoc b/docs/en/ingest-management/troubleshooting/troubleshooting-intro.asciidoc deleted file mode 100644 index 528049e64..000000000 --- a/docs/en/ingest-management/troubleshooting/troubleshooting-intro.asciidoc +++ /dev/null @@ -1,6 +0,0 @@ -[[troubleshooting-intro]] -= Troubleshoot - -This section provides an <> and -<> tips to help you resolve common -problems with {fleet} and {agent}. diff --git a/docs/en/ingest-management/troubleshooting/troubleshooting.asciidoc b/docs/en/ingest-management/troubleshooting/troubleshooting.asciidoc deleted file mode 100644 index 99f64a88c..000000000 --- a/docs/en/ingest-management/troubleshooting/troubleshooting.asciidoc +++ /dev/null @@ -1,981 +0,0 @@ -[[fleet-troubleshooting]] -= Troubleshoot common problems - -We have collected the most common known problems and listed them here. 
If your problem -is not described here, please review the open issues in the following GitHub repositories: - -[options="header"] -|=== -| Repository | To review or report issues about - -|https://github.com/elastic/kibana/issues[elastic/kibana] | {fleet} and {integrations} UI -|https://github.com/elastic/elastic-agent/issues[elastic/elastic-agent] | {agent} -|https://github.com/elastic/beats/issues[elastic/beats] | {beats} shippers -|https://github.com/elastic/fleet-server/issues[elastic/fleet-server] | {fleet-server} -|https://github.com/elastic/package-registry/issues[elastic/package-registry] | {package-registry} -|https://github.com/elastic/observability-docs/issues[elastic/observability-docs] | Documentation issues - -|=== - -Have a question? Read our <>, or contact us in the -{forum}[discuss forum]. Your feedback is valuable to us. - -Running {agent} standalone? Also refer to <>. - - -[discrete] -[[troubleshooting-contents]] -== Troubleshooting contents - -Find troubleshooting information for {fleet}, {fleet-server}, and {agent} in the following documentation: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -[discrete] -[[deleted-policy-unenroll]] -== {agent} unenroll fails - -In {fleet}, if you delete an {agent} policy that is associated with one or more inactive enrolled agents, when the agent returns to a `Healthy` or `Offline` state, it cannot be unenrolled. Attempting to unenroll the agent results in an `Error unenrolling agent` message, and the unenrollment fails. - -To resolve this problem, you can use the <> to force unenroll the agent.
- -To force unenroll a single {agent}: - -[source,"shell"] ----- -POST kbn:/api/fleet/agents//unenroll -{ - "force": true, - "revoke": true -} ----- - -To bulk force unenroll a set of {agents}: - -[source,"shell"] ----- -POST kbn:/api/fleet/agents/bulk_unenroll -{ "agents": ["", ""], - "force": true, - "revoke": true -} ----- - -We are also updating the {fleet} UI to prevent removal of an {agent} policy that is currently associated with any inactive agents. - -[discrete] -[[tsdb-illegal-argument]] -== illegal_argument_exception when TSDB is enabled - -When you use an {agent} integration in which TSDB (Time Series Database) is enabled, you may encounter an `illegal_argument_exception` error in the {fleet} UI. - -This can occur if you have a component template defined that includes a `_source` attribute, which conflicts with the `_source: synthetic` setting used when TSDB is enabled. - -For details about the error and how to resolve it, refer to the section `Runtime fields cannot be used in TSDB indices` in the Innovation Hub article link:https://support.elastic.co/knowledge/9363b9fd[TSDB enabled integrations for {agent}]. - -[discrete] -[[agents-in-cloud-stuck-at-updating]] -== {agent}s hosted on {ecloud} are stuck in `Updating` or `Offline` - -In {ecloud}, after <> {fleet-server} and its -integration policies, agents enrolled in the {ecloud} agent policy -may experience issues updating. To resolve this problem: - -. In a terminal window, run the following `cURL` request, providing your {kib} superuser credentials to reset the {ecloud} agent policy.
- -** On {kib} versions 8.11 and later, run: -+ -[source,shell] ----- -curl -u : --request POST \ - --url /internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \ - --header 'content-type: application/json' \ - --header 'kbn-xsrf: xyz' \ - --header 'elastic-api-version: 1' ----- - -** On {kib} versions earlier than 8.11, run: -+ -[source,shell] ----- -curl -u : --request POST \ - --url /internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \ - --header 'content-type: application/json' \ - --header 'kbn-xsrf: xyz' ----- - -. Force unenroll the agent stuck in `Updating`: - -.. To find agent's ID, go to *{fleet} > Agents* and click the agent to see its -details. Copy the Agent ID. - -.. In a terminal window, run: -+ -[source,shell] ----- -curl -u : --request POST \ - --url /api/fleet/agents//unenroll \ - --header 'content-type: application/json' \ - --header 'kbn-xsrf: xx' \ - --data-raw '{"force":true,"revoke":true}' \ - --compressed ----- -+ -Where `` is the ID you copied in the previous step. - -. Restart the {integrations-server}: -+ -In the {ecloud} console under {integrations-server}, click *Force Restart*. - - -[discrete] -[[fleet-server-not-in-kibana-cloud]] -== When using {ecloud}, {fleet-server} is not listed in {kib} - -If you are unable to see {fleet-server} in {kib}, make sure it's set up. - -To set up {fleet-server} on {ecloud}: - -. Go to your deployment on {ecloud}. -. Follow the {ecloud} prompts to set up *{integrations-server}*. Once complete, the {fleet-server} {agent} -will show up in {fleet}. - -To enable {fleet} and set up {fleet-server} on a self-managed cluster: - -. In the {es} configuration file, `config/elasticsearch.yml`, set the following -security settings to enable security and API keys: -+ -[source,yaml] ----- -xpack.security.enabled: true -xpack.security.authc.api_key.enabled: true ----- - -. 
In the {kib} configuration file, `config/kibana.yml`, enable {fleet} -and specify your user credentials: -+ -[source,yaml] ----- -xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters" -elasticsearch.username: "my_username" <1> -elasticsearch.password: "my_password" ----- -<1> Specify a user who is authorized to use {fleet}. -+ -To set up passwords, you can use the documented {es} APIs or the -`elasticsearch-setup-passwords` command. For example, `./bin/elasticsearch-setup-passwords auto` -+ -After running the command: - - .. Copy the Elastic user name to the {kib} configuration file. - .. Restart {kib}. - .. Follow the documented steps for setting up a self-managed {fleet-server}. -For more information, refer to <>. - - -[discrete] -[[fleet-setup-fails]] -== The `/api/fleet/setup` endpoint can't reach the package registry - -To install {integrations}, the {fleet} app requires a connection to -an external service called the {package-registry}. - -For this to work, the {kib} server must connect to `https://epr.elastic.co` on port `443`. - -[discrete] -[[fleet-errors-tls]] -== {kib} cannot connect to {package-registry} in air-gapped environments - -In air-gapped environments, you may encounter the following error if you're using a custom Certificate Authority (CA) that is not available to {kib}: - -[source,json] ----- -{"type":"log","@timestamp":"2022-03-02T09:58:36-05:00","tags":["error","plugins","fleet"],"pid":58716,"message":"Error connecting to package registry: request to https://customer.server.name:8443/categories?experimental=true&include_policy_templates=true&kibana.version=7.17.0 failed, reason: self signed certificate in certificate chain"} ----- - -To fix this problem, add your CA certificate file path to the {kib} startup -file by defining the `NODE_EXTRA_CA_CERTS` environment variable. More information -about this in <> section. - - -[discrete] -[[fleet-app-crashes]] -== {fleet} in {kib} crashes - -. 
To investigate the error, open your browser's development console. -. Select the **Network** tab, and refresh the page. -+ -One of the requests to the {fleet} API will most likely have returned an error. If the error -message doesn't give you enough information to fix the problem, please contact us in the {forum}[discuss forum]. - -[discrete] -[[agent-enrollment-certs]] -== {agent} enrollment fails on the host with `x509: certificate signed by unknown authority` message - -To ensure that communication with {fleet-server} is encrypted, -{fleet-server} requires {agent}s to present a signed certificate. In a -self-managed cluster, if you don't specify certificates when you set up -{fleet-server}, self-signed certificates are generated automatically. - -If you attempt to enroll an {agent} in a {fleet-server} with a self-signed -certificate, you will encounter the following error: - -[source,sh] ----- -Error: fail to enroll: fail to execute request to fleet-server: x509: certificate signed by unknown authority -Error: enroll command failed with exit code: 1 ----- - -To fix this problem, pass the `--insecure` flag along with the `enroll` or -`install` command. For example: - -[source,sh] ----- -sudo ./elastic-agent install --url=https://:8220 --enrollment-token= --insecure ----- - -Traffic between {agent}s and {fleet-server} over HTTPS will be encrypted; you're -simply acknowledging that you understand that the certificate chain cannot be -verified. - -Allowing {fleet-server} to generate self-signed certificates is useful to get -things running for development, but not recommended in a production environment. - -For more information, refer to <>. - -[discrete] -[[es-enrollment-certs]] -== {agent} enrollment fails on the host with `x509: cannot validate certificate for x.x.x.x because it doesn't contain any IP SANs` message - -To ensure that communication with {es} is encrypted, -{fleet-server} requires {es} to present a signed certificate. 
- -This error occurs when you use self-signed certificates with {es} using IP as a Common Name (CN). -With IP as a CN, {fleet-server} looks into subject alternative names (SANs), which is empty. To work -around this situation, use the `--fleet-server-es-insecure` flag to disable certificate verification. - -You will also need to set `ssl.verification_mode: none` in the Output settings in {fleet} and {integrations} UI. - -[discrete] -[[agent-enrollment-timeout]] -== {agent} enrollment fails on the host with `Client.Timeout exceeded` message - -To enroll in {fleet}, {agent} must connect to the {fleet-server} instance. -If the agent is unable to connect, you see the following failure: - -[source,output] ------ -fail to enroll: fail to execute request to {fleet-server}:Post http://fleet-server:8220/api/fleet/agents/enroll?: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) ------ - -Here are several steps to help you troubleshoot the problem. - -. Check for networking problems. From the host, run the `ping` command to confirm -that it can reach the {fleet-server} instance. - -. Additionally, `curl` the `/status` API of {fleet-server}: -+ -[source,shell] ----- -curl -f http://:8220/api/status ----- -+ -. Verify that you have specified the correct {kib} {fleet} settings URL and port for -your environment. -+ -By default, HTTPS protocol and port 8220 is expected by {fleet-server} to communicate -with {es} unless you have explicitly set it otherwise. -+ -. Check that you specified a valid enrollment key during enrollment. To do this: -.. In {fleet}, select **Enrollment tokens**. -.. To view the secret, click the eyeball icon. The secret should match the string -that you used to enroll {agent} on your host. -.. If the secret doesn't match, create a new enrollment token and use this -token when you run the `elastic-agent enroll` command. 
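The reachability checks above (pinging the host and probing the {fleet-server} `/status` endpoint) can be combined into one small script. A minimal sketch, not part of the original docs: `192.0.2.1` is a reserved documentation address standing in for your real {fleet-server} host, and `curl` is assumed to be installed.

```shell
# Placeholder host -- 192.0.2.1 is a reserved documentation address;
# replace it with your Fleet Server host before use.
FLEET_HOST="192.0.2.1"
FLEET_PORT="8220"
STATUS_URL="http://$FLEET_HOST:$FLEET_PORT/api/status"

check_fleet_server() {
  # -f: treat HTTP errors as failures; -m 5: give up after 5 seconds,
  # similar to the client timeout the agent applies while enrolling.
  if curl -sf -m 5 "$STATUS_URL" >/dev/null 2>&1; then
    echo "reachable: $STATUS_URL"
  else
    echo "NOT reachable: $STATUS_URL"
  fi
}

check_fleet_server
```

If the status endpoint is not reachable, check firewalls and the {fleet-server} host/port configured in {fleet} settings before re-trying enrollment.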
- -[discrete] -[[general-fleet-server-triage]] -== Many {fleet-server} problems can be triaged and fixed with the tips below - -IMPORTANT: When creating an issue or sending a support forum communication, this section -can help you identify what is required. - -{fleet-server} allows {agent} to connect to {es}, which is the same as the connection -to {kib} in prior releases. However, because {fleet-server} is on the edge host, it may -result in additional networking setup and troubleshooting. - -[discrete] -[[trb-retrieve-agent-version]] -=== Retrieve the {agent} version - -. If you installed the {agent}, run the following command (the example is for POSIX -based systems): -+ -[source,shell] ----- -elastic-agent version ----- -+ -. If you have not installed the {agent} and you are running it as a temporary process, you can run: -+ -[source,shell] ----- -./elastic-agent version ----- -+ -NOTE: Both commands also work on Windows and macOS, with slight OS-specific variations in -how you call them. If needed, please refer to <> -for examples of how to adjust them. - -[discrete] -[[trb-check-agent-status]] -=== Check the {agent} status - -Run the following command to view the current status of the {agent}. - -[source,shell] ----- -elastic-agent status ----- - -Based on the information returned, you can take further action. - -If {agent} is running, but you do not see what you expect, here are some items to review: - -. In {fleet}, click **Agents**. Check which policy is associated with the running {agent}. If it is not the policy you expected, you can change it. -. In {fleet}, click **Agents**, and then select the {agent} policy. Check for the integrations that should be included. -+ -For example, if you want to include system data, make sure the *System* integration is included in the policy. -+ -. Confirm if the *Collect agent logs* and *Collect agent metrics* options are selected. -..
In {fleet}, click **Agents**, and then select the {agent} policy. -.. Select the *Settings* tab. If you want to collect agent logs or metrics, select these options. -+ -IMPORTANT: The *{ecloud} agent policy* is created only in {ecloud} deployments and, by default, -does not include the collection of logs or metrics. - -[discrete] -[[trb-collect-agent-diagnostics]] -=== Collect {agent} diagnostics bundle - -The {agent} diagnostics bundle collects the following information: - -. {agent} version numbers -. {beats} (and other process) version numbers and process metadata -. Local configuration, elastic-agent policy, and the configuration that is rendered and passed to {beats} and other processes -. {agent}'s local log files -. {agent} and {beats} pprof profiles - -Note that the diagnostics bundle is intended for debugging purposes only; its structure may change between releases. - -IMPORTANT: {agent} attempts to automatically redact credentials and API keys when creating diagnostics. -Please review the contents of the archive before sharing to ensure that there are no credentials in plain text. - -IMPORTANT: The ZIP archive containing diagnostics information will include the raw events of documents sent to the {agent} output. -By default, it will log only the failing events as `warn`. -When the `debug` logging level is enabled, all events are logged. -Please review the contents of the archive before sharing to ensure that no sensitive information is included. - -**Get the diagnostics bundle using the CLI** - -Run the following command to generate a zip archive containing diagnostics information that the Elastic team can use for debugging cases. - -[source,shell] ----- -elastic-agent diagnostics ----- - -If you want to omit the raw events from the diagnostic, add the flag `--exclude-events`. - -**Get the diagnostics bundle through {fleet}** - -{fleet} provides the ability to remotely generate and gather an {agent}'s diagnostics bundle.
-An agent can gather and upload diagnostics if it is online in a `Healthy` or `Unhealthy` state. The diagnostics are sent to {fleet-server}, which in turn adds them to {es}. Therefore, this works even with {agents} that are not using the {es} output. -To download the diagnostics bundle for local viewing: - -. In {fleet}, open the **Agents** tab. - -. In the **Host** column, click the agent's name. - -. Select the **Diagnostics** tab and click the **Request diagnostics .zip** button. -+ -[role="screenshot"] -image::images/collect-agent-diagnostics1.png[Collect agent diagnostics under agent details] - -. In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you'd like detailed CPU data. -+ -[role="screenshot"] -image::images/collect-agent-diagnostics2.png[Collect agent diagnostics confirmation pop-up] - -. Click the **Request diagnostics** button. - -When available, the new diagnostic bundle will be listed on this page, as well as any in-progress or previously collected bundles for the {agent}. - -Note that the bundles are stored in {es} and are removed automatically after 7 days. You can also delete any previously created bundle by clicking the `trash can` icon. - -[discrete] -[[not-installing-no-logs-in-terminal]] -== Some problems occur so early that insufficient logging is available - -If some problems occur early and insufficient logging is available, run the following command: - -[source,shell] ----- -./elastic-agent install -f ----- - -The stand-alone install command installs the {agent} and sets up all of the service configuration. You can now run the -`enroll` command. For example: - -[source,shell] ----- -elastic-agent enroll --fleet-server-es=https://:443 --fleet-server-service-token= --fleet-server-policy= ----- -Note: Port `443` is commonly used in {ecloud}. However, with self-managed deployments, your {es} may run on port `9200` or something entirely different.
- -For information on where to find agent logs, refer to our <>. - -[discrete] -[[agent-healthy-but-no-data-in-es]] -== The {agent} is reported as `Healthy` but still has problems sending data to {es} - -. To confirm that the {agent} is running and its status is `Healthy`, select the *Agents* tab. -+ -If you previously selected the *Collect agent logs* option, you can now look at the agent logs. -+ -. Click the agent name and then select the *Logs* tab. -+ -If there are no logs displayed, it suggests a communication problem between your host and {es}. A possible reason for this is -that the port is already in use. -+ -. You can check the port usage using tools like Wireshark or netstat. On a POSIX system, you can run the following command: -+ -[source,shell] ----- -netstat -nat | grep :8220 ----- -+ -Any response data indicates that the port is in use. This may or may not be expected, for example -if you had intended to uninstall the {fleet-server}. In that case, re-check and continue. - -[discrete] -[[fleet-agent-stuck-on-updating]] -== {agent} is stuck in status `Updating` - -Beginning in {stack} version 8.11, a stuck {agent} upgrade should be detected automatically, -and you can <> from {fleet}. - -[discrete] -[[secondary-agent-not-connecting]] -== {fleet-server} is running and healthy with data, but other Agents cannot use it to connect to {es} - -Some settings are only used when you have multiple {agent}s. If this is the case, it may help -to check that the hosts can communicate with the {fleet-server}. - -From the non-{fleet-server} host, run the following command: - -[source,shell] ----- -curl -f http://:8220/api/status ----- - -The response may yield errors that you can debug further, or it may work and show that communication ports and -networking are not the problems. - -One common problem is that the default {fleet-server} port of `8220` isn’t open on the {fleet-server} -host to communicate.
You can review and correct this with common networking tools, in line with any
-networking and security requirements you may have.
-
-[discrete]
-[[es-apikey-failed]]
-== {es} authentication service fails with `Authentication using apikey failed` message
-
-To save API keys and encrypt them in {es}, {fleet} requires an encryption key.
-
-To provide an encryption key, in the `kibana.yml` configuration file, set the `xpack.encryptedSavedObjects.encryptionKey` property.
-
-[source,yaml]
-----
-xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters"
-----
-
-[discrete]
-[[process-not-root]]
-== {agent} fails with `Agent process is not root/admin or validation failed` message
-
-Ensure the user running {agent} has root privileges, because some integrations
-require root privileges to collect sensitive data.
-
-If you're running {agent} in the foreground (and not as a service) on Linux or macOS, run the
-agent as the root user, using `sudo` or `su`.
-
-If you're using the {elastic-defend} integration, make sure you're
-running {agent} under the SYSTEM account.
-
-TIP: If you install {agent} as a service as described in
-<>, {agent} runs under the SYSTEM account by
-default.
-
-To run {agent} under the SYSTEM account, you can do the following:
-
-. Download https://docs.microsoft.com/en-us/sysinternals/downloads/psexec[PsExec]
-and extract the contents to a folder. For example, `d:\tools`.
-. Open a command prompt as an Administrator (right-click the command prompt
-icon and select *Run As Administrator*).
-. From the command prompt, run {agent} under the SYSTEM account:
-+
-[source,sh]
-----
-d:\tools\psexec.exe -sid "C:\Program Files\Elastic-Agent\elastic-agent.exe" run
-----
-
-[discrete]
-[[upgrading-integration-too-many-conflicts]]
-== Integration policy upgrade has too many conflicts
-
-If you try to upgrade an integration policy that is several versions old, there
-may be substantial conflicts or configuration issues. 
Rather than trying to fix
-these problems, it might be faster to create a new policy, test it, and roll
-out the integration upgrade to additional hosts.
-
-After <>:
-
-. <>.
-
-. <>.
-The newer version is automatically used.
-
-. <> to an {agent}.
-+
-TIP: In larger deployments, you should test integration upgrades on a sample {agent}
-before rolling out a larger upgrade initiative.
-Only after a small trial is deemed successful should the updated policy be
-rolled out to all hosts.
-
-. Roll out the integration update to additional hosts:
-
-.. In {fleet}, click *Agent policies*.
-Click the name of the policy you want to edit.
-
-.. Search or scroll to a specific integration.
-Open the *Actions* menu and select *Delete integration*.
-
-.. Click *Add integration* and re-add the freshly deleted integration.
-The updated version will be used and applied to all {agent}s.
-
-.. Repeat this process for each policy with the out-of-date integration.
-+
-NOTE: In some instances, for example when there are hundreds or thousands of different {agent}s and
-policies that need to be updated, this upgrade path is not feasible.
-In this case, update one policy and use the <> action to apply the updated policy versions to additional policies.
-The downside of this method is that you lose the granularity of assessing
-integration version changes individually across policies.
-
-[discrete]
-[[agent-hangs-while-unenrolling]]
-== {agent} hangs while unenrolling
-
-When unenrolling {agent}, {fleet} waits for acknowledgment from the agent
-before it completes the unenroll process. If {fleet} doesn't receive an
-acknowledgment, the status hangs at `unenrolling`.
-
-You can force unenroll the agent to invalidate all API keys related to the agent and change the status to
-`inactive` so that the agent no longer appears in {fleet}.
-
-. In {fleet}, select **Agents**.
-
-. Under Agents, choose **Unenroll agent** from the **Actions** menu next to the
-agent you want to unenroll.
-
-. 
Click **Force unenroll**.
-
-[discrete]
-[[ca-cert-testing]]
-== On {fleet-server} startup, ERROR seen with `State changed to CRASHED: exited with code: 1`
-
-You may see this error message for a number of different reasons. A common reason, when attempting production-like usage, is that the `ca.crt` file passed in cannot be found. To verify whether this is the problem, bootstrap {fleet-server} without passing a `ca.crt` file. This means you would temporarily test any subsequent
-{agent} installs with {fleet-server}'s own self-signed certificate.
-
-TIP: Be sure to pass in the full path to the `ca.crt` file. A relative path won't work.
-
-You'll know that {fleet-server} is running with its testing-oriented self-signed certificate
-when you see the following error during {agent} installs:
-
-[source,sh]
-----
-Error: fail to enroll: fail to execute request to fleet-server: x509: certificate signed by unknown authority
-Error: enroll command failed with exit code: 1
-----
-
-To install or enroll an {agent} against a {fleet-server} that uses a self-signed certificate, add the `--insecure` option to the
-command:
-
-[source,sh]
-----
-sudo ./elastic-agent install --url=https://:8220 --enrollment-token= --insecure
-----
-
-For more information, refer to <>.
-
-[discrete]
-[[endpoint-not-uninstalled-with-agent]]
-== Uninstalling {elastic-endpoint} fails
-
-When you uninstall {agent}, all the programs managed by {agent}, such as
-{elastic-endpoint}, are also removed. If uninstalling fails,
-{elastic-endpoint} might remain on your system.
-
-To remove {elastic-endpoint}, run the following commands:
-
---
-include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/remove-endpoint-files/widget.asciidoc[]
-
---
-
-[discrete]
-[[endpoint-unauthorized]]
-== API key is unauthorized to send telemetry to `.logs-endpoint.diagnostic.collection-*` indices
-
-By default, telemetry is turned on in the {stack} to help us learn about the
-features that our users are most interested in. 
This helps us focus our efforts on
-making features even better.
-
-If you've recently upgraded from version `7.10` to `7.11`, you might see the
-following message when you view {elastic-defend} logs:
-
-[source,sh]
-----
-action [indices:admin/auto_create] is unauthorized for API key id [KbvCi3YB96EBa6C9k2Cm]
-of user [fleet_enroll] on indices [.logs-endpoint.diagnostic.collection-default]
-----
-
-The above message indicates that {elastic-endpoint} does not have the correct
-permissions to send telemetry. This is a known problem in 7.11 that will be
-fixed in an upcoming patch release.
-
-To remove this message from your logs, you can turn off telemetry for the {elastic-defend} integration
-until the next patch release is available.
-
-. In {kib}, click **Integrations**, and then select the **Manage** tab.
-
-. Click **{elastic-defend}**, and then select the **Policies** tab to view all the
-installed integrations.
-
-. Click the integration to edit it.
-
-. Under advanced settings, set `windows.advanced.diagnostic.enabled`
-to `false`, and then save the integration.
-
-[discrete]
-[[hosted-agent-offline]]
-== Hosted {agent} is offline
-
-To scale the {fleet-server} deployment, {ecloud} starts new containers or shuts down old ones when hosted {agent}s are required or no longer needed. The old {agent}s will show in the Agents list for 24 hours and then automatically disappear.
-
-[discrete]
-[[mac-file-sharing]]
-== {agent} fails to enroll with {fleet-server} running on localhost
-
-If you're testing {fleet-server} locally on a macOS system using localhost (`https://127.0.0.1:8220`) as the Host URL, you may encounter this error:
-
-[source,sh]
-----
-Error: fail to enroll: fail to execute request to fleet-server:
-lookup My-MacBook-Pro.local: no such host
-----
-
-This can occur on newer macOS software. To resolve the problem, link:https://support.apple.com/en-ca/guide/mac-help/mh17131/mac[ensure that file sharing is enabled] on your local system. 
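The `no such host` error above is ultimately a hostname-resolution failure. As a quick hypothetical check (not part of the Elastic tooling), you can test whether the name reported in the error actually resolves on your machine:

```shell
# Hypothetical resolution check: `getent` covers Linux; a single ping is used
# as a rough fallback on systems without getent (such as macOS). Flag semantics
# for ping vary between platforms, so treat this strictly as a sketch.
host_resolves() {
  getent hosts "$1" >/dev/null 2>&1 || ping -c 1 "$1" >/dev/null 2>&1
}

for name in localhost "$(hostname)"; do
  if host_resolves "$name"; then
    echo "$name resolves"
  else
    echo "$name does NOT resolve -- enrollment against it will fail"
  fi
done
```

If your machine's own hostname does not resolve, fixing resolution (for example, by enabling file sharing as described above) is the prerequisite for enrollment.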
-
-[discrete]
-[[hosted-agent-8-x-upgrade-fail]]
-== APM & {fleet} fails to upgrade to 8.x on {ecloud}
-
-In some scenarios, upgrading APM & {fleet} to 8.x may fail if the {ecloud} agent policy was modified manually. The {fleet} app in {kib} may show a message like:
-
-[source,sh]
-----
-Unable to create package policy. Package 'apm' already exists on this agent policy
-----
-
-To work around this problem, you can reset the {ecloud} agent policy with an API call. Note that this will remove any custom integration policies that you've added to the policy, such as Synthetics monitors.
-
-[source,sh]
-----
-curl -u elastic: --request POST \
-  --url /internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \
-  --header 'Content-Type: application/json' \
-  --header 'kbn-xsrf: xyz'
-----
-
-[discrete]
-[[pgp-key-download-fail]]
-== Air-gapped {agent} upgrade can fail due to an inaccessible PGP key
-
-In versions 8.9 and above, an {agent} upgrade may fail when the upgrader can't access a PGP key required to verify the binary signature. For details and a workaround, refer to the link:https://www.elastic.co/guide/en/fleet/8.9/release-notes-8.9.0.html#known-issue-3375[PGP key download fails in an air-gapped environment] known issue in the version 8.9.0 Release Notes or to the link:https://github.com/elastic/elastic-agent/blob/main/docs/pgp-workaround.md[workaround documentation] in the elastic-agent GitHub repository.
-
-[discrete]
-[[fleet-server-integration-removed]]
-== {agents} are unable to connect after removing the {fleet-server} integration
-
-When you use {fleet}-managed {agent}, at least one {agent} needs to be running the link:https://docs.elastic.co/integrations/fleet_server[{fleet-server} integration]. If the policy containing this integration is accidentally removed, the other agents can no longer be managed. However, the {agents} will continue to send data to their configured output. 
-
-There are two approaches to fixing this issue, depending on whether the {agent} that was running the {fleet-server} integration is still installed and healthy (but is now running another policy).
-
-To recover the {agent}:
-
-. In {fleet}, open the **Agents** tab and click **Add agent**.
-
-. In the **Add agent** flyout, select an agent policy that contains the **Fleet Server** integration. On {ecloud}, you can use the **Elastic Cloud agent policy**, which includes the integration.
-
-. Follow the instructions in the flyout, and stop before running the CLI commands.
-
-. Depending on the state of the original {fleet-server} {agent}, do one of the following:
-+
-* **The original {fleet-server} {agent} is still running and healthy**
-+
-In this case, you only need to re-enroll the agent with {fleet}:
-
-.. Copy the `elastic-agent install` command from the {kib} UI.
-.. In the command, replace `install` with `enroll`.
-.. In the directory where {agent} is running (for example `/opt/Elastic/Agent/` on Linux), run the command as `root`.
-+
-For example, if {kib} gives you the command:
-+
-[source,sh]
-----
-sudo ./elastic-agent install --url=https://fleet-server:8220 --enrollment-token=bXktc3VwZXItc2VjcmV0LWVucm9sbWVudC10b2tlbg==
-----
-+
-Instead run:
-+
-[source,sh]
-----
-sudo ./elastic-agent enroll --url=https://fleet-server:8220 --enrollment-token=bXktc3VwZXItc2VjcmV0LWVucm9sbWVudC10b2tlbg==
-----
-
-* **The original {fleet-server} {agent} is no longer installed**
-+
-In this case, you need to install the agent again:
-
-.. Copy the commands from the {kib} UI. The commands don't need to be changed.
-.. Run the commands in order. The first three commands will download a new {agent} install package, expand the archive, and change directories.
-+
-The final command will install {agent}. 
For example: -+ -[source,sh] ----- -sudo ./elastic-agent install --url=https://fleet-server:8220 --enrollment-token=bXktc3VwZXItc2VjcmV0LWVucm9sbWVudC10b2tlbg== ----- - -After running these steps your {agents} should be able to connect with {fleet} again. - -[discrete] -[[agent-oom-k8s]] -== {agent} Out of Memory errors on Kubernetes - -In a Kubernetes environment, {agent} may be terminated with reason `OOMKilled` due to inadequate available memory. - -To detect the problem, run the `kubectl describe pod` command and check the results for the following content: - -[source,sh] ----- - Last State: Terminated - Reason: OOMKilled - Exit Code: 137 ----- - -To resolve the problem, allocate additional memory to the agent and then restart it. - -[discrete] -[[agent-sudo-error]] -== Error when running {agent} commands with `sudo` - -On Linux systems, when you install {agent} <>, that is, using the `--unprivileged` flag, -{agent} commands should not be run with `sudo`. Doing so may result in an error due to the agent not having the required privileges. - -For example, when you run {agent} with the `--unprivileged` flag, running the `elastic-agent inspect` command will result in an error like the following: - -[source,sh] ----- -Error: error loading agent config: error loading raw config: fail to read configuration /Library/Elastic/Agent/fleet.enc for the elastic-agent: fail to decode bytes: cipher: message authentication failed ----- - -To resolve this, either install {agent} without the `--unprivileged` flag so that it has administrative access, or run the {agent} commands without the `sudo` prefix. - -[discrete] -[[agent-kubernetes-kustomize]] -== Troubleshoot {agent} installation on Kubernetes, with Kustomize - -Potential issues during {agent} installation on Kubernetes can be categorized into two main areas: - -. <>. -. <>. 
- -[discrete] -[[agent-kustomize-manifest]] -=== Problems related to the creation of objects within the manifest - -When troubleshooting installations performed with https://github.com/kubernetes-sigs/kustomize[Kustomize], it's good practice to inspect the output of the rendered manifest. To do this, take the installation command provided by Kibana Onboarding and replace the final part, `| kubectl apply -f-`, with a redirection to a local file. This allows for easier analysis of the rendered output. - -For example, the following command, originally provided by {kib} for an {agent} Standalone installation, has been modified to redirect the output for troubleshooting purposes: - -[source,sh] ----- -kubectl kustomize https://github.com/elastic/elastic-agent/deploy/kubernetes/elastic-agent-kustomize/default/elastic-agent-standalone\?ref\=v8.15.3 | sed -e 's/JUFQSV9LRVkl/ZDAyNnZaSUJ3eWIwSUlCT0duRGs6Q1JfYmJoVFRUQktoN2dXTkd0FNMtdw==/g' -e "s/%ES_HOST%/https:\/\/7a912e8674a34086eacd0e3d615e6048.us-west2.gcp.elastic-cloud.com:443/g" -e "s/%ONBOARDING_ID%/db687358-2c1f-4ec9-86e0-8f1baa4912ed/g" -e "s/\(docker.elastic.co\/beats\/elastic-agent:\).*$/\18.15.3/g" -e "/{CA_TRUSTED}/c\ " > elastic_agent_installation_complete_manifest.yaml ----- - -The previous command generates a local file named `elastic_agent_installation_complete_manifest.yaml`, which you can use for further analysis. It contains the complete set of resources required for the {agent} installation, including: - -* RBAC objects (`ServiceAccounts`, `Roles`, etc.) - -* `ConfigMaps` and `Secrets` for {agent} configuration - -* {agent} Standalone deployed as a `DaemonSet` - -* https://github.com/kubernetes/kube-state-metrics[Kube-state-metrics] deployed as a `Deployment` - -The content of this file is equivalent to what you'd obtain by following the <> steps, with the exception that `kube-state-metrics` is not included in the standalone method. 
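Once you have the rendered manifest in a local file, a quick sanity check is to summarize the resource kinds it declares and compare them against the expected objects listed above. The pipeline below is a sketch; the heredoc is a stand-in for the real rendered output:

```shell
# Sketch: summarize the `kind:` entries in a rendered manifest. The sample
# file below is hypothetical; against the real output you would point the
# grep at elastic_agent_installation_complete_manifest.yaml instead.
cat > /tmp/rendered_manifest.yaml <<'EOF'
kind: ServiceAccount
---
kind: ConfigMap
---
kind: Secret
---
kind: DaemonSet
---
kind: Deployment
EOF

# On the real file, expect RBAC objects, ConfigMaps/Secrets, the {agent}
# DaemonSet, and the kube-state-metrics Deployment to appear in this summary.
grep '^kind:' /tmp/rendered_manifest.yaml | sort | uniq -c
```

You can also validate the rendered file without creating any objects by running `kubectl apply --dry-run=client -f elastic_agent_installation_complete_manifest.yaml`.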
-
-**Possible issues**
-
-* If your user doesn't have *cluster-admin* privileges, creation of the RBAC resources might fail.
-
-* Some Kubernetes security mechanisms (like https://kubernetes.io/docs/concepts/security/pod-security-standards/[Pod Security Standards]) could cause part of the manifest to be rejected, as `hostNetwork` access and `hostPath` volumes are required.
-
-* If you already have an installation of `kube-state-metrics`, it could cause part of the manifest installation to fail or update your existing resources without notice.
-
-[discrete]
-[[agent-kustomize-after]]
-=== Failures occurring within specific components after installation
-
-If the installation is correct and all resources are deployed, but data is not flowing as expected (for example, you don't see any data on the *[Metrics Kubernetes] Cluster Overview* dashboard), check the following items:
-
-. Check the resource status and ensure the resources are all in a `Running` state:
-+
-[source,sh]
-----
-kubectl get pods -n kube-system | grep elastic
-kubectl get pods -n kube-system | grep kube-state-metrics
-----
-+
-[NOTE]
-====
-The default configuration assumes that both `kube-state-metrics` and the {agent} `DaemonSet` are deployed in the **same namespace** for communication purposes. If you change the namespace of any of the components, the agent configuration will need further policy updates.
-====
-
-. Describe the Pods if they are in a `Pending` state:
-+
-[source,sh]
-----
-kubectl describe -n kube-system
-----
-
-. 
Check the logs of the `elastic-agent` and `kube-state-metrics` Pods, and look for errors or warnings:
-+
-[source,sh]
-----
-kubectl logs -n kube-system
-kubectl logs -n kube-system | grep -i error
-kubectl logs -n kube-system | grep -i warn
-----
-+
-[source,sh]
-----
-kubectl logs -n kube-system
-----
-
-**Possible issues**
-
-* Connectivity, authorization, or authentication issues when connecting to {es}:
-+
-Ensure that the API key and {es} destination endpoint used during the installation are correct and reachable from within the Pods.
-+
-In an existing installation, the API key is stored in a `Secret` named `elastic-agent-creds-`, and the endpoint is configured in the `ConfigMap` `elastic-agent-configs-`.
-
-* Missing cluster-level metrics (provided by `kube-state-metrics`):
-+
-As described in <>, the {agent} Pod acting as `leader` is responsible for retrieving cluster-level metrics from `kube-state-metrics` and delivering them to {ref}/data-streams.html[data streams] prefixed as `metrics-kubernetes.state_`. To troubleshoot a situation where these metrics are not appearing:
-+
-. Determine which Pod owns the <> `lease` in the cluster, with:
-+
-[source,sh]
-----
-kubectl get lease -n kube-system elastic-agent-cluster-leader
-----
-+
-. Check the logs of that Pod to see whether there are errors when connecting to `kube-state-metrics` and whether the `state_*` metrics are being sent to {es}.
-+
-One way to check whether `state_*` metrics are being delivered to {es} is to inspect log lines with the `"Non-zero metrics in the last 30s"` message and check the values of the `state_*` metrics within the line, with something like:
-+
-[source,sh]
-----
-kubectl logs -n kube-system elastic-agent-xxxx | grep "Non-zero metrics" | grep "state_"
-----
-+
-If the previous command returns `"state_pod":{"events":213,"success":213}` or similar for all `state_*` metrics, the metrics are being delivered.
-+
-. 
As a last resort, if you believe none of the Pods is acting as a leader, you can try deleting the `lease` to generate a new one:
-+
-[source,sh]
-----
-kubectl delete lease -n kube-system elastic-agent-cluster-leader
-# wait a few seconds and check for the lease again
-kubectl get lease -n kube-system elastic-agent-cluster-leader
-----
-
-* Performance problems:
-+
-Monitor the CPU and memory usage of the agent Pods and adjust the manifest requests and limits as needed. Refer to <> for more details about the needed resources.
-
-Additional resources for {agent} on Kubernetes troubleshooting and information:
-
-* <>.
-
-* https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes/elastic-agent-kustomize/default[{agent} Kustomize Templates] documentation and resources.
-
-* Other examples and manifests to deploy https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes[{agent} on Kubernetes].
-
-[discrete]
-[[agent-kubernetes-invalid-api-key]]
-== Troubleshoot {agent} on Kubernetes seeing `invalid api key to authenticate with fleet` in logs
-
-If an agent was unenrolled from a Kubernetes cluster, there might be data remaining in `/var/lib/elastic-agent-managed/kube-system/state` on the node(s). Re-enrolling an agent later on the same nodes might then result in `invalid api key to authenticate with fleet` error messages.
-
-To avoid these errors, make sure to delete this state folder before enrolling a new agent.
-
-For more information, refer to issue link:https://github.com/elastic/elastic-agent/issues/3586[#3586].
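The state-folder cleanup described above can be scripted per node. This is a hypothetical sketch, not official tooling; the path is the one cited in this section, and the script would typically be run as root on each affected node before re-enrolling:

```shell
# Hypothetical cleanup sketch: remove leftover per-node Agent state before
# re-enrolling, to avoid "invalid api key to authenticate with fleet" errors.
clean_agent_state() {
  local state_dir="$1"
  if [ -d "$state_dir" ]; then
    rm -rf "$state_dir"
    echo "removed: $state_dir"
  else
    echo "nothing to remove: $state_dir"
  fi
}

clean_agent_state "/var/lib/elastic-agent-managed/kube-system/state"
```

Running it when no leftover state exists is harmless; it simply reports that there is nothing to remove.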