diff --git a/docset.yml b/docset.yml
index 5fcde28fb0..0f3befd591 100644
--- a/docset.yml
+++ b/docset.yml
@@ -3,6 +3,25 @@ exclude:
- 'README.md'
cross_links:
- asciidocalypse
+ - kibana
+ - integration-docs
+ - integrations
+ - logstash
+ - elasticsearch
+ - cloud
+ - beats
+ - go-elasticsearch
+ - elasticsearch-java
+ - elasticsearch-net
+ - elasticsearch-php
+ - elasticsearch-py
+ - elasticsearch-ruby
+ - elasticsearch-js
+ - ecs
+ - ecs-logging
+ - search-ui
+ - cloud-on-k8s
+
toc:
- file: index.md
- toc: get-started
@@ -12,6 +31,9 @@ toc:
- toc: deploy-manage
- toc: cloud-account
- toc: troubleshoot
+ - toc: release-notes
+ - toc: reference
+ - toc: extend
- toc: raw-migrated-files
subs:
diff --git a/extend/index.md b/extend/index.md
new file mode 100644
index 0000000000..92bebe7cd2
--- /dev/null
+++ b/extend/index.md
@@ -0,0 +1,20 @@
+# Extend and contribute
+
+This section contains information on how to extend or contribute to our various products.
+
+## Contributing to Elastic projects
+
+You can contribute to various projects, including:
+
+- [Kibana](kibana://docs/extend/index.md): Enhance our data visualization platform by contributing to Kibana.
+- [Logstash](logstash://docs/extend/index.md): Help us improve the data processing pipeline with your contributions to Logstash.
+- [Beats](beats://docs/extend/index.md): Add new features or beats to our lightweight data shippers.
+
+## Creating integrations
+
+Extend the capabilities of Elastic by creating integrations that connect Elastic products with other tools and systems. Visit our [Integrations Guide](integrations://docs/extend/index.md) to get started.
+
+## Elasticsearch plugins
+
+Develop custom plugins to add new functionalities to Elasticsearch. Check out our [Elasticsearch Plugins Development Guide](elasticsearch://docs/extend/index.md) for detailed instructions and best practices.
+
diff --git a/extend/toc.yml b/extend/toc.yml
new file mode 100644
index 0000000000..f2ab236796
--- /dev/null
+++ b/extend/toc.yml
@@ -0,0 +1,2 @@
+toc:
+ - file: index.md
\ No newline at end of file
diff --git a/reference/data-analysis/index.md b/reference/data-analysis/index.md
new file mode 100644
index 0000000000..e4f03e50bf
--- /dev/null
+++ b/reference/data-analysis/index.md
@@ -0,0 +1,10 @@
+# Data analysis
+
+% TO-DO: Add links to "What is data analysis?"
+
+This section contains reference information for data analysis features, including:
+
+* [Text analysis components](elasticsearch://docs/reference/data-analysis/text-analysis/index.md)
+* [Aggregations](elasticsearch://docs/reference/data-analysis/aggregations/index.md)
+* [Machine learning functions](/reference/data-analysis/machine-learning/machine-learning-functions.md)
+* [Canvas functions](/reference/data-analysis/kibana/canvas-functions.md)
diff --git a/reference/data-analysis/kibana/canvas-functions.md b/reference/data-analysis/kibana/canvas-functions.md
new file mode 100644
index 0000000000..1746624c34
--- /dev/null
+++ b/reference/data-analysis/kibana/canvas-functions.md
@@ -0,0 +1,1850 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/kibana/current/canvas-function-reference.html
+---
+
+# Canvas function reference [canvas-function-reference]
+
+Behind the scenes, Canvas is driven by a powerful expression language, with dozens of functions and other capabilities, including table transforms, type casting, and sub-expressions.
+
+The Canvas expression language also supports [TinyMath functions](/reference/data-analysis/kibana/tinymath-functions.md), which perform complex math calculations.
+
+A * denotes a required argument.
+
+A † denotes an argument can be passed multiple times.
+
+[A](#a_fns) | B | [C](#c_fns) | [D](#d_fns) | [E](#e_fns) | [F](#f_fns) | [G](#g_fns) | [H](#h_fns) | [I](#i_fns) | [J](#j_fns) | [K](#k_fns) | [L](#l_fns) | [M](#m_fns) | [N](#n_fns) | O | [P](#p_fns) | Q | [R](#r_fns) | [S](#s_fns) | [T](#t_fns) | [U](#u_fns) | [V](#v_fns) | W | X | Y | Z
+
+
+## A [a_fns]
+
+
+### `all` [all_fn]
+
+Returns `true` if all of the conditions are met. See also [`any`](#any_fn).
+
+**Expression syntax**
+
+```js
+all {neq "foo"} {neq "bar"} {neq "fizz"}
+all condition={gt 10} condition={lt 20}
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| math "mean(percent_uptime)"
+| formatnumber "0.0%"
+| metric "Average uptime"
+ metricFont={
+ font size=48 family="'Open Sans', Helvetica, Arial, sans-serif"
+ color={
+ if {all {gte 0} {lt 0.8}} then="red" else="green"
+ }
+ align="center" lHeight=48
+ }
+| render
+```
+
+This sets the color of the metric text to `"red"` if the context passed into `metric` is greater than or equal to 0 and less than 0.8. Otherwise, the color is set to `"green"`.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* * †<br>Alias: `condition` | `boolean` | The conditions to check. |
+
+**Returns:** `boolean`
+
+
+### `alterColumn` [alterColumn_fn]
+
+Converts between core types, including `string`, `number`, `null`, `boolean`, and `date`, and renames columns. See also [`mapColumn`](#mapColumn_fn), [`mathColumn`](#mathColumn_fn), and [`staticColumn`](#staticColumn_fn).
+
+**Expression syntax**
+
+```js
+alterColumn "cost" type="string"
+alterColumn column="@timestamp" name="foo"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| alterColumn "time" name="time_in_ms" type="number"
+| table
+| render
+```
+
+This renames the `time` column to `time_in_ms` and converts the type of the column’s values from `date` to `number`.
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Alias: `column` | `string` | The name of the column to alter. |
+| `name` | `string` | The resultant column name. Leave blank to not rename. |
+| `type` | `string` | The type to convert the column to. Leave blank to not change the type. |
+
+**Returns:** `datatable`
+
+
+### `any` [any_fn]
+
+Returns `true` if at least one of the conditions is met. See also [`all`](#all_fn).
+
+**Expression syntax**
+
+```js
+any {eq "foo"} {eq "bar"} {eq "fizz"}
+any condition={lte 10} condition={gt 30}
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| filterrows {
+ getCell "project" | any {eq "elasticsearch"} {eq "kibana"} {eq "x-pack"}
+ }
+| pointseries color="project" size="max(price)"
+| pie
+| render
+```
+
+This filters out any rows that don’t contain `"elasticsearch"`, `"kibana"` or `"x-pack"` in the `project` field.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* * †<br>Alias: `condition` | `boolean` | The conditions to check. |
+
+**Returns:** `boolean`
+
+
+### `as` [as_fn]
+
+Creates a `datatable` with a single value. See also [`getCell`](#getCell_fn).
+
+**Expression syntax**
+
+```js
+as
+as "foo"
+as name="bar"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| ply by="project" fn={math "count(username)" | as "num_users"} fn={math "mean(price)" | as "price"}
+| pointseries x="project" y="num_users" size="price" color="project"
+| plot
+| render
+```
+
+`as` casts any primitive value (`string`, `number`, `date`, `null`) into a `datatable` with a single row and a single column with the given name (or defaults to `"value"` if no name is provided). This is useful when piping a primitive value into a function that only takes `datatable` as an input.
+
+In the example, `ply` expects each `fn` subexpression to return a `datatable` in order to merge the results of each `fn` back into a `datatable`, but using a `math` aggregation in the subexpressions returns a single `math` value, which is then cast into a `datatable` using `as`.
+
+**Accepts:** `string`, `boolean`, `number`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `name` | `string` | The name to give the column.<br>Default: `"value"` |
+
+**Returns:** `datatable`
+
+
+### `asset` [asset_fn]
+
+Retrieves Canvas workpad asset objects to provide as argument values. Usually images.
+
+**Expression syntax**
+
+```js
+asset "asset-52f14f2b-fee6-4072-92e8-cd2642665d02"
+asset id="asset-498f7429-4d56-42a2-a7e4-8bf08d98d114"
+```
+
+**Code example**
+
+```text
+image dataurl={asset "asset-c661a7cc-11be-45a1-a401-d7592ea7917a"} mode="contain"
+| render
+```
+
+The image asset stored with the ID `"asset-c661a7cc-11be-45a1-a401-d7592ea7917a"` is passed into the `dataurl` argument of the `image` function to display the stored asset.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Alias: `id` | `string` | The ID of the asset to retrieve. |
+
+**Returns:** `string`
+
+
+### `axisConfig` [axisConfig_fn]
+
+Configures the axis of a visualization. Only used with [`plot`](#plot_fn).
+
+**Expression syntax**
+
+```js
+axisConfig show=false
+axisConfig position="right" min=0 max=10 tickSize=1
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| pointseries x="size(cost)" y="project" color="project"
+| plot defaultStyle={seriesStyle bars=0.75 horizontalBars=true}
+ legend=false
+ xaxis={axisConfig position="top" min=0 max=400 tickSize=100}
+ yaxis={axisConfig position="right"}
+| render
+```
+
+This sets the `x-axis` to display on the top of the chart and sets the range of values to `0-400` with ticks displayed at `100` intervals. The `y-axis` is configured to display on the `right`.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `max` | `number`, `string`, `null` | The maximum value displayed in the axis. Must be a number, a date in milliseconds since epoch, or an ISO8601 string. |
+| `min` | `number`, `string`, `null` | The minimum value displayed in the axis. Must be a number, a date in milliseconds since epoch, or an ISO8601 string. |
+| `position` | `string` | The position of the axis labels. For example, `"top"`, `"bottom"`, `"left"`, or `"right"`.<br>Default: `"left"` |
+| `show` | `boolean` | Show the axis labels?<br>Default: `true` |
+| `tickSize` | `number`, `null` | The increment size between each tick. Use for `number` axes only. |
+
+**Returns:** `axisConfig`
+
+
+## C [c_fns]
+
+
+### `case` [case_fn]
+
+Builds a [`case`](#case_fn), including a condition and a result, to pass to the [`switch`](#switch_fn) function.
+
+**Expression syntax**
+
+```js
+case 0 then="red"
+case when=5 then="yellow"
+case if={lte 50} then="green"
+```
+
+**Code example**
+
+```text
+math "random()"
+| progress shape="gauge" label={formatnumber "0%"}
+ font={
+ font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" align="center"
+ color={
+ switch {case if={lte 0.5} then="green"}
+ {case if={all {gt 0.5} {lte 0.75}} then="orange"}
+ default="red"
+ }
+ }
+ valueColor={
+ switch {case if={lte 0.5} then="green"}
+ {case if={all {gt 0.5} {lte 0.75}} then="orange"}
+ default="red"
+ }
+| render
+```
+
+This sets the color of the progress indicator and the color of the label to `"green"` if the value is less than or equal to `0.5`, `"orange"` if the value is greater than `0.5` and less than or equal to `0.75`, and `"red"` if none of the case conditions are met.
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `when` | `any` | The value compared to the *context* to see if they are equal. The `when` argument is ignored when the `if` argument is also specified. |
+| `if` | `boolean` | This value indicates whether the condition is met. The `if` argument overrides the `when` argument when both are provided. |
+| `then` * | `any` | The value returned if the condition is met. |
+
+**Returns:** `case`
+
+
+### `clear` [clear_fn]
+
+Clears the *context*, and returns `null`.
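+
+**Code example**
+
+The original reference includes no example for `clear`; the following is a minimal sketch, assuming the `markdown` function accepts the resulting `null` context:
+
+```text
+demodata
+| clear
+| markdown "The incoming datatable was discarded."
+| render
+```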
+
+**Accepts:** `null`
+
+**Returns:** `null`
+
+
+### `clog` [clog_fn]
+
+Outputs the *input* to the console. This function is for debugging purposes.
+
+**Expression syntax**
+
+```js
+clog
+```
+
+**Code example**
+
+```text
+kibana
+| demodata
+| clog
+| filterrows fn={getCell "age" | gt 70}
+| clog
+| pointseries x="time" y="mean(price)"
+| plot defaultStyle={seriesStyle lines=1 fill=1}
+| render
+```
+
+This prints the `datatable` objects in the browser console before and after the `filterrows` function.
+
+**Accepts:** `any`
+
+**Returns:** Depends on your input and arguments
+
+
+### `columns` [columns_fn]
+
+Includes or excludes columns from a `datatable`. When both arguments are specified, the excluded columns will be removed first.
+
+**Expression syntax**
+
+```js
+columns include="@timestamp, projects, cost"
+columns exclude="username, country, age"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| columns include="price, cost, state, project"
+| table
+| render
+```
+
+This only keeps the `price`, `cost`, `state`, and `project` columns from the `demodata` data source and removes all other columns.
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `include` | `string` | A comma-separated list of column names to keep in the `datatable`. |
+| `exclude` | `string` | A comma-separated list of column names to remove from the `datatable`. |
+
+**Returns:** `datatable`
+
+
+### `compare` [compare_fn]
+
+Compares the *context* to a specified value to determine `true` or `false`. Usually used in combination with `<>` or [`case`](#case_fn). This only works with primitive types, such as `number`, `string`, `boolean`, and `null`. See also [`eq`](#eq_fn), [`gt`](#gt_fn), [`gte`](#gte_fn), [`lt`](#lt_fn), [`lte`](#lte_fn), and [`neq`](#neq_fn).
+
+**Expression syntax**
+
+```js
+compare "neq" to="elasticsearch"
+compare op="lte" to=100
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| mapColumn project
+ fn={getCell project |
+ switch
+ {case if={compare eq to=kibana} then=kibana}
+ {case if={compare eq to=elasticsearch} then=elasticsearch}
+ default="other"
+ }
+| pointseries size="size(cost)" color="project"
+| pie
+| render
+```
+
+This maps all `project` values that aren’t `"kibana"` or `"elasticsearch"` to `"other"`. Alternatively, you can use the individual comparator functions instead of `compare`.
+
+**Accepts:** `string`, `number`, `boolean`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `op` | `string` | The operator to use in the comparison: `"eq"` (equal to), `"gt"` (greater than), `"gte"` (greater than or equal to), `"lt"` (less than), `"lte"` (less than or equal to), `"ne"` or `"neq"` (not equal to).<br>Default: `"eq"` |
+| `to`<br>Aliases: `b`, `this` | `any` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+### `containerStyle` [containerStyle_fn]
+
+Creates an object used for styling an element’s container, including background, border, and opacity.
+
+**Expression syntax**
+
+```js
+containerStyle backgroundColor="red"
+containerStyle borderRadius="50px"
+containerStyle border="1px solid black"
+containerStyle padding="5px"
+containerStyle opacity="0.5"
+containerStyle overflow="hidden"
+containerStyle backgroundImage={asset id=asset-f40d2292-cf9e-4f2c-8c6f-a504a25e949c}
+ backgroundRepeat="no-repeat"
+ backgroundSize="cover"
+```
+
+**Code example**
+
+```text
+shape "star" fill="#E61D35" maintainAspect=true
+| render containerStyle={
+ containerStyle backgroundColor="#F8D546"
+ borderRadius="200px"
+ border="4px solid #05509F"
+ padding="0px"
+ opacity="0.9"
+ overflow="hidden"
+ }
+```
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `backgroundColor` | `string` | A valid CSS background color. |
+| `backgroundImage` | `string` | A valid CSS background image. |
+| `backgroundRepeat` | `string` | A valid CSS background repeat.<br>Default: `"no-repeat"` |
+| `backgroundSize` | `string` | A valid CSS background size.<br>Default: `"contain"` |
+| `border` | `string` | A valid CSS border. |
+| `borderRadius` | `string` | The number of pixels to use when rounding the corners. |
+| `opacity` | `number` | A number between 0 and 1 that represents the degree of transparency of the element. |
+| `overflow` | `string` | A valid CSS overflow.<br>Default: `"hidden"` |
+| `padding` | `string` | The distance of the content, in pixels, from the border. |
+
+**Returns:** `containerStyle`
+
+
+### `context` [context_fn]
+
+Returns whatever you pass into it. This can be useful when you need to use the *context* as an argument to a function within a sub-expression.
+
+**Expression syntax**
+
+```js
+context
+```
+
+**Code example**
+
+```text
+date
+| formatdate "LLLL"
+| markdown "Last updated: " {context}
+| render
+```
+
+Using the `context` function allows us to pass the output, or *context*, of the previous function as a value to an argument in the next function. Here we get the formatted date string from the previous function and pass it as `content` for the markdown element.
+
+**Accepts:** `any`
+
+**Returns:** Depends on your input and arguments
+
+
+### `createTable` [createTable_fn]
+
+Creates a `datatable` with a list of columns and one or more empty rows. To populate the rows, use [`mapColumn`](#mapColumn_fn) or [`mathColumn`](#mathColumn_fn).
+
+**Expression syntax**
+
+```js
+createTable id="a" id="b"
+createTable id="a" name="A" id="b" name="B" rowCount=5
+```
+
+**Code example**
+
+```text
+var_set
+name="logs" value={essql "select count(*) as a from kibana_sample_data_logs"}
+name="commerce" value={essql "select count(*) as b from kibana_sample_data_ecommerce"}
+| createTable ids="totalA" ids="totalB"
+| staticColumn name="totalA" value={var "logs" | getCell "a"}
+| alterColumn column="totalA" type="number"
+| staticColumn name="totalB" value={var "commerce" | getCell "b"}
+| alterColumn column="totalB" type="number"
+| mathColumn id="percent" name="percent" expression="totalA / totalB"
+| render
+```
+
+This creates a table based on the results of two `essql` queries, joined into one table.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `ids` † | `string` | Column ids to generate in positional order. ID represents the key in the row. |
+| `names` † | `string` | Column names to generate in positional order. Names are not required to be unique, and default to the ID if not provided. |
+| `rowCount` | `number` | The number of empty rows to add to the table, to be assigned a value later.<br>Default: `1` |
+
+**Returns:** `datatable`
+
+
+### `csv` [csv_fn]
+
+Creates a `datatable` from CSV input.
+
+**Expression syntax**
+
+```js
+csv "fruit, stock
+ kiwi, 10
+ Banana, 5"
+```
+
+**Code example**
+
+```text
+csv "fruit,stock
+ kiwi,10
+ banana,5"
+| pointseries color=fruit size=stock
+| pie
+| render
+```
+
+This creates a `datatable` with `fruit` and `stock` columns with two rows. This is useful for quickly mocking data.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Alias: `data` | `string` | The CSV data to use. |
+| `delimiter` | `string` | The data separation character. |
+| `newline` | `string` | The row separation character. |
+
+**Returns:** `datatable`
+
+
+## D [d_fns]
+
+
+### `date` [date_fn]
+
+Returns the current time, or a time parsed from a specified string, as milliseconds since epoch.
+
+**Expression syntax**
+
+```js
+date
+date value=1558735195
+date "2019-05-24T21:59:55+0000"
+date "01/31/2019" format="MM/DD/YYYY"
+```
+
+**Code example**
+
+```text
+date
+| formatdate "LLL"
+| markdown {context}
+ font={font family="Arial, sans-serif" size=30 align="left"
+ color="#000000"
+ weight="normal"
+ underline=false
+ italic=false}
+| render
+```
+
+Using `date` without passing any arguments will return the current date and time.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `value` | `string` | An optional date string that is parsed into milliseconds since epoch. The date string can be either a valid JavaScript `Date` input or a string to parse using the `format` argument. Must be an ISO8601 string, or you must provide the format. |
+| `format` | `string` | The MomentJS format used to parse the specified date string. For more information, see [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). |
+
+**Returns:** `number`
+
+
+### `demodata` [demodata_fn]
+
+A sample data set that includes project CI times with usernames, countries, and run phases.
+
+**Expression syntax**
+
+```js
+demodata
+demodata "ci"
+demodata type="shirts"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| table
+| render
+```
+
+`demodata` is a mock data set that you can use to start playing around in Canvas.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Alias: `type` | `string` | The name of the demo data set to use.<br>Default: `"ci"` |
+
+**Returns:** `datatable`
+
+
+### `do` [do_fn]
+
+Executes multiple sub-expressions, then returns the original *context*. Use for running functions that produce an action or a side effect without changing the original *context*.
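+
+**Code example**
+
+The original reference includes no example for `do`; the following is a minimal sketch, assuming `clog` runs as a side effect while the unchanged `datatable` continues down the pipeline:
+
+```text
+demodata
+| do {clog}
+| table
+| render
+```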
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †<br>Aliases: `exp`, `expression`, `fn`, `function` | `any` | The sub-expressions to execute. The return values of these sub-expressions are not available in the root pipeline as this function simply returns the original *context*. |
+
+**Returns:** Depends on your input and arguments
+
+
+### `dropdownControl` [dropdownControl_fn]
+
+Configures a dropdown filter control element.
+
+**Expression syntax**
+
+```js
+dropdownControl valueColumn=project filterColumn=project
+dropdownControl valueColumn=agent filterColumn=agent.keyword filterGroup=group1
+```
+
+**Code example**
+
+```text
+demodata
+| dropdownControl valueColumn=project filterColumn=project
+| render
+```
+
+This creates a dropdown filter element. It requires a data source, uses the unique values from the given `valueColumn` (in this example, `project`), and applies the filter to the `project` column. Note: `filterColumn` should point to a keyword-type field for Elasticsearch data sources.
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `filterColumn` * | `string` | The column or field that you want to filter. |
+| `filterGroup` | `string` | The group name for the filter. |
+| `labelColumn` | `string` | The column or field to use as the label in the dropdown control. |
+| `valueColumn` * | `string` | The column or field from which to extract the unique values for the dropdown control. |
+
+**Returns:** `render`
+
+
+## E [e_fns]
+
+
+### `embeddable` [embeddable_fn]
+
+Returns an embeddable with the provided configuration.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Alias: `config` | `string` | The base64-encoded embeddable input object. |
+| `type` * | `string` | The embeddable type. |
+
+**Returns:** `embeddable`
+
+
+### `eq` [eq_fn]
+
+Returns whether the *context* is equal to the argument.
+
+**Expression syntax**
+
+```js
+eq true
+eq null
+eq 10
+eq "foo"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| mapColumn project
+ fn={getCell project |
+ switch
+ {case if={eq kibana} then=kibana}
+ {case if={eq elasticsearch} then=elasticsearch}
+ default="other"
+ }
+| pointseries size="size(cost)" color="project"
+| pie
+| render
+```
+
+This changes all values in the project column that don’t equal `"kibana"` or `"elasticsearch"` to `"other"`.
+
+**Accepts:** `boolean`, `number`, `string`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+### `escount` [escount_fn]
+
+Query Elasticsearch for the number of hits matching the specified query.
+
+**Expression syntax**
+
+```js
+escount index="logstash-*"
+escount "currency:"EUR"" index="kibana_sample_data_ecommerce"
+escount query="response:404" index="kibana_sample_data_logs"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| escount "Cancelled:true" index="kibana_sample_data_flights"
+| math "value"
+| progress shape="semicircle"
+ label={formatnumber 0,0}
+ font={font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center}
+ max={filters | escount index="kibana_sample_data_flights"}
+| render
+```
+
+The first `escount` expression retrieves the number of flights that were cancelled. The second `escount` expression retrieves the total number of flights.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Aliases: `q`, `query` | `string` | A Lucene query string.<br>Default: `"-_index:.kibana"` |
+| `index`<br>Alias: `dataView` | `string` | An index or data view. For example, `"logstash-*"`.<br>Default: `"_all"` |
+
+**Returns:** `number`
+
+
+### `esdocs` [esdocs_fn]
+
+Query Elasticsearch for raw documents. Specify the fields you want to retrieve, especially if you are asking for a lot of rows.
+
+**Expression syntax**
+
+```js
+esdocs index="logstash-*"
+esdocs "currency:"EUR"" index="kibana_sample_data_ecommerce"
+esdocs query="response:404" index="kibana_sample_data_logs"
+esdocs index="kibana_sample_data_flights" count=100
+esdocs index="kibana_sample_data_flights" sort="AvgTicketPrice, asc"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| esdocs index="kibana_sample_data_ecommerce"
+ fields="customer_gender, taxful_total_price, order_date"
+ sort="order_date, asc"
+ count=10000
+| mapColumn "order_date"
+ fn={getCell "order_date" | date {context} | rounddate "YYYY-MM-DD"}
+| alterColumn "order_date" type="date"
+| pointseries x="order_date" y="sum(taxful_total_price)" color="customer_gender"
+| plot defaultStyle={seriesStyle lines=3}
+ palette={palette "#7ECAE3" "#003A4D" gradient=true}
+| render
+```
+
+This retrieves the first 10,000 documents from the `kibana_sample_data_ecommerce` index, sorted by `order_date` in ascending order, and only requests the `customer_gender`, `taxful_total_price`, and `order_date` fields.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Aliases: `q`, `query` | `string` | A Lucene query string.<br>Default: `"-_index:.kibana"` |
+| `count` | `number` | The number of documents to retrieve. For better performance, use a smaller data set.<br>Default: `1000` |
+| `fields` | `string` | A comma-separated list of fields. For better performance, use fewer fields. |
+| `index`<br>Alias: `dataView` | `string` | An index or data view. For example, `"logstash-*"`.<br>Default: `"_all"` |
+| `metaFields` | `string` | A comma-separated list of meta fields. For example, `"_index,_type"`. |
+| `sort` | `string` | The sort direction formatted as `"field, direction"`. For example, `"@timestamp, desc"` or `"bytes, asc"`. |
+
+**Returns:** `datatable`
+
+
+### `essql` [essql_fn]
+
+Queries Elasticsearch using Elasticsearch SQL.
+
+**Expression syntax**
+
+```js
+essql query="SELECT * FROM "logstash*""
+essql "SELECT * FROM "apm*"" count=10000
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| essql query="SELECT Carrier, FlightDelayMin, AvgTicketPrice FROM "kibana_sample_data_flights""
+| table
+| render
+```
+
+This retrieves the `Carrier`, `FlightDelayMin`, and `AvgTicketPrice` fields from the `kibana_sample_data_flights` index.
+
+**Accepts:** `kibana_context`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*<br>Aliases: `q`, `query` | `string` | An Elasticsearch SQL query. |
+| `count` | `number` | The number of documents to retrieve. For better performance, use a smaller data set.<br>Default: `1000` |
+| `parameter` †<br>Alias: `param` | `string`, `number`, `boolean` | A parameter to be passed to the SQL query. |
+| `timeField`<br>Alias: `timeField` | `string` | The time field to use in the time range filter, which is set in the context. |
+| `timezone`<br>Alias: `tz` | `string` | The timezone to use for date operations. Valid ISO8601 formats and UTC offsets both work.<br>Default: `"UTC"` |
+
+**Returns:** `datatable`
+
+
+### `exactly` [exactly_fn]
+
+Creates a filter that matches a given column to an exact value.
+
+**Expression syntax**
+
+```js
+exactly "state" value="running"
+exactly "age" value=50 filterGroup="group2"
+exactly column="project" value="beats"
+```
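+
+**Code example**
+
+The original reference includes no example for `exactly`; the following is a hypothetical sketch, assuming the resulting filter is passed as context to a data source that accepts a `filter`:
+
+```text
+filters
+| exactly column="project" value="kibana"
+| demodata
+| table
+| render
+```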
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `column` *<br>Aliases: `c`, `field` | `string` | The column or field that you want to filter. |
+| `filterGroup` | `string` | The group name for the filter. |
+| `value` *<br>Aliases: `v`, `val` | `string` | The value to match exactly, including white space and capitalization. |
+
+**Returns:** `filter`
+
+
+## F [f_fns]
+
+
+### `filterrows` [filterrows_fn]
+
+Filters rows in a `datatable` based on the return value of a sub-expression.
+
+**Expression syntax**
+
+```js
+filterrows {getCell "project" | eq "kibana"}
+filterrows fn={getCell "age" | gt 50}
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| filterrows {getCell "country" | any {eq "IN"} {eq "US"} {eq "CN"}}
+| mapColumn "@timestamp"
+ fn={getCell "@timestamp" | rounddate "YYYY-MM"}
+| alterColumn "@timestamp" type="date"
+| pointseries x="@timestamp" y="mean(cost)" color="country"
+| plot defaultStyle={seriesStyle points="2" lines="1"}
+ palette={palette "#01A4A4" "#CC6666" "#D0D102" "#616161" "#00A1CB" "#32742C" "#F18D05" "#113F8C" "#61AE24" "#D70060" gradient=false}
+| render
+```
+
+This uses `filterrows` to only keep data from India (`IN`), the United States (`US`), and China (`CN`).
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *<br>Aliases: `exp`, `expression`, `fn`, `function` | `boolean` | An expression to pass into each row in the `datatable`. The expression should return a `boolean`. A `true` value preserves the row, and a `false` value removes it. |
+
+**Returns:** `datatable`
+
+
+### `filters` [filters_fn]
+
+Aggregates element filters from the workpad for use elsewhere, usually a data source. [`filters`](#filters_fn) is deprecated and will be removed in a future release. Use `kibana | selectFilter` instead.
+
+**Expression syntax**
+
+```js
+filters
+filters group="timefilter1"
+filters group="timefilter2" group="dropdownfilter1" ungrouped=true
+```
+
+**Code example**
+
+```text
+filters group=group2 ungrouped=true
+| demodata
+| pointseries x="project" y="size(cost)" color="project"
+| plot defaultStyle={seriesStyle bars=0.75} legend=false
+ font={
+ font size=14
+ family="'Open Sans', Helvetica, Arial, sans-serif"
+ align="left"
+ color="#FFFFFF"
+ weight="lighter"
+ underline=true
+ italic=true
+ }
+| render
+```
+
+`filters` sets the existing filters as context and accepts a `group` parameter to opt into specific filter groups. Setting `ungrouped` to `true` opts out of using global filters.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †<br>Alias: `group` | `string` | The name of the filter group to use. |
+| `ungrouped`<br>Aliases: `nogroup`, `nogroups` | `boolean` | Exclude filters that belong to a filter group?<br>Default: `false` |
+
+**Returns:** `filter`
+
+
+### `font` [font_fn]
+
+Creates a font style.
+
+**Expression syntax**
+
+```js
+font size=12
+font family=Arial
+font align=middle
+font color=pink
+font weight=lighter
+font underline=true
+font italic=false
+font lHeight=32
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| pointseries x="project" y="size(cost)" color="project"
+| plot defaultStyle={seriesStyle bars=0.75} legend=false
+ font={
+ font size=14
+ family="'Open Sans', Helvetica, Arial, sans-serif"
+ align="left"
+ color="#FFFFFF"
+ weight="lighter"
+ underline=true
+ italic=true
+ }
+| render
+```
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `align` | `string` | The horizontal text alignment.
Default: `${ theme "font.align" default="left" }` |
+| `color` | `string` | The text color.
Default: `${ theme "font.color" }` |
+| `family` | `string` | An acceptable CSS web font string.
Default: `${ theme "font.family" default="'Open Sans', Helvetica, Arial, sans-serif" }` |
+| `italic` | `boolean` | Italicize the text?
Default: `${ theme "font.italic" default=false }` |
+| `lHeight`
Alias: `lineHeight` | `number`, `null` | The line height in pixels.
Default: `${ theme "font.lHeight" }` |
+| `size` | `number` | The font size.
Default: `${ theme "font.size" default=14 }` |
+| `sizeUnit` | `string` | The font size unit.
Default: `"px"` |
+| `underline` | `boolean` | Underline the text?
Default: `${ theme "font.underline" default=false }` |
+| `weight` | `string` | The font weight. For example, `"normal"`, `"bold"`, `"bolder"`, `"lighter"`, `"100"`, `"200"`, `"300"`, `"400"`, `"500"`, `"600"`, `"700"`, `"800"`, or `"900"`.
Default: `${ theme "font.weight" default="normal" }` |
+
+**Returns:** `style`
+
+
+### `formatdate` [formatdate_fn]
+
+Formats an ISO8601 date string or a date in milliseconds since epoch using MomentJS. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/).
+
+**Expression syntax**
+
+```js
+formatdate format="YYYY-MM-DD"
+formatdate "MM/DD/YYYY"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| mapColumn "time" fn={getCell time | formatdate "MMM 'YY"}
+| pointseries x="time" y="sum(price)" color="state"
+| plot defaultStyle={seriesStyle points=5}
+| render
+```
+
+This transforms the dates in the `time` field into strings that look like `"Jan '19"`, `"Feb '19"`, etc., using a MomentJS format.
+
+**Accepts:** `number`, `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `format` | `string` | A MomentJS format. For example, `"MM/DD/YYYY"`. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). |
+
+**Returns:** `string`
+
+
+### `formatnumber` [formatnumber_fn]
+
+Formats a number into a formatted number string using the Numeral pattern.
+
+**Expression syntax**
+
+```js
+formatnumber format="$0,0.00"
+formatnumber "0.0a"
+```
+
+**Code example**
+
+```text
+kibana
+| selectFilter
+| demodata
+| math "mean(percent_uptime)"
+| progress shape="gauge"
+ label={formatnumber "0%"}
+ font={font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align="center"}
+| render
+```
+
+The `formatnumber` subexpression receives the same `context` as the `progress` function, which is the output of the `math` function. It formats the value into a percentage.
+
+**Accepts:** `number`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. |
+
+**Returns:** `string`
+
+
+## G [g_fns]
+
+
+### `getCell` [getCell_fn]
+
+Fetches a single cell from a `datatable`.
+
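+**Code example**
+
+A minimal, hypothetical sketch built on the `demodata` sample data source used throughout this reference:
+
+```text
+kibana
+| selectFilter
+| demodata
+| markdown {getCell "project" row=0}
+| render
+```
+
+The `getCell` sub-expression retrieves the value of the `project` column from the first row of the `datatable`, and `markdown` renders it as text.
+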
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `c`, `column` | `string` | The name of the column to fetch the value from. If not provided, the value is retrieved from the first column. |
+| `row`
Alias: `r` | `number` | The row number, starting at 0.
Default: `0` |
+
+**Returns:** Depends on your input and arguments
+
+
+### `gt` [gt_fn]
+
+Returns whether the *context* is greater than the argument.
+
+**Accepts:** `number`, `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+### `gte` [gte_fn]
+
+Returns whether the *context* is greater or equal to the argument.
+
+**Accepts:** `number`, `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+## H [h_fns]
+
+
+### `head` [head_fn]
+
+Retrieves the first N rows from the `datatable`. See also [`tail`](#tail_fn).
+
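+**Code example**
+
+A hypothetical example using the `demodata` sample data source:
+
+```text
+kibana
+| selectFilter
+| demodata
+| sort "price" reverse=true
+| head 5
+| table
+| render
+```
+
+This sorts the rows by `price` in descending order, keeps the first five rows, and renders them as a table.
+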
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `count` | `number` | The number of rows to retrieve from the beginning of the `datatable`.
Default: `1` |
+
+**Returns:** `datatable`
+
+
+## I [i_fns]
+
+
+### `if` [if_fn]
+
+Performs conditional logic.
+
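+**Code example**
+
+A sketch of conditional text using `demodata`; the `0.97` threshold and the labels are arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| markdown {math "mean(percent_uptime)" | if {gt 0.97} then="Healthy" else="Degraded"}
+| render
+```
+
+The condition sub-expression receives the number produced by `math` as its *context*, so `gt 0.97` returns the `boolean` that `if` evaluates.
+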
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `condition` | `boolean` | A `true` or `false` indicating whether a condition is met, usually returned by a sub-expression. When unspecified, the original *context* is returned. |
+| `else` | `any` | The return value when the condition is `false`. When unspecified and the condition is not met, the original *context* is returned. |
+| `then` | `any` | The return value when the condition is `true`. When unspecified and the condition is met, the original *context* is returned. |
+
+**Returns:** Depends on your input and arguments
+
+
+### `image` [image_fn]
+
+Displays an image. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `dataurl`, `url` | `string`, `null` | The HTTP(S) URL or `base64` data URL of an image.
Default: `null` |
+| `mode` | `string` | `"contain"` shows the entire image, scaled to fit. `"cover"` fills the container with the image, cropping from the sides or bottom as needed. `"stretch"` resizes the height and width of the image to 100% of the container.
Default: `"contain"` |
+
+**Returns:** `image`
+
+
+## J [j_fns]
+
+
+### `joinRows` [joinRows_fn]
+
+Concatenates values from rows in a `datatable` into a single string.
+
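+**Code example**
+
+A hypothetical example that lists the unique states in `demodata` as plain text:
+
+```text
+kibana
+| selectFilter
+| demodata
+| markdown {joinRows "state" quote="" separator=", "}
+| render
+```
+
+Because `distinct` defaults to `true`, each state appears once; setting `quote=""` omits the quote characters around each value.
+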
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `column` | `string` | The column or field from which to extract the values. |
+| `distinct` | `boolean` | Extract only unique values?
Default: `true` |
+| `quote` | `string` | The quote character to wrap around each extracted value.
Default: `"'"` |
+| `separator`
Aliases: `delimiter`, `sep` | `string` | The delimiter to insert between each extracted value.
Default: `","` |
+
+**Returns:** `string`
+
+
+## K [k_fns]
+
+
+### `kibana` [kibana_fn]
+
+Gets the Kibana global context.
+
+**Accepts:** `kibana_context`, `null`
+
+**Returns:** `kibana_context`
+
+
+## L [l_fns]
+
+
+### `location` [location_fn]
+
+Find your current location using the Geolocation API of the browser. Performance can vary, but is fairly accurate. See [https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation). Don’t use [`location`](#location_fn) if you plan to generate PDFs as this function requires user input.
+
+**Accepts:** `null`
+
+**Returns:** `datatable`
+
+
+### `lt` [lt_fn]
+
+Returns whether the *context* is less than the argument.
+
+**Accepts:** `number`, `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+### `lte` [lte_fn]
+
+Returns whether the *context* is less than or equal to the argument.
+
+**Accepts:** `number`, `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+## M [m_fns]
+
+
+### `mapCenter` [mapCenter_fn]
+
+Returns an object with the center coordinates and zoom level of the map.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `lat` * | `number` | Latitude for the center of the map. |
+| `lon` * | `number` | Longitude for the center of the map. |
+| `zoom` * | `number` | Zoom level of the map. |
+
+**Returns:** `mapCenter`
+
+
+### `mapColumn` [mapColumn_fn]
+
+Adds a column calculated as the result of other columns. Changes are made only when you provide arguments. See also [`alterColumn`](#alterColumn_fn) and [`staticColumn`](#staticColumn_fn).
+
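+**Code example**
+
+A hypothetical example using `demodata`; the column name `total` is arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| mapColumn "total" fn={math "price + cost"}
+| table
+| render
+```
+
+The `fn` sub-expression runs once per row with a single-row `datatable` as its *context*, so `math "price + cost"` returns the cell value for that row.
+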
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. |
+| `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` |
+| `expression` *
Aliases: `exp`, `fn`, `function` | `boolean`, `number`, `string`, `null` | An expression that is executed on every row, provided with a single-row `datatable` context and returning the cell value. |
+| `id` | `string`, `null` | An optional id of the resulting column. When no id is provided, the id will be looked up from the existing column by the provided name argument. If no column with this name exists yet, a new column with this name and an identical id will be added to the table.
Default: `null` |
+
+**Returns:** `datatable`
+
+
+### `markdown` [markdown_fn]
+
+Adds an element that renders Markdown text. TIP: Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text.
+
+**Accepts:** `datatable`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †
Aliases: `content`, `expression` | `string` | A string of text that contains Markdown. To concatenate, pass the `string` function multiple times.
Default: `""` |
+| `font` | `style` | The CSS font properties for the content. For example, "font-family" or "font-weight".
Default: `${font}` |
+| `openLinksInNewTab` | `boolean` | When `true`, all links open in a new tab.
Default: `false` |
+
+**Returns:** `render`
+
+
+### `math` [math_fn]
+
+Interprets a `TinyMath` math expression using a `number` or `datatable` as *context*. The `datatable` columns are available by their column name. If the *context* is a number it is available as `value`.
+
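+**Code example**
+
+A hypothetical example using `demodata`; the label text is arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| math "mean(percent_uptime)"
+| metric "Average uptime" metricFormat="0%"
+| render
+```
+
+`math` receives the `datatable` as *context*, where each column is available by name in the `TinyMath` expression, and returns a single number for the `metric` element.
+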
+**Accepts:** `number`, `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `expression` | `string` | An evaluated `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). |
+| `onError` | `string` | The return value when the `TinyMath` evaluation fails or returns `NaN`. When set to `'throw'` (the default), an exception is thrown and expression execution terminates. |
+
+**Returns:** Depends on your input and arguments
+
+
+### `mathColumn` [mathColumn_fn]
+
+Adds a column by evaluating `TinyMath` on each row. This function is optimized for math and performs better than using a math expression in [`mapColumn`](#mapColumn_fn).
+
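+**Code example**
+
+A hypothetical example using `demodata`; the column name and `id` value `markup` are arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| mathColumn "markup" "price - cost" id="markup"
+| table
+| render
+```
+
+The first unnamed argument is the column name, the second is the `TinyMath` expression evaluated against each row.
+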
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. |
+| *Unnamed*
Alias: `expression` | `string` | An evaluated `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). |
+| `castColumns` † | `string` | The column ids that are cast to numbers before the formula is applied. |
+| `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` |
+| `id` * | `string` | id of the resulting column. Must be unique. |
+| `onError` | `string` | The return value when the `TinyMath` evaluation fails or returns `NaN`. When set to `'throw'` (the default), an exception is thrown and expression execution terminates. |
+
+**Returns:** `datatable`
+
+
+### `metric` [metric_fn]
+
+Displays a number over a label.
+
+**Accepts:** `number`, `string`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `description`, `label`, `text` | `string` | The text describing the metric.
Default: `""` |
+| `labelFont` | `style` | The CSS font properties for the label. For example, `font-family` or `font-weight`.
Default: `${font size=14 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center}` |
+| `metricFont` | `style` | The CSS font properties for the metric. For example, `font-family` or `font-weight`.
Default: `${font size=48 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center lHeight=48}` |
+| `metricFormat`
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. |
+
+**Returns:** `render`
+
+
+## N [n_fns]
+
+
+### `neq` [neq_fn]
+
+Returns whether the *context* is not equal to the argument.
+
+**Accepts:** `boolean`, `number`, `string`, `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. |
+
+**Returns:** `boolean`
+
+
+## P [p_fns]
+
+
+### `palette` [palette_fn]
+
+Creates a color palette.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †
Alias: `color` | `string` | The palette colors. Accepts an HTML color name, HEX, HSL, HSLA, RGB, or RGBA. |
+| `continuity` | `string` | Default: `"above"` |
+| `gradient` | `boolean` | Make a gradient palette where supported?
Default: `false` |
+| `range` | `string` | Default: `"percent"` |
+| `rangeMax` | `number` | |
+| `rangeMin` | `number` | |
+| `reverse` | `boolean` | Reverse the palette?
Default: `false` |
+| `stop` † | `number` | The palette color stops. When used, it must be associated with each color. |
+
+**Returns:** `palette`
+
+
+### `pie` [pie_fn]
+
+Configures a pie chart element.
+
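+**Code example**
+
+A hypothetical example using `demodata`:
+
+```text
+kibana
+| selectFilter
+| demodata
+| pointseries color="state" size="max(price)"
+| pie hole=50 labels=false legend="ne"
+| render
+```
+
+Each unique `state` becomes a slice, sized by `max(price)`; `hole=50` draws a donut hole at half the pie radius.
+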
+**Accepts:** `pointseries`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `font` | `style` | The CSS font properties for the labels. For example, `font-family` or `font-weight`.
Default: `${font}` |
+| `hole` | `number` | Draws a hole in the pie, between `0` and `100`, as a percentage of the pie radius.
Default: `0` |
+| `labelRadius` | `number` | The percentage of the container area to use as a radius for the label circle.
Default: `100` |
+| `labels` | `boolean` | Display the pie labels?
Default: `true` |
+| `legend` | `string`, `boolean` | The legend position. For example, `"nw"`, `"sw"`, `"ne"`, `"se"`, or `false`. When `false`, the legend is hidden.
Default: `false` |
+| `palette` | `palette` | A `palette` object for describing the colors to use in this pie chart.
Default: `${palette}` |
+| `radius` | `string`, `number` | The radius of the pie as a percentage, between `0` and `1`, of the available space. To automatically set the radius, use `"auto"`.
Default: `"auto"` |
+| `seriesStyle` † | `seriesStyle` | A style of a specific series. |
+| `tilt` | `number` | The percentage of tilt where `1` is fully vertical, and `0` is completely flat.
Default: `1` |
+
+**Returns:** `render`
+
+
+### `plot` [plot_fn]
+
+Configures a chart element.
+
+**Accepts:** `pointseries`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `defaultStyle` | `seriesStyle` | The default style to use for every series.
Default: `${seriesStyle points=5}` |
+| `font` | `style` | The CSS font properties for the labels. For example, `font-family` or `font-weight`.
Default: `${font}` |
+| `legend` | `string`, `boolean` | The legend position. For example, `"nw"`, `"sw"`, `"ne"`, `"se"`, or `false`. When `false`, the legend is hidden.
Default: `"ne"` |
+| `palette` | `palette` | A `palette` object for describing the colors to use in this chart.
Default: `${palette}` |
+| `seriesStyle` † | `seriesStyle` | A style of a specific series. |
+| `xaxis` | `boolean`, `axisConfig` | The axis configuration. When `false`, the axis is hidden.
Default: `true` |
+| `yaxis` | `boolean`, `axisConfig` | The axis configuration. When `false`, the axis is hidden.
Default: `true` |
+
+**Returns:** `render`
+
+
+### `ply` [ply_fn]
+
+Subdivides a `datatable` by the unique values of the specified columns, and passes the resulting tables into an expression, then merges the outputs of each expression.
+
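+**Code example**
+
+A hypothetical example using `demodata`; the column name `total_cost` is arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| ply by="project" expression={math "sum(cost)" | as "total_cost"}
+| table
+| render
+```
+
+For each unique `project`, the sub-expression computes `sum(cost)` and uses `as` to turn the resulting number back into a one-row `datatable`, as the tips below require.
+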
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `by` † | `string` | The column to subdivide the `datatable`. |
+| `expression` †
Aliases: `exp`, `fn`, `function` | `datatable` | An expression to pass each resulting `datatable` into. Tips: Expressions must return a `datatable`. Use [`as`](#as_fn) to turn literals into `datatable`s. Multiple expressions must return the same number of rows. If you need to return a different row count, pipe into another instance of [`ply`](#ply_fn). If multiple expressions return columns with the same name, the last one wins. |
+
+**Returns:** `datatable`
+
+
+### `pointseries` [pointseries_fn]
+
+Turns a `datatable` into a point series model. Currently, measures are differentiated from dimensions by looking for a `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). If you enter a `TinyMath` expression in your argument, that argument is treated as a measure; otherwise, it is a dimension. Dimensions are combined to create unique keys. Measures are then deduplicated by those keys using the specified `TinyMath` function.
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `color` | `string` | An expression to use in determining the mark’s color. |
+| `size` | `string` | The size of the marks. Only applicable to supported elements. |
+| `text` | `string` | The text to show on the mark. Only applicable to supported elements. |
+| `x` | `string` | The values along the X-axis. |
+| `y` | `string` | The values along the Y-axis. |
+
+**Returns:** `pointseries`
+
+
+### `progress` [progress_fn]
+
+Configures a progress element.
+
+**Accepts:** `number`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `shape` | `string` | Select `"gauge"`, `"horizontalBar"`, `"horizontalPill"`, `"semicircle"`, `"unicorn"`, `"verticalBar"`, `"verticalPill"`, or `"wheel"`.
Default: `"gauge"` |
+| `barColor` | `string` | The color of the background bar.
Default: `"#f0f0f0"` |
+| `barWeight` | `number` | The thickness of the background bar.
Default: `20` |
+| `font` | `style` | The CSS font properties for the label. For example, `font-family` or `font-weight`.
Default: `${font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center}` |
+| `label` | `boolean`, `string` | To show or hide the label, use `true` or `false`. Alternatively, provide a string to display as a label.
Default: `true` |
+| `max` | `number` | The maximum value of the progress element.
Default: `1` |
+| `valueColor` | `string` | The color of the progress bar.
Default: `"#1785b0"` |
+| `valueWeight` | `number` | The thickness of the progress bar.
Default: `20` |
+
+**Returns:** `render`
+
+
+## R [r_fns]
+
+
+### `removeFilter` [removeFilter_fn]
+
+Removes filters from the context.
+
+**Accepts:** `kibana_context`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `group` | `string` | Removes only filters belonging to the provided group. |
+| `from` | `string` | Removes only filters owned by the provided id. |
+| `ungrouped`
Aliases: `nogroup`, `nogroups` | `boolean` | Remove filters that do not belong to a group?
Default: `false` |
+
+**Returns:** `kibana_context`
+
+
+### `render` [render_fn]
+
+Renders the *context* as a specific element and sets element level options, such as background and border styling.
+
+**Accepts:** `render`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `as` | `string` | The element type to render. You probably want a specialized function instead, such as [`plot`](#plot_fn) or [`shape`](#shape_fn). |
+| `containerStyle` | `containerStyle` | The style for the container, including background, border, and opacity.
Default: `${containerStyle}` |
+| `css` | `string` | Any block of custom CSS to be scoped to the element.
Default: `".canvasRenderEl${}"` |
+
+**Returns:** `render`
+
+
+### `repeatImage` [repeatImage_fn]
+
+Configures a repeating image element.
+
+**Accepts:** `number`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `emptyImage` | `string`, `null` | Fills the difference between the *context* and `max` parameter for the element with this image. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
+| `image` | `string`, `null` | The image to repeat. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
+| `max` | `number`, `null` | The maximum number of times the image can repeat.
Default: `1000` |
+| `size` | `number` | The maximum height or width of the image, in pixels. When the image is taller than it is wide, this function limits the height.
Default: `100` |
+
+**Returns:** `render`
+
+
+### `replace` [replace_fn]
+
+Uses a regular expression to replace parts of a string.
+
+**Accepts:** `string`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `pattern`, `regex` | `string` | The text or pattern of a JavaScript regular expression. For example, `"[aeiou]"`. You can use capturing groups here. |
+| `flags`
Alias: `modifiers` | `string` | Specify flags. See [https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp).
Default: `"g"` |
+| `replacement` | `string` | The replacement for the matching parts of string. Capturing groups can be accessed by their index. For example, `"$1"`.
Default: `""` |
+
+**Returns:** `string`
+
+
+### `revealImage` [revealImage_fn]
+
+Configures an image reveal element.
+
+**Accepts:** `number`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `emptyImage` | `string`, `null` | An optional background image to reveal over. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
+| `image` | `string`, `null` | The image to reveal. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
+| `origin` | `string` | The position to start the image fill. For example, `"top"`, `"bottom"`, `"left"`, or `"right"`.
Default: `"bottom"` |
+
+**Returns:** `render`
+
+
+### `rounddate` [rounddate_fn]
+
+Uses a MomentJS formatting string to round milliseconds since epoch, and returns milliseconds since epoch.
+
+**Accepts:** `number`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `format` | `string` | The MomentJS format to use for bucketing. For example, `"YYYY-MM"` rounds to months. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). |
+
+**Returns:** `number`
+
+
+### `rowCount` [rowCount_fn]
+
+Returns the number of rows. Pairs with [`ply`](#ply_fn) to get the count of unique column values, or combinations of unique column values.
+
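+**Code example**
+
+A hypothetical example using `demodata`; the column name `count` is arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| ply by="state" expression={rowCount | as "count"}
+| sort "count" reverse=true
+| table
+| render
+```
+
+`ply` subdivides the table by `state`, `rowCount` counts the rows in each subdivision, and `as` turns each count back into a one-row `datatable`.
+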
+**Accepts:** `datatable`
+
+**Returns:** `number`
+
+
+## S [s_fns]
+
+
+### `selectFilter` [selectFilter_fn]
+
+Selects filters from the context.
+
+**Accepts:** `kibana_context`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †
Alias: `group` | `string` | Select only filters belonging to the provided group. |
+| `from` | `string` | Select only filters owned by the provided id. |
+| `ungrouped`
Aliases: `nogroup`, `nogroups` | `boolean` | Include filters that do not belong to a group?
Default: `false` |
+
+**Returns:** `kibana_context`
+
+
+### `seriesStyle` [seriesStyle_fn]
+
+Creates an object used for describing the properties of a series on a chart. Use [`seriesStyle`](#seriesStyle_fn) inside of a charting function, like [`plot`](#plot_fn) or [`pie`](#pie_fn).
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `bars` | `number` | The width of bars. |
+| `color` | `string` | The line color. |
+| `fill` | `number`, `boolean` | Should we fill in the points?
Default: `false` |
+| `horizontalBars` | `boolean` | Sets the orientation of the bars in the chart to horizontal. |
+| `label` | `string` | The name of the series to style. |
+| `lines` | `number` | The width of the line. |
+| `points` | `number` | The size of points on line. |
+| `stack` | `number`, `null` | Specifies if the series should be stacked. The number is the stack ID. Series with the same stack ID are stacked together. |
+
+**Returns:** `seriesStyle`
+
+
+### `shape` [shape_fn]
+
+Creates a shape.
+
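+**Code example**
+
+A minimal sketch; the colors and shape are arbitrary:
+
+```text
+shape "circle" fill="#1785b0" border="#000000" borderWidth=2 maintainAspect=true
+| render
+```
+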
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `shape` | `string` | Pick a shape.
Default: `"square"` |
+| `border`
Alias: `stroke` | `string` | An SVG color for the border outlining the shape. |
+| `borderWidth`
Alias: `strokeWidth` | `number` | The thickness of the border.
Default: `0` |
+| `fill` | `string` | An SVG color to fill the shape.
Default: `"black"` |
+| `maintainAspect` | `boolean` | Maintain the shape’s original aspect ratio?
Default: `false` |
+
+**Returns:** Depends on your input and arguments
+
+
+### `sort` [sort_fn]
+
+Sorts a `datatable` by the specified column.
+
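+**Code example**
+
+A hypothetical example using `demodata`:
+
+```text
+kibana
+| selectFilter
+| demodata
+| sort "time" reverse=false
+| table
+| render
+```
+
+This sorts the rows by the `time` column in ascending order before rendering the table.
+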
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `by`, `column` | `string` | The column to sort by. When unspecified, the `datatable` is sorted by the first column. |
+| `reverse` | `boolean` | Reverses the sorting order. When unspecified, the `datatable` is sorted in ascending order.
Default: `false` |
+
+**Returns:** `datatable`
+
+
+### `staticColumn` [staticColumn_fn]
+
+Adds a column with the same static value in every row. See also [`alterColumn`](#alterColumn_fn), [`mapColumn`](#mapColumn_fn), and [`mathColumn`](#mathColumn_fn).
+
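+**Code example**
+
+A hypothetical example using `demodata`; the column name `total_cost` is arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| staticColumn "total_cost" value={math "sum(cost)"}
+| table
+| render
+```
+
+Following the tip below, the `value` sub-expression rolls up the `cost` column into a single number that is repeated in every row.
+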
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the new column. |
+| `value` | `string`, `number`, `boolean`, `null` | The value to insert in each row in the new column. TIP: use a sub-expression to rollup other columns into a static value.
Default: `null` |
+
+**Returns:** `datatable`
+
+
+### `string` [string_fn]
+
+Concatenates all of the arguments into a single string.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †
Alias: `value` | `string`, `number`, `boolean` | The values to join together into one string. Include spaces where needed. |
+
+**Returns:** `string`
+
+
+### `switch` [switch_fn]
+
+Performs conditional logic with multiple conditions. See also [`case`](#case_fn), which builds a `case` to pass to the [`switch`](#switch_fn) function.
+
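+**Code example**
+
+A hypothetical example using `demodata`; the thresholds and colors are arbitrary:
+
+```text
+kibana
+| selectFilter
+| demodata
+| math "mean(percent_uptime)"
+| progress shape="gauge"
+  valueColor={switch {case if={gte 0.97} then="#1785b0"} default="#b01717"}
+| render
+```
+
+Each `case` sub-expression receives the number from `math` as its *context*; the first condition that is met determines the color, and `default` handles the rest.
+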
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* * †
Alias: `case` | `case` | The conditions to check. |
+| `default`
Alias: `finally` | `any` | The value returned when no conditions are met. When unspecified and no conditions are met, the original *context* is returned. |
+
+**Returns:** Depends on your input and arguments
+
+
+## T [t_fns]
+
+
+### `table` [table_fn]
+
+Configures a table element.
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `font` | `style` | The CSS font properties for the contents of the table. For example, `font-family` or `font-weight`.
Default: `${font}` |
+| `paginate` | `boolean` | Show pagination controls? When `false`, only the first page is displayed.
Default: `true` |
+| `perPage` | `number` | The number of rows to display on each page.
Default: `10` |
+| `showHeader` | `boolean` | Show or hide the header row with titles for each column.
Default: `true` |
+
+**Returns:** `render`
+
+
+### `tail` [tail_fn]
+
+Retrieves the last N rows of a `datatable`. See also [`head`](#head_fn).
+
+**Accepts:** `datatable`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Alias: `count` | `number` | The number of rows to retrieve from the end of the `datatable`.
Default: `1` |
+
+**Returns:** `datatable`
+
+
+### `timefilter` [timefilter_fn]
+
+Creates a time filter for querying a source.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `column`
Aliases: `c`, `field` | `string` | The column or field that you want to filter.
Default: `"@timestamp"` |
+| `filterGroup` | `string` | The group name for the filter. |
+| `from`
Aliases: `f`, `start` | `string` | The beginning of the range, in ISO8601 or Elasticsearch `datemath` format. |
+| `to`
Aliases: `end`, `t` | `string` | The end of the range, in ISO8601 or Elasticsearch `datemath` format. |
+
+**Returns:** `filter`
+
+
+### `timefilterControl` [timefilterControl_fn]
+
+Configures a time filter control element.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `column`
Aliases: `c`, `field` | `string` | The column or field that you want to filter.
Default: `"@timestamp"` |
+| `compact` | `boolean` | Shows the time filter as a button, which triggers a popover.
Default: `true` |
+| `filterGroup` | `string` | The group name for the filter. |
+
+**Returns:** `render`
+
+
+### `timelion` [timelion_fn]
+
+Uses Timelion to extract one or more time series from many sources.
+
+**Accepts:** `filter`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed*
Aliases: `q`, `query` | `string` | A Timelion query.
Default: `".es(*)"` |
+| `from` | `string` | The Elasticsearch `datemath` string for the beginning of the time range.
Default: `"now-1y"` |
+| `interval` | `string` | The bucket interval for the time series.
Default: `"auto"` |
+| `timezone` | `string` | The timezone for the time range. See [https://momentjs.com/timezone/](https://momentjs.com/timezone/).
Default: `"UTC"` |
+| `to` | `string` | The Elasticsearch `datemath` string for the end of the time range.
Default: `"now"` |
+
+**Returns:** `datatable`
+
+
+### `timerange` [timerange_fn]
+
+An object that represents a span of time.
+
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| `from` * | `string` | The start of the time range. |
+| `to` * | `string` | The end of the time range. |
+
+**Returns:** `timerange`
+
+
+### `to` [to_fn]
+
+Explicitly casts the type of the *context* from one type to the specified type.
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* †
Alias: `type` | `string` | A known data type in the expression language. |
+
+**Returns:** Depends on your input and arguments
+
+
+## U [u_fns]
+
+
+### `uiSetting` [uiSetting_fn]
+
+Returns a UI settings parameter value.
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `parameter` | `string` | The parameter name. |
+| `default` | `any` | A default value to use if the parameter is not set. |
+
+**Returns:** Depends on your input and arguments
+
+
+### `urlparam` [urlparam_fn]
+
+Retrieves a URL parameter to use in an expression. The [`urlparam`](#urlparam_fn) function always returns a `string`. For example, you can retrieve the value `"20"` from the parameter `myVar` from the URL `https://localhost:5601/app/canvas?myVar=20`.
+
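+**Code example**
+
+A minimal sketch; the parameter name `myVar` matches the example in the description above, and the fallback text is arbitrary:
+
+```text
+markdown {string "Showing results for " {urlparam "myVar" default="all"}}
+| render
+```
+
+With the URL `https://localhost:5601/app/canvas?myVar=20`, the element displays `Showing results for 20`; without the parameter, it falls back to `default`.
+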
+**Accepts:** `null`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Aliases: `param`, `var`, `variable` | `string` | The URL hash parameter to retrieve. |
+| `default` | `string` | The string returned when the URL parameter is unspecified.
Default: `""` |
+
+**Returns:** `string`
+
+
+## V [v_fns]
+
+
+### `var` [var_fn]
+
+Reads a variable from the Kibana global context.
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* *
Alias: `name` | `string` | Specify the name of the variable. |
+
+**Returns:** Depends on your input and arguments
+
+
+### `var_set` [var_set_fn]
+
+Updates the Kibana global context.
+
+**Accepts:** `any`
+
+| Argument | Type | Description |
+| --- | --- | --- |
+| *Unnamed* * †
Alias: `name` | `string` | Specify the name of the variable. |
+| `value` †
Alias: `val` | `any` | Specify the value for the variable. When unspecified, the input context is used. |
+
+**Returns:** Depends on your input and arguments
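As a sketch of how `var_set` and `var` pair up (the variable name is hypothetical):

```
var_set name="myColor" value="blue"
```

The value can later be read back elsewhere with `var name="myColor"`.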
+
+
diff --git a/reference/data-analysis/kibana/tinymath-functions.md b/reference/data-analysis/kibana/tinymath-functions.md
new file mode 100644
index 0000000000..034662ea69
--- /dev/null
+++ b/reference/data-analysis/kibana/tinymath-functions.md
@@ -0,0 +1,692 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/kibana/current/canvas-tinymath-functions.html
+---
+
+# TinyMath functions [canvas-tinymath-functions]
+
+TinyMath provides a set of functions that can be used with the Canvas expression language to perform complex math calculations. Read on for detailed information about the functions available in TinyMath, including what parameters each function accepts, the return value of that function, and examples of how each function behaves.
+
+Most of the functions accept arrays and apply JavaScript Math methods to each element of that array. For the functions that accept multiple arrays as parameters, the function generally does the calculation index by index.
+
+Any function can be wrapped by another function as long as the return type of the inner function matches the acceptable parameter type of the outer function.
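To make the index-wise behavior concrete, here is a minimal JavaScript sketch (an illustration, not the TinyMath source; `applyIndexwise` is a hypothetical helper):

```javascript
// Hypothetical model of TinyMath's index-wise application. Scalar
// arguments are reused at every index; array arguments contribute
// their element at the current index. Mismatched lengths throw.
function applyIndexwise(fn, ...args) {
  const arrays = args.filter(Array.isArray);
  if (arrays.length === 0) return fn(...args);
  const len = arrays[0].length;
  if (arrays.some((a) => a.length !== len)) {
    throw new Error('Array length mismatch');
  }
  return Array.from({ length: len }, (_, i) =>
    fn(...args.map((a) => (Array.isArray(a) ? a[i] : a)))
  );
}

const add = (...xs) => xs.reduce((s, x) => s + x, 0);
applyIndexwise(add, [1, 2], 3, [4, 5], 6); // returns [14, 16]
```

Because scalars are reused at every index, a call like `add([1, 2], 3, [4, 5], 6)` evaluates to `[1 + 3 + 4 + 6, 2 + 3 + 5 + 6]`.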
+
+
+## abs( a ) [_abs_a]
+
+Calculates the absolute value of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The absolute value of `a`. Returns an array with the absolute values of each element if `a` is an array.
+
+**Example**
+
+```js
+abs(-1) // returns 1
+abs(2) // returns 2
+abs([-1 , -2, 3, -4]) // returns [1, 2, 3, 4]
+```
+
+
+## add( …args ) [_add_args]
+
+Calculates the sum of one or more numbers/arrays passed into the function. If at least one array of numbers is passed into the function, the function will calculate the sum by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The sum of all numbers in `args` if `args` contains only numbers. Returns an array of sums of the elements at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths
+
+**Example**
+
+```js
+add(1, 2, 3) // returns 6
+add([10, 20, 30, 40], 10, 20, 30) // returns [70, 80, 90, 100]
+add([1, 2], 3, [4, 5], 6) // returns [(1 + 3 + 4 + 6), (2 + 3 + 5 + 6)] = [14, 16]
+```
+
+
+## cbrt( a ) [_cbrt_a]
+
+Calculates the cube root of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The cube root of `a`. Returns an array with the cube roots of each element if `a` is an array.
+
+**Example**
+
+```js
+cbrt(-27) // returns -3
+cbrt(94) // returns 4.546835943776344
+cbrt([27, 64, 125]) // returns [3, 4, 5]
+```
+
+
+## ceil( a ) [_ceil_a]
+
+Calculates the ceiling of a number, i.e., rounds a number towards positive infinity. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The ceiling of `a`. Returns an array with the ceilings of each element if `a` is an array.
+
+**Example**
+
+```js
+ceil(1.2) // returns 2
+ceil(-1.8) // returns -1
+ceil([1.1, 2.2, 3.3]) // returns [2, 3, 4]
+```
+
+
+## clamp( …a, min, max ) [_clamp_a_min_max]
+
+Restricts a value to a given range and returns the closest available value. If only `min` is provided, values are restricted to only a lower bound.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …a | number | Array. | one or more numbers or arrays of numbers |
+| min | number | Array. | (optional) The minimum value this function will return. |
+| max | number | Array. | (optional) The maximum value this function will return. |
+
+**Returns**: `number` | `Array.`. The closest value between `min` (inclusive) and `max` (inclusive). Returns an array with values greater than or equal to `min` and less than or equal to `max` (if provided) at each index.
+
+**Throws**:
+
+* `'Array length mismatch'` if `a`, `min`, and/or `max` are arrays of different lengths
+* `'Min must be less than max'` if `max` is less than `min`
+
+**Example**
+
+```js
+clamp(1, 2, 3) // returns 2
+clamp([10, 20, 30, 40], 15, 25) // returns [15, 20, 25, 25]
+clamp(10, [15, 2, 4, 20], 25) // returns [15, 10, 10, 20]
+clamp(35, 10, [20, 30, 40, 50]) // returns [20, 30, 35, 35]
+clamp([1, 9], 3, [4, 5]) // returns [clamp([1, 3, 4]), clamp([9, 3, 5])] = [3, 5]
+```
+
+
+## count( a ) [_count_a]
+
+Returns the length of an array. Alias for size.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | Array. | array of any values |
+
+**Returns**: `number`. The length of the array.
+
+**Throws**: `'Must pass an array'` if `a` is not an array.
+
+**Example**
+
+```js
+count([]) // returns 0
+count([-1, -2, -3, -4]) // returns 4
+count(100) // throws 'Must pass an array'
+```
+
+
+## cube( a ) [_cube_a]
+
+Calculates the cube of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The cube of `a`. Returns an array with the cubes of each element if `a` is an array.
+
+**Example**
+
+```js
+cube(-3) // returns -27
+cube([3, 4, 5]) // returns [27, 64, 125]
+```
+
+
+## divide( a, b ) [_divide_a_b]
+
+Divides two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | dividend, a number or an array of numbers |
+| b | number | Array. | divisor, a number or an array of numbers, b != 0 |
+
+**Returns**: `number` | `Array.`. Returns the quotient of `a` and `b` if both are numbers. Returns an array with the quotients applied index-wise to each element if `a` or `b` is an array.
+
+**Throws**:
+
+* `'Array length mismatch'` if `a` and `b` are arrays with different lengths
+* `'Cannot divide by 0'` if `b` equals 0 or contains 0
+
+**Example**
+
+```js
+divide(6, 3) // returns 2
+divide([10, 20, 30, 40], 10) // returns [1, 2, 3, 4]
+divide(10, [1, 2, 5, 10]) // returns [10, 5, 2, 1]
+divide([14, 42, 65, 108], [2, 7, 5, 12]) // returns [7, 6, 13, 9]
+```
+
+
+## exp( a ) [_exp_a]
+
+Calculates *e^x* where *e* is Euler’s number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. `e^a` if `a` is a number. Returns an array with the values of `e^x` evaluated where `x` is each element of `a` if `a` is an array.
+
+**Example**
+
+```js
+exp(2) // returns e^2 = 7.3890560989306495
+exp([1, 2, 3]) // returns [e^1, e^2, e^3] = [2.718281828459045, 7.3890560989306495, 20.085536923187668]
+```
+
+
+## first( a ) [_first_a]
+
+Returns the first element of an array. If anything other than an array is passed in, the input is returned.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | Array. | array of any values |
+
+**Returns**: `*`. The first element of `a`. Returns `a` if `a` is not an array.
+
+**Example**
+
+```js
+first(2) // returns 2
+first([1, 2, 3]) // returns 1
+```
+
+
+## fix( a ) [_fix_a]
+
+Calculates the fix of a number, i.e., rounds a number towards 0. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The fix of `a`. Returns an array with the fixes for each element if `a` is an array.
+
+**Example**
+
+```js
+fix(1.2) // returns 1
+fix(-1.8) // returns -1
+fix([1.8, 2.9, -3.7, -4.6]) // returns [1, 2, -3, -4]
+```
+
+
+## floor( a ) [_floor_a]
+
+Calculates the floor of a number, i.e., rounds a number towards negative infinity. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The floor of `a`. Returns an array with the floor of each element if `a` is an array.
+
+**Example**
+
+```js
+floor(1.8) // returns 1
+floor(-1.2) // returns -2
+floor([1.7, 2.8, 3.9]) // returns [1, 2, 3]
+```
+
+
+## last( a ) [_last_a]
+
+Returns the last element of an array. If anything other than an array is passed in, the input is returned.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | Array. | array of any values |
+
+**Returns**: `*`. The last element of `a`. Returns `a` if `a` is not an array.
+
+**Example**
+
+```js
+last(2) // returns 2
+last([1, 2, 3]) // returns 3
+```
+
+
+## log( a, b ) [_log_a_b]
+
+Calculates the logarithm of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers, `a` must be greater than 0 |
+| b | number | (optional) base for the logarithm. If not provided, the default base is *e*, and the natural log is calculated. |
+
+**Returns**: `number` | `Array.`. The logarithm of `a`. Returns an array with the logarithms of each element if `a` is an array.
+
+**Throws**:
+
+* `'Base out of range'` if `b` <= 0
+* `'Must be greater than 0'` if `a` < 0
+
+**Example**
+
+```js
+log(1) // returns 0
+log(64, 8) // returns 2
+log(42, 5) // returns 2.322344707681546
+log([2, 4, 8, 16, 32], 2) // returns [1, 2, 3, 4, 5]
+```
+
+
+## log10( a ) [_log10_a]
+
+Calculates the logarithm base 10 of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers, `a` must be greater than 0 |
+
+**Returns**: `number` | `Array.`. The base-10 logarithm of `a`. Returns an array with the base-10 logarithms of each element if `a` is an array.
+
+**Throws**: `'Must be greater than 0'` if `a` < 0
+
+**Example**
+
+```js
+log10(10) // returns 1
+log10(100) // returns 2
+log10(80) // returns 1.9030899869919433
+log10([10, 100, 1000, 10000, 100000]) // returns [1, 2, 3, 4, 5]
+```
+
+
+## max( …args ) [_max_args]
+
+Finds the maximum value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the maximum by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The maximum value of all numbers if `args` contains only numbers. Returns an array with the maximum values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths
+
+**Example**
+
+```js
+max(1, 2, 3) // returns 3
+max([10, 20, 30, 40], 15) // returns [15, 20, 30, 40]
+max([1, 9], 4, [3, 5]) // returns [max([1, 4, 3]), max([9, 4, 5])] = [4, 9]
+```
+
+
+## mean( …args ) [_mean_args]
+
+Finds the mean value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mean by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The mean value of all numbers if `args` contains only numbers. Returns an array with the mean values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Example**
+
+```js
+mean(1, 2, 3) // returns 2
+mean([10, 20, 30, 40], 20) // returns [15, 20, 25, 30]
+mean([1, 9], 5, [3, 4]) // returns [mean([1, 5, 3]), mean([9, 5, 4])] = [3, 6]
+```
+
+
+## median( …args ) [_median_args]
+
+Finds the median value(s) of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the median by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The median value of all numbers if `args` contains only numbers. Returns an array with the median values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Example**
+
+```js
+median(1, 1, 2, 3) // returns 1.5
+median(1, 1, 2, 2, 3) // returns 2
+median([10, 20, 30, 40], 10, 20, 30) // returns [15, 20, 25, 25]
+median([1, 9], 2, 4, [3, 5]) // returns [median([1, 2, 4, 3]), median([9, 2, 4, 5])] = [2.5, 4.5]
+```
+
+
+## min( …args ) [_min_args]
+
+Finds the minimum value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the minimum by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The minimum value of all numbers if `args` contains only numbers. Returns an array with the minimum values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths.
+
+**Example**
+
+```js
+min(1, 2, 3) // returns 1
+min([10, 20, 30, 40], 25) // returns [10, 20, 25, 25]
+min([1, 9], 4, [3, 5]) // returns [min([1, 4, 3]), min([9, 4, 5])] = [1, 4]
+```
+
+
+## mod( a, b ) [_mod_a_b]
+
+Remainder after dividing two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | dividend, a number or an array of numbers |
+| b | number | Array. | divisor, a number or an array of numbers, b != 0 |
+
+**Returns**: `number` | `Array.`. The remainder of `a` divided by `b` if both are numbers. Returns an array with the remainders applied index-wise to each element if `a` or `b` is an array.
+
+**Throws**:
+
+* `'Array length mismatch'` if `a` and `b` are arrays with different lengths
+* `'Cannot divide by 0'` if `b` equals 0 or contains 0
+
+**Example**
+
+```js
+mod(10, 7) // returns 3
+mod([11, 22, 33, 44], 10) // returns [1, 2, 3, 4]
+mod(100, [3, 7, 11, 23]) // returns [1, 2, 1, 8]
+mod([14, 42, 65, 108], [5, 4, 14, 2]) // returns [4, 2, 9, 0]
+```
+
+
+## mode( …args ) [_mode_args]
+
+Finds the mode value(s) of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mode by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. An array of mode value(s) of all numbers if `args` contains only numbers. Returns an array of arrays with the mode value(s) at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Example**
+
+```js
+mode(1, 1, 2, 3) // returns [1]
+mode(1, 1, 2, 2, 3) // returns [1,2]
+mode([10, 20, 30, 40], 10, 20, 30) // returns [[10], [20], [30], [10, 20, 30, 40]]
+mode([1, 9], 1, 4, [3, 5]) // returns [mode([1, 1, 4, 3]), mode([9, 1, 4, 5])] = [[1], [4, 5, 9]]
+```
+
+
+## multiply( a, b ) [_multiply_a_b]
+
+Multiplies two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+| b | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The product of `a` and `b` if both are numbers. Returns an array with the products applied index-wise to each element if `a` or `b` is an array.
+
+**Throws**: `'Array length mismatch'` if `a` and `b` are arrays with different lengths
+
+**Example**
+
+```js
+multiply(6, 3) // returns 18
+multiply([10, 20, 30, 40], 10) // returns [100, 200, 300, 400]
+multiply(10, [1, 2, 5, 10]) // returns [10, 20, 50, 100]
+multiply([1, 2, 3, 4], [2, 7, 5, 12]) // returns [2, 14, 15, 48]
+```
+
+
+## pow( a, b ) [_pow_a_b]
+
+Calculates `a` raised to the power of `b`. For arrays, the function will be applied index-wise to each element of `a`.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+| b | number | the power that `a` is raised to |
+
+**Returns**: `number` | `Array.`. `a` raised to the power of `b`. Returns an array with each element raised to the power of `b` if `a` is an array.
+
+**Throws**: `'Missing exponent'` if `b` is not provided
+
+**Example**
+
+```js
+pow(2,3) // returns 8
+pow([1, 2, 3], 4) // returns [1, 16, 81]
+```
+
+
+## random( a, b ) [_random_a_b]
+
+Generates a random number within the given range where the lower bound is inclusive and the upper bound is exclusive. If no numbers are passed in, it will return a number between 0 and 1. If only one number is passed in, it will return a number between 0 and the number passed in.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | (optional) must be greater than 0 if `b` is not provided |
+| b | number | (optional) must be greater than `a` |
+
+**Returns**: `number`. A random number between 0 and 1 if no numbers are passed in. Returns a random number between 0 and `a` if only one number is passed in. Returns a random number between `a` and `b` if two numbers are passed in.
+
+**Throws**: `'Min must be greater than max'` if `a` < 0 when only `a` is passed in or if `a` > `b` when both `a` and `b` are passed in
+
+**Example**
+
+```js
+random() // returns a random number between 0 (inclusive) and 1 (exclusive)
+random(10) // returns a random number between 0 (inclusive) and 10 (exclusive)
+random(-10,10) // returns a random number between -10 (inclusive) and 10 (exclusive)
+```
+
+
+## range( …args ) [_range_args]
+
+Finds the range of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the range by index.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| …args | number | Array. | one or more numbers or arrays of numbers |
+
+**Returns**: `number` | `Array.`. The range value of all numbers if `args` contains only numbers. Returns an array with the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.
+
+**Example**
+
+```js
+range(1, 2, 3) // returns 2
+range([10, 20, 30, 40], 15) // returns [5, 5, 15, 25]
+range([1, 9], 4, [3, 5]) // returns [range([1, 4, 3]), range([9, 4, 5])] = [3, 5]
+```
+
+
+## round( a, b ) [_round_a_b]
+
+Rounds a number towards the nearest integer by default, or decimal place (if passed in as `b`). For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+| b | number | (optional) number of decimal places, default value: 0 |
+
+**Returns**: `number` | `Array.`. The rounded value of `a`. Returns an array with the rounded values of each element if `a` is an array.
+
+**Example**
+
+```js
+round(1.2) // returns 1
+round(-10.51) // returns -11
+round(-10.1, 2) // returns -10.1
+round(10.93745987, 4) // returns 10.9375
+round([2.9234, 5.1234, 3.5234, 4.49234324], 2) // returns [2.92, 5.12, 3.52, 4.49]
+```
+
+
+## size( a ) [_size_a]
+
+Returns the length of an array. Alias for count.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | Array. | array of any values |
+
+**Returns**: `number`. The length of the array.
+
+**Throws**: `'Must pass an array'` if `a` is not an array
+
+**Example**
+
+```js
+size([]) // returns 0
+size([-1, -2, -3, -4]) // returns 4
+size(100) // throws 'Must pass an array'
+```
+
+
+## sqrt( a ) [_sqrt_a]
+
+Calculates the square root of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The square root of `a`. Returns an array with the square roots of each element if `a` is an array.
+
+**Throws**: `'Unable find the square root of a negative number'` if `a` < 0
+
+**Example**
+
+```js
+sqrt(9) // returns 3
+sqrt(30) // returns 5.477225575051661
+sqrt([9, 16, 25]) // returns [3, 4, 5]
+```
+
+
+## square( a ) [_square_a]
+
+Calculates the square of a number. For arrays, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The square of `a`. Returns an array with the squares of each element if `a` is an array.
+
+**Example**
+
+```js
+square(-3) // returns 9
+square([3, 4, 5]) // returns [9, 16, 25]
+```
+
+
+## subtract( a, b ) [_subtract_a_b]
+
+Subtracts two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.
+
+| Param | Type | Description |
+| --- | --- | --- |
+| a | number | Array. | a number or an array of numbers |
+| b | number | Array. | a number or an array of numbers |
+
+**Returns**: `number` | `Array.`. The difference of `a` and `b` if both are numbers, or an array of differences applied index-wise to each element.
+
+**Throws**: `'Array length mismatch'` if `a` and `b` are arrays with different lengths
+
+**Example**
+
+```js
+subtract(6, 3) // returns 3
+subtract([10, 20, 30, 40], 10) // returns [0, 10, 20, 30]
+subtract(10, [1, 2, 5, 10]) // returns [9, 8, 5, 0]
+subtract([14, 42, 65, 108], [2, 7, 5, 12]) // returns [12, 35, 52, 96]
+```
+
+
+## sum( …args ) [_sum_args]
+
+Calculates the sum of one or more numbers/arrays passed into the function. If at least one array is passed, the function sums all scalar arguments together with every individual value of each array. Unlike most other functions, `sum` accepts arrays of different lengths.
+
+**Returns**: `number`. The sum of one or more numbers/arrays of numbers, including the individual values in arrays.
+
+**Example**
+
+```js
+sum(1, 2, 3) // returns 6
+sum([10, 20, 30, 40], 10, 20, 30) // returns 160
+sum([1, 2], 3, [4, 5], 6) // returns sum(1, 2, 3, 4, 5, 6) = 21
+sum([10, 20, 30, 40], 10, [1, 2, 3], 22) // returns sum(10, 20, 30, 40, 10, 1, 2, 3, 22) = 138
+```
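Because `sum` flattens arrays rather than matching them index by index, arrays of different lengths are fine. A plain JavaScript sketch of this behavior (an illustration, not the TinyMath source):

```javascript
// Flatten all array arguments, then total every value with the scalars.
const sum = (...args) => args.flat(Infinity).reduce((acc, x) => acc + x, 0);

sum(1, 2, 3);                              // returns 6
sum([10, 20, 30, 40], 10, [1, 2, 3], 22); // returns 138
```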
+
+
+## unique( a ) [_unique_a]
+
+Counts the number of unique values in an array.
+
+**Returns**: `number`. The number of unique values in the array. Returns 1 if `a` is not an array.
+
+**Example**
+
+```js
+unique(100) // returns 1
+unique([]) // returns 0
+unique([1, 2, 3, 4]) // returns 4
+unique([1, 2, 3, 4, 2, 2, 2, 3, 4, 2, 4, 5, 2, 1, 4, 2]) // returns 5
+```
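This behavior can be modeled in one line of JavaScript (an illustration, not the TinyMath source):

```javascript
// Arrays are deduplicated (a Set keeps only distinct values); any
// non-array input counts as a single value.
const unique = (a) => (Array.isArray(a) ? new Set(a).size : 1);

unique([1, 2, 3, 4, 2, 2, 2, 3, 4, 2, 4, 5, 2, 1, 4, 2]); // returns 5
```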
+
diff --git a/reference/data-analysis/machine-learning/machine-learning-functions.md b/reference/data-analysis/machine-learning/machine-learning-functions.md
new file mode 100644
index 0000000000..d5966b9648
--- /dev/null
+++ b/reference/data-analysis/machine-learning/machine-learning-functions.md
@@ -0,0 +1,24 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-functions.html
+---
+
+# Function reference [ml-functions]
+
+The {{ml-features}} include analysis functions that provide a wide variety of flexible ways to analyze data for anomalies.
+
+When you create {{anomaly-jobs}}, you specify one or more detectors, which define the type of analysis that needs to be done. If you are creating your job by using {{ml}} APIs, you specify the functions in detector configuration objects. If you are creating your job in {{kib}}, you specify the functions differently depending on whether you are creating single metric, multi-metric, or advanced jobs.
+
+Most functions detect anomalies in both low and high values. In statistical terminology, they apply a two-sided test. Some functions offer low and high variations (for example, `count`, `low_count`, and `high_count`). These variations apply one-sided tests, detecting anomalies only when the values are low or high, depending on which alternative is used.
+
+You can specify a `summary_count_field_name` with any function except `metric`. When you use `summary_count_field_name`, the {{ml}} features expect the input data to be pre-aggregated. The value of the `summary_count_field_name` field must contain the count of raw events that were summarized. In {{kib}}, use the **summary_count_field_name** in advanced {{anomaly-jobs}}. Analyzing aggregated input data provides a significant boost in performance. For more information, see [Aggregating data for faster performance](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-aggregation.md).
+
+If your data is sparse, there may be gaps in the data, which means you might have empty buckets. You might want to treat these as anomalies, or you might want these gaps to be ignored. Your decision depends on your use case and what is important to you. It also depends on which functions you use. The `sum` and `count` functions are strongly affected by empty buckets. For this reason, there are `non_null_sum` and `non_zero_count` functions, which are tolerant to sparse data. These functions effectively ignore empty buckets.
+
+* [Count functions](/reference/data-analysis/machine-learning/ml-count-functions.md)
+* [Geographic functions](/reference/data-analysis/machine-learning/ml-geo-functions.md)
+* [Information content functions](/reference/data-analysis/machine-learning/ml-info-functions.md)
+* [Metric functions](/reference/data-analysis/machine-learning/ml-metric-functions.md)
+* [Rare functions](/reference/data-analysis/machine-learning/ml-rare-functions.md)
+* [Sum functions](/reference/data-analysis/machine-learning/ml-sum-functions.md)
+* [Time functions](/reference/data-analysis/machine-learning/ml-time-functions.md)
diff --git a/reference/data-analysis/machine-learning/ml-count-functions.md b/reference/data-analysis/machine-learning/ml-count-functions.md
new file mode 100644
index 0000000000..f3ce220070
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-count-functions.md
@@ -0,0 +1,224 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-count-functions.html
+---
+
+# Count functions [ml-count-functions]
+
+Count functions detect anomalies when the number of events in a bucket is anomalous.
+
+Use `non_zero_count` functions if your data is sparse and you want to ignore cases where the bucket count is zero.
+
+Use `distinct_count` functions to determine when the number of distinct values in one field is unusual, as opposed to the total count.
+
+Use high-sided functions if you want to monitor unusually high event rates. Use low-sided functions if you want to look at drops in event rate.
+
+The {{ml-features}} include the following count functions:
+
+* [`count`, `high_count`, `low_count`](ml-count-functions.md#ml-count)
+* [`non_zero_count`, `high_non_zero_count`, `low_non_zero_count`](ml-count-functions.md#ml-nonzero-count)
+* [`distinct_count`, `high_distinct_count`, `low_distinct_count`](ml-count-functions.md#ml-distinct-count)
+
+
+## Count, high_count, low_count [ml-count]
+
+The `count` function detects anomalies when the number of events in a bucket is anomalous.
+
+The `high_count` function detects anomalies when the count of events in a bucket is unusually high.
+
+The `low_count` function detects anomalies when the count of events in a bucket is unusually low.
+
+These functions support the following properties:
+
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```console
+PUT _ml/anomaly_detectors/example1
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "count"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+This example is probably the simplest possible analysis. It identifies time buckets during which the overall count of events is higher or lower than usual.
+
+When you use this function in a detector in your {{anomaly-job}}, it models the event rate and detects when the event rate is unusual compared to its past behavior.
+
+```console
+PUT _ml/anomaly_detectors/example2
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "high_count",
+ "by_field_name" : "error_code",
+ "over_field_name": "user"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+If you use this `high_count` function in a detector in your {{anomaly-job}}, it models the event rate for each error code. It detects users that generate an unusually high count of error codes compared to other users.
+
+```console
+PUT _ml/anomaly_detectors/example3
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "low_count",
+ "by_field_name" : "status_code"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+In this example, the function detects when the count of events for a status code is lower than usual.
+
+When you use this function in a detector in your {{anomaly-job}}, it models the event rate for each status code and detects when a status code has an unusually low count compared to its past behavior.
+
+```console
+PUT _ml/anomaly_detectors/example4
+{
+ "analysis_config": {
+ "summary_count_field_name" : "events_per_min",
+ "detectors": [{
+ "function" : "count"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+If you are analyzing an aggregated `events_per_min` field, do not use a sum function (for example, `sum(events_per_min)`). Instead, use the count function and the `summary_count_field_name` property. For more information, see [Aggregating data for faster performance](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-aggregation.md).
+
+
+## Non_zero_count, high_non_zero_count, low_non_zero_count [ml-nonzero-count]
+
+The `non_zero_count` function detects anomalies when the number of events in a bucket is anomalous, but it ignores cases where the bucket count is zero. Use this function if you know your data is sparse or has gaps and the gaps are not important.
+
+The `high_non_zero_count` function detects anomalies when the number of events in a bucket is unusually high and it ignores cases where the bucket count is zero.
+
+The `low_non_zero_count` function detects anomalies when the number of events in a bucket is unusually low and it ignores cases where the bucket count is zero.
+
+These functions support the following properties:
+
+* `by_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+For example, if you have the following number of events per bucket:
+
+::::{admonition}
+1,22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,43,31,0,0,0,0,0,0,0,0,0,0,0,0,2,1
+
+::::
+
+
+The `non_zero_count` function models only the following data:
+
+::::{admonition}
+1,22,2,43,31,2,1
+
+::::
+
+
+```console
+PUT _ml/anomaly_detectors/example5
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "high_non_zero_count",
+ "by_field_name" : "signaturename"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+If you use this `high_non_zero_count` function in a detector in your {{anomaly-job}}, it models the count of events for the `signaturename` field. It ignores any buckets where the count is zero and detects when a `signaturename` value has an unusually high count of events compared to its past behavior.
+
+::::{note}
+Population analysis (using an `over_field_name` property value) is not supported for the `non_zero_count`, `high_non_zero_count`, and `low_non_zero_count` functions. If you want to do population analysis and your data is sparse, use the `count` functions, which are optimized for that scenario.
+::::
+
+
+
+## Distinct_count, high_distinct_count, low_distinct_count [ml-distinct-count]
+
+The `distinct_count` function detects anomalies where the number of distinct values in one field is unusual.
+
+The `high_distinct_count` function detects unusually high numbers of distinct values in one field.
+
+The `low_distinct_count` function detects unusually low numbers of distinct values in one field.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```console
+PUT _ml/anomaly_detectors/example6
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "distinct_count",
+ "field_name" : "user"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+This `distinct_count` function detects when a system has an unusual number of logged-in users. When you use this function in a detector in your {{anomaly-job}}, it models the distinct count of users. It also detects when the distinct number of users is unusual compared to the past.
+
+```console
+PUT _ml/anomaly_detectors/example7
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "high_distinct_count",
+ "field_name" : "dst_port",
+ "over_field_name": "src_ip"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+This example detects instances of port scanning. When you use this function in a detector in your {{anomaly-job}}, it models the distinct count of ports. It also detects the `src_ip` values that connect to an unusually high number of different `dst_port` values compared to other `src_ip` values.
+
diff --git a/reference/data-analysis/machine-learning/ml-geo-functions.md b/reference/data-analysis/machine-learning/ml-geo-functions.md
new file mode 100644
index 0000000000..011b632d29
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-geo-functions.md
@@ -0,0 +1,70 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-geo-functions.html
+---
+
+# Geographic functions [ml-geo-functions]
+
+The geographic functions detect anomalies in the geographic location of the input data.
+
+The {{ml-features}} include the following geographic function: `lat_long`.
+
+::::{note}
+You cannot create forecasts for {{anomaly-jobs}} that contain geographic functions. You also cannot add rules with conditions to detectors that use geographic functions.
+::::
+
+
+
+## Lat_long [ml-lat-long]
+
+The `lat_long` function detects anomalies in the geographic location of the input data.
+
+This function supports the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```console
+PUT _ml/anomaly_detectors/example1
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "lat_long",
+ "field_name" : "transaction_coordinates",
+ "by_field_name" : "credit_card_number"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```
+
+If you use this `lat_long` function in a detector in your {{anomaly-job}}, it detects anomalies where the geographic location of a credit card transaction is unusual for a particular customer’s credit card. An anomaly might indicate fraud.
+
+A "typical" value indicates a centroid of a cluster of previously observed locations that is closest to the "actual" location at that time. For example, there may be one centroid near the person’s home that is associated with the cluster of local grocery stores and restaurants, and another centroid near the person’s work associated with the cluster of lunch and coffee places.
+
+::::{important}
+The `field_name` that you supply must be a single string that contains two comma-separated numbers of the form `latitude,longitude`, a `geo_point` field, a `geo_shape` field that contains point values, or a `geo_centroid` aggregation. The `latitude` must be in the range -90 to 90, the `longitude` must be in the range -180 to 180, and together they must represent a point on the surface of the Earth.
+::::
+
+
+For example, JSON data might contain the following transaction coordinates:
+
+```js
+{
+ "time": 1460464275,
+ "transaction_coordinates": "40.7,-74.0",
+ "credit_card_number": "1234123412341234"
+}
+```
+
+In {{es}}, location data is likely to be stored in `geo_point` fields. For more information, see [`geo_point` data type](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-point.md). This data type is supported natively in {{ml-features}}. Specifically, when pulling data from a `geo_point` field, a {{dfeed}} transforms the data into the appropriate `lat,lon` string format before sending it to the {{anomaly-job}}.
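+If your index does not already map the location field as a `geo_point`, you can define the mapping up front. The following is a minimal sketch; the index name `transactions` and the field names are assumptions chosen to match the earlier example:
+
+```console
+PUT transactions
+{
+ "mappings": {
+ "properties": {
+ "transaction_coordinates": { "type": "geo_point" },
+ "credit_card_number": { "type": "keyword" },
+ "timestamp": { "type": "date" }
+ }
+ }
+}
+```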
+
+For more information, see [Altering data in your {{dfeed}} with runtime fields](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md).
+
diff --git a/reference/data-analysis/machine-learning/ml-info-functions.md b/reference/data-analysis/machine-learning/ml-info-functions.md
new file mode 100644
index 0000000000..5907a3a45d
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-info-functions.md
@@ -0,0 +1,64 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-info-functions.html
+---
+
+# Information content functions [ml-info-functions]
+
+The information content functions detect anomalies in the amount of information that is contained in strings within a bucket. These functions can be used as a more sophisticated method to identify incidences of data exfiltration or command and control (C2) activity, when analyzing the size in bytes of the data might not be sufficient.
+
+The {{ml-features}} include the following information content functions:
+
+* `info_content`, `high_info_content`, `low_info_content`
+
+
+## Info_content, High_info_content, Low_info_content [ml-info-content]
+
+The `info_content` function detects anomalies in the amount of information that is contained in strings in a bucket.
+
+If you want to monitor for unusually high amounts of information, use `high_info_content`. If you want to look at drops in information content, use `low_info_content`.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "info_content",
+ "field_name" : "subdomain",
+ "over_field_name" : "highest_registered_domain"
+}
+```
+
+If you use this `info_content` function in a detector in your {{anomaly-job}}, it models information that is present in the `subdomain` string. It detects anomalies where the information content is unusual compared to the other `highest_registered_domain` values. An anomaly could indicate an abuse of the DNS protocol, such as malicious command and control activity.
+
+::::{note}
+In this example, both high and low values are considered anomalous. In many use cases, the `high_info_content` function is the more appropriate choice.
+::::
+
+
+```js
+{
+ "function" : "high_info_content",
+ "field_name" : "query",
+ "over_field_name" : "src_ip"
+}
+```
+
+If you use this `high_info_content` function in a detector in your {{anomaly-job}}, it models information content that is held in the DNS query string. It detects `src_ip` values where the information content is unusually high compared to other `src_ip` values. This example is similar to the example for the `info_content` function, but it reports anomalies only where the amount of information content is higher than expected.
+
+```js
+{
+ "function" : "low_info_content",
+ "field_name" : "message",
+ "by_field_name" : "logfilename"
+}
+```
+
+If you use this `low_info_content` function in a detector in your {{anomaly-job}}, it models information content that is present in the message string for each `logfilename`. It detects anomalies where the information content is low compared to its past behavior. For example, this function detects unusually low amounts of information in a collection of rolling log files. Low information might indicate that a process has entered an infinite loop or that logging features have been disabled.
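+For reference, these detector snippets slot into a complete job definition in the same way as the count-function examples. The following is a minimal sketch that wraps the `high_info_content` detector from above; the job name and the `timestamp` field are assumptions:
+
+```console
+PUT _ml/anomaly_detectors/info_content_example
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "high_info_content",
+ "field_name" : "query",
+ "over_field_name" : "src_ip"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```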
+
diff --git a/reference/data-analysis/machine-learning/ml-metric-functions.md b/reference/data-analysis/machine-learning/ml-metric-functions.md
new file mode 100644
index 0000000000..84139a6fde
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-metric-functions.md
@@ -0,0 +1,240 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-metric-functions.html
+---
+
+# Metric functions [ml-metric-functions]
+
+The metric functions include functions such as mean, min, and max. These values are calculated for each bucket. Field values that cannot be converted to double precision floating point numbers are ignored.
+
+The {{ml-features}} include the following metric functions:
+
+* [`min`](ml-metric-functions.md#ml-metric-min)
+* [`max`](ml-metric-functions.md#ml-metric-max)
+* [`median`, `high_median`, `low_median`](ml-metric-functions.md#ml-metric-median)
+* [`mean`, `high_mean`, `low_mean`](ml-metric-functions.md#ml-metric-mean)
+* [`metric`](ml-metric-functions.md#ml-metric-metric)
+* [`varp`, `high_varp`, `low_varp`](ml-metric-functions.md#ml-metric-varp)
+
+::::{note}
+You cannot add rules with conditions to detectors that use the `metric` function.
+::::
+
+
+
+## Min [ml-metric-min]
+
+The `min` function detects anomalies in the arithmetic minimum of a value. The minimum value is calculated for each bucket.
+
+High- and low-sided functions are not applicable.
+
+This function supports the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "min",
+ "field_name" : "amt",
+ "by_field_name" : "product"
+}
+```
+
+If you use this `min` function in a detector in your {{anomaly-job}}, it detects where the smallest transaction is lower than previously observed. You can use this function to detect items for sale at unintentionally low prices due to data entry mistakes. It models the minimum amount for each product over time.
+
+
+## Max [ml-metric-max]
+
+The `max` function detects anomalies in the arithmetic maximum of a value. The maximum value is calculated for each bucket.
+
+High- and low-sided functions are not applicable.
+
+This function supports the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "max",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `max` function in a detector in your {{anomaly-job}}, it detects where the longest `responsetime` is longer than previously observed. You can use this function to detect applications that have unusually lengthy `responsetime` values. It models the maximum `responsetime` for each application over time and detects when the longest `responsetime` is unusually long compared to that application’s previous behavior.
+
+```js
+{
+ "function" : "max",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+},
+{
+ "function" : "high_mean",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+The analysis in the previous example can be performed alongside `high_mean` functions by application. By combining detectors and using the same influencer, this job can detect both unusually long individual response times and unusually high average response times in each bucket.
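+For illustration, this combined analysis can be expressed as a single job with two detectors that share `application` as an influencer. The following is a minimal sketch; the job name and the `timestamp` field are assumptions:
+
+```console
+PUT _ml/anomaly_detectors/response_time_example
+{
+ "analysis_config": {
+ "influencers": [ "application" ],
+ "detectors": [{
+ "function" : "max",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+ },
+ {
+ "function" : "high_mean",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```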
+
+
+## Median, high_median, low_median [ml-metric-median]
+
+The `median` function detects anomalies in the statistical median of a value. The median value is calculated for each bucket.
+
+If you want to monitor unusually high median values, use the `high_median` function.
+
+If you are just interested in unusually low median values, use the `low_median` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "median",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `median` function in a detector in your {{anomaly-job}}, it models the median `responsetime` for each application over time. It detects when the median `responsetime` is unusual compared to previous `responsetime` values.
+
+
+## Mean, high_mean, low_mean [ml-metric-mean]
+
+The `mean` function detects anomalies in the arithmetic mean of a value. The mean value is calculated for each bucket.
+
+If you want to monitor unusually high average values, use the `high_mean` function.
+
+If you are just interested in unusually low average values, use the `low_mean` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "mean",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusual compared to previous `responsetime` values.
+
+```js
+{
+ "function" : "high_mean",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `high_mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusually high compared to previous `responsetime` values.
+
+```js
+{
+ "function" : "low_mean",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `low_mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusually low compared to previous `responsetime` values.
+
+
+## Metric [ml-metric-metric]
+
+The `metric` function combines `min`, `max`, and `mean` functions. You can use it as a shorthand for a combined analysis. If you do not specify a function in a detector, this is the default function.
+
+High- and low-sided functions are not applicable. You cannot use this function when a `summary_count_field_name` is specified.
+
+This function supports the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "metric",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `metric` function in a detector in your {{anomaly-job}}, it models the mean, min, and max `responsetime` for each application over time. It detects when the mean, min, or max `responsetime` is unusual compared to previous `responsetime` values.
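+For reference, this snippet slots into a complete job definition in the same way as the count-function examples. The following is a minimal sketch; the job name and the `timestamp` field are assumptions:
+
+```console
+PUT _ml/anomaly_detectors/metric_example
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "metric",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```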
+
+
+## Varp, high_varp, low_varp [ml-metric-varp]
+
+The `varp` function detects anomalies in the variance of a value, which is a measure of the variability and spread in the data.
+
+If you want to monitor unusually high variance, use the `high_varp` function.
+
+If you are just interested in unusually low variance, use the `low_varp` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "varp",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusual compared to past application behavior.
+
+```js
+{
+ "function" : "high_varp",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `high_varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusually high compared to past application behavior.
+
+```js
+{
+ "function" : "low_varp",
+ "field_name" : "responsetime",
+ "by_field_name" : "application"
+}
+```
+
+If you use this `low_varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusually low compared to past application behavior.
+
diff --git a/reference/data-analysis/machine-learning/ml-rare-functions.md b/reference/data-analysis/machine-learning/ml-rare-functions.md
new file mode 100644
index 0000000000..fda99190a9
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-rare-functions.md
@@ -0,0 +1,91 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-rare-functions.html
+---
+
+# Rare functions [ml-rare-functions]
+
+The rare functions detect values that occur rarely in time or rarely for a population.
+
+The `rare` analysis detects anomalies according to the number of distinct rare values. This differs from `freq_rare`, which detects anomalies according to the number of times (frequency) rare values occur.
+
+::::{note}
+* The `rare` and `freq_rare` functions should not be used in conjunction with `exclude_frequent`.
+* You cannot create forecasts for {{anomaly-jobs}} that contain `rare` or `freq_rare` functions.
+* You cannot add rules with conditions to detectors that use `rare` or `freq_rare` functions.
+* Shorter bucket spans (less than 1 hour, for example) are recommended when looking for rare events. The functions model whether something happens in a bucket at least once. With longer bucket spans, it is more likely that entities will be seen in a bucket and therefore they appear less rare. The ideal bucket span depends on the characteristics of the data, but shorter bucket spans are typically measured in minutes, not hours.
+* To model rare data, a learning period of at least 20 buckets is required for typical data.
+
+::::
+
+
+The {{ml-features}} include the following rare functions:
+
+* [`rare`](ml-rare-functions.md#ml-rare)
+* [`freq_rare`](ml-rare-functions.md#ml-freq-rare)
+
+
+## Rare [ml-rare]
+
+The `rare` function detects values that occur rarely in time or rarely for a population. It detects anomalies according to the number of distinct rare values.
+
+This function supports the following properties:
+
+* `by_field_name` (required)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "rare",
+ "by_field_name" : "status"
+}
+```
+
+If you use this `rare` function in a detector in your {{anomaly-job}}, it detects values that are rare in time. It models status codes that occur over time and detects when rare status codes occur compared to the past. For example, you can detect status codes in a web access log that have never (or rarely) occurred before.
+
+```js
+{
+ "function" : "rare",
+ "by_field_name" : "status",
+ "over_field_name" : "clientip"
+}
+```
+
+If you use this `rare` function in a detector in your {{anomaly-job}}, it detects values that are rare in a population. It models status code and client IP interactions that occur. It defines a rare status code as one that occurs for few client IP values compared to the population. It detects client IP values that experience one or more distinct rare status codes compared to the population. For example, in a web access log, a `clientip` that experiences the highest number of different rare status codes compared to the population is regarded as highly anomalous. This analysis is based on the number of different status code values, not the count of occurrences.
+
+::::{note}
+To define a status code as rare the {{ml-features}} look at the number of distinct status codes that occur, not the number of times the status code occurs. If a single client IP experiences a single unique status code, this is rare, even if it occurs for that client IP in every bucket.
+::::
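+For reference, the first `rare` snippet above slots into a complete job definition as follows. This is a minimal sketch; the job name and the `timestamp` field are assumptions, and the short `bucket_span` follows the recommendation in the note at the top of this page:
+
+```console
+PUT _ml/anomaly_detectors/rare_status_example
+{
+ "analysis_config": {
+ "bucket_span": "10m",
+ "detectors": [{
+ "function" : "rare",
+ "by_field_name" : "status"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```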
+
+
+
+## Freq_rare [ml-freq-rare]
+
+The `freq_rare` function detects values that occur rarely for a population. It detects anomalies according to the number of times (frequency) that rare values occur.
+
+This function supports the following properties:
+
+* `by_field_name` (required)
+* `over_field_name` (required)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "freq_rare",
+ "by_field_name" : "uri",
+ "over_field_name" : "clientip"
+}
+```
+
+If you use this `freq_rare` function in a detector in your {{anomaly-job}}, it detects values that are frequently rare in a population. It models URI paths and client IP interactions that occur. It defines a rare URI path as one that is visited by few client IP values compared to the population. It detects the client IP values that experience many interactions with rare URI paths compared to the population. For example, in a web access log, a client IP that visits one or more rare URI paths many times compared to the population is regarded as highly anomalous. This analysis is based on the count of interactions with rare URI paths, not the number of different URI path values.
+
+::::{note}
+A URI path is defined as rare in the same way as the status codes in the previous example: the analysis considers the number of distinct values that occur, not the number of times the URI path occurs. If a single client IP visits a single unique URI path, this is rare, even if it occurs for that client IP in every bucket.
+::::
+
+
diff --git a/reference/data-analysis/machine-learning/ml-sum-functions.md b/reference/data-analysis/machine-learning/ml-sum-functions.md
new file mode 100644
index 0000000000..489b663c96
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-sum-functions.md
@@ -0,0 +1,91 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-sum-functions.html
+---
+
+# Sum functions [ml-sum-functions]
+
+The sum functions detect anomalies when the sum of a field in a bucket is anomalous.
+
+If you want to monitor unusually high totals, use high-sided functions.
+
+If you want to look at drops in totals, use low-sided functions.
+
+If your data is sparse, use `non_null_sum` functions. Buckets without values are ignored; buckets with a zero value are analyzed.
+
+The {{ml-features}} include the following sum functions:
+
+* [`sum`, `high_sum`, `low_sum`](ml-sum-functions.md#ml-sum)
+* [`non_null_sum`, `high_non_null_sum`, `low_non_null_sum`](ml-sum-functions.md#ml-nonnull-sum)
+
+
+## Sum, high_sum, low_sum [ml-sum]
+
+The `sum` function detects anomalies where the sum of a field in a bucket is anomalous.
+
+If you want to monitor unusually high sum values, use the `high_sum` function.
+
+If you want to monitor unusually low sum values, use the `low_sum` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "sum",
+ "field_name" : "expenses",
+ "by_field_name" : "costcenter",
+ "over_field_name" : "employee"
+}
+```
+
+If you use this `sum` function in a detector in your {{anomaly-job}}, it models total expenses per employee for each cost center. For each time bucket, it detects when an employee’s expenses are unusual for a cost center compared to other employees.
+
+```js
+{
+ "function" : "high_sum",
+ "field_name" : "cs_bytes",
+ "over_field_name" : "cs_host"
+}
+```
+
+If you use this `high_sum` function in a detector in your {{anomaly-job}}, it models total `cs_bytes`. It detects `cs_hosts` that transfer unusually high volumes compared to other `cs_hosts`. This example looks for volumes of data transferred from a client to a server on the internet that are unusual compared to other clients. This scenario could be useful to detect data exfiltration or to find users that are abusing internet privileges.
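+For reference, this snippet slots into a complete job definition in the same way as the count-function examples. The following is a minimal sketch; the job name and the `timestamp` field are assumptions:
+
+```console
+PUT _ml/anomaly_detectors/high_sum_example
+{
+ "analysis_config": {
+ "detectors": [{
+ "function" : "high_sum",
+ "field_name" : "cs_bytes",
+ "over_field_name" : "cs_host"
+ }]
+ },
+ "data_description": {
+ "time_field":"timestamp",
+ "time_format": "epoch_ms"
+ }
+}
+```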
+
+
+## Non_null_sum, high_non_null_sum, low_non_null_sum [ml-nonnull-sum]
+
+The `non_null_sum` function is useful if your data is sparse. Buckets without values are ignored and buckets with a zero value are analyzed.
+
+If you want to monitor unusually high totals, use the `high_non_null_sum` function.
+
+If you want to look at drops in totals, use the `low_non_null_sum` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+::::{note}
+Population analysis (that is to say, use of the `over_field_name` property) is not applicable for this function.
+::::
+
+
+```js
+{
+ "function" : "high_non_null_sum",
+ "field_name" : "amount_approved",
+ "by_field_name" : "employee"
+}
+```
+
+If you use this `high_non_null_sum` function in a detector in your {{anomaly-job}}, it models the total `amount_approved` for each employee. It ignores any buckets where the amount is null. It detects employees who approve unusually high amounts compared to their past behavior.
+
diff --git a/reference/data-analysis/machine-learning/ml-time-functions.md b/reference/data-analysis/machine-learning/ml-time-functions.md
new file mode 100644
index 0000000000..d05cd43052
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-time-functions.md
@@ -0,0 +1,76 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ml-time-functions.html
+---
+
+# Time functions [ml-time-functions]
+
+The time functions detect events that happen at unusual times, either of the day or of the week. These functions can be used to find unusual patterns of behavior, typically associated with suspicious user activity.
+
+The {{ml-features}} include the following time functions:
+
+* [`time_of_day`](ml-time-functions.md#ml-time-of-day)
+* [`time_of_week`](ml-time-functions.md#ml-time-of-week)
+
+::::{note}
+* You cannot create forecasts for {{anomaly-jobs}} that contain time functions.
+* The `time_of_day` function is not aware of the difference between days, for instance between weekdays and weekends. When modeling different days, use the `time_of_week` function. In general, the `time_of_week` function is better suited to modeling the behavior of people rather than machines, as people vary their behavior according to the day of the week.
+* Shorter bucket spans (for example, 10 minutes) are recommended when performing a `time_of_day` or `time_of_week` analysis. The time of the events being modeled are not affected by the bucket span, but a shorter bucket span enables quicker alerting on unusual events.
+* Unusual events are flagged based on the previous pattern of the data, not on what we might think of as unusual based on human experience. So, if events typically occur between 3 a.m. and 5 a.m., an event occurring at 3 p.m. is flagged as unusual.
+* When Daylight Saving Time starts or stops, regular events can be flagged as anomalous. This situation occurs because the actual time of the event (as measured against a UTC baseline) has changed. This situation is treated as a step change in behavior and the new times will be learned quickly.
+
+::::
+
+
+
+## Time_of_day [ml-time-of-day]
+
+The `time_of_day` function detects when events occur that are outside normal usage patterns. For example, it detects unusual activity in the middle of the night.
+
+The function expects daily behavior to be similar. If you expect the behavior of your data to differ on Saturdays compared to Wednesdays, the `time_of_week` function is more appropriate.
+
+This function supports the following properties:
+
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "time_of_day",
+ "by_field_name" : "process"
+}
+```
+
+If you use the `time_of_day` function in a detector in your {{anomaly-job}}, it models when events occur throughout the day for each process. It detects when an event occurs for a process at an unusual time of day compared to its past behavior.
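+The detector shown above is only a fragment of a job configuration. As a minimal sketch, the following Python snippet builds a complete request body for the create {{anomaly-jobs}} API; the bucket span, `process.name` field, and time field are hypothetical examples chosen for illustration, not values prescribed by this page.

```python
# Sketch: build the body for the create anomaly job API
# (PUT _ml/anomaly_detectors/<job_id>). The field names used here
# ("process.name", "@timestamp") are hypothetical examples.
job_body = {
    "analysis_config": {
        "bucket_span": "10m",  # shorter spans enable quicker alerting
        "detectors": [
            {
                "function": "time_of_day",
                "by_field_name": "process.name",
            }
        ],
    },
    "data_description": {"time_field": "@timestamp"},
}

print(job_body["analysis_config"]["detectors"][0]["function"])
```

+You would send this body with any Elasticsearch client or with `curl`; the JSON fragment in the example above corresponds to the single entry in the `detectors` array.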
+
+
+## Time_of_week [ml-time-of-week]
+
+The `time_of_week` function detects when events occur that are outside normal usage patterns. For example, it detects login events on the weekend.
+
+::::{important}
+The `time_of_week` function models time in epoch seconds modulo the duration of a week in seconds (604800). This means that the `typical` and `actual` values are the number of seconds after a whole number of weeks since midnight on 1/1/1970 UTC, which was a Thursday. For example, a value of `475` is 475 seconds after midnight on a Thursday in UTC.
+::::
+
+
+This function supports the following properties:
+
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+ "function" : "time_of_week",
+ "by_field_name" : "eventcode",
+ "over_field_name" : "workstation"
+}
+```
+
+If you use the `time_of_week` function in a detector in your {{anomaly-job}}, it models when events occur throughout the week for each `eventcode`. It detects events for a particular workstation that occur at an unusual time during the week for that `eventcode`, compared to other workstations.
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md
new file mode 100644
index 0000000000..053620845a
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md
@@ -0,0 +1,41 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-apache.html
+---
+
+# Apache {{anomaly-detect}} configurations [ootb-ml-jobs-apache]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use the Apache integration in {{fleet}} or you use {{filebeat}} to ship access logs from your [Apache](https://httpd.apache.org/) HTTP servers to {{es}}. The jobs assume that you use fields and data types from the Elastic Common Schema (ECS).
+
+
+## Apache access logs [apache-access-logs]
+
+These {{anomaly-jobs}} find unusual activity in HTTP access logs.
+
+For more details, see the {{dfeed}} and job definitions in [GitHub](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json). Note that these jobs are available in {{kib}} only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L11).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_apache | Detects low request rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L215) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L370) |
+| source_ip_request_rate_apache | Detects unusual source IPs - high request rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L176) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L349) |
+| source_ip_url_count_apache | Detects unusual source IPs - high distinct count of URLs. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L136) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L328) |
+| status_code_rate_apache | Detects unusual status code rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L90) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L307) |
+| visitor_rate_apache | Detects unusual visitor rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L47) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L260) |
+
+
+## Apache access logs ({{filebeat}}) [apache-access-logs-filebeat]
+
+These legacy {{anomaly-jobs}} find unusual activity in HTTP access logs. For the latest versions, install the Apache integration in {{fleet}}; see [Apache access logs](ootb-ml-jobs-apache.md#apache-access-logs).
+
+For more details, see the {{dfeed}} and job definitions in [GitHub](https://github.com/elastic/kibana/tree/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml).
+
+These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/manifest.json#L8).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_ecs | Detects low request rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/low_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_low_request_rate_ecs.json) |
+| source_ip_request_rate_ecs | Detects unusual source IPs - high request rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/source_ip_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_source_ip_request_rate_ecs.json) |
+| source_ip_url_count_ecs | Detect unusual source IPs - high distinct count of URLs (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/source_ip_url_count_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_source_ip_url_count_ecs.json) |
+| status_code_rate_ecs | Detects unusual status code rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/status_code_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_status_code_rate_ecs.json) |
+| visitor_rate_ecs | Detects unusual visitor rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/visitor_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_visitor_rate_ecs.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md
new file mode 100644
index 0000000000..1db0ac0231
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-apm.html
+---
+
+# APM {{anomaly-detect}} configurations [ootb-ml-jobs-apm]
+
+This {{anomaly-job}} appears in the {{apm-app}} and the {{ml-app}} app when you have data from APM Agents or an APM Server in your cluster. It is available only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/manifest.json).
+
+For more information about {{anomaly-detect}} in the {{apm-app}}, refer to [{{ml-cap}} integration](/solutions/observability/apps/integrate-with-machine-learning.md).
+
+
+## Transactions [apm-transaction-jobs]
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| apm_tx_metrics | Detects anomalies in transaction latency, throughput and error percentage for metric data. | [code](https://github.com/elastic/kibana/blob/main/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/ml/apm_tx_metrics.json) | [code](https://github.com/elastic/kibana/blob/main/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/ml/datafeed_apm_tx_metrics.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
new file mode 100644
index 0000000000..ce1f7d8b92
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
@@ -0,0 +1,21 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-auditbeat.html
+---
+
+# {{auditbeat}} {{anomaly-detect}} configurations [ootb-ml-jobs-auditbeat]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use [{{auditbeat}}](beats://docs/reference/auditbeat/auditbeat.md) to audit process activity on your systems. For more details, see the {{dfeed}} and job definitions in GitHub.
+
+
+## Auditbeat Docker processes [auditbeat-process-docker-ecs]
+
+Detect unusual processes in Docker containers from auditd data (ECS).
+
+These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/manifest.json#L8).
+
+| Name | Description | Job (JSON)| Datafeed |
+| --- | --- | --- | --- |
+| docker_high_count_process_events_ecs | Detect unusual increases in process execution rates in Docker containers (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/docker_high_count_process_events_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/datafeed_docker_high_count_process_events_ecs.json) |
+| docker_rare_process_activity_ecs | Detect rare process executions in Docker containers (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/docker_rare_process_activity_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/datafeed_docker_rare_process_activity_ecs.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md
new file mode 100644
index 0000000000..245d55022d
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md
@@ -0,0 +1,27 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-logs-ui.html
+---
+
+# Logs {{anomaly-detect}} configurations [ootb-ml-jobs-logs-ui]
+
+These {{anomaly-jobs}} appear by default in the [{{logs-app}}](/solutions/observability/logs/explore-logs.md) in {{kib}}. For more information about their usage, refer to [Categorize log entries](/solutions/observability/logs/categorize-log-entries.md) and [Inspect log anomalies](/solutions/observability/logs/inspect-log-anomalies.md).
+
+
+## Log analysis [logs-ui-analysis]
+
+Detect anomalies in log entries via the Logs UI.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| log_entry_rate | Detects anomalies in the log entry ingestion rate | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_analysis/ml/log_entry_rate.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_analysis/ml/datafeed_log_entry_rate.json) |
+
+
+## Log entry categories [logs-ui-categories]
+
+Detect anomalies in count of log entries by category.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| log_entry_categories_count | Detects anomalies in count of log entries by category | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_categories/ml/log_entry_categories_count.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_categories/ml/datafeed_log_entry_categories_count.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md
new file mode 100644
index 0000000000..ceb58cf5f6
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md
@@ -0,0 +1,22 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-metricbeat.html
+---
+
+# {{metricbeat}} {{anomaly-detect}} configurations [ootb-ml-jobs-metricbeat]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use the [{{metricbeat}} system module](beats://docs/reference/metricbeat/metricbeat-module-system.md) to monitor your servers. For more details, see the {{dfeed}} and job definitions in GitHub.
+
+
+## {{metricbeat}} system [metricbeat-system-ecs]
+
+Detect anomalies in {{metricbeat}} System data (ECS).
+
+These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/manifest.json#L8).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_mean_cpu_iowait_ecs | Detect unusual increases in CPU time spent in iowait (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/high_mean_cpu_iowait_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_high_mean_cpu_iowait_ecs.json) |
+| max_disk_utilization_ecs | Detect unusual increases in disk utilization (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/max_disk_utilization_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_max_disk_utilization_ecs.json) |
+| metricbeat_outages_ecs | Detect unusual decreases in metricbeat documents (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/metricbeat_outages_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_metricbeat_outages_ecs.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md
new file mode 100644
index 0000000000..e41df52ccc
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md
@@ -0,0 +1,31 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-metrics-ui.html
+---
+
+# Metrics {{anomaly-detect}} configurations [ootb-ml-jobs-metrics-ui]
+
+These {{anomaly-jobs}} can be created in the [{{infrastructure-app}}](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) in {{kib}}. For more information about their usage, refer to [Inspect metric anomalies](/solutions/observability/infra-and-hosts/detect-metric-anomalies.md).
+
+
+## Metrics hosts [metrics-ui-hosts]
+
+Detect anomalous memory and network behavior on hosts.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| hosts_memory_usage | Identify unusual spikes in memory usage across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_memory_usage.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_memory_usage.json) |
+| hosts_network_in | Identify unusual spikes in inbound traffic across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_network_in.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_network_in.json) |
+| hosts_network_out | Identify unusual spikes in outbound traffic across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_network_out.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_network_out.json) |
+
+
+## Metrics Kubernetes [metrics-ui-k8s]
+
+Detect anomalous memory and network behavior on Kubernetes pods.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| k8s_memory_usage | Identify unusual spikes in memory usage across Kubernetes pods. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_memory_usage.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_memory_usage.json) |
+| k8s_network_in | Identify unusual spikes in inbound traffic across Kubernetes pods. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_network_in.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_network_in.json) |
+| k8s_network_out | Identify unusual spikes in outbound traffic across Kubernetes pods. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_network_out.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_network_out.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md
new file mode 100644
index 0000000000..9f3c3412cb
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md
@@ -0,0 +1,39 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-nginx.html
+---
+
+# Nginx {{anomaly-detect}} configurations [ootb-ml-jobs-nginx]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use the Nginx integration in {{fleet}} or you use {{filebeat}} to ship access logs from your [Nginx](http://nginx.org/) HTTP servers to {{es}}. The jobs assume that you use fields and data types from the Elastic Common Schema (ECS).
+
+
+## Nginx access logs [nginx-access-logs]
+
+Find unusual activity in HTTP access logs.
+
+These jobs are available in {{kib}} only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_nginx | Detect low request rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L215) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L370) |
+| source_ip_request_rate_nginx | Detect unusual source IPs - high request rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L176) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L349) |
+| source_ip_url_count_nginx | Detect unusual source IPs - high distinct count of URLs | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L136) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L328) |
+| status_code_rate_nginx | Detect unusual status code rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L90) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L307) |
+| visitor_rate_nginx | Detect unusual visitor rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L47) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L260) |
+
+
+## Nginx access logs ({{filebeat}}) [nginx-access-logs-filebeat]
+
+These legacy {{anomaly-jobs}} find unusual activity in HTTP access logs. For the latest versions, install the Nginx integration in {{fleet}}; see [Nginx access logs](ootb-ml-jobs-nginx.md#nginx-access-logs).
+
+These jobs exist in {{kib}} only if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/manifest.json).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_ecs | Detect low request rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/low_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_low_request_rate_ecs.json) |
+| source_ip_request_rate_ecs | Detect unusual source IPs - high request rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/source_ip_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_source_ip_request_rate_ecs.json) |
+| source_ip_url_count_ecs | Detect unusual source IPs - high distinct count of URLs (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/source_ip_url_count_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_source_ip_url_count_ecs.json) |
+| status_code_rate_ecs | Detect unusual status code rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/status_code_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_status_code_rate_ecs.json) |
+| visitor_rate_ecs | Detect unusual visitor rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/visitor_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_visitor_rate_ecs.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md
new file mode 100644
index 0000000000..b35701b203
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md
@@ -0,0 +1,217 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-siem.html
+---
+
+# Security {{anomaly-detect}} configurations [ootb-ml-jobs-siem]
+
+These {{anomaly-jobs}} automatically detect file system and network anomalies on your hosts. They appear in the **Anomaly Detection** interface of the {{security-app}} in {{kib}} when you have data that matches their configuration. For more information, refer to [Anomaly detection with machine learning](/solutions/security/advanced-entity-analytics/anomaly-detection.md).
+
+
+## Security: Authentication [security-authentication]
+
+Detect anomalous activity in your ECS-compatible authentication logs.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json). In the {{security-app}}, the jobs look in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+By default, when you create these jobs in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json#L7) and then select it in the job wizard.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| auth_high_count_logon_events | Looks for an unusually large spike in successful authentication events. This can be due to password spraying, user enumeration, or brute force activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events.json) |
+| auth_high_count_logon_events_for_a_source_ip | Looks for an unusually large spike in successful authentication events from a particular source IP address. This can be due to password spraying, user enumeration or brute force activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events_for_a_source_ip.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events_for_a_source_ip.json) |
+| auth_high_count_logon_fails | Looks for an unusually large spike in authentication failure events. This can be due to password spraying, user enumeration, or brute force activity and may be a precursor to account takeover or credentialed access. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_fails.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_fails.json) |
+| auth_rare_hour_for_a_user | Looks for a user logging in at a time of day that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different time zones. In addition, unauthorized user activity often takes place during non-business hours. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_hour_for_a_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_hour_for_a_user.json) |
+| auth_rare_source_ip_for_a_user | Looks for a user logging in from an IP address that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different locations. An unusual source IP address for a username could also be due to lateral movement when a compromised account is used to pivot between hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_source_ip_for_a_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_source_ip_for_a_user.json) |
+| auth_rare_user | Looks for an unusual user name in the authentication logs. An unusual user name is one way of detecting credentialed access by means of a new or dormant user account. If a user account that is normally inactive (for example, because the user has left the organization) becomes active, it may be due to credentialed access using a compromised account password. Threat actors will sometimes also create new users as a means of persisting in a compromised web application. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_user.json) |
+| suspicious_login_activity | Detects an unusually high number of authentication attempts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/suspicious_login_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_suspicious_login_activity.json) |
+
+
+## Security: CloudTrail [security-cloudtrail-jobs]
+
+Detect suspicious activity recorded in your CloudTrail logs.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_cloudtrail/manifest.json). In the {{security-app}}, the jobs look in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_distinct_count_error_message | Looks for a spike in the rate of an error message. This may simply indicate an impending service failure, but it can also be a byproduct of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/high_distinct_count_error_message.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_high_distinct_count_error_message.json) |
+| rare_error_code | Looks for unusual errors. Rare and unusual errors may simply indicate an impending service failure but they can also be byproducts of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_error_code.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_error_code.json) |
+| rare_method_for_a_city | Looks for AWS API calls that, while not inherently suspicious or abnormal, originate from a geolocation (city) that is unusual. This can be the result of compromised credentials or keys. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_city.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_city.json) |
+| rare_method_for_a_country | Looks for AWS API calls that, while not inherently suspicious or abnormal, originate from a geolocation (country) that is unusual. This can be the result of compromised credentials or keys. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_country.json) |
+| rare_method_for_a_username | Looks for AWS API calls that, while not inherently suspicious or abnormal, originate from a user context that does not normally call the method. This can be the result of compromised credentials or keys as someone uses a valid account to persist, move laterally, or exfiltrate data. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_username.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_username.json) |
+
+
+## Security: Host [security-host-jobs]
+
+Anomaly detection jobs for host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+To access the host traffic anomalies dashboard in Kibana, go to: `Security -> Dashboards -> Host Traffic Anomalies`.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_count_events_for_a_host_name | Looks for a sudden spike in host based traffic. This can be due to a range of security issues, such as a compromised system, DDoS attacks, malware infections, privilege escalation, or data exfiltration. | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/high_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_high_count_events_for_a_host_name.json) |
+| low_count_events_for_a_host_name | Looks for a sudden drop in host based traffic. This can be due to a range of security issues, such as a compromised system, a failed service, or a network misconfiguration. | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/low_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_low_count_events_for_a_host_name.json) |
+
+
+## Security: Linux [security-linux-jobs]
+
+Anomaly detection jobs for Linux host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| v3_linux_anomalous_network_activity | Looks for unusual processes using the network which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_activity.json) |
+| v3_linux_anomalous_network_port_activity | Looks for unusual destination port activity that could indicate command-and-control, persistence mechanism, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_port_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_port_activity.json) |
+| v3_linux_anomalous_process_all_hosts | Looks for processes that are unusual to all Linux hosts. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_process_all_hosts.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_process_all_hosts.json) |
+| v3_linux_anomalous_user_name | Looks for rare and unusual user names. Activity from a user account that is not normally active may indicate unauthorized changes or activity by an unauthorized user, which may be credentialed access or lateral movement. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_user_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_user_name.json) |
+| v3_linux_network_configuration_discovery | Looks for commands related to system network configuration discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network configuration discovery to increase their understanding of connected networks and hosts. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_configuration_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_network_configuration_discovery.json) |
+| v3_linux_network_connection_discovery | Looks for commands related to system network connection discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network connection discovery to increase their understanding of connected services and systems. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_connection_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_network_connection_discovery.json) |
+| v3_linux_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_process.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_process.json) |
+| v3_linux_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_user.json) |
+| v3_linux_rare_sudo_user | Looks for sudo activity from an unusual user context. Unusual user context changes can be due to privilege escalation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_sudo_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_sudo_user.json) |
+| v3_linux_rare_user_compiler | Looks for compiler activity by a user context which does not normally run compilers. This can be ad-hoc software changes or unauthorized software deployment. This can also be due to local privilege elevation via locally run exploits or malware activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_user_compiler.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_user_compiler.json) |
+| v3_linux_system_information_discovery | Looks for commands related to system information discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system information discovery to gather detailed information about system configuration and software versions. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_information_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_information_discovery.json) |
+| v3_linux_system_process_discovery | Looks for commands related to system process discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system process discovery to increase their understanding of software applications running on a target host or network. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_process_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_process_discovery.json) |
+| v3_linux_system_user_discovery | Looks for commands related to system user or owner discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system owner or user discovery to identify currently active or primary users of a system. This may be a precursor to additional discovery, credential dumping, or privilege elevation activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_user_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_user_discovery.json) |
+| v3_rare_process_by_host_linux | Looks for processes that are unusual to a particular Linux host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_rare_process_by_host_linux.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_rare_process_by_host_linux.json) |
+
+
+## Security: Network [security-network-jobs]
+
+Detect anomalous network activity in your ECS-compatible network logs.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+By default, when you create these jobs in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json#L7) and then select it in the job wizard.
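+Data views can also be created programmatically. A hedged sketch of the request payload (the endpoint `POST /api/data_views/data_view` and these field names are assumptions based on recent Kibana versions; check the API reference for your version):
+
+```python
+import json
+
+# Illustrative payload for creating a Kibana data view that spans the same
+# indices the Security network jobs read from. The index list and display
+# name are examples, not values taken from the linked manifest.
+indices = ["auditbeat-*", "filebeat-*", "logs-*", "packetbeat-*"]
+
+payload = {
+    "data_view": {
+        "title": ",".join(indices),       # comma-separated index patterns
+        "name": "Security network data",  # display name in the job wizard
+        "timeFieldName": "@timestamp",
+    }
+}
+
+print(json.dumps(payload, indent=2))
+```
+
+Once created, the data view appears in the {{ml-app}} job wizard's source selection alongside any existing data views.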
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_count_by_destination_country | Looks for an unusually large spike in network activity to one destination country in the network logs. This could be due to unusually large amounts of reconnaissance or enumeration traffic. Data exfiltration activity may also produce such a surge in traffic to a destination country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_by_destination_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_by_destination_country.json) |
+| high_count_network_denies | Looks for an unusually large spike in network traffic that was denied by network ACLs or firewall rules. Such a burst of denied traffic is usually either 1) a misconfigured application or firewall or 2) suspicious or malicious activity. Unsuccessful attempts at network transit, in order to connect to command-and-control (C2), or engage in data exfiltration, may produce a burst of failed connections. This could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_denies.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_denies.json) |
+| high_count_network_events | Looks for an unusually large spike in network traffic. Such a burst of traffic, if not caused by a surge in business activity, can be due to suspicious or malicious activity. Large-scale data exfiltration may produce a burst of network traffic; this could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_events.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_events.json) |
+| rare_destination_country | Looks for an unusual destination country name in the network logs. This can be due to initial access, persistence, command-and-control, or exfiltration activity. For example, when a user clicks on a link in a phishing email or opens a malicious document, a request may be sent to download and run a payload from a server in a country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/rare_destination_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_rare_destination_country.json) |
+
+
+## Security: {{packetbeat}} [security-packetbeat-jobs]
+
+Detect suspicious network activity in {{packetbeat}} data.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| packetbeat_dns_tunneling | Looks for unusual DNS activity that could indicate command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_dns_tunneling.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_dns_tunneling.json) |
+| packetbeat_rare_dns_question | Looks for unusual DNS activity that could indicate command-and-control activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_dns_question.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_dns_question.json) |
+| packetbeat_rare_server_domain | Looks for unusual HTTP or TLS destination domain activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_server_domain.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_server_domain.json) |
+| packetbeat_rare_urls | Looks for unusual web browsing URL activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_urls.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_urls.json) |
+| packetbeat_rare_user_agent | Looks for unusual HTTP user agent activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_user_agent.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_user_agent.json) |
+
+
+## Security: Windows [security-windows-jobs]
+
+Anomaly detection jobs for Windows host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+Any additional requirements, such as installing Windows System Monitor (Sysmon) or auditing process creation in the Windows security event log, are listed for each job.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| v3_rare_process_by_host_windows | Looks for processes that are unusual to a particular Windows host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_rare_process_by_host_windows.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_rare_process_by_host_windows.json) |
+| v3_windows_anomalous_network_activity | Looks for unusual processes using the network which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_network_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_network_activity.json) |
+| v3_windows_anomalous_path_activity | Looks for activity in unusual paths that may indicate execution of malware or persistence mechanisms. Windows payloads often execute from user profile paths. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_path_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_path_activity.json) |
+| v3_windows_anomalous_process_all_hosts | Looks for processes that are unusual to all Windows hosts. Such unusual processes may indicate execution of unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_all_hosts.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_all_hosts.json) |
+| v3_windows_anomalous_process_creation | Looks for unusual process relationships which may indicate execution of malware or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_creation.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_creation.json) |
+| v3_windows_anomalous_script | Looks for unusual PowerShell scripts that may indicate execution of malware or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_script.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_script.json) |
+| v3_windows_anomalous_service | Looks for rare and unusual Windows service names which may indicate execution of unauthorized services, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_service.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_service.json) |
+| v3_windows_anomalous_user_name | Looks for rare and unusual user names. Activity from a user account that is not normally active may indicate unauthorized changes or activity by an unauthorized user, which may be credentialed access or lateral movement. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_user_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_user_name.json) |
+| v3_windows_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_process.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_process.json) |
+| v3_windows_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_user.json) |
+| v3_windows_rare_user_runas_event | Unusual user context switches can be due to privilege escalation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_runas_event.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_runas_event.json) |
+| v3_windows_rare_user_type10_remote_login | Unusual RDP (remote desktop protocol) user logins can indicate account takeover or credentialed access. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_type10_remote_login.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_type10_remote_login.json) |
+
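+The linked job files follow the standard {{ml}} anomaly detection job schema. As a rough sketch of the shape of a rare-process-by-host style configuration (the values here are illustrative assumptions, not copied from the linked JSON; refer to the linked files for the real configurations):
+
+```python
+import json
+
+# Minimal anomaly detection job config in the style of
+# v3_rare_process_by_host_windows: a "rare" detector modelling how unusual
+# a process name is, partitioned per host. Values are illustrative only.
+job = {
+    "job_id": "example_rare_process_by_host",
+    "analysis_config": {
+        "bucket_span": "15m",
+        "detectors": [
+            {
+                "function": "rare",
+                "by_field_name": "process.name",
+                "partition_field_name": "host.name",
+            }
+        ],
+        "influencers": ["host.name", "process.name"],
+    },
+    "data_description": {"time_field": "@timestamp"},
+}
+
+print(json.dumps(job, indent=2))
+```
+
+Pairing `rare` with `partition_field_name` means each host gets its own model, so a process that is common on one host can still be flagged as anomalous on another.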
+
+## Security: Elastic Integrations [security-integrations-jobs]
+
+[Elastic Integrations](integration-docs://docs/reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, {{transforms}}, and in this case, {{ml}} capabilities for Security.
+
+The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment.
+
+* [Data Exfiltration Detection](integration-docs://docs/reference/ded.md)
+* [Domain Generation Algorithm Detection](integration-docs://docs/reference/dga.md)
+* [Lateral Movement Detection](integration-docs://docs/reference/lmd.md)
+* [Living off the Land Attack Detection](integration-docs://docs/reference/problemchild.md)
+
+**Domain Generation Algorithm (DGA) Detection**
+
+{{ml-cap}} solution package to detect domain generation algorithm (DGA) activity in your network data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/dga.md).
+
+| Name | Description |
+| --- | --- |
+| dga_high_sum_probability | Detect domain generation algorithm (DGA) activity in your network data. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/dga/kibana/ml_module/dga-ml.json).
+
+**Living off the Land Attack (LotL) Detection**
+
+{{ml-cap}} solution package, also known as ProblemChild, to detect Living off the Land (LotL) attacks in your environment. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/problemchild.md).
+
+| Name | Description |
+| --- | --- |
+| problem_child_rare_process_by_host | Looks for a process that has been classified as malicious on a host that does not commonly manifest malicious process activity. |
+| problem_child_high_sum_by_host | Looks for a set of one or more malicious child processes on a single host. |
+| problem_child_rare_process_by_user | Looks for a process that has been classified as malicious where the user context is unusual and does not commonly manifest malicious process activity. |
+| problem_child_rare_process_by_parent | Looks for rare malicious child processes spawned by a parent process. |
+| problem_child_high_sum_by_user | Looks for a set of one or more malicious processes, started by the same user. |
+| problem_child_high_sum_by_parent | Looks for a set of one or more malicious child processes spawned by the same parent process. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/problemchild/kibana/ml_module/problemchild-ml.json).
+
+**Data Exfiltration Detection (DED)**
+
+A {{ml-cap}} package that detects data exfiltration in your network and file data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/ded.md).
+
+| Name | Description |
+| --- | --- |
+| ded_high_sent_bytes_destination_geo_country_iso_code | Detects data exfiltration to an unusual geo-location (by country ISO code). |
+| ded_high_sent_bytes_destination_ip | Detects data exfiltration to an unusual geo-location (by IP address). |
+| ded_high_sent_bytes_destination_port | Detects data exfiltration to an unusual destination port. |
+| ded_high_sent_bytes_destination_region_name | Detects data exfiltration to an unusual geo-location (by region name). |
+| ded_high_bytes_written_to_external_device | Detects data exfiltration activity by identifying high bytes written to an external device. |
+| ded_rare_process_writing_to_external_device | Detects data exfiltration activity by identifying a file write started by a rare process to an external device. |
+| ded_high_bytes_written_to_external_device_airdrop | Detects data exfiltration activity by identifying high bytes written to an external device via AirDrop. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/ded/kibana/ml_module/ded-ml.json).
+
+**Lateral Movement Detection (LMD)**
+
+A {{ml-cap}} package that detects lateral movement based on file transfer activity and Windows RDP events. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/lmd.md).
+
+| Name | Description |
+| --- | --- |
+| lmd_high_count_remote_file_transfer | Detects an unusually high number of file transfers to a remote host in the network. |
+| lmd_high_file_size_remote_file_transfer | Detects unusually large files shared with a remote host in the network. |
+| lmd_rare_file_extension_remote_transfer | Detects rare file extensions in file transfers to a remote host in the network. |
+| lmd_rare_file_path_remote_transfer | Detects file transfers to unusual folders and directories. |
+| lmd_high_mean_rdp_session_duration | Detects an unusually high mean RDP session duration. |
+| lmd_high_var_rdp_session_duration | Detects unusually high variance in RDP session duration. |
+| lmd_high_sum_rdp_number_of_processes | Detects an unusually high number of processes started in a single RDP session. |
+| lmd_unusual_time_weekday_rdp_session_start | Detects an RDP session started at an unusual time or weekday. |
+| lmd_high_rdp_distinct_count_source_ip_for_destination | Detects a high count of source IPs making an RDP connection with a single destination IP. |
+| lmd_high_rdp_distinct_count_destination_ip_for_source | Detects a high count of destination IPs establishing an RDP connection with a single source IP. |
+| lmd_high_mean_rdp_process_args | Detects an unusually high number of process arguments in an RDP session. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/lmd/kibana/ml_module/lmd-ml.json).
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md
new file mode 100644
index 0000000000..f0d2e2264d
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-uptime.html
+---
+
+# Uptime {{anomaly-detect}} configurations [ootb-ml-jobs-uptime]
+
+If you have appropriate {{heartbeat}} data in {{es}}, you can enable this {{anomaly-job}} in the [{{uptime-app}}](/solutions/observability/apps/synthetic-monitoring.md#monitoring-uptime) in {{kib}}. For more usage information, refer to [Inspect uptime duration anomalies](/solutions/observability/apps/inspect-uptime-duration-anomalies.md).
+
+
+## Uptime: {{heartbeat}} [uptime-heartbeat]
+
+Detect latency issues in heartbeat monitors.
+
+These configurations are available in {{kib}} only if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/manifest.json).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_latency_by_geo | Identify periods of increased latency across geographical regions | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/ml/high_latency_by_geo.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/ml/datafeed_high_latency_by_geo.json) |
+
diff --git a/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md b/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md
new file mode 100644
index 0000000000..4e3302e945
--- /dev/null
+++ b/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md
@@ -0,0 +1,30 @@
+---
+navigation_title: "Supplied configurations"
+mapped_pages:
+ - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs.html
+---
+
+# Supplied {{anomaly-detect}} configurations [ootb-ml-jobs]
+
+
+{{anomaly-jobs-cap}} contain the configuration information and metadata necessary to perform an analytics task. {{kib}} can recognize certain types of data and provides specialized wizards for that context. This page lists the categories of {{anomaly-jobs}} that are ready to use via {{kib}} in **Machine learning**. Refer to [Create {{anomaly-jobs}}](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md) to learn more about creating a job from a supplied configuration. The Logs and Metrics supplied configurations can also be created from the related solution UI in {{kib}}.
+
+* [Apache](/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md)
+* [APM](/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md)
+* [{{auditbeat}}](/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md)
+* [Logs](/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md)
+* [{{metricbeat}}](/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md)
+* [Metrics](/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md)
+* [Nginx](/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md)
+* [Security](/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md)
+* [Uptime](/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md)
+
+::::{note}
+The configurations are only available if data exists that matches the queries specified in the manifest files. These recognizer queries are linked in the descriptions of the individual configurations.
+::::
+
+
+
+## Model memory considerations [ootb-ml-model-memory]
+
+By default, these jobs have `model_memory_limit` values that are deemed appropriate for typical user environments and data characteristics. If your environment or your data is atypical and your jobs reach a memory status value of `soft_limit` or `hard_limit`, you might need to update the model memory limits. For more information, see [Working with {{anomaly-detect}} at scale](/explore-analyze/machine-learning/anomaly-detection/anomaly-detection-scale.md#set-model-memory-limit).
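+
+If you need to raise a limit, you can use the update {{anomaly-jobs}} API. This is a minimal sketch; the job name `low_request_rate` and the `256mb` limit are illustrative:
+
+```console
+POST _ml/anomaly_detectors/low_request_rate/_update
+{
+  "analysis_limits": {
+    "model_memory_limit": "256mb"
+  }
+}
+```
+
+In most cases a job must be closed before its model memory limit can be increased.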
diff --git a/reference/data-analysis/observability/index.md b/reference/data-analysis/observability/index.md
new file mode 100644
index 0000000000..5f4d4483e9
--- /dev/null
+++ b/reference/data-analysis/observability/index.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/observability/current/metrics-reference.html
+---
+
+# Metrics reference [metrics-reference]
+
+Learn about the key metrics displayed in the Infrastructure app and how they are calculated.
+
+* [Host metrics](/reference/data-analysis/observability/observability-host-metrics-serverless.md)
+* [Container metrics](/reference/data-analysis/observability/observability-container-metrics-serverless.md)
+* [Kubernetes pod metrics](/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md)
+* [AWS metrics](/reference/data-analysis/observability/observability-aws-metrics-serverless.md)
+
+
+
+
+
diff --git a/reference/data-analysis/observability/metrics-reference-serverless.md b/reference/data-analysis/observability/metrics-reference-serverless.md
new file mode 100644
index 0000000000..70b5380366
--- /dev/null
+++ b/reference/data-analysis/observability/metrics-reference-serverless.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html
+---
+
+# Metrics reference [observability-metrics-reference]
+
+Learn about the key metrics displayed in the Infrastructure UI and how they are calculated.
+
+* [Host metrics](/reference/data-analysis/observability/observability-host-metrics-serverless.md)
+* [Kubernetes pod metrics](/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md)
+* [Container metrics](/reference/data-analysis/observability/observability-container-metrics-serverless.md)
+* [AWS metrics](/reference/data-analysis/observability/observability-aws-metrics-serverless.md)
+
+
+
+
+
diff --git a/reference/data-analysis/observability/observability-aws-metrics-serverless.md b/reference/data-analysis/observability/observability-aws-metrics-serverless.md
new file mode 100644
index 0000000000..1d7a4ba631
--- /dev/null
+++ b/reference/data-analysis/observability/observability-aws-metrics-serverless.md
@@ -0,0 +1,66 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/serverless/current/observability-aws-metrics.html
+---
+
+# AWS metrics [observability-aws-metrics]
+
+::::{important}
+Using this module generates additional AWS charges for GetMetricData API requests.
+
+::::
+
+
+
+## Monitor EC2 instances [monitor-ec2-instances]
+
+To analyze EC2 instance metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `aws.ec2.cpu.total.pct`. |
+| **Inbound Traffic** | Average of `aws.ec2.network.in.bytes_per_sec`. |
+| **Outbound Traffic** | Average of `aws.ec2.network.out.bytes_per_sec`. |
+| **Disk Reads (Bytes)** | Average of `aws.ec2.diskio.read.bytes_per_sec`. |
+| **Disk Writes (Bytes)** | Average of `aws.ec2.diskio.write.bytes_per_sec`. |
+
+
+## Monitor S3 buckets [monitor-s3-buckets]
+
+To analyze S3 bucket metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **Bucket Size** | Average of `aws.s3_daily_storage.bucket.size.bytes`. |
+| **Total Requests** | Average of `aws.s3_request.requests.total`. |
+| **Number of Objects** | Average of `aws.s3_daily_storage.number_of_objects`. |
+| **Downloads (Bytes)** | Average of `aws.s3_request.downloaded.bytes`. |
+| **Uploads (Bytes)** | Average of `aws.s3_request.uploaded.bytes`. |
+
+
+## Monitor SQS queues [monitor-sqs-queues]
+
+To analyze SQS queue metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **Messages Available** | Max of `aws.sqs.messages.visible`. |
+| **Messages Delayed** | Max of `aws.sqs.messages.delayed`. |
+| **Messages Added** | Max of `aws.sqs.messages.sent`. |
+| **Messages Returned Empty** | Max of `aws.sqs.messages.not_visible`. |
+| **Oldest Message** | Max of `aws.sqs.oldest_message_age.sec`. |
+
+
+## Monitor RDS databases [monitor-rds-databases]
+
+To analyze RDS database metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `aws.rds.cpu.total.pct`. |
+| **Connections** | Average of `aws.rds.database_connections`. |
+| **Queries Executed** | Average of `aws.rds.queries`. |
+| **Active Transactions** | Average of `aws.rds.transactions.active`. |
+| **Latency** | Average of `aws.rds.latency.dml`. |
+
+For information about the fields used by the Infrastructure UI to display AWS services metrics, see the [Infrastructure app fields](/reference/observability/serverless/infrastructure-app-fields.md).
diff --git a/reference/data-analysis/observability/observability-container-metrics-serverless.md b/reference/data-analysis/observability/observability-container-metrics-serverless.md
new file mode 100644
index 0000000000..033030bd46
--- /dev/null
+++ b/reference/data-analysis/observability/observability-container-metrics-serverless.md
@@ -0,0 +1,65 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/serverless/current/observability-container-metrics.html
+---
+
+# Container metrics [observability-container-metrics]
+
+Learn about key container metrics displayed in the Infrastructure UI:
+
+* [Docker](#key-metrics-docker)
+* [Kubernetes](#key-metrics-kubernetes)
+
+
+## Docker container metrics [key-metrics-docker]
+
+These are the key metrics displayed for Docker containers.
+
+
+### CPU usage metrics [key-metrics-docker-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average CPU for the container.<br>**Field Calculation:** `average(docker.cpu.total.pct)` |
+
+
+### Memory metrics [key-metrics-docker-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Usage (%)** | Average memory usage for the container.<br>**Field Calculation:** `average(docker.memory.usage.pct)` |
+
+
+### Network metrics [key-metrics-docker-network]
+
+| Metric | Description |
+| --- | --- |
+| **Inbound Traffic (RX)** | Derivative of the maximum of `docker.network.in.bytes` scaled to a 1 second rate.<br>**Field Calculation:** `average(docker.network.inbound.bytes) * 8 / (max(metricset.period, kql='docker.network.inbound.bytes: *') / 1000)` |
+| **Outbound Traffic (TX)** | Derivative of the maximum of `docker.network.out.bytes` scaled to a 1 second rate.<br>**Field Calculation:** `average(docker.network.outbound.bytes) * 8 / (max(metricset.period, kql='docker.network.outbound.bytes: *') / 1000)` |
+
+
+### Disk metrics [observability-container-metrics-disk-metrics]
+
+| Metric | Description |
+| --- | --- |
+| **Disk Read IOPS** | Average count of read operations from the device per second.<br>**Field Calculation:** `counter_rate(max(docker.diskio.read.ops), kql='docker.diskio.read.ops: *')` |
+| **Disk Write IOPS** | Average count of write operations from the device per second.<br>**Field Calculation:** `counter_rate(max(docker.diskio.write.ops), kql='docker.diskio.write.ops: *')` |
+
+
+## Kubernetes container metrics [key-metrics-kubernetes]
+
+These are the key metrics displayed for Kubernetes (containerd) containers.
+
+
+### CPU usage metrics [key-metrics-kubernetes-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average CPU for the container.<br>**Field Calculation:** `average(kubernetes.container.cpu.usage.limit.pct)` |
+
+
+### Memory metrics [key-metrics-kubernetes-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Usage (%)** | Average memory usage for the container.<br>**Field Calculation:** `average(kubernetes.container.memory.usage.limit.pct)` |
diff --git a/reference/data-analysis/observability/observability-host-metrics-serverless.md b/reference/data-analysis/observability/observability-host-metrics-serverless.md
new file mode 100644
index 0000000000..2c6d4245c8
--- /dev/null
+++ b/reference/data-analysis/observability/observability-host-metrics-serverless.md
@@ -0,0 +1,94 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/serverless/current/observability-host-metrics.html
+---
+
+# Host metrics [observability-host-metrics]
+
+Learn about key host metrics displayed in the Infrastructure UI:
+
+* [Hosts](#key-metrics-hosts)
+* [CPU usage](#key-metrics-cpu)
+* [Memory](#key-metrics-memory)
+* [Log](#key-metrics-log)
+* [Network](#key-metrics-network)
+* [Disk](#observability-host-metrics-disk-metrics)
+* [Legacy](#legacy-metrics)
+
+
+## Hosts metrics [key-metrics-hosts]
+
+| Metric | Description |
+| --- | --- |
+| **Hosts** | Number of hosts returned by your search criteria.<br>**Field Calculation**: `count(system.cpu.cores)` |
+
+
+## CPU usage metrics [key-metrics-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average of percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. Includes both time spent on user space and kernel space. 100% means all CPUs of the host are busy.<br>**Field Calculation**: `average(system.cpu.total.norm.pct)`<br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+| **CPU Usage - iowait (%)** | The percentage of CPU time spent in wait (on disk).<br>**Field Calculation**: `average(system.cpu.iowait.pct) / max(system.cpu.cores)` |
+| **CPU Usage - irq (%)** | The percentage of CPU time spent servicing and handling hardware interrupts.<br>**Field Calculation**: `average(system.cpu.irq.pct) / max(system.cpu.cores)` |
+| **CPU Usage - nice (%)** | The percentage of CPU time spent on low-priority processes.<br>**Field Calculation**: `average(system.cpu.nice.pct) / max(system.cpu.cores)` |
+| **CPU Usage - softirq (%)** | The percentage of CPU time spent servicing and handling software interrupts.<br>**Field Calculation**: `average(system.cpu.softirq.pct) / max(system.cpu.cores)` |
+| **CPU Usage - steal (%)** | The percentage of CPU time spent in involuntary wait by the virtual CPU while the hypervisor was servicing another processor. Available only on Unix.<br>**Field Calculation**: `average(system.cpu.steal.pct) / max(system.cpu.cores)` |
+| **CPU Usage - system (%)** | The percentage of CPU time spent in kernel space.<br>**Field Calculation**: `average(system.cpu.system.pct) / max(system.cpu.cores)` |
+| **CPU Usage - user (%)** | The percentage of CPU time spent in user space. On multi-core systems, you can have percentages that are greater than 100%. For example, if 3 cores are at 60% use, then `system.cpu.user.pct` will be 180%.<br>**Field Calculation**: `average(system.cpu.user.pct) / max(system.cpu.cores)` |
+| **Load (1m)** | 1 minute load average.<br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br>**Field Calculation**: `average(system.load.1)` |
+| **Load (5m)** | 5 minute load average.<br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br>**Field Calculation**: `average(system.load.5)` |
+| **Load (15m)** | 15 minute load average.<br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br>**Field Calculation**: `average(system.load.15)` |
+| **Normalized Load** | 1 minute load average normalized by the number of CPU cores.<br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br>100% means the 1 minute load average is equal to the number of CPU cores of the host.<br>Taking the example of a 32 CPU cores host, if the 1 minute load average is 32, the value reported here is 100%. If the 1 minute load average is 48, the value reported here is 150%.<br>**Field Calculation**: `average(system.load.1) / max(system.load.cores)` |
+
+
+## Memory metrics [key-metrics-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Cache** | Memory (page) cache.<br>**Field Calculation**: `average(system.memory.used.bytes) - average(system.memory.actual.used.bytes)` |
+| **Memory Free** | Total available memory.<br>**Field Calculation**: `max(system.memory.total) - average(system.memory.actual.used.bytes)` |
+| **Memory Free (excluding cache)** | Total available memory excluding the page cache.<br>**Field Calculation**: `system.memory.free` |
+| **Memory Total** | Total memory capacity.<br>**Field Calculation**: `avg(system.memory.total)` |
+| **Memory Usage (%)** | Percentage of main memory usage excluding page cache.<br>This includes resident memory for all processes plus memory used by the kernel structures and code apart from the page cache.<br>A high level indicates a situation of memory saturation for the host. For example, 100% means the main memory is entirely filled with memory that can’t be reclaimed, except by swapping out.<br>**Field Calculation**: `average(system.memory.actual.used.pct)` |
+| **Memory Used** | Main memory usage excluding page cache.<br>**Field Calculation**: `average(system.memory.actual.used.bytes)` |
+
+
+## Log metrics [key-metrics-log]
+
+| Metric | Description |
+| --- | --- |
+| **Log Rate** | Derivative of the cumulative sum of the document count scaled to a 1 second rate. This metric relies on the same indices as the logs.<br>**Field Calculation**: `cumulative_sum(doc_count)` |
+
+
+## Network metrics [key-metrics-network]
+
+| Metric | Description |
+| --- | --- |
+| **Network Inbound (RX)** | Number of bytes that have been received per second on the public interfaces of the hosts.<br>**Field Calculation**: `sum(host.network.ingress.bytes) * 8 / 1000`<br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+| **Network Outbound (TX)** | Number of bytes that have been sent per second on the public interfaces of the hosts.<br>**Field Calculation**: `sum(host.network.egress.bytes) * 8 / 1000`<br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+
+
+## Disk metrics [observability-host-metrics-disk-metrics]
+
+| Metric | Description |
+| --- | --- |
+| **Disk Latency** | Time spent to service disk requests.<br>**Field Calculation**: `average(system.diskio.read.time + system.diskio.write.time) / (system.diskio.read.count + system.diskio.write.count)` |
+| **Disk Read IOPS** | Average count of read operations from the device per second.<br>**Field Calculation**: `counter_rate(max(system.diskio.read.count), kql='system.diskio.read.count: *')` |
+| **Disk Read Throughput** | Average number of bytes read from the device per second.<br>**Field Calculation**: `counter_rate(max(system.diskio.read.bytes), kql='system.diskio.read.bytes: *')` |
+| **Disk Usage - Available (%)** | Percentage of disk space available.<br>**Field Calculation**: `1 - average(system.filesystem.used.pct)` |
+| **Disk Usage - Max (%)** | Percentage of disk space used. A high percentage indicates that a partition on a disk is running out of space.<br>**Field Calculation**: `max(system.filesystem.used.pct)` |
+| **Disk Write IOPS** | Average count of write operations to the device per second.<br>**Field Calculation**: `counter_rate(max(system.diskio.write.count), kql='system.diskio.write.count: *')` |
+| **Disk Write Throughput** | Average number of bytes written to the device per second.<br>**Field Calculation**: `counter_rate(max(system.diskio.write.bytes), kql='system.diskio.write.bytes: *')` |
+
+
+## Legacy metrics [legacy-metrics]
+
+Over time, we may change the formula used to calculate a specific metric. To avoid affecting your existing rules, instead of changing the actual metric definition, we create a new metric and refer to the old one as "legacy."
+
+The UI and any new rules you create will use the new metric definition. However, any alerts that use the old definition will refer to the metric as "legacy."
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (legacy)** | Percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. This includes both time spent on user space and kernel space. 100% means all CPUs of the host are busy.<br>**Field Calculation**: `(average(system.cpu.user.pct) + average(system.cpu.system.pct)) / max(system.cpu.cores)` |
+| **Network Inbound (RX) (legacy)** | Number of bytes that have been received per second on the public interfaces of the hosts.<br>**Field Calculation**: `average(host.network.ingress.bytes) * 8 / (max(metricset.period, kql='host.network.ingress.bytes: *') / 1000)` |
+| **Network Outbound (TX) (legacy)** | Number of bytes that have been sent per second on the public interfaces of the hosts.<br>**Field Calculation**: `average(host.network.egress.bytes) * 8 / (max(metricset.period, kql='host.network.egress.bytes: *') / 1000)` |
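+
+As a worked example of the legacy throughput formula, assume a host whose `host.network.ingress.bytes` averages 125,000 bytes per collection period, with a `metricset.period` of 10,000 ms (both values illustrative):
+
+```text
+average(host.network.ingress.bytes) * 8 / (max(metricset.period) / 1000)
+  = 125,000 * 8 / (10,000 / 1000)
+  = 1,000,000 / 10
+  = 100,000 bits per second
+```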
diff --git a/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md b/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md
new file mode 100644
index 0000000000..e5f32d0269
--- /dev/null
+++ b/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md
@@ -0,0 +1,17 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/serverless/current/observability-kubernetes-pod-metrics.html
+---
+
+# Kubernetes pod metrics [observability-kubernetes-pod-metrics]
+
+To analyze Kubernetes pod metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `kubernetes.pod.cpu.usage.node.pct`. |
+| **Memory Usage** | Average of `kubernetes.pod.memory.usage.node.pct`. |
+| **Inbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.rx.bytes` scaled to a 1 second rate. |
+| **Outbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.tx.bytes` scaled to a 1 second rate. |
+
+For information about the fields used by the Infrastructure UI to display Kubernetes pod metrics, see the [Infrastructure app fields](/reference/observability/serverless/infrastructure-app-fields.md).
diff --git a/reference/ecs.md b/reference/ecs.md
new file mode 100644
index 0000000000..a8cc7766f5
--- /dev/null
+++ b/reference/ecs.md
@@ -0,0 +1,9 @@
+---
+navigation_title: ECS
+---
+# Elastic Common Schema
+
+Elastic Common Schema (ECS) defines a common set of fields for ingesting data into Elasticsearch.
+For field details and usage information, refer to [](ecs://docs/reference/index.md).
+
+ECS loggers are plugins for your favorite logging libraries, which help you to format your logs into ECS-compatible JSON. Check out [](ecs-logging://docs/reference/intro.md).
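+
+For example, with the Python ECS logging library, attaching its formatter to a standard library logger is enough to emit ECS-compatible JSON. This is a minimal sketch; the logger name and message are illustrative:
+
+```python
+import logging
+
+import ecs_logging
+
+logger = logging.getLogger("my-app")
+handler = logging.StreamHandler()
+handler.setFormatter(ecs_logging.StdlibFormatter())  # format records as ECS JSON
+logger.addHandler(handler)
+logger.setLevel(logging.INFO)
+
+logger.info("hello world")  # emitted as a single JSON document with ECS fields
+```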
diff --git a/reference/elasticsearch.md b/reference/elasticsearch.md
new file mode 100644
index 0000000000..5e23cb0854
--- /dev/null
+++ b/reference/elasticsearch.md
@@ -0,0 +1,18 @@
+# Elasticsearch and index management
+
+% TO-DO: Add links to "Elasticsearch basics"%
+
+This section contains reference information for Elasticsearch and index management features, including:
+
+* Settings
+* Security roles and privileges
+* Index lifecycle actions
+* Mappings
+* Command line tools
+* Curator
+* Clients
+
+% TO-DO: Add links to "query language and scripting language sections"%
+
+Elasticsearch also provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.
+Refer to [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch) and [Elasticsearch Serverless API](https://www.elastic.co/docs/api/doc/elasticsearch-serverless).
\ No newline at end of file
diff --git a/reference/elasticsearch/clients/index.md b/reference/elasticsearch/clients/index.md
new file mode 100644
index 0000000000..0d98eb88b0
--- /dev/null
+++ b/reference/elasticsearch/clients/index.md
@@ -0,0 +1,34 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/elasticsearch/client/index.html
+ - https://www.elastic.co/guide/en/serverless/current/elasticsearch-clients.html
+navigation_title: Clients
+---
+
+# Elasticsearch clients [elasticsearch-clients]
+
+This section contains documentation for all the official Elasticsearch clients:
+
+* Eland
+* Go
+* Java
+* JavaScript
+* .NET
+* PHP
+* Python
+* Ruby
+* Rust
+
+You can use the following language clients with {{es-serverless}}:
+
+* [Go](go-elasticsearch://docs/reference/getting-started-serverless.md)
+* [Java](elasticsearch-java://docs/reference/getting-started-serverless.md)
+* [.NET](elasticsearch-net://docs/reference/getting-started.md)
+* [Node.js](elasticsearch-js://docs/reference/getting-started.md)
+* [PHP](elasticsearch-php://docs/reference/getting-started.md)
+* [Python](elasticsearch-py://docs/reference/getting-started.md)
+* [Ruby](elasticsearch-ruby://docs/reference/getting-started.md)
+
+::::{tip}
+Learn how to [connect to your {{es-serverless}} endpoint](/solutions/search/serverless-elasticsearch-get-started.md).
+::::
\ No newline at end of file
diff --git a/reference/glossary/index.md b/reference/glossary/index.md
new file mode 100644
index 0000000000..f0c145faf0
--- /dev/null
+++ b/reference/glossary/index.md
@@ -0,0 +1,835 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/elastic-stack-glossary/current/index.html
+ - https://www.elastic.co/guide/en/elastic-stack-glossary/current/terms.html
+ - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-glossary.html
+ - https://www.elastic.co/guide/en/ecs/current/ecs-glossary.html
+---
+
+# Glossary [terms]
+
+$$$glossary-metadata$$$ @metadata
+: A special field for storing content that you don't want to include in output [events](/reference/glossary/index.md#glossary-event). For example, the `@metadata` field is useful for creating transient fields for use in [conditional](/reference/glossary/index.md#glossary-conditional) statements.
+
+
+## A [a-glos]
+
+$$$glossary-action$$$ action
+: 1. The rule-specific response that occurs when an alerting rule fires. A rule can have multiple actions. See [Connectors and actions](kibana://docs/reference/connectors-kibana.md).
+2. In {{elastic-sec}}, actions send notifications via other systems when a detection alert is created, such as email, Slack, PagerDuty, and {{webhook}}.
+
+
+$$$glossary-admin-console$$$ administration console
+: A component of {{ece}} that provides the API server for the [Cloud UI](/reference/glossary/index.md#glossary-cloud-ui). Also syncs cluster and allocator data from ZooKeeper to {{es}}.
+
+$$$glossary-advanced-settings$$$ Advanced Settings
+: Enables you to control the appearance and behavior of {{kib}} by setting the date format, default index, and other attributes. Part of {{kib}} Stack Management. See [Advanced Settings](kibana://docs/reference/advanced-settings.md).
+
+$$$glossary-agent-policy$$$ Agent policy
+: A collection of inputs and settings that defines the data to be collected by {{agent}}. An agent policy can be applied to a single agent or shared by a group of agents; this makes it easier to manage many agents at scale. See [{{agent}} policies](/reference/ingestion-tools/fleet/agent-policy.md).
+
+$$$glossary-alias$$$ alias
+: Secondary name for a group of [data streams](/reference/glossary/index.md#glossary-data-stream) or [indices](/reference/glossary/index.md#glossary-index). Most {{es}} APIs accept an alias in place of a data stream or index. See [Aliases](/manage-data/data-store/aliases.md).
+
+$$$glossary-allocator-affinity$$$ allocator affinity
+: Controls how {{stack}} deployments are distributed across the available set of allocators in your {{ece}} installation.
+
+$$$glossary-allocator-tag$$$ allocator tag
+: In {{ece}}, characterizes hardware resources for {{stack}} deployments. Used by [instance configurations](/reference/glossary/index.md#glossary-instance-configuration) to determine which instances of the {{stack}} should be placed on which hardware.
+
+$$$glossary-allocator$$$ allocator
+: Manages hosts that contain {{es}} and {{kib}} nodes. Controls the lifecycle of these nodes by creating new [containers](/reference/glossary/index.md#glossary-container) and managing the nodes within these containers when requested. Used to scale the capacity of your {{ece}} installation.
+
+$$$glossary-analysis$$$ analysis
+: Process of converting unstructured [text](/reference/glossary/index.md#glossary-text) into a format optimized for search. See [Text analysis](/manage-data/data-store/text-analysis.md).
+
+$$$glossary-annotation$$$ annotation
+: A way to augment a data display with descriptive domain knowledge.
+
+$$$glossary-anomaly-detection-job$$$ {{anomaly-job}}
+: {{anomaly-jobs-cap}} contain the configuration information and metadata necessary to perform an analytics task. See [{{ml-jobs-cap}}](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job).
+
+$$$glossary-api-key$$$ API key
+: Unique identifier for authentication in {{es}}. When [transport layer security (TLS)](/deploy-manage/deploy/self-managed/installing-elasticsearch.md) is enabled, all requests must be authenticated using an API key or a username and password.
+
+$$$glossary-apm-agent$$$ APM agent
+: An open-source library, written in the same language as your service, which [instruments](/reference/glossary/index.md#glossary-instrumentation) your code and collects performance data and errors at runtime.
+
+$$$glossary-apm-server$$$ APM Server
+: An open-source application that receives data from [APM agents](/reference/glossary/index.md#glossary-apm-agent) and sends it to {{es}}.
+
+$$$glossary-app$$$ app
+: A top-level {{kib}} component that is accessed through the side navigation. Apps include core {{kib}} components such as Discover and Dashboard, solutions like {{observability}} and Security, and special-purpose tools like Maps and {{stack-manage-app}}.
+
+$$$glossary-auto-follow-pattern$$$ auto-follow pattern
+: [Index pattern](/reference/glossary/index.md#glossary-index-pattern) that automatically configures new [indices](/reference/glossary/index.md#glossary-index) as [follower indices](/reference/glossary/index.md#glossary-follower-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). See [Manage auto-follow patterns](/deploy-manage/tools/cross-cluster-replication/manage-auto-follow-patterns.md).
+
+$$$glossary-zone$$$ availability zone
+: Contains resources available to a {{ece}} installation that are isolated from other availability zones to safeguard against failure. Could be a rack, a server zone, or some other logical constraint that creates a failure boundary. In a highly available cluster, the nodes of a cluster are spread across two or three availability zones to ensure that the cluster can survive the failure of an entire availability zone. Also see [Fault Tolerance (High Availability)](/deploy-manage/deploy/cloud-enterprise/ece-ha.md).
+
+
+## B [b-glos]
+
+$$$glossary-basemap$$$ basemap
+: The background detail necessary to orient the location of a map.
+
+$$$glossary-beats-runner$$$ beats runner
+: Used to send {{filebeat}} and {{metricbeat}} information to the logging cluster.
+
+$$$glossary-bucket-aggregation$$$ bucket aggregation
+: An aggregation that creates buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type), which determines whether or not a document in the current context falls into the bucket.
+
+$$$glossary-ml-bucket$$$ bucket
+: 1. A set of documents in {{kib}} that have certain characteristics in common. For example, matching documents might be bucketed by color, distance, or date range.
+2. The {{ml-features}} also use the concept of a bucket to divide the time series into batches for processing. The *bucket span* is part of the configuration information for {{anomaly-jobs}}. It defines the time interval that is used to summarize and model the data. This is typically between 5 minutes and 1 hour, depending on your data characteristics. When you set the bucket span, take into account the granularity at which you want to analyze the data, the frequency of the input data, the typical duration of the anomalies, and the frequency at which alerting is required.
+
+
+
+## C [c-glos]
+
+$$$glossary-canvas-language$$$ Canvas expression language
+: A pipeline-based expression language for manipulating and visualizing data. Includes dozens of functions and other capabilities, such as table transforms, type casting, and sub-expressions. Supports TinyMath functions for complex math calculations. See [Canvas function reference](/reference/data-analysis/kibana/canvas-functions.md).
+
+$$$glossary-canvas$$$ Canvas
+: Enables you to create presentations and infographics that pull live data directly from {{es}}. See [Canvas](/explore-analyze/visualize/canvas.md).
+
+$$$glossary-certainty$$$ certainty
+: Specifies how many documents must contain a pair of terms before the pair is considered a useful connection in a graph.
+
+$$$CA$$$CA
+: Certificate authority. An entity that issues digital certificates to verify identities over a network.
+
+$$$glossary-client-forwarder$$$ client forwarder
+: Used for secure internal communications between various components of {{ece}} and ZooKeeper.
+
+$$$glossary-cloud-ui$$$ Cloud UI
+: Provides web-based access to manage your {{ece}} installation, supported by the [administration console](/reference/glossary/index.md#glossary-admin-console).
+
+$$$glossary-cluster$$$ cluster
+: 1. A group of one or more connected {{es}} [nodes](/reference/glossary/index.md#glossary-node). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+2. A layer type and display option in the **Maps** application. Clusters display a cluster symbol across a grid on the map, one symbol per grid cluster. The cluster location is the weighted centroid for all documents in the grid cell.
+3. In {{eck}}, it can refer to either an [Elasticsearch cluster](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) or a Kubernetes cluster depending on the context.
+
+$$$glossary-codec-plugin$$$ codec plugin
+: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that changes the data representation of an [event](/reference/glossary/index.md#glossary-event). Codecs are essentially stream filters that can operate as part of an input or output. Codecs enable you to separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).
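+
+    For example, a sketch of an input that uses the `json` codec so each line is deserialized into event fields as it is read (the path is illustrative):
+
+    ```
+    input {
+      file {
+        path  => "/var/log/app/events.log"
+        codec => json   # parse each line as JSON instead of plain text
+      }
+    }
+    ```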
+
+$$$glossary-cold-phase$$$ cold phase
+: Third possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the cold phase, data is no longer updated and seldom [queried](/reference/glossary/index.md#glossary-query). The data still needs to be searchable, but it's okay if those queries are slower. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-cold-tier$$$ cold tier
+: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed occasionally and not normally updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+$$$glossary-component-template$$$ component template
+: Building block for creating [index templates](/reference/glossary/index.md#glossary-index-template). A component template can specify [mappings](/reference/glossary/index.md#glossary-mapping), [index settings](elasticsearch://docs/reference/elasticsearch/index-settings/index.md), and [aliases](/reference/glossary/index.md#glossary-alias). See [index templates](/manage-data/data-store/templates.md).
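+
+    For example, a sketch of a component template that defines a single mapping, which one or more index templates can then reuse (the names are illustrative):
+
+    ```console
+    PUT _component_template/timestamp-mapping
+    {
+      "template": {
+        "mappings": {
+          "properties": {
+            "@timestamp": { "type": "date" }
+          }
+        }
+      }
+    }
+    ```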
+
+$$$glossary-condition$$$ condition
+: Specifies the circumstances that must be met to trigger an alerting [rule](/reference/glossary/index.md#glossary-rule).
+
+$$$glossary-conditional$$$ conditional
+: A control flow that executes certain actions based on whether a statement (also called a condition) is true or false. {{ls}} supports `if`, `else if`, and `else` statements. You can use conditional statements to apply filters and send events to a specific output based on conditions that you specify.
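+
+    For example, a {{ls}} conditional sketch that tags or drops events based on a field value (the field and tag names are illustrative):
+
+    ```
+    filter {
+      if [status] >= 500 {
+        mutate { add_tag => ["server_error"] }
+      } else if [status] == 404 {
+        mutate { add_tag => ["not_found"] }
+      } else {
+        drop { }
+      }
+    }
+    ```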
+
+$$$glossary-connector$$$ connector
+: A configuration that enables integration with an external system (the destination for an action). See [Connectors and actions](kibana://docs/reference/connectors-kibana.md).
+
+$$$glossary-console$$$ Console
+: In {{kib}}, a tool for interacting with the {{es}} REST API. You can send requests to {{es}}, view responses, view API documentation, and get your request history. See [Console](/explore-analyze/query-filter/tools/console.md).
+
+ In {{ess}}, provides web-based access to manage your {{ecloud}} deployments.
+
+
+$$$glossary-constructor$$$ constructor
+: Directs [allocators](/reference/glossary/index.md#glossary-allocator) to manage containers of {{es}} and {{kib}} nodes and maximizes the utilization of allocators. Monitors plan change requests from the Cloud UI and determines how to transform the existing cluster. In a highly available installation, places cluster nodes within different availability zones to ensure that the cluster can survive the failure of an entire availability zone.
+
+$$$glossary-container$$$ container
+: Includes an instance of {{ece}} software and its dependencies. Used to provision similar environments, to assign a guaranteed share of host resources to nodes, and to simplify operational effort in {{ece}}.
+
+$$$glossary-content-tier$$$ content tier
+: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that handle the [indexing](/reference/glossary/index.md#glossary-index) and [query](/reference/glossary/index.md#glossary-query) load for content, such as a product catalog. See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+$$$glossary-coordinator$$$ coordinator
+: Consists of a logical grouping of some {{ece}} services and acts as a distributed coordination system and resource scheduler.
+
+$$$glossary-ccr$$$ {{ccr}} (CCR)
+: Replicates [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index) from [remote clusters](/reference/glossary/index.md#glossary-remote-cluster) in a [local cluster](/reference/glossary/index.md#glossary-local-cluster). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).
+
+$$$glossary-ccs$$$ {{ccs}} (CCS)
+: Searches [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index) on [remote clusters](/reference/glossary/index.md#glossary-remote-cluster) from a [local cluster](/reference/glossary/index.md#glossary-local-cluster). See [Search across clusters](/solutions/search/cross-cluster-search.md).
+
+$$$CRD$$$CRD
+: [Custom resource definition](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-CustomResourceDefinition). {{eck}} extends the Kubernetes API with CRDs to allow users to deploy and manage Elasticsearch, Kibana, APM Server, Enterprise Search, Beats, Elastic Agent, Elastic Maps Server, and Logstash resources just as they would do with built-in Kubernetes resources.
+
+$$$glossary-custom-rule$$$ custom rule
+: A set of conditions and actions that change the behavior of {{anomaly-jobs}}. You can also use filters to further limit the scope of the rules. See [Custom rules](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-rules). {{kib}} refers to custom rules as job rules.
+
+
+## D [d-glos]
+
+$$$glossary-dashboard$$$ dashboard
+: A collection of [visualizations](/reference/glossary/index.md#glossary-visualization), [saved searches](/reference/glossary/index.md#glossary-saved-search), and [maps](/reference/glossary/index.md#glossary-map) that provide insights into your data from multiple perspectives.
+
+$$$glossary-data-center$$$ data center
+: See [availability zone](/reference/glossary/index.md#glossary-zone).
+
+$$$glossary-dataframe-job$$$ data frame analytics job
+: Data frame analytics jobs contain the configuration information and metadata necessary to perform {{ml}} analytics tasks on a source index and store the outcome in a destination index. See [{{dfanalytics-cap}} overview](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md).
+
+$$$glossary-data-source$$$ data source
+: A file, database, or service that provides the underlying data for a map, Canvas element, or visualization.
+
+$$$glossary-data-stream$$$ data stream
+: A named resource used to manage [time series data](/reference/glossary/index.md#glossary-time-series-data). A data stream stores data across multiple backing [indices](/reference/glossary/index.md#glossary-index). See [Data streams](/manage-data/data-store/data-streams.md).
+
+$$$glossary-data-tier$$$ data tier
+: Collection of [nodes](/reference/glossary/index.md#glossary-node) with the same [data role](elasticsearch://docs/reference/elasticsearch/configuration-reference/node-settings.md) that typically share the same hardware profile. Data tiers include the [content tier](/reference/glossary/index.md#glossary-content-tier), [hot tier](/reference/glossary/index.md#glossary-hot-tier), [warm tier](/reference/glossary/index.md#glossary-warm-tier), [cold tier](/reference/glossary/index.md#glossary-cold-tier), and [frozen tier](/reference/glossary/index.md#glossary-frozen-tier). See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+$$$glossary-data-view$$$ data view
+: An object that enables you to select the data that you want to use in {{kib}} and define the properties of the fields. A data view can point to one or more [data streams](/reference/glossary/index.md#glossary-data-stream), [indices](/reference/glossary/index.md#glossary-index), or [aliases](/reference/glossary/index.md#glossary-alias). For example, a data view can point to your log data from yesterday, or all indices that contain your data.
+
+$$$glossary-ml-datafeed$$$ datafeed
+: {{anomaly-jobs-cap}} can analyze data either as a one-off batch or continuously in real time. {{dfeeds-cap}} retrieve data from {{es}} for analysis.
+
+$$$glossary-dataset$$$ dataset
+: A collection of data that has the same structure. The name of a dataset typically signifies its source. See [data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md).
+
+$$$glossary-delete-phase$$$ delete phase
+: Last possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the delete phase, an [index](/reference/glossary/index.md#glossary-index) is no longer needed and can safely be deleted. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-deployment-template$$$ deployment template
+: A reusable configuration of Elastic products and solutions used to create an {{ecloud}} [deployment](/reference/glossary/index.md#glossary-deployment).
+
+$$$glossary-deployment$$$ deployment
+: One or more products from the {{stack}} configured to work together and run on {{ecloud}}.
+
+$$$glossary-detection-alert$$$ detection alert
+: Alerts produced by {{elastic-sec}}. Detection alerts are never received from external systems. When a rule's conditions are met, {{elastic-sec}} writes a detection alert to an {{es}} alerts index.
+
+$$$glossary-detection-rule$$$ detection rule
+: Background tasks in {{elastic-sec}} that run periodically and produce alerts when suspicious activity is detected.
+
+$$$glossary-ml-detector$$$ detector
+: As part of the configuration information that is associated with {{anomaly-jobs}}, detectors define the type of analysis that needs to be done. They also specify which fields to analyze. You can have more than one detector in a job, which is more efficient than running multiple jobs against the same data.
+
+$$$glossary-director$$$ director
+: Manages the [ZooKeeper](/reference/glossary/index.md#glossary-zookeeper) datastore. This role is often shared with the [coordinator](/reference/glossary/index.md#glossary-coordinator), though in production deployments it can be separated.
+
+$$$glossary-discover$$$ Discover
+: Enables you to search and filter your data to zoom in on the information that you are interested in.
+
+$$$glossary-distributed-tracing$$$ distributed tracing
+: The end-to-end collection of performance data throughout your microservices architecture.
+
+$$$glossary-document$$$ document
+: JSON object containing data stored in {{es}}. See [Documents and indices](/manage-data/data-store/index-basics.md).
+
+$$$glossary-drilldown$$$ drilldown
+: A navigation path that retains context (time range and filters) from the source to the destination, so you can view the data from a new perspective. A dashboard that shows the overall status of multiple data centers might have a drilldown to a dashboard for a single data center. See [Drilldowns](/explore-analyze/dashboards.md).
+
+
+## E [e-glos]
+
+$$$glossary-edge$$$ edge
+: A connection between nodes in a graph that shows that they are related. The line weight indicates the strength of the relationship. See [Graph](/explore-analyze/visualize/graph.md).
+
+$$$glossary-elastic-agent$$$ {{agent}}
+: A single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. See [{{agent}} overview](/reference/ingestion-tools/fleet/index.md).
+
+$$$glossary-ece$$$ {{ece}} (ECE)
+: The official enterprise offering to host and manage the {{stack}} yourself at scale. Can be installed on a public cloud platform, such as AWS, GCP, or Microsoft Azure, on your own private cloud, or on bare metal.
+
+$$$glossary-eck$$$ {{eck}} (ECK)
+: Built on the Kubernetes Operator pattern, ECK extends the basic Kubernetes orchestration capabilities to support the setup and management of Elastic products and solutions on Kubernetes.
+
+$$$glossary-ecs$$$ Elastic Common Schema (ECS)
+: A document schema for Elasticsearch, for use cases such as logging and metrics. ECS defines a common set of fields and their datatypes, and gives guidance on their correct usage. ECS is used to improve the uniformity of event data coming from different sources.
+
+$$$EKS$$$ Elastic Kubernetes Service (EKS)
+: A managed Kubernetes service provided by Amazon Web Services (AWS).
+
+$$$glossary-ems$$$ Elastic Maps Service (EMS)
+: A service that provides basemap tiles, shape files, and other key features that are essential for visualizing geospatial data.
+
+$$$glossary-epr$$$ Elastic Package Registry (EPR)
+: A service hosted by Elastic that stores Elastic package definitions in a central location. See the [EPR GitHub repository](https://github.com/elastic/package-registry).
+
+$$$glossary-elastic-security-indices$$$ {{elastic-sec}} indices
+: Indices containing host and network source events (such as `packetbeat-*`, `log-*`, and `winlogbeat-*`). When you [create a new rule in {{elastic-sec}}](/solutions/security/detect-and-alert/create-detection-rule.md), the default index pattern corresponds to the values defined in the `securitySolution:defaultIndex` advanced setting.
+
+$$$glossary-elastic-stack$$$ {{stack}}
+: Also known as the *ELK Stack*, the {{stack}} is the combination of Elastic products that integrate to provide a scalable and flexible way to manage your data.
+
+$$$glossary-elasticsearch-service$$$ {{ess}}
+: The official hosted {{stack}} offering, from the makers of {{es}}. Available as a software-as-a-service (SaaS) offering on different cloud platforms, such as AWS, GCP, and Microsoft Azure.
+
+$$$glossary-element$$$ element
+: A [Canvas](/reference/glossary/index.md#glossary-canvas) workpad object that displays an image, text, or visualization.
+
+$$$glossary-endpoint-exception$$$ endpoint exception
+: [Exceptions](/reference/glossary/index.md#glossary-exception) added to both rules and Endpoint agents on hosts. Endpoint exceptions can only be added when:
+
+ * Endpoint agents are installed on the hosts.
+ * The {{elastic-endpoint}} Security rule is activated.
+
+
+$$$glossary-eql$$$ Event Query Language (EQL)
+: [Query](/reference/glossary/index.md#glossary-query) language for event-based time series data, such as logs, metrics, and traces. EQL supports matching for event sequences. See [EQL](/explore-analyze/query-filter/languages/eql.md).
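+
+    For example, an EQL sketch of a sequence query that matches a process start followed by a network connection from the same process (the field values are illustrative):
+
+    ```eql
+    sequence by process.pid
+      [ process where process.name == "cmd.exe" ]
+      [ network where destination.port == 443 ]
+    ```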
+
+$$$glossary-event$$$ event
+: A single unit of information, containing a timestamp plus additional data. An event arrives via an input, and is subsequently parsed, timestamped, and passed through the {{ls}} [pipeline](/reference/glossary/index.md#glossary-pipeline).
+
+$$$glossary-exception$$$ exception
+: In {{elastic-sec}}, exceptions are added to rules to prevent specific source event field values from generating alerts.
+
+$$$glossary-external-alert$$$ external alert
+: Alerts {{elastic-sec}} receives from external systems, such as Suricata.
+
+
+## F [f-glos]
+
+$$$glossary-feature-controls$$$ Feature Controls
+: Enables administrators to customize which features are available in each [space](/reference/glossary/index.md#glossary-space). See [Feature Controls](/deploy-manage/manage-spaces.md#spaces-control-feature-visibility).
+
+$$$glossary-feature-importance$$$ feature importance
+: In supervised {{ml}} methods such as {{regression}} and {{classification}}, feature importance indicates the degree to which a specific feature affects a prediction. See [{{regression-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md#dfa-regression-feature-importance) and [{{classification-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md#dfa-classification-feature-importance).
+
+$$$glossary-feature-influence$$$ feature influence
+: In {{oldetection}}, feature influence scores indicate which features of a data point contribute to its outlier behavior. See [Feature influence](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md#dfa-feature-influence).
+
+$$$glossary-feature-state$$$ feature state
+: The indices and data streams used to store configurations, history, and other data for an Elastic feature, such as {{es}} security or {{kib}}. A feature state typically includes one or more [system indices or data streams](/reference/glossary/index.md#glossary-system-index). It may also include regular indices and data streams used by the feature. You can use [snapshots](/reference/glossary/index.md#glossary-snapshot) to back up and restore feature states. See [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state).
+
+$$$glossary-field-reference$$$ field reference
+: A reference to an event [field](/reference/glossary/index.md#glossary-field). This reference may appear in an output block or filter block in the {{ls}} config file. Field references are typically wrapped in square (`[]`) brackets, for example `[fieldname]`. If you are referring to a top-level field, you can omit the `[]` and simply use the field name. To refer to a nested field, you specify the full path to that field: `[top-level field][nested field]`.
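+
+    For example, a sketch that uses a field reference to copy a nested value into a top-level field (the field names are illustrative):
+
+    ```
+    filter {
+      mutate {
+        # [host][name] refers to the field "name" nested inside "host";
+        # "hostname" is a top-level field, so its brackets could be omitted.
+        add_field => { "hostname" => "%{[host][name]}" }
+      }
+    }
+    ```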
+
+$$$glossary-field$$$ field
+: 1. Key-value pair in a [document](/reference/glossary/index.md#glossary-document). See [Mapping](/manage-data/data-store/mapping.md).
+2. In {{ls}}, this term refers to an [event](/reference/glossary/index.md#glossary-event) property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. {{ls}} uses the term "fields" to refer to these properties.
+
+
+$$$glossary-filter-plugin$$$ filter plugin
+: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that performs intermediary processing on an [event](/reference/glossary/index.md#glossary-event). Typically, filters act upon event data after it has been ingested via inputs, by mutating, enriching, and/or modifying the data according to configuration rules. Filters are often applied conditionally depending on the characteristics of the event. Popular filter plugins include grok, mutate, drop, clone, and geoip. Filter stages are optional.
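+
+    For example, a sketch of a filter stage that parses an Apache access log line with grok and then removes the raw message with mutate (the pattern choice is illustrative):
+
+    ```
+    filter {
+      grok {
+        match => { "message" => "%{COMBINEDAPACHELOG}" }
+      }
+      mutate {
+        remove_field => ["message"]   # keep only the parsed fields
+      }
+    }
+    ```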
+
+$$$glossary-filter$$$ filter
+: [Query](/reference/glossary/index.md#glossary-query) that does not score matching documents. See [filter context](/explore-analyze/query-filter/languages/querydsl.md).
+
+$$$glossary-fleet-server$$$ {{fleet-server}}
+: {{fleet-server}} is a component used to centrally manage {{agent}}s. It serves as a control plane for updating agent policies, collecting status information, and coordinating actions across agents.
+
+$$$glossary-fleet$$$ Fleet
+: Fleet provides a way to centrally manage {{agent}}s at scale. There are two parts: the Fleet app in {{kib}} provides a web-based UI to add and remotely manage agents, while the {{fleet-server}} provides the backend service that manages agents. See [{{agent}} overview](/reference/ingestion-tools/fleet/index.md).
+
+$$$glossary-flush$$$ flush
+: Writes data from the [transaction log](elasticsearch://docs/reference/elasticsearch/index-settings/translog.md) to disk for permanent storage.
+
+$$$glossary-follower-index$$$ follower index
+: Target [index](/reference/glossary/index.md#glossary-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). A follower index exists in a [local cluster](/reference/glossary/index.md#glossary-local-cluster) and replicates a [leader index](/reference/glossary/index.md#glossary-leader-index). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).
+
+$$$glossary-force-merge$$$ force merge
+: Manually triggers a [merge](/reference/glossary/index.md#glossary-merge) to reduce the number of [segments](/reference/glossary/index.md#glossary-segment) in an index's [shards](/reference/glossary/index.md#glossary-shard).
+
+$$$glossary-frozen-phase$$$ frozen phase
+: Fourth possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the frozen phase, an [index](/reference/glossary/index.md#glossary-index) is no longer updated and [queried](/reference/glossary/index.md#glossary-query) rarely. The information still needs to be searchable, but it's okay if those queries are extremely slow. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-frozen-tier$$$ frozen tier
+: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed rarely and not normally updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+
+## G [g-glos]
+
+$$$GCS$$$GCS
+: Google Cloud Storage. Block storage service provided by Google Cloud Platform (GCP).
+
+$$$GKE$$$GKE
+: [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/). Managed Kubernetes service provided by Google Cloud Platform (GCP).
+
+$$$glossary-gem$$$ gem
+: A self-contained package of code that's hosted on [RubyGems.org](https://rubygems.org). {{ls}} [plugins](/reference/glossary/index.md#glossary-plugin) are packaged as Ruby Gems. You can use the {{ls}} [plugin manager](/reference/glossary/index.md#glossary-plugin-manager) to manage {{ls}} gems.
+
+$$$glossary-geo-point$$$ geo-point
+: A field type in {{es}}. A geo-point field accepts latitude-longitude pairs for storing point locations. The latitude-longitude format can be from a string, geohash, array, well-known text, or object. See [geo-point](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-point.md).
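+
+    For example, a sketch of a mapping with a geo-point field and a document that supplies it as a latitude-longitude object (the index and field names are illustrative):
+
+    ```console
+    PUT my-locations
+    {
+      "mappings": {
+        "properties": {
+          "location": { "type": "geo_point" }
+        }
+      }
+    }
+
+    PUT my-locations/_doc/1
+    {
+      "location": { "lat": 41.12, "lon": -71.34 }
+    }
+    ```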
+
+$$$glossary-geo-shape$$$ geo-shape
+: A field type in {{es}}. A geo-shape field accepts arbitrary geographic primitives, such as polygons, lines, and rectangles. You can populate a geo-shape field from GeoJSON or well-known text. See [geo-shape](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-shape.md).
+
+$$$glossary-geojson$$$ GeoJSON
+: A format for representing geospatial data. GeoJSON is also a file type, commonly used in the **Maps** application to upload a file of geospatial data. See [GeoJSON data](/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md).
+
+$$$glossary-graph$$$ graph
+: A data structure and visualization that shows interconnections between a set of entities. Each entity is represented by a node. Connections between nodes are represented by [edges](/reference/glossary/index.md#glossary-edge). See [Graph](/explore-analyze/visualize/graph.md).
+
+$$$glossary-grok-debugger$$$ Grok Debugger
+: A tool for building and debugging grok patterns. Grok is good for parsing syslog, Apache, and other webserver logs. See [Debugging grok expressions](/explore-analyze/query-filter/tools/grok-debugger.md).
+
+
+## H [h-glos]
+
+$$$glossary-hardware-profile$$$ hardware profile
+: In {{ecloud}}, a built-in [deployment template](/reference/glossary/index.md#glossary-deployment-template) that supports a specific use case for the {{stack}}, such as a compute optimized deployment that provides high vCPU for search-heavy use cases.
+
+$$$glossary-heat-map$$$ heat map
+: A layer type in the **Maps** application. Heat maps cluster locations to show higher (or lower) densities. They use color-coded cells or regions to reveal patterns across multiple dimensions. See [Heat map layer](/explore-analyze/visualize/maps/heatmap-layer.md).
+
+$$$glossary-hidden-index$$$ hidden data stream or index
+: [Data stream](/reference/glossary/index.md#glossary-data-stream) or [index](/reference/glossary/index.md#glossary-index) excluded from most [index patterns](/reference/glossary/index.md#glossary-index-pattern) by default. See [Hidden data streams and indices](elasticsearch://docs/reference/elasticsearch/rest-apis/api-conventions.md#multi-hidden).
+
+$$$glossary-host-runner$$$ host runner (runner)
+: In {{ece}}, a local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to the host exist and are able to run, and creates or recreates the containers if necessary.
+
+$$$glossary-hot-phase$$$ hot phase
+: First possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the hot phase, an [index](/reference/glossary/index.md#glossary-index) is actively updated and queried. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-hot-thread$$$ hot thread
+: A Java thread that has high CPU usage and executes for a longer-than-normal period of time.
+
+$$$glossary-hot-tier$$$ hot tier
+: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that handle the [indexing](/reference/glossary/index.md#glossary-index) load for time series data, such as logs or metrics. This tier holds your most recent, most frequently accessed data. See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+
+## I [i-glos]
+
+$$$glossary-id$$$ ID
+: Identifier for a [document](/reference/glossary/index.md#glossary-document). Document IDs must be unique within an [index](/reference/glossary/index.md#glossary-index). See the [`_id` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-id-field.md).
+
+$$$glossary-index-lifecycle-policy$$$ index lifecycle policy
+: Specifies how an [index](/reference/glossary/index.md#glossary-index) moves between phases in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle) and what actions to perform during each phase. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
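+
+For illustration, a minimal policy might keep an index in the hot phase until it reaches a rollover threshold, then delete it after 90 days (the policy name and threshold values here are placeholders):
+
+```console
+PUT _ilm/policy/my-lifecycle-policy
+{
+  "policy": {
+    "phases": {
+      "hot": {
+        "actions": {
+          "rollover": {
+            "max_primary_shard_size": "50gb",
+            "max_age": "30d"
+          }
+        }
+      },
+      "delete": {
+        "min_age": "90d",
+        "actions": {
+          "delete": {}
+        }
+      }
+    }
+  }
+}
+```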
+
+$$$glossary-index-lifecycle$$$ index lifecycle
+: Five phases an [index](/reference/glossary/index.md#glossary-index) can transition through: [hot](/reference/glossary/index.md#glossary-hot-phase), [warm](/reference/glossary/index.md#glossary-warm-phase), [cold](/reference/glossary/index.md#glossary-cold-phase), [frozen](/reference/glossary/index.md#glossary-frozen-phase), and [delete](/reference/glossary/index.md#glossary-delete-phase). See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-index-pattern$$$ index pattern
+: In {{es}}, a string containing a wildcard (`*`) pattern that can match multiple [data streams](/reference/glossary/index.md#glossary-data-stream), [indices](/reference/glossary/index.md#glossary-index), or [aliases](/reference/glossary/index.md#glossary-alias). See [Multi-target syntax](elasticsearch://docs/reference/elasticsearch/rest-apis/api-conventions.md).
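+
+Conceptually, this wildcard matching behaves like shell-style globbing. A rough sketch in Python (the names are illustrative, not an Elastic API):
+
+```python
+from fnmatch import fnmatch
+
+# Hypothetical data stream and index names, and a wildcard index pattern
+targets = ["logs-app-prod", "logs-app-dev", "metrics-host"]
+pattern = "logs-*"
+
+# Keep only the names the pattern would match
+matches = [name for name in targets if fnmatch(name, pattern)]
+print(matches)  # ['logs-app-prod', 'logs-app-dev']
+```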
+
+$$$glossary-index-template$$$ index template
+: Automatically configures the [mappings](/reference/glossary/index.md#glossary-mapping), [index settings](elasticsearch://docs/reference/elasticsearch/index-settings/index.md), and [aliases](/reference/glossary/index.md#glossary-alias) of new [indices](/reference/glossary/index.md#glossary-index) that match its [index pattern](/reference/glossary/index.md#glossary-index-pattern). You can also use index templates to create [data streams](/reference/glossary/index.md#glossary-data-stream). See [Index templates](/manage-data/data-store/templates.md).
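+
+For example, a template along these lines applies settings and mappings to any new index whose name matches `logs-*` (the template name and values are illustrative):
+
+```console
+PUT _index_template/my-logs-template
+{
+  "index_patterns": ["logs-*"],
+  "template": {
+    "settings": {
+      "number_of_shards": 1
+    },
+    "mappings": {
+      "properties": {
+        "@timestamp": { "type": "date" }
+      }
+    }
+  }
+}
+```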
+
+$$$glossary-index$$$ index
+: 1. Collection of JSON [documents](/reference/glossary/index.md#glossary-document). See [Documents and indices](/manage-data/data-store/index-basics.md).
+2. To add one or more JSON documents to {{es}}. This process is called indexing.
+
+
+$$$glossary-indexer$$$ indexer
+: A {{ls}} instance that indexes [event](/reference/glossary/index.md#glossary-event) data into an {{es}} cluster.
+
+$$$glossary-indicator-index$$$ indicator index
+: Indices containing suspect field values in {{elastic-sec}}. [Indicator match rules](/solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule) use these indices to compare their field values with source event values contained in [{{elastic-sec}} indices](/reference/glossary/index.md#glossary-elastic-security-indices).
+
+$$$glossary-inference-aggregation$$$ inference aggregation
+: A pipeline aggregation that references a [trained model](/reference/glossary/index.md#glossary-trained-model) in an aggregation to infer on the results field of the parent bucket aggregation. It enables you to use supervised {{ml}} at search time.
+
+$$$glossary-inference-processor$$$ inference processor
+: A processor specified in an ingest pipeline that uses a [trained model](/reference/glossary/index.md#glossary-trained-model) to infer against the data that is being ingested in the pipeline.
+
+$$$glossary-inference$$$ inference
+: A {{ml}} feature that enables you to use supervised learning processes – like {{classification}}, {{regression}}, or [{{nlp}}](/reference/glossary/index.md#glossary-nlp) – in a continuous fashion by using [trained models](/reference/glossary/index.md#glossary-trained-model) against incoming data.
+
+$$$glossary-influencer$$$ influencer
+: Influencers are entities that might have contributed to an anomaly in a specific bucket in an {{anomaly-job}}. For more information, see [Influencers](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-influencers).
+
+$$$glossary-ingestion$$$ ingestion
+: The process of collecting and sending data from various data sources to {{es}}.
+
+$$$glossary-input-plugin$$$ input plugin
+: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that reads [event](/reference/glossary/index.md#glossary-event) data from a specific source. Input plugins are the first stage in the {{ls}} event processing [pipeline](/reference/glossary/index.md#glossary-pipeline). Popular input plugins include file, syslog, redis, and beats.
+
+$$$glossary-instance-configuration$$$ instance configuration
+: In {{ecloud}}, enables the instances of the {{stack}} to run on suitable hardware resources by filtering on [allocator tags](/reference/glossary/index.md#glossary-allocator-tag). Used as building blocks for [deployment templates](/reference/glossary/index.md#glossary-deployment-template).
+
+$$$glossary-instance-type$$$ instance type
+: In {{ecloud}}, categories for [instances](/reference/glossary/index.md#glossary-instance) that represent an Elastic feature or a cluster node type, such as `master`, `ml`, or `data`.
+
+$$$glossary-instance$$$ instance
+: A product from the {{stack}} that is running in an {{ecloud}} deployment, such as an {{es}} node or a {{kib}} instance. When you choose more [availability zones](/reference/glossary/index.md#glossary-zone), the system automatically creates more instances for you.
+
+$$$glossary-instrumentation$$$ instrumentation
+: Extending application code to track where your application is spending time. Code is considered instrumented when it collects and reports this performance data to APM.
+
+$$$glossary-integration-policy$$$ integration policy
+: An instance of an [integration](/reference/glossary/index.md#glossary-integration) that is configured for a specific use case, such as collecting logs from a specific file.
+
+$$$glossary-integration$$$ integration
+: An easy way for external systems to connect to the {{stack}}. Whether it's collecting data or protecting systems from security threats, integrations provide out-of-the-box assets to make setup easy—many with just a single click.
+
+
+## J [j-glos]
+
+$$$glossary-ml-job$$$$$$glossary-job$$$ job
+: {{ml-cap}} jobs contain the configuration information and metadata necessary to perform an analytics task. There are two types: [{{anomaly-jobs}}](/reference/glossary/index.md#glossary-anomaly-detection-job) and [data frame analytics jobs](/reference/glossary/index.md#glossary-dataframe-job). See also [{{rollup-job}}](/reference/glossary/index.md#glossary-rollup-job).
+
+
+## K [k-glos]
+
+$$$k8s$$$K8s
+: Shortened form (numeronym) of "Kubernetes" derived from replacing "ubernete" with "8".
+
+$$$glossary-kibana-privilege$$$ {{kib}} privilege
+: Enable administrators to grant users read-only, read-write, or no access to individual features within [spaces](/reference/glossary/index.md#glossary-space) in {{kib}}. See [{{kib}} privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
+
+$$$glossary-kql$$$ {{kib}} Query Language (KQL)
+: The default language for querying in {{kib}}. KQL provides support for scripted fields. See [Kibana Query Language](/explore-analyze/query-filter/languages/kql.md).
+
+$$$glossary-kibana$$$ {{kib}}
+: A user interface that lets you visualize your {{es}} data and navigate the {{stack}}.
+
+
+## L [l-glos]
+
+$$$glossary-labs$$$ labs
+: An in-progress or experimental feature in **Canvas** or **Dashboard** that you can try out and provide feedback on. When enabled, you'll see **Labs** in the toolbar.
+
+$$$glossary-leader-index$$$ leader index
+: Source [index](/reference/glossary/index.md#glossary-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). A leader index exists on a [remote cluster](/reference/glossary/index.md#glossary-remote-cluster) and is replicated to [follower indices](/reference/glossary/index.md#glossary-follower-index). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).
+
+$$$glossary-lens$$$ Lens
+: Enables you to build visualizations by dragging and dropping data fields. Lens makes smart visualization suggestions for your data, allowing you to switch between visualization types. See [Lens](/explore-analyze/dashboards.md).
+
+$$$glossary-local-cluster$$$ local cluster
+: [Cluster](/reference/glossary/index.md#glossary-cluster) that pulls data from a [remote cluster](/reference/glossary/index.md#glossary-remote-cluster) in [{{ccs}}](/reference/glossary/index.md#glossary-ccs) or [{{ccr}}](/reference/glossary/index.md#glossary-ccr). See [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md).
+
+$$$glossary-lucene$$$ Lucene query syntax
+: The query syntax for {{kib}}'s legacy query language. The Lucene query syntax is available under the options menu in the query bar and from [Advanced Settings](/reference/glossary/index.md#glossary-advanced-settings).
+
+
+## M [m-glos]
+
+$$$glossary-ml-nodes$$$ machine learning node
+: A {{ml}} node is a node that has `xpack.ml.enabled` set to `true` and `ml` in `node.roles`. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. See [Machine learning nodes](elasticsearch://docs/reference/elasticsearch/configuration-reference/node-settings.md#ml-node).
+
+$$$glossary-map$$$ map
+: A representation of geographic data using symbols and labels. See [Maps](/explore-analyze/visualize/maps.md).
+
+$$$glossary-mapping$$$ mapping
+: Defines how a [document](/reference/glossary/index.md#glossary-document), its [fields](/reference/glossary/index.md#glossary-field), and its metadata are stored in {{es}}. Similar to a schema definition. See [Mapping](/manage-data/data-store/mapping.md).
+
+$$$glossary-master-node$$$ master node
+: Handles write requests for the cluster and publishes changes to other nodes in an ordered fashion. Each cluster has a single master node which is chosen automatically by the cluster and is replaced if the current master node fails. Also see [node](/reference/glossary/index.md#glossary-node).
+
+$$$glossary-merge$$$ merge
+: Process of combining a [shard](/reference/glossary/index.md#glossary-shard)'s smaller Lucene [segments](/reference/glossary/index.md#glossary-segment) into a larger one. {{es}} manages merges automatically.
+
+$$$glossary-message-broker$$$ message broker
+: Also referred to as a *message buffer* or *message queue*, a message broker is external software (such as Redis, Kafka, or RabbitMQ) that stores messages from the {{ls}} shipper instance as an intermediate store, waiting to be processed by the {{ls}} indexer instance.
+
+$$$glossary-metric-aggregation$$$ metric aggregation
+: An aggregation that calculates and tracks metrics for a set of documents.
+
+$$$glossary-module$$$ module
+: Out-of-the-box configurations for common data sources to simplify the collection, parsing, and visualization of logs and metrics.
+
+$$$glossary-monitor$$$ monitor
+: A network endpoint which is monitored to track the performance and availability of applications and services.
+
+$$$glossary-multi-field$$$ multi-field
+: A [field](/reference/glossary/index.md#glossary-field) that's [mapped](/reference/glossary/index.md#glossary-mapping) in multiple ways. See the [`fields` mapping parameter](elasticsearch://docs/reference/elasticsearch/mapping-reference/multi-fields.md).
+
+$$$glossary-multifactor$$$ multifactor authentication (MFA)
+: A security process that requires you to provide two or more verification methods to gain access to web-based user interfaces.
+
+
+## N [n-glos]
+
+$$$glossary-namespace$$$ namespace
+: A user-configurable arbitrary data grouping, such as an environment (`dev`, `prod`, or `qa`), a team, or a strategic business unit.
+
+$$$glossary-nlp$$$ natural language processing (NLP)
+: A {{ml}} feature that enables you to perform operations such as language identification, named entity recognition (NER), text classification, or text embedding. See [NLP overview](/explore-analyze/machine-learning/nlp/ml-nlp-overview.md).
+
+$$$glossary-no-op$$$ no-op
+: In {{ecloud}}, the application of a rolling update on your deployment without actually applying any configuration changes. This type of update can be useful to resolve certain health warnings.
+
+$$$glossary-node$$$ node
+: 1. A single {{es}} server. One or more nodes can form a [cluster](/reference/glossary/index.md#glossary-cluster). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+2. In {{eck}}, it can refer to either an [Elasticsearch Node](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) or a [Kubernetes Node](https://kubernetes.io/docs/concepts/architecture/nodes/) depending on the context. ECK maps an Elasticsearch node to a Kubernetes Pod, which can get scheduled onto any available Kubernetes node that satisfies the [resource requirements](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md) and [node constraints](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) defined in the [pod template](/deploy-manage/deploy/cloud-on-k8s/customize-pods.md).
+
+$$$NodeSet$$$NodeSet
+: A set of Elasticsearch nodes that share the same Elasticsearch configuration and a Kubernetes Pod template. Multiple NodeSets can be defined in the Elasticsearch CRD to achieve a cluster topology consisting of groups of Elasticsearch nodes with different node roles, resource requirements and hardware configurations (Kubernetes node constraints).
+
+## O [o-glos]
+
+$$$glossary-observability$$$ Observability
+: Unifying your logs, metrics, uptime data, and application traces to provide granular insights and context into the behavior of services running in your environments.
+
+$$$OpenShift$$$OpenShift
+: A Kubernetes [platform](https://www.openshift.com/) by Red Hat.
+
+$$$Operator$$$operator
+: A design pattern in Kubernetes for [managing custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/). {{eck}} implements the operator pattern to manage Elasticsearch, Kibana and APM Server resources on Kubernetes.
+
+$$$glossary-output-plugin$$$ output plugin
+: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that writes [event](/reference/glossary/index.md#glossary-event) data to a specific destination. Outputs are the final stage in the event [pipeline](/reference/glossary/index.md#glossary-pipeline). Popular output plugins include elasticsearch, file, graphite, and statsd.
+
+
+## P [p-glos]
+
+$$$glossary-painless-lab$$$ Painless Lab
+: An interactive code editor that lets you test and debug Painless scripts in real-time. See [Painless Lab](/explore-analyze/scripting/painless-lab.md).
+
+$$$glossary-panel$$$ panel
+: A [dashboard](/reference/glossary/index.md#glossary-dashboard) component that contains a query element or visualization, such as a chart, table, or list.
+
+$$$PDB$$$PDB
+: A [pod disruption budget](https://kubernetes.io/docs/reference/glossary/?all=true#term-pod-disruption-budget) in {{eck}}.
+
+$$$glossary-pipeline$$$ pipeline
+: A term used to describe the flow of [events](/reference/glossary/index.md#glossary-event) through the {{ls}} workflow. A pipeline typically consists of a series of input, filter, and output stages: [input](/reference/glossary/index.md#glossary-input-plugin) stages get data from a source and generate events; [filter](/reference/glossary/index.md#glossary-filter-plugin) stages, which are optional, modify the event data; and [output](/reference/glossary/index.md#glossary-output-plugin) stages write the data to a destination. Inputs and outputs support [codecs](/reference/glossary/index.md#glossary-codec-plugin) that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
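+
+A minimal pipeline configuration showing all three stages might look like the following (the port, grok pattern, and hosts are placeholders for your own setup):
+
+```
+input {
+  beats {
+    port => 5044
+  }
+}
+filter {
+  grok {
+    match => { "message" => "%{COMBINEDAPACHELOG}" }
+  }
+}
+output {
+  elasticsearch {
+    hosts => ["localhost:9200"]
+  }
+}
+```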
+
+$$$glossary-plan$$$ plan
+: Specifies the configuration and topology of an {{es}} or {{kib}} cluster, such as capacity, availability, and {{es}} version, for example. When changing a plan, the [constructor](/reference/glossary/index.md#glossary-constructor) determines how to transform the existing cluster into the pending plan.
+
+$$$glossary-plugin-manager$$$ plugin manager
+: Accessed via the `bin/logstash-plugin` script, the plugin manager enables you to manage the lifecycle of [plugins](/reference/glossary/index.md#glossary-plugin) in your {{ls}} deployment. You can install, remove, and upgrade plugins by using the plugin manager Command Line Interface (CLI).
+
+$$$glossary-plugin$$$ plugin
+: A self-contained software package that implements one of the stages in the {{ls}} event processing [pipeline](/reference/glossary/index.md#glossary-pipeline). The list of available plugins includes [input plugins](/reference/glossary/index.md#glossary-input-plugin), [output plugins](/reference/glossary/index.md#glossary-output-plugin), [codec plugins](/reference/glossary/index.md#glossary-codec-plugin), and [filter plugins](/reference/glossary/index.md#glossary-filter-plugin). The plugins are implemented as Ruby [gems](/reference/glossary/index.md#glossary-gem) and hosted on [RubyGems.org](https://rubygems.org). You define the stages of an event processing [pipeline](/reference/glossary/index.md#glossary-pipeline) by configuring plugins.
+
+$$$glossary-primary-shard$$$ primary shard
+: Lucene instance containing some or all data for an [index](/reference/glossary/index.md#glossary-index). When you index a [document](/reference/glossary/index.md#glossary-document), {{es}} adds the document to primary shards before [replica shards](/reference/glossary/index.md#glossary-replica-shard). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+
+$$$glossary-proxy$$$ proxy
+: A highly available, TLS-enabled proxy layer that routes user requests, mapping cluster IDs that are passed in request URLs for the container to the cluster nodes handling the user requests.
+
+$$$PVC$$$PVC
+: A [persistent volume claim](https://kubernetes.io/docs/reference/glossary/?all=true#term-persistent-volume-claim) in {{eck}}.
+
+## Q [q-glos]
+
+$$$QoS$$$QoS
+: Quality of service in {{eck}}. When a Kubernetes cluster is under heavy load, the Kubernetes scheduler makes pod eviction decisions based on the [QoS class of individual pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/). [*Manage compute resources*](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md) explains how to define QoS classes for Elasticsearch, Kibana and APM Server pods.
+
+$$$glossary-query-profiler$$$ Query Profiler
+: A tool that enables you to inspect and analyze search queries to diagnose and debug poorly performing queries. See [Query Profiler](/explore-analyze/query-filter/tools/search-profiler.md).
+
+$$$glossary-query$$$ query
+: Request for information about your data. You can think of a query as a question, written in a way {{es}} understands. See [Search your data](/solutions/search/querying-for-search.md).
+
+
+## R [r-glos]
+
+$$$RBAC$$$RBAC
+: Role-based access control. In {{eck}}, a security mechanism in Kubernetes where access to cluster resources is restricted to principals with the appropriate role. Check the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for more information.
+
+$$$glossary-real-user-monitoring$$$ Real user monitoring (RUM)
+: Performance monitoring, metrics, and error tracking of web applications.
+
+$$$glossary-recovery$$$ recovery
+: Process of syncing a [replica shard](/reference/glossary/index.md#glossary-replica-shard) from a [primary shard](/reference/glossary/index.md#glossary-primary-shard). Upon completion, the replica shard is available for searches.
+
+$$$glossary-reindex$$$ reindex
+: Copies documents from a source to a destination. The source and destination can be a [data stream](/reference/glossary/index.md#glossary-data-stream), [index](/reference/glossary/index.md#glossary-index), or [alias](/reference/glossary/index.md#glossary-alias).
+
+$$$glossary-remote-cluster$$$ remote cluster
+: A separate [cluster](/reference/glossary/index.md#glossary-cluster), often in a different data center or locale, that contains [indices](/reference/glossary/index.md#glossary-index) that can be replicated or searched by the [local cluster](/reference/glossary/index.md#glossary-local-cluster). The connection to a remote cluster is unidirectional. See [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md).
+
+$$$glossary-replica-shard$$$ replica shard
+: Copy of a [primary shard](/reference/glossary/index.md#glossary-primary-shard). Replica shards can improve search performance and resiliency by distributing data across multiple [nodes](/reference/glossary/index.md#glossary-node). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+
+$$$glossary-roles-token$$$ roles token
+: Enables a host to join an existing {{ece}} installation and grants permission to hosts to hold certain roles, such as the [allocator](/reference/glossary/index.md#glossary-allocator) role. Used when installing {{ece}} on additional hosts, a roles token helps secure {{ece}} by making sure that only authorized hosts become part of the installation.
+
+$$$glossary-rollover$$$ rollover
+: Creates a new write index when the current one reaches a certain size, number of docs, or age. A rollover can target a [data stream](/reference/glossary/index.md#glossary-data-stream) or an [alias](/reference/glossary/index.md#glossary-alias) with a write index.
+
+$$$glossary-rollup-index$$$ rollup index
+: Special type of [index](/reference/glossary/index.md#glossary-index) for storing historical data at reduced granularity. Documents are summarized and indexed into a rollup index by a [rollup job](/reference/glossary/index.md#glossary-rollup-job). See [Rolling up historical data](/manage-data/lifecycle/rollup.md).
+
+$$$glossary-rollup-job$$$ {{rollup-job}}
+: Background task that runs continuously to summarize documents in an [index](/reference/glossary/index.md#glossary-index) and index the summaries into a separate rollup index. The job configuration controls what data is rolled up and how often. See [Rolling up historical data](/manage-data/lifecycle/rollup.md).
+
+$$$glossary-rollup$$$ rollup
+: Summarizes high-granularity data into a more compressed format to maintain access to historical data in a cost-effective way. See [Roll up your data](/manage-data/lifecycle/rollup.md).
+
+$$$glossary-routing$$$ routing
+: Process of sending and retrieving data from a specific [primary shard](/reference/glossary/index.md#glossary-primary-shard). {{es}} uses a hashed routing value to choose this shard. You can provide a routing value in [indexing](/reference/glossary/index.md#glossary-index) and search requests to take advantage of caching. See the [`_routing` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-routing-field.md).
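+
+As a rough sketch, the routing formula behaves like a deterministic hash modulo the primary shard count. The following Python toy model illustrates the idea only ({{es}} actually uses murmur3 and its own routing formula, not MD5):
+
+```python
+import hashlib
+
+def pick_shard(routing_value: str, num_primary_shards: int) -> int:
+    # Toy stand-in for Elasticsearch's routing formula (the real one uses murmur3)
+    digest = hashlib.md5(routing_value.encode("utf-8")).hexdigest()
+    return int(digest, 16) % num_primary_shards
+
+# The same routing value always resolves to the same primary shard,
+# which is what makes routed requests cache-friendly
+print(pick_shard("user-42", 5) == pick_shard("user-42", 5))  # True
+```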
+
+$$$glossary-rule$$$ rule
+: A set of [conditions](/reference/glossary/index.md#glossary-condition), schedules, and [actions](/reference/glossary/index.md#glossary-action) that enable notifications. See [{{rules-ui}}](/reference/glossary/index.md#glossary-rules).
+
+$$$glossary-rules$$$ Rules
+: A comprehensive view of all your alerting rules. Enables you to access and manage rules for all {{kib}} apps from one place. See [{{rules-ui}}](/explore-analyze/alerts-cases.md).
+
+$$$glossary-runner$$$ runner
+: A local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to it exist and are able to run, and creates or recreates the containers if necessary.
+
+$$$glossary-runtime-fields$$$ runtime field
+: [Field](/reference/glossary/index.md#glossary-field) that is evaluated at query time. You access runtime fields from the search API like any other field, and {{es}} sees runtime fields no differently. See [Runtime fields](/manage-data/data-store/mapping/runtime-fields.md).
+
+
+## S [s-glos]
+
+$$$glossary-saved-object$$$ saved object
+: A representation of a dashboard, visualization, map, data view, or Canvas workpad that can be stored and reloaded.
+
+$$$glossary-saved-search$$$ saved search
+: The query text, filters, and time filter that make up a search, saved for later retrieval and reuse.
+
+$$$glossary-scripted-field$$$ scripted field
+: A field that computes data on the fly from the data in {{es}} indices. Scripted field data is shown in Discover and used in visualizations.
+
+$$$glossary-search-session$$$ search session
+: A group of one or more queries that are executed asynchronously. The results of the session are stored for a period of time, so you can recall the query. Search sessions are user specific.
+
+$$$glossary-search-template$$$ search template
+: A stored search you can run with different variables. See [Search templates](/solutions/search/search-templates.md).
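+
+For example, you might store a template that takes a `query_string` parameter and supply a value at search time (the script ID, index, and field names here are placeholders):
+
+```console
+PUT _scripts/my-search-template
+{
+  "script": {
+    "lang": "mustache",
+    "source": {
+      "query": {
+        "match": { "message": "{{query_string}}" }
+      }
+    }
+  }
+}
+
+GET my-index/_search/template
+{
+  "id": "my-search-template",
+  "params": {
+    "query_string": "hello world"
+  }
+}
+```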
+
+$$$glossary-searchable-snapshot-index$$$ searchable snapshot index
+: [Index](/reference/glossary/index.md#glossary-index) whose data is stored in a [snapshot](/reference/glossary/index.md#glossary-snapshot). Searchable snapshot indices do not need [replica shards](/reference/glossary/index.md#glossary-replica-shard) for resilience, since their data is reliably stored outside the cluster. See [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md).
+
+$$$glossary-searchable-snapshot$$$ searchable snapshot
+: [Snapshot](/reference/glossary/index.md#glossary-snapshot) of an [index](/reference/glossary/index.md#glossary-index) mounted as a [searchable snapshot index](/reference/glossary/index.md#glossary-searchable-snapshot-index). You can search this index like a regular index. See [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md).
+
+$$$glossary-segment$$$ segment
+: Data file in a [shard](/reference/glossary/index.md#glossary-shard)'s Lucene instance. {{es}} manages Lucene segments automatically.
+
+$$$glossary-services-forwarder$$$ services forwarder
+: Routes data internally in an {{ece}} installation.
+
+$$$glossary-shard$$$ shard
+: Lucene instance containing some or all data for an [index](/reference/glossary/index.md#glossary-index). {{es}} automatically creates and manages these Lucene instances. There are two types of shards: [primary](/reference/glossary/index.md#glossary-primary-shard) and [replica](/reference/glossary/index.md#glossary-replica-shard). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+
+$$$glossary-shareable$$$ shareable
+: A Canvas workpad that can be embedded on any webpage. Shareables enable you to display Canvas visualizations on internal wiki pages or public websites.
+
+$$$glossary-shipper$$$ shipper
+: An instance of {{ls}} that sends events to another instance of {{ls}}, or to some other application.
+
+$$$glossary-shrink$$$ shrink
+: Reduces the number of [primary shards](/reference/glossary/index.md#glossary-primary-shard) in an index.
+
+$$$glossary-snapshot-lifecycle-policy$$$ snapshot lifecycle policy
+: Specifies how frequently to perform automatic backups of a cluster and how long to retain the resulting [snapshots](/reference/glossary/index.md#glossary-snapshot). See [Automate snapshots with {{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm).
+
+$$$glossary-snapshot-repository$$$ snapshot repository
+: Location where [snapshots](/reference/glossary/index.md#glossary-snapshot) are stored. A snapshot repository can be a shared filesystem or a remote repository, such as Azure or Google Cloud Storage. See [Snapshot and restore](/deploy-manage/tools/snapshot-and-restore.md).
+
+$$$glossary-snapshot$$$ snapshot
+: Backup taken of a running [cluster](/reference/glossary/index.md#glossary-cluster). You can take snapshots of the entire cluster or only specific [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index). See [Snapshot and restore](/deploy-manage/tools/snapshot-and-restore.md).
+
+$$$glossary-solution$$$ solution
+: In {{ecloud}}, deployments with specialized [templates](/reference/glossary/index.md#glossary-deployment-template) that are pre-configured with sensible defaults and settings for common use cases.
+
+$$$glossary-source_field$$$ source field
+: Original JSON object provided during [indexing](/reference/glossary/index.md#glossary-index). See the [`_source` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-source-field.md).
+
+$$$glossary-space$$$ space
+: A place for organizing [dashboards](/reference/glossary/index.md#glossary-dashboard), [visualizations](/reference/glossary/index.md#glossary-visualization), and other [saved objects](/reference/glossary/index.md#glossary-saved-object) by category. For example, you might have different spaces for each team, use case, or individual. See [Spaces](/deploy-manage/manage-spaces.md).
+
+$$$glossary-span$$$ span
+: Information about the execution of a specific code path. [Spans](/solutions/observability/apps/spans.md) measure from the start to the end of an activity and can have a parent/child relationship with other spans.
+
+$$$glossary-split$$$ split
+: Adds more [primary shards](/reference/glossary/index.md#glossary-primary-shard) to an [index](/reference/glossary/index.md#glossary-index).
+
+$$$glossary-stack-alert$$$ stack rule
+: The general purpose rule types {{kib}} provides out of the box. Refer to [Stack rules](/explore-analyze/alerts-cases/alerts/rule-types.md#stack-rules).
+
+$$$glossary-standalone$$$ standalone
+: This mode allows manual configuration and management of {{agent}}s locally on the systems where they are installed. See [Install standalone {{agent}}s](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md).
+
+$$$glossary-stunnel$$$ stunnel
+: Securely tunnels all traffic in an {{ece}} installation.
+
+$$$glossary-system-index$$$ system index
+: [Index](/reference/glossary/index.md#glossary-index) containing configurations and other data used internally by the {{stack}}. System index names start with a dot (`.`), such as `.security`. Do not directly access or change system indices.
+
+
+## T [t-glos]
+
+$$$glossary-tag$$$ tag
+: A keyword or label that you assign to {{kib}} saved objects, such as dashboards and visualizations, so you can classify them in a way that is meaningful to you. Tags make it easier for you to manage your content. See [Tags](/explore-analyze/find-and-organize/tags.md).
+
+$$$glossary-term-join$$$ term join
+: A shared key that combines vector features with the results of an {{es}} terms aggregation. Term joins augment vector features with properties for data-driven styling and rich tooltip content in maps.
+
+$$$glossary-term$$$ term
+: See [token](/reference/glossary/index.md#glossary-token).
+
+$$$glossary-text$$$ text
+: Unstructured content, such as a product description or log message. You typically [analyze](/reference/glossary/index.md#glossary-analysis) text for better search. See [Text analysis](/manage-data/data-store/text-analysis.md).
+
+$$$glossary-time-filter$$$ time filter
+: A {{kib}} control that constrains the search results to a particular time period.
+
+$$$glossary-time-series-data-stream$$$ time series data stream
+: A type of [data stream](/reference/glossary/index.md#glossary-data-stream) optimized for indexing metrics [time series data](/reference/glossary/index.md#glossary-time-series-data). A TSDS allows for reduced storage size and for a sequence of metrics data points to be considered efficiently as a whole. See [Time series data stream](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md).
+
+$$$glossary-time-series-data$$$ time series data
+: A series of data points, such as logs, metrics, and events, that is indexed in time order. Time series data can be indexed in a [data stream](/reference/glossary/index.md#glossary-data-stream), where it can be accessed as a single named resource with the data stored across multiple backing indices. A [time series data stream](/reference/glossary/index.md#glossary-time-series-data-stream) is optimized for indexing metrics data.
+
+$$$glossary-timelion$$$ Timelion
+: A tool for building a time series visualization that analyzes data in time order. See [Timelion](/explore-analyze/dashboards.md).
+
+$$$glossary-token$$$ token
+: A chunk of unstructured [text](/reference/glossary/index.md#glossary-text) that's been optimized for search. In most cases, tokens are individual words. Tokens are also called terms. See [Text analysis](/manage-data/data-store/text-analysis.md).
+
+$$$glossary-tokenization$$$ tokenization
+: Process of breaking unstructured text down into smaller, searchable chunks called [tokens](/reference/glossary/index.md#glossary-token). See [Tokenization](/manage-data/data-store/text-analysis.md#tokenization).
+
+$$$glossary-trace$$$ trace
+: Defines the amount of time an application spends on a request. Traces are made up of a collection of transactions and spans that have a common root.
+
+$$$glossary-tracks$$$ tracks
+: A layer type in the **Maps** application. This layer converts a series of point locations into a line, often representing a path or route.
+
+$$$glossary-trained-model$$$ trained model
+: A {{ml}} model that is trained and tested against a labeled data set and can be referenced in an ingest pipeline or in a pipeline aggregation to perform {{classification}} or {{reganalysis}} or [{{nlp}}](/reference/glossary/index.md#glossary-nlp) on new data.
+
+$$$glossary-transaction$$$ transaction
+: A special kind of [span](/reference/glossary/index.md#glossary-span) that has additional attributes associated with it. [Transactions](/solutions/observability/apps/transactions.md) describe an event captured by an Elastic [APM agent](/reference/glossary/index.md#glossary-apm-agent) instrumenting a service.
+
+$$$glossary-tsvb$$$ TSVB
+: A time series data visualizer that allows you to combine an infinite number of aggregations to display complex data. See [TSVB](/explore-analyze/dashboards.md).
+
+
+## U [u-glos]
+
+$$$glossary-upgrade-assistant$$$ Upgrade Assistant
+: A tool that helps you prepare for an upgrade to the next major version of {{es}}. The assistant identifies the deprecated settings in your cluster and indices and guides you through resolving issues, including reindexing. See [Upgrade Assistant](/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md).
+
+$$$glossary-uptime$$$ Uptime
+: A metric of system reliability used to monitor the status of network endpoints via HTTP/S, TCP, and ICMP.
+
+
+## V [v-glos]
+
+$$$glossary-vcpu$$$ vCPU
+: vCPU stands for virtual central processing unit. In {{ecloud}}, vCPUs are virtual compute units assigned to your nodes. The value is dependent on the size and hardware profile of the instance. The instance may be eligible for vCPU boosting depending on the size.
+
+$$$glossary-vector$$$ vector data
+: Points, lines, and polygons used to represent a map.
+
+$$$glossary-vega$$$ Vega
+: A declarative language used to create interactive visualizations. See [Vega](/explore-analyze/dashboards.md).
+
+$$$glossary-visualization$$$ visualization
+: A graphical representation of query results in {{kib}} (e.g., a histogram, line graph, pie chart, or heat map).
+
+
+## W [w-glos]
+
+$$$glossary-warm-phase$$$ warm phase
+: Second possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the warm phase, an [index](/reference/glossary/index.md#glossary-index) is generally optimized for search and no longer updated. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
+
+$$$glossary-warm-tier$$$ warm tier
+: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed less frequently and rarely needs to be updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).
+
+$$$glossary-watcher$$$ Watcher
+: The original suite of alerting features. See [Watcher](/explore-analyze/alerts-cases/watcher.md).
+
+$$$glossary-wms$$$ Web Map Service (WMS)
+: A layer type in the **Maps** application. Add a WMS source to provide authoritative geographic context to your map. See the [OpenGIS Web Map Service](https://www.ogc.org/standards/wms).
+
+$$$glossary-worker$$$ worker
+: The filter thread model used by {{ls}}, where each worker receives an [event](/reference/glossary/index.md#glossary-event) and applies all filters, in order, before emitting the event to the output queue. This allows scalability across CPUs because many filters are CPU intensive.
+
+$$$glossary-workpad$$$ workpad
+: A workspace where you build presentations of your live data in [Canvas](/reference/glossary/index.md#glossary-canvas). See [Create a workpad](/explore-analyze/visualize/canvas.md).
+
+
+## X [x-glos]
+
+
+## Y [y-glos]
+
+
+## Z [z-glos]
+
+$$$glossary-zookeeper$$$ ZooKeeper
+: A coordination service for distributed systems used by {{ece}} to store the state of the installation. Responsible for discovery of hosts, resource allocation, leader election after failure, and high-priority notifications.
diff --git a/reference/ingestion-tools.md b/reference/ingestion-tools.md
new file mode 100644
index 0000000000..03879ee6be
--- /dev/null
+++ b/reference/ingestion-tools.md
@@ -0,0 +1,16 @@
+# Ingestion tools
+
+% TO-DO: Add links to "What are Ingestion tools?"%
+
+This section contains reference information for ingestion tools, including:
+
+* Fleet and Elastic Agent
+* APM
+* Beats
+* Enrich processor reference
+* Logstash
+* Elastic Serverless forwarder for AWS
+* Search connectors
+* ES Hadoop
+
+This section is intended for developers who want to work with the ingestion tools; it doesn't cover the API client libraries.
\ No newline at end of file
diff --git a/reference/ingestion-tools/apm/apm-agents.md b/reference/ingestion-tools/apm/apm-agents.md
new file mode 100644
index 0000000000..2a793b03d0
--- /dev/null
+++ b/reference/ingestion-tools/apm/apm-agents.md
@@ -0,0 +1,16 @@
+# APM agents
+
+% TO-DO: Add links to "APM basics"%
+
+This section contains reference information for APM Agents features, including:
+
+* Android
+* .NET
+* Go
+* Java
+* Node.js
+* PHP
+* Python
+* Ruby
+* RUM JavaScript
+* iOS
\ No newline at end of file
diff --git a/reference/ingestion-tools/cloud-enterprise/apm-settings.md b/reference/ingestion-tools/cloud-enterprise/apm-settings.md
new file mode 100644
index 0000000000..25e9bf76d6
--- /dev/null
+++ b/reference/ingestion-tools/cloud-enterprise/apm-settings.md
@@ -0,0 +1,97 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-apm-settings.html#ece_logging_settings_legacy
+---
+
+# APM settings for Elastic Cloud Enterprise [ece-manage-apm-settings]
+
+Starting in {{stack}} version 8.0, how you change APM settings and the settings that are available to you depend on how you spin up Elastic APM. There are two modes:
+
+{{fleet}}-managed APM integration
+: New deployments created in {{stack}} version 8.0 and later will be managed by {{fleet}}.
+
+ * This mode requires SSL/TLS configuration. Check [TLS configuration for {{fleet}}-managed mode](#ece-edit-apm-fleet-tls) for details.
+ * Check [APM integration input settings](/solutions/observability/apps/configure-apm-server.md) for all other Elastic APM configuration options in this mode.
+
+
+Standalone APM Server (legacy)
+: Deployments created prior to {{stack}} version 8.0 are in legacy mode. Upgrading to or past {{stack}} 8.0 does not remove you from legacy mode.
+
+    Check [Edit standalone APM settings (legacy)](#ece-edit-apm-standalone-settings-ece) for information on how to configure Elastic APM in this mode.
+
+
+To learn more about the differences between these modes, or to switch from Standalone APM Server (legacy) mode to {{fleet}}-managed, check [Switch to the Elastic APM integration](/solutions/observability/apps/switch-to-elastic-apm-integration.md).
+
+
+## TLS configuration for {{fleet}}-managed mode [ece-edit-apm-fleet-tls]
+
+Users running {{stack}} versions 7.16 or 7.17 need to manually configure TLS. This step is not necessary for {{stack}} versions ≥ 8.0.
+
+Pick one of the following options:
+
+1. Upload and configure publicly signed {{es}} TLS certificates. Check [Encrypt traffic in clusters with a self-managed Fleet Server](/reference/ingestion-tools/fleet/secure-connections.md) for details.
+2. Change the {{es}} hosts where {{agent}}s send data from the default public URL, to the internal URL. In {{kib}}, navigate to **Fleet** and select the **Elastic Cloud agent policy**. Click **Fleet settings** and update the {{es}} hosts URL. For example, if the current URL is `https://123abc.us-central1.gcp.foundit.no:9244`, change it to `http://123abc.containerhost:9244`.
+
+
+## Edit standalone APM settings (legacy) [ece-edit-apm-standalone-settings-ece]
+
+Elastic Cloud Enterprise supports most of the legacy APM settings. Through a YAML editor in the console, you can append your APM Server properties to the `apm-server.yml` file. Your changes to the configuration file are read on startup.
+
+::::{important}
+Be aware that some settings could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest APM settings and syntax](/solutions/observability/apps/configure-apm-server.md).
+::::
+
+
+To change APM settings:
+
+1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md).
+2. On the **Deployments** page, select your deployment.
+
+    Narrow the list by name or ID, or choose from several other filters. To further define the list, use a combination of filters.
+
+3. From your deployment menu, go to the **Edit** page.
+4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.)
+5. Update the user settings.
+6. Select **Save changes**.
+
+::::{note}
+If a setting is not supported by Elastic Cloud Enterprise, you get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported.
+::::
+
+
+
+## Example: Enable RUM and increase the rate limit (legacy) [ece_example_enable_rum_and_increase_the_rate_limit_legacy]
+
+When capturing user interactions with real user monitoring (RUM), particularly in situations with many concurrent clients, you can increase the number of times each IP address can send a request to the RUM endpoint. Version 6.5 includes an additional setting for the LRU cache.
+
+For APM Server with RUM agent version 2.x or 3.x:
+
+```yaml
+apm-server:
+  rum:
+    enabled: true
+    event_rate:
+      limit: 3000
+      lru_size: 5000
+```
+
+
+## Example: Disable RUM (legacy) [ece_example_disable_rum_legacy]
+
+If you know that you won’t be tracking RUM data, you can disable the endpoint proactively.
+
+```yaml
+apm-server:
+  rum:
+    enabled: false
+```
+
+
+## Example: Adjust the event limits configuration (legacy) [ece_example_adjust_the_event_limits_configuration_legacy]
+
+If the size of the HTTP request frequently exceeds the maximum, you might need to change the limit on the APM Server and adjust the relevant settings in the agent.
+
+```yaml
+apm-server:
+  max_event_size: 407200
+```
diff --git a/reference/ingestion-tools/cloud/apm-settings.md b/reference/ingestion-tools/cloud/apm-settings.md
new file mode 100644
index 0000000000..91c872df5a
--- /dev/null
+++ b/reference/ingestion-tools/cloud/apm-settings.md
@@ -0,0 +1,374 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/cloud/current/ec-manage-apm-settings.html#ec-apm-settings
+---
+
+# APM settings for Elastic Cloud [ec-manage-apm-settings]
+
+Change how Elastic APM runs by providing your own user settings. Starting in {{stack}} version 8.0, how you change APM settings and the settings that are available to you depend on how you spin up Elastic APM. There are two modes:
+
+{{fleet}}-managed APM integration
+: New deployments created in {{stack}} version 8.0 and later will be managed by {{fleet}}.
+
+ Check [APM configuration reference](/solutions/observability/apps/configure-apm-server.md) for information on how to configure Elastic APM in this mode.
+
+
+Standalone APM Server (legacy)
+: Deployments created prior to {{stack}} version 8.0 are in legacy mode. Upgrading to or past {{stack}} 8.0 will not remove you from legacy mode.
+
+ Check [Edit standalone APM settings (legacy)](#ec-edit-apm-standalone-settings) and [Supported standalone APM settings (legacy)](#ec-apm-settings) for information on how to configure Elastic APM in this mode.
+
+
+To learn more about the differences between these modes, or to switch from Standalone APM Server (legacy) mode to {{fleet}}-managed, check [Switch to the Elastic APM integration](/solutions/observability/apps/switch-to-elastic-apm-integration.md).
+
+## Edit standalone APM settings (legacy) [ec-edit-apm-standalone-settings]
+
+User settings are appended to the `apm-server.yml` configuration file for your instance and provide custom configuration options.
+
+To add user settings:
+
+1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments.
+
+    On the deployments page, you can narrow the list by name or ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+3. From your deployment menu, go to the **Edit** page.
+4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.)
+5. Update the user settings.
+6. Select **Save changes**.
+
+::::{note}
+If a setting is not supported by Elasticsearch Service, you will get an error message when you try to save.
+::::
+
+
+
+## Supported standalone APM settings (legacy) [ec-apm-settings]
+
+Elasticsearch Service supports the following settings when running APM in standalone mode (legacy).
+
+::::{tip}
+Some settings that could break your cluster if set incorrectly are blocklisted. The following settings are generally safe in cloud environments. For detailed information about APM settings, check the [APM documentation](/solutions/observability/apps/configure-apm-server.md).
+::::
+
+
+### Version 8.0+ [ec_version_8_0_3]
+
+This stack version removes support for some previously supported settings. These are all of the supported settings for this version:
+
+`apm-server.agent.config.cache.expiration`
+: When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`.
+
+`apm-server.aggregation.transactions.*`
+: This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions.
+
+The following `apm-server.auth.anonymous.*` settings can be configured to restrict anonymous access to specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching `allow_agent` and services matching `allow_service` are allowed. See below for details on default values for these.
+
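+A minimal sketch of how these anonymous auth settings might look together in `apm-server.yml` (the agent and service names here are hypothetical placeholders, not defaults):
+
+```yaml
+apm-server:
+  auth:
+    anonymous:
+      allow_agent: [rum-js]
+      allow_service: [my-frontend-service]
+      rate_limit:
+        event_limit: 300
+        ip_limit: 1000
+```
+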
+`apm-server.auth.anonymous.allow_agent`
+: Allow anonymous access only for specified agents.
+
+`apm-server.auth.anonymous.allow_service`
+: Allow anonymous access only for specified service names. By default, all service names are allowed. This replaces the config option `apm-server.rum.allow_service_names`, previously available for `7.x` deployments.
+
+`apm-server.auth.anonymous.rate_limit.event_limit`
+: Defines the maximum number of events allowed per IP per second. Defaults to 300. The overall maximum event throughput for anonymous access is (event_limit * ip_limit). This replaces the config option `apm-server.rum.event_rate.limit`, previously available for `7.x` deployments.
+
+`apm-server.auth.anonymous.rate_limit.ip_limit`
+: Defines the number of unique client IP addresses tracked for rate limiting. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. This replaces the config option `apm-server.rum.event_rate.lru_size`, previously available for `7.x` deployments.
+
+`apm-server.auth.api_key.enabled`
+: Enables agent authorization using Elasticsearch API Keys. This replaces the config option `apm-server.api_key.enabled`, previously available for `7.x` deployments.
+
+`apm-server.auth.api_key.limit`
+: Restricts how many unique API keys are allowed per minute. Set this to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This replaces the config option `apm-server.api_key.limit`, previously available for `7.x` deployments.
+
+`apm-server.capture_personal_data`
+: When set to `true`, the server captures the IP of the instrumented service and its User Agent. Enabled by default.
+
+`apm-server.default_service_environment`
+: If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent.
+
+`apm-server.max_event_size`
+: Specifies the maximum allowed size of an event for processing by the server, in bytes. Defaults to `307200`.
+
+`apm-server.rum.allow_headers`
+: A list of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept".
+
+`apm-server.rum.allow_origins`
+: A list of permitted origins for real user monitoring. User-agents will send an origin header that will be validated against this list. An origin is made of a protocol scheme, host, and port, without the URL path. Allowed origins in this setting can have a wildcard `*` to match anything (for example: `http://*.example.com`). If an item in the list is a single `*`, all origins will be allowed.
+
+`apm-server.rum.enabled`
+: Enables Real User Monitoring (RUM) support. By default, RUM is enabled. RUM does not support token-based authorization; enabled RUM endpoints do not require any authorization configured for other endpoints.
+
+`apm-server.rum.exclude_from_grouping`
+: A regexp to be matched against a stacktrace frame’s `file_name`. If the regexp matches, the stacktrace frame is not used for calculating error groups. The default pattern excludes stacktrace frames that have a filename starting with `/webpack`.
+
+`apm-server.rum.library_pattern`
+: A regexp to be matched against a stacktrace frame’s `file_name` and `abs_path` attributes. If the regexp matches, the stacktrace frame is considered to be a library frame.
+
+`apm-server.rum.source_mapping.enabled`
+: If a source map has previously been uploaded, source mapping is automatically applied to all error and transaction documents sent to the RUM endpoint. Source mapping is enabled by default when RUM is enabled.
+
+`apm-server.rum.source_mapping.cache.expiration`
+: The `cache.expiration` determines how long a source map should be cached in memory. Note that values configured without a time unit will be interpreted as seconds.
+
+`apm-server.sampling.tail.enabled`
+: Set to `true` to enable tail based sampling. Disabled by default.
+
+`apm-server.sampling.tail.policies`
+: Criteria used to match a root transaction to a sample rate.
+
+`apm-server.sampling.tail.interval`
+: Synchronization interval for multiple APM Servers. Should be on the order of tens of seconds or a few minutes.
+
+`logging.level`
+: Sets the minimum log level. The default log level is error. Available log levels are: error, warning, info, or debug.
+
+`logging.selectors`
+: Enable debug output for selected components. To enable all selectors use ["*"]. Other available selectors are "beat", "publish", or "service". Multiple selectors can be chained.
+
+`logging.metrics.enabled`
+: If enabled, apm-server periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. The default is false.
+
+`logging.metrics.period`
+: The period after which to log the internal metrics. The default is 30s.
+
+`max_procs`
+: Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system.
+
+`output.elasticsearch.flush_interval`
+: The maximum duration to accumulate events for a bulk request before being flushed to Elasticsearch. The value must have a duration suffix. The default is 1s.
+
+`output.elasticsearch.flush_bytes`
+: The bulk request size threshold, in bytes, before flushing to Elasticsearch. The value must have a suffix. The default is 5MB.
+
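+As one illustration, the tail-based sampling settings above might be combined like this (the service name and sample rates are hypothetical, not recommendations):
+
+```yaml
+apm-server:
+  sampling:
+    tail:
+      enabled: true
+      interval: 1m
+      policies:
+        - service.name: checkout
+          sample_rate: 0.1
+        - sample_rate: 0.5
+```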
+
+### Version 7.17+ [ec_version_7_17]
+
+This stack version includes all of the settings from 7.16 and the following:
+
+Allow anonymous access only for specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching `allow_agent` and services matching `allow_service` are allowed. See below for details on default values for these.
+
+`apm-server.auth.anonymous.allow_agent`
+: Allow anonymous access only for specified agents.
+
+`apm-server.auth.anonymous.allow_service`
+: Allow anonymous access only for specified service names. By default, all service names are allowed. This will replace the config option `apm-server.rum.allow_service_names` from `8.0` on.
+
+`apm-server.auth.anonymous.rate_limit.event_limit`
+: Defines the maximum number of events allowed per IP per second. Defaults to 300. The overall maximum event throughput for anonymous access is (event_limit * ip_limit). This will replace the config option `apm-server.rum.event_rate.limit` from `8.0` on.
+
+`apm-server.auth.anonymous.rate_limit.ip_limit`
+: Defines the number of unique client IP addresses tracked for rate limiting. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. This will replace the config option `apm-server.rum.event_rate.lru_size` from `8.0` on.
+
+`apm-server.auth.api_key.enabled`
+: Enables agent authorization using Elasticsearch API Keys. This will replace the config option `apm-server.api_key.enabled` from `8.0` on.
+
+`apm-server.auth.api_key.limit`
+: Restricts how many unique API keys are allowed per minute. Set this to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This will replace the config option `apm-server.api_key.limit` from `8.0` on.
+
+
+### Supported versions before 8.x [ec_supported_versions_before_8_x_3]
+
+`apm-server.aggregation.transactions.*`
+: This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions.
+
+`apm-server.default_service_environment`
+: If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent.
+
+`apm-server.rum.allow_service_names`
+: A list of service names to allow, to limit service-specific indices and data streams created for unauthenticated RUM events. If the list is empty, any service name is allowed.
+
+`apm-server.ilm.setup.mapping`
+: ILM policies now support configurable index suffixes. You can append the `policy_name` with an `index_suffix` based on the `event_type`, which can be one of `span`, `transaction`, `error`, or `metric`.
+
+`apm-server.rum.allow_headers`
+: List of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept".
+
+`setup.template.append_fields`
+: A list of fields to be added to the Elasticsearch template and Kibana data view (formerly *index pattern*).
+
+`apm-server.api_key.enabled`
+: Enabled by default. For any requests where APM Server accepts a `secret_token` in the authorization header, it now alternatively accepts an API Key.
+
+`apm-server.api_key.limit`
+: Configure how many unique API keys are allowed per minute. Should be set to at least the amount of different API keys used in monitored services. Default value is 100.
+
+`apm-server.ilm.setup.enabled`
+: When enabled, APM Server creates aliases, event-type-specific settings, and ILM policies. If disabled, event-type-specific templates need to be managed manually.
+
+`apm-server.ilm.setup.overwrite`
+: Set to `true` to apply custom policies and to properly overwrite templates when switching between using ILM and not using ILM.
+
+`apm-server.ilm.setup.require_policy`
+: Set to `false` when policies are set up outside of APM Server but referenced in this configuration.
+
+`apm-server.ilm.setup.policies`
+: Array of ILM policies. Each entry has a `name` and a `policy`.
+
+`apm-server.ilm.setup.mapping`
+: Array of mappings of ILM policies to event types. Each entry has a `policy_name` and an `event_type`, which can be one of `span`, `transaction`, `error`, or `metric`.
+
+`apm-server.rum.source_mapping.enabled`
+: When events are monitored using the RUM agent, APM Server tries to apply source mapping by default. This configuration option allows you to disable source mapping on stack traces.
+
+`apm-server.rum.source_mapping.cache.expiration`
+: Sets how long a source map should be cached before being refetched from Elasticsearch. Default value is 5m.
+
+`output.elasticsearch.pipeline`
+: APM comes with a default pipeline definition. This setting allows overriding it. To disable, set `pipeline: _none`.
+
+`apm-server.agent.config.cache.expiration`
+: When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`.
+
+`apm-server.ilm.enabled`
+: Enables index lifecycle management (ILM) for the indices created by the APM Server. Defaults to `false`. If you’re updating an existing APM Server, you must also set `setup.template.overwrite: true`. If you don’t, the index template will not be overridden and ILM changes will not take effect.
+
+`apm-server.max_event_size`
+: Specifies the maximum allowed size of an event for processing by the server, in bytes. Defaults to `307200`.
+
+`output.elasticsearch.pipelines`
+: Adds an array for pipeline selector configurations that support conditionals, format string-based field access, and name mappings used to [parse data using ingest node pipelines](/solutions/observability/apps/application-performance-monitoring-apm.md).
+
+`apm-server.register.ingest.pipeline.enabled`
+: Loads the pipeline definitions to Elasticsearch when the APM Server starts up. Defaults to `false`.
+
+`apm-server.register.ingest.pipeline.overwrite`
+: Overwrites the existing pipeline definitions in Elasticsearch. Defaults to `true`.
+
+`apm-server.rum.event_rate.lru_size`
+: Defines the number of unique IP addresses that can be tracked in the LRU cache, which keeps a rate limit for each of the most recently seen IP addresses. Defaults to `1000`.
+
+`apm-server.rum.event_rate.limit`
+: Sets the rate limit per second for each IP address for events sent to the APM Server v2 RUM endpoint. Defaults to `300`.
+
+`apm-server.rum.enabled`
+: Enables/disables Real User Monitoring (RUM) support. Defaults to `true` (enabled).
+
+`apm-server.rum.allow_origins`
+: Specifies a list of permitted origins from user agents. The default is `*`, which allows everything.
+
+`apm-server.rum.library_pattern`
+: Differentiates library frames against specific attributes. Refer to "Configure Real User Monitoring (RUM)" in the [Observability Guide](/solutions/observability.md) to learn more. The default value is `"node_modules|bower_components|~"`.
+
+`apm-server.rum.exclude_from_grouping`
+: Configures the RegExp to be matched against a stacktrace frame’s `file_name`.
+
+`apm-server.rum.rate_limit`
+: Sets the rate limit per second for each IP address for requests sent to the RUM endpoint. Defaults to `10`.
+
+`apm-server.capture_personal_data`
+: When set to `true`, the server captures the IP of the instrumented service and its User Agent. Enabled by default.
+
+`setup.template.settings.index.number_of_shards`
+: Specifies the number of shards for the Elasticsearch template.
+
+`setup.template.settings.index.number_of_replicas`
+: Specifies the number of replicas for the Elasticsearch template.
+
+`apm-server.frontend.enabled`
+: Enables/disables frontend support.
+
+`apm-server.frontend.allow_origins`
+: Specifies the comma-separated list of permitted origins from user agents. The default is `*`, which allows everything.
+
+`apm-server.frontend.library_pattern`
+: Differentiates library frames against [specific attributes](https://www.elastic.co/guide/en/apm/server/6.3/configuration-frontend.html). The default value is `"node_modules|bower_components|~"`.
+
+`apm-server.frontend.exclude_from_grouping`
+: Configures the RegExp to be matched against a stacktrace frame’s `file_name`.
+
+`apm-server.frontend.rate_limit`
+: Sets the rate limit per second per IP address for requests sent to the frontend endpoint. Defaults to `10`.
+
+`max_procs`
+: Max number of CPUs used simultaneously. Defaults to the number of logical CPUs available.
+
+`setup.template.enabled`
+: Set to `false` to disable loading of Elasticsearch templates used for APM indices. If set to `false`, you must load the template manually.
+
+`setup.template.name`
+: Name of the template. Defaults to `apm-server`.
+
+`setup.template.pattern`
+: The template pattern to apply to the default index settings. Default is `apm-*`.
+
+`output.elasticsearch.bulk_max_size`
+: Maximum number of events to bulk together in a single Elasticsearch bulk API request. By default, this number changes based on the size of the instance:
+
+ | Instance size | Default max events |
+ | --- | --- |
+ | 512MB | 267 |
+ | 1GB | 381 |
+ | 2GB | 533 |
+ | 4GB | 762 |
+ | 8GB | 1067 |
+
+
+`output.elasticsearch.indices`
+: Array of index selector rules supporting conditionals and formatted string.
+
+`output.elasticsearch.index`
+: The index to write the events to. If changed, `setup.template.name` and `setup.template.pattern` must be changed accordingly.
+
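+For example, if you use a custom index name, keep the template name and pattern aligned with it. This is a minimal sketch using an illustrative `apm-custom` name (not a recommendation):
+
+```yaml
+# Illustrative only: when changing the index, the template name and
+# pattern must be changed so the template still matches the index.
+output.elasticsearch.index: "apm-custom"
+setup.template.name: "apm-custom"
+setup.template.pattern: "apm-custom*"
+```
+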
+`output.elasticsearch.worker`
+: Maximum number of concurrent workers publishing events to Elasticsearch. By default, this number changes based on the size of the instance:
+
+ | Instance size | Default max concurrent workers |
+ | --- | --- |
+ | 512MB | 5 |
+ | 1GB | 7 |
+ | 2GB | 10 |
+ | 4GB | 14 |
+ | 8GB | 20 |
+
+
+`queue.mem.events`
+: Maximum number of events to concurrently store in the internal queue. By default, this number changes based on the size of the instance:
+
+ | Instance size | Default max events |
+ | --- | --- |
+ | 512MB | 2000 |
+ | 1GB | 4000 |
+ | 2GB | 8000 |
+ | 4GB | 16000 |
+ | 8GB | 32000 |
+
+
+`queue.mem.flush.min_events`
+: Minimum number of events to have before pushing them to Elasticsearch. By default, this number changes based on the size of the instance.
+
+`queue.mem.flush.timeout`
+: Maximum duration before sending the events to the output if `min_events` is not reached.
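+
+Taken together, the queue settings above might be tuned as in the following sketch. The values are illustrative only; the defaults depend on the instance size:
+
+```yaml
+# Illustrative values, not recommendations.
+queue.mem.events: 8000              # internal queue capacity
+queue.mem.flush.min_events: 1000    # batch size before flushing
+queue.mem.flush.timeout: 1s         # flush even if min_events is not reached
+```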
+
+
+### Logging settings [ec_logging_settings]
+
+`logging.level`
+: Specifies the minimum log level. One of *debug*, *info*, *warning*, or *error*. Defaults to *info*.
+
+`logging.selectors`
+: The list of debugging-only selector tags used by different APM Server components. Use `*` to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing.
+
+`logging.metrics.enabled`
+: If enabled, APM Server periodically logs its internal metrics that have changed in the last period. Defaults to *true*.
+
+`logging.metrics.period`
+: The period after which to log the internal metrics. Defaults to *30s*.
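+
+As a hedged example, the logging settings above combine as follows (the values shown are illustrative):
+
+```yaml
+logging.level: debug
+logging.selectors: ["publish"]   # debug output for event publishing only
+logging.metrics.enabled: true
+logging.metrics.period: 30s
+```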
+
+::::{note}
+To change logging settings you must first [enable deployment logging](/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md).
+::::
+
+
+
+
diff --git a/reference/ingestion-tools/fleet/_agent_configuration_encryption.md b/reference/ingestion-tools/fleet/_agent_configuration_encryption.md
new file mode 100644
index 0000000000..102138eabd
--- /dev/null
+++ b/reference/ingestion-tools/fleet/_agent_configuration_encryption.md
@@ -0,0 +1,26 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/_elastic_agent_configuration_encryption.html
+---
+
+# {{agent}} configuration encryption [_agent_configuration_encryption]
+
+It is important for you to understand the {{agent}} security model and how it handles sensitive values in integration configurations. At a high level, {{agent}} receives configuration data from {{fleet-server}} over an encrypted connection and persists the encrypted configuration on disk. This persistence allows agents to continue to operate even if they are unable to connect to the {{fleet-server}}.
+
+The entire Fleet Agent Policy is encrypted at rest, but is recoverable if you have access to both the encrypted configuration data and the associated key. The key material is stored in an OS-dependent manner as described in the following sections.
+
+
+## Darwin (macOS) [_darwin_macos]
+
+Key material is stored in the system keychain. The value is stored as is without any additional transformations.
+
+
+## Windows [_windows]
+
+Configuration data is encrypted with [DPAPI](https://learn.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection) `CryptProtectData` using the `CRYPTPROTECT_LOCAL_MACHINE` flag. Additional entropy is derived from crypto/rand bytes stored in the `.seed` file. Configuration data is stored as separate files, where the name of each file is a SHA256 hash of the key, and the content of the file is DPAPI-encrypted. The security of the key data relies on file system permissions: only the Administrator should be able to access the file.
+
+
+## Linux [_linux]
+
+The encryption key is derived, through a PBKDF2 transformation, from crypto/rand bytes stored in the `.seed` file. Configuration data is stored as separate files, where the name of each file is a SHA256 hash of the key, and the content of the file is AES256-GCM encrypted. The security of the key material largely relies on file system permissions.
+
diff --git a/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md b/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md
new file mode 100644
index 0000000000..2775044934
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md
@@ -0,0 +1,182 @@
+---
+navigation_title: "add_cloud_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-cloud-metadata-processor.html
+---
+
+# Add cloud metadata [add-cloud-metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_cloud_metadata` processor enriches each event with instance metadata from the machine’s hosting provider. At startup the processor queries a list of hosting providers and caches the instance metadata.
+
+The following providers are supported:
+
+* Amazon Web Services (AWS)
+* Digital Ocean
+* Google Compute Engine (GCE)
+* [Tencent Cloud](https://www.qcloud.com/?lang=en) (QCloud)
+* Alibaba Cloud (ECS)
+* Huawei Cloud (ECS)
+* Azure Virtual Machine
+* Openstack Nova
+
+The Alibaba Cloud and Tencent providers are disabled by default, because they require access to a remote host. Use the `providers` setting to select the list of providers to query.
+
+
+## Example [_example_2]
+
+This configuration enables the processor:
+
+```yaml
+ - add_cloud_metadata: ~
+```
+
+The metadata that is added to events varies by hosting provider. For examples, refer to [Provider-specific metadata examples](#provider-specific-examples).
+
+
+## Configuration settings [_configuration_settings]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `timeout` | No | `3s` | Maximum amount of time to wait for a successful response when detecting the hosting provider. If a timeout occurs, no instance metadata is added to the events. This makes it possible to enable this processor for all your deployments (in the cloud or on-premise). |
+| `providers` | No | | List of provider names to use. If `providers` is not configured, all providers that do not access a remote endpoint are enabled by default. The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, by setting it to a comma-separated list of provider names. The list of supported provider names includes:<br><br>* `alibaba` or `ecs` for the Alibaba Cloud provider (disabled by default).<br>* `azure` for Azure Virtual Machine (enabled by default).<br>* `digitalocean` for Digital Ocean (enabled by default).<br>* `aws` or `ec2` for Amazon Web Services (enabled by default).<br>* `gcp` for Google Compute Engine (enabled by default).<br>* `openstack` or `nova` for Openstack Nova (enabled by default).<br>* `openstack-ssl` or `nova-ssl` for Openstack Nova when SSL metadata APIs are enabled (enabled by default).<br>* `tencent` or `qcloud` for Tencent Cloud (disabled by default).<br>* `huawei` for Huawei Cloud (enabled by default). |
+| `overwrite` | No | `false` | Whether to overwrite existing cloud fields. If `true`, the processor overwrites existing `cloud.*` fields. |
+
+The `add_cloud_metadata` processor supports SSL options to configure the HTTP client used to query cloud metadata.
+
+For more information, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options).
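+
+For example, the following sketch restricts detection to specific providers and shortens the probe timeout. The values are illustrative, not recommendations:
+
+```yaml
+  - add_cloud_metadata:
+      timeout: 1s        # fail fast on non-cloud hosts
+      providers:
+        - aws
+        - azure
+```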
+
+
+## Provider-specific metadata examples [provider-specific-examples]
+
+The following sections show examples for each of the supported providers.
+
+
+### AWS [_aws]
+
+```json
+{
+ "cloud": {
+ "account.id": "123456789012",
+ "availability_zone": "us-east-1c",
+ "instance.id": "i-4e123456",
+ "machine.type": "t2.medium",
+ "image.id": "ami-abcd1234",
+ "provider": "aws",
+ "region": "us-east-1"
+ }
+}
+```
+
+
+### Digital Ocean [_digital_ocean]
+
+```json
+{
+ "cloud": {
+ "instance.id": "1234567",
+ "provider": "digitalocean",
+ "region": "nyc2"
+ }
+}
+```
+
+
+### GCP [_gcp]
+
+```json
+{
+ "cloud": {
+ "availability_zone": "us-east1-b",
+ "instance.id": "1234556778987654321",
+ "machine.type": "f1-micro",
+ "project.id": "my-dev",
+ "provider": "gcp"
+ }
+}
+```
+
+
+### Tencent Cloud [_tencent_cloud]
+
+```json
+{
+ "cloud": {
+ "availability_zone": "gz-azone2",
+ "instance.id": "ins-qcloudv5",
+ "provider": "qcloud",
+ "region": "china-south-gz"
+ }
+}
+```
+
+
+### Huawei Cloud [_huawei_cloud]
+
+```json
+{
+ "cloud": {
+ "availability_zone": "cn-east-2b",
+ "instance.id": "37da9890-8289-4c58-ba34-a8271c4a8216",
+ "provider": "huawei",
+ "region": "cn-east-2"
+ }
+}
+```
+
+
+### Alibaba Cloud [_alibaba_cloud]
+
+This metadata is only available when VPC is selected as the network type of the ECS instance.
+
+```json
+{
+ "cloud": {
+ "availability_zone": "cn-shenzhen",
+ "instance.id": "i-wz9g2hqiikg0aliyun2b",
+ "provider": "ecs",
+ "region": "cn-shenzhen-a"
+ }
+}
+```
+
+
+### Azure Virtual Machine [_azure_virtual_machine]
+
+```json
+{
+ "cloud": {
+ "provider": "azure",
+ "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e",
+ "instance.name": "test-az-vm",
+ "machine.type": "Standard_D3_v2",
+ "region": "eastus2"
+ }
+}
+```
+
+
+### Openstack Nova [_openstack_nova]
+
+```json
+{
+ "cloud": {
+ "instance.name": "test-998d932195.mycloud.tld",
+ "instance.id": "i-00011a84",
+ "availability_zone": "xxxx-az-c",
+ "provider": "openstack",
+ "machine.type": "m2.large"
+ }
+}
+```
+
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-cloud.md b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md
new file mode 100644
index 0000000000..ec01dde2d8
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md
@@ -0,0 +1,83 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-cloud.html
+---
+
+# Deploy on Elastic Cloud [add-fleet-server-cloud]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+{{fleet-server}} can be provisioned and hosted on {{ecloud}}. When the Cloud deployment is created, a highly available set of {{fleet-server}}s is provisioned automatically.
+
+This approach might be right for you if you want to reduce on-prem compute resources and you’d like Elastic to take care of provisioning and life cycle management of your deployment.
+
+With this approach, multiple {{fleet-server}}s are automatically provisioned to satisfy the chosen instance size (instance sizes are modified to satisfy the scale requirement). You can also choose the resources allocated to each {{fleet-server}} and whether you want each {{fleet-server}} to be deployed in multiple availability zones. If you choose multiple availability zones to address your fault-tolerance requirements, those instances are also utilized to balance the load.
+
+This approach might *not* be right for you if you have restrictions on connectivity to the internet.
+
+:::{image} images/fleet-server-cloud-deployment.png
+:alt: {{fleet-server}} Cloud deployment model
+:::
+
+
+## Compatibility and prerequisites [fleet-server-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later.
+
+ * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+ * {{kib}} should be on the same minor version as {{es}}.
+
+* {{ece}} 2.10 or later
+
+ * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://.fleet.`.
+ * The deployment template must contain an {{integrations-server}} node.
+
+ For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).
+
+
+::::{note}
+The TLS certificates used to secure connections between {{agent}} and {{fleet-server}} are managed by {{ecloud}}. You do not need to create a private key or generate certificates.
+::::
+
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. See the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| Elastic Agent → {{fleet-server}} | 443 |
+| Elastic Agent → {{es}} | 443 |
+| Elastic Agent → Logstash | 5044 |
+| Elastic Agent → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{es}} | 443 |
+
+::::{note}
+If you do not specify the port for {{es}} as 443, the {{agent}} defaults to 9200.
+::::
+
+
+
+## Setup [add-fleet-server-cloud-set-up]
+
+To confirm that an {{integrations-server}} is available in your deployment:
+
+1. Open {{fleet}}.
+2. On the **Agent policies** tab, look for the **{{ecloud}} agent policy**. This policy is managed by {{ecloud}}, and contains a {{fleet-server}} integration and an Elastic APM integration. You cannot modify the policy. Confirm that the agent status is **Healthy**.
+
+:::::{tip}
+Don’t see the agent? Make sure your deployment includes an {{integrations-server}} instance. This instance is required to use {{fleet}}.
+
+:::{image} images/integrations-server-hosted-container.png
+:alt: Hosted {integrations-server}
+:class: screenshot
+:::
+
+:::::
+
+
+
+## Next steps [add-fleet-server-cloud-next]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md
new file mode 100644
index 0000000000..bfefc80a12
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md
@@ -0,0 +1,564 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-kubernetes.html
+---
+
+# Deploy Fleet Server on Kubernetes [add-fleet-server-kubernetes]
+
+::::{note}
+If your {{stack}} is orchestrated by [ECK](/deploy-manage/deploy/cloud-on-k8s.md), we recommend deploying {{fleet-server}} through the operator. This simplifies the process, as the operator automatically handles most of the resource configuration and setup steps.
+
+Refer to [Run Fleet-managed {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md) for more information.
+
+::::
+
+
+::::{important}
+This guide assumes familiarity with Kubernetes concepts and resources, such as `Deployments`, `Pods`, `Secrets`, or `Services`, as well as configuring applications in Kubernetes environments.
+
+::::
+
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+You can deploy {{fleet-server}} on Kubernetes and manage it yourself. In this deployment model, you are responsible for high-availability, fault-tolerance, and lifecycle management of the {{fleet-server}}.
+
+To deploy a {{fleet-server}} on Kubernetes and register it into {{fleet}} you will need the following details:
+
+* The **Policy ID** of a {{fleet}} policy configured with the {{fleet-server}} integration.
+* A **Service token**, used to authenticate {{fleet-server}} with Elasticsearch.
+* For outgoing traffic:
+
+    * The **{{es}} endpoint URL** that {{fleet-server}} should connect to, also configured in the {{es}} output associated with the policy.
+    * When a private or intermediate Certificate Authority (CA) is used to sign the {{es}} certificate, the **{{es}} CA file** or the **CA fingerprint**, also configured in the {{es}} output associated with the policy.
+
+* For incoming connections:
+
+ * A **TLS/SSL certificate and key** for the {{fleet-server}} HTTPS endpoint, used to encrypt the traffic from the {{agent}}s. This certificate has to be valid for the **{{fleet-server}} Host URL** that {{agent}}s use when connecting to the {{fleet-server}}.
+
+* Extra TLS/SSL certificates and configuration parameters in case of requiring [mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md) (not covered in this document).
+
+This document walks you through the complete setup process, organized into the following sections:
+
+* [Compatibility requirements](#add-fleet-server-kubernetes-compatibility)
+* [{{fleet-server}} and SSL/TLS certificates considerations](#add-fleet-server-kubernetes-cert-prereq)
+* [{{fleet}} preparations](#add-fleet-server-kubernetes-add-server)
+* [{{fleet-server}} installation](#add-fleet-server-kubernetes-install)
+* [Troubleshoot {{fleet-server}}](#add-fleet-server-kubernetes-troubleshoot)
+* [Next steps](#add-fleet-server-kubernetes-next)
+
+
+## Compatibility [add-fleet-server-kubernetes-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later.
+
+ * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+ * {{kib}} should be on the same minor version as {{es}}.
+
+
+
+## Prerequisites [add-fleet-server-kubernetes-prereq]
+
+Before deploying {{fleet-server}}, you need to:
+
+* Prepare the SSL/TLS configuration, server certificate, [{{fleet-server}} host settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-server-hosts-setting), and needed Certificate Authorities (CAs).
+* Ensure components have access to the ports needed for communication.
+
+
+### {{fleet-server}} and SSL/TLS certificates considerations [add-fleet-server-kubernetes-cert-prereq]
+
+This section shows the minimum requirements in terms of Transport Layer Security (TLS) certificates for the {{fleet-server}}, assuming no mutual TLS (mTLS) is needed. Refer to [One-way and mutual TLS certifications flow](/reference/ingestion-tools/fleet/tls-overview.md) and [{{agent}} deployment models with mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md) for more information about the configuration needs of both approaches.
+
+There are two main traffic flows for {{fleet-server}}, each with different TLS requirements:
+
+
+#### [{{agent}} → {{fleet-server}}] inbound traffic flow [add-fleet-server-kubernetes-cert-inbound]
+
+In this flow {{fleet-server}} acts as the server and {{agent}} acts as the client. Therefore, {{fleet-server}} requires a TLS certificate and key, and {{agent}} will need to trust the CA certificate used to sign the {{fleet-server}} certificate.
+
+::::{note}
+A {{fleet-server}} certificate is not required when installing the server using the **Quick start** mode, but should always be used for **production** deployments. In **Quick start** mode, the {{fleet-server}} uses a self-signed certificate and the {{agent}}s have to be enrolled with the `--insecure` option.
+
+::::
+
+
+If your organization already uses the {{stack}}, you may have a CA certificate that could be used to generate the new cert for the {{fleet-server}}. If you do not have a CA certificate, refer to [Generate a custom certificate and private key for {{fleet-server}}](/reference/ingestion-tools/fleet/secure-connections.md#generate-fleet-server-certs) for an example to generate a CA and a server certificate using the `elasticsearch-certutil` tool.
+
+::::{important}
+Before creating the certificate, you need to know and plan in advance the [hostname / URL](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-server-hosts-setting) that the {{agent}} clients will use to access the {{fleet-server}}. This is important because the **hostname** part of the URL needs to be included in the server certificate as an `x.509 Subject Alternative Name (SAN)`. If you plan to make your {{fleet-server}} accessible through **multiple hostnames** or **FQDNs**, add all of them to the server certificate, and keep in mind that the **{{fleet-server}} also needs to access the {{fleet}} URL during its bootstrap process**.
+
+::::
+
+
+
+#### [{{fleet-server}} → {{es}} output] outbound traffic flow [add-fleet-server-kubernetes-cert-outbound]
+
+In this flow, {{fleet-server}} acts as the client and {{es}} acts as the HTTPS server. For the communication to succeed, {{fleet-server}} needs to trust the CA certificate used to sign the {{es}} certificate. If your {{es}} cluster uses certificates signed by a corporate CA or multiple intermediate CAs you will need to use them during the {{fleet-server}} setup.
+
+::::{note}
+If your {{es}} cluster is on Elastic Cloud or if it uses a certificate signed by a public and known CA, you won’t need the {{es}} CA during the setup.
+
+::::
+
+
+In summary, you need:
+
+* A **server certificate and key**, valid for the {{fleet-server}} URL. The CA used to sign this certificate will be needed by the {{agent}} clients and the {{fleet-server}} itself.
+* The **CA certificate** (or certificates) associated to your {{es}} cluster, except if you are sure your {{es}} certificate is fully trusted publicly.
+
+
+### Default port assignments [default-port-assignments-kubernetes]
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. Refer to the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| {{agent}} → {{fleet-server}} | 8220 |
+| {{fleet-server}} → {{es}} | 9200 |
+| {{fleet-server}} → {{kib}} (optional, for {{fleet}} setup) | 5601 |
+| {{agent}} → {{es}} | 9200 |
+| {{agent}} → Logstash | 5044 |
+| {{agent}} → {{kib}} (optional, for {{fleet}} setup) | 5601 |
+
+In Kubernetes environments, you can adapt these ports without modifying the listening ports of the {{fleet-server}} or other applications, as traffic is managed by Kubernetes `Services`. This guide includes an example where {{agent}}s connect to the {{fleet-server}} through port `443` instead of the default `8220`.
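+
+As a minimal sketch of this port mapping (the `fleet-svc` name and the `app: fleet-server` label are assumptions from this guide), a ClusterIP Service can expose the {{fleet-server}} Pods on port 443 while the container keeps listening on the default 8220:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: fleet-svc
+spec:
+  type: ClusterIP
+  selector:
+    app: fleet-server   # must match the labels on your fleet-server Pods
+  ports:
+    - name: https
+      port: 443         # port exposed by the Service to the agents
+      targetPort: 8220  # default fleet-server listening port
+```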
+
+
+## Add {{fleet-server}} [add-fleet-server-kubernetes-add-server]
+
+A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.
+
+
+### {{fleet}} preparations [add-fleet-server-kubernetes-preparations]
+
+::::{tip}
+If you already have a {{fleet}} policy with the {{fleet-server}} integration, you know its ID, and you know how to generate an [{{es}} service token](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md) for the {{fleet-server}}, skip directly to [{{fleet-server}} installation](#add-fleet-server-kubernetes-install).
+
+Also note that the `service token` required by the {{fleet-server}} is different from the `enrollment tokens` used by {{agent}}s to enroll in {{fleet}}.
+
+::::
+
+
+1. In {{kib}}, open **{{fleet}} → Settings** and ensure the **Elasticsearch output** that will be used by the {{fleet-server}} policy is correctly configured, paying special attention that:
+
+ * The **hosts** field includes a valid URL that will be reachable by the {{fleet-server}} Pod(s).
+ * If your {{es}} cluster uses certificates signed by private or intermediate CAs not publicly trusted, you have added the trust information in the **Elasticsearch CA trusted fingerprint** field or in the **advanced configuration** section through the `ssl.certificate_authorities` setting. For an example, refer to [Secure Connections](/reference/ingestion-tools/fleet/secure-connections.md#_encrypt_traffic_between_agents_fleet_server_and_es) documentation.
+
+ ::::{important}
+ This validation step is critical. The {{es}} host URL and CA information has to be added **in both the {{es}} output and the environment variables** provided to the {{fleet-server}}. It’s a common mistake to ignore the output settings believing that the environment variables will prevail, when the environment variables are only used during the bootstrap of the {{fleet-server}}.
+
+ If the URL that {{fleet-server}} will use to access {{es}} is different from the {{es}} URL used by other clients, you may want to create a dedicated **{{es}} output** for {{fleet-server}}.
+
+ ::::
+
+2. Go to **{{fleet}} → Agent Policies** and select **Create agent policy** to create a policy for the {{fleet-server}}:
+
+ * Set a **name** for the policy, for example `Fleet Server Policy Kubernetes`.
+ * Do **not** select the option **Collect system logs and metrics**. This option adds the System integration to the {{agent}} policy. Because {{fleet-server}} will run as a Kubernetes Pod without any visibility to the Kubernetes node, there won’t be a system to monitor.
+ * Select the **output** that the {{fleet-server}} needs to use to contact {{es}}. This should be the output that you verified in the previous step.
+ * Optionally, you can set the **inactivity timeout** and **inactive agent unenrollment timeout** parameters to automatically unenroll and invalidate API keys after the {{fleet-server}} agents become inactive. This is especially useful in Kubernetes environments, where {{fleet-server}} Pods are ephemeral, and new {{agent}}s appear in {{fleet}} UI after Pod recreations.
+
+3. Open the created policy, and from the **Integrations** tab select **Add integration**:
+
+ * Search for and select the {{fleet-server}} integration.
+ * Select **Add {{fleet-server}}** to add the integration to the {{agent}} policy.
+
+ At this point you can configure the integration settings per [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md).
+
+ * When done, select **Save and continue**. Do not add an {{agent}} at this stage.
+
+4. Open the configured policy, which now includes the {{fleet-server}} integration, and select **Actions** → **Add {{fleet-server}}**. In the next dialog:
+
+ * Confirm that the **policy for {{fleet-server}}** is properly selected.
+ * **Choose a deployment mode for security**:
+
+ * If you select **Quick start**, the {{fleet-server}} generates a self-signed TLS certificate, and subsequent agents should be enrolled using the `--insecure` flag.
+ * If you select **Production**, you provide a TLS certificate, key and CA to the {{fleet-server}} during the deployment, and subsequent agents will need to trust the certificate’s CA.
+
+ * Add your **{{fleet-server}} Host** information. This is the URL that clients ({{agent}}s) will use to connect to the {{fleet-server}}:
+
+ * In **Production** mode, the {{fleet-server}} certificate must include the hostname part of the URL as an `x509 SAN`, and the {{fleet-server}} itself will need to access that URL during its bootstrap process.
+ * On Kubernetes environments this could be the name of the `Kubernetes service` or reverse proxy that exposes the {{fleet-server}} Pods.
+ * In the provided example we use `https://fleet-svc.` as the URL, which corresponds to the Kubernetes service DNS resolution.
+
+ * Select **generate service token** to create a token for the {{fleet-server}}.
+ * From **Install {{fleet-server}} to a centralized host → Linux**, take note of the values of the following settings that will be needed for the {{fleet-server}} installation:
+
+    * Service token (specified by the `--fleet-server-service-token` parameter).
+ * {{fleet}} policy ID (specified by `--fleet-server-policy` parameter).
+ * {{es}} URL (specified by `--fleet-server-es` parameter).
+
+5. Keep the {{kib}} browser window open and continue with the [{{fleet-server}} installation](#add-fleet-server-kubernetes-install).
+
+ When the {{fleet-server}} installation has succeeded, the **Confirm Connection** UI will show a **Connected** status.
+
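+For reference, the three values above map to flags of the standalone Linux install command that {{kib}} displays. This is a hedged sketch with placeholder values, shown only to illustrate the mapping; use the exact command from the {{fleet}} UI:
+
+```shell
+sudo ./elastic-agent install \
+  --fleet-server-es=<ES_URL> \
+  --fleet-server-service-token=<SERVICE_TOKEN> \
+  --fleet-server-policy=<POLICY_ID>
+```
+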
+
+
+### {{fleet-server}} installation [add-fleet-server-kubernetes-install]
+
+
+#### Installation overview [add-fleet-server-kubernetes-install-overview]
+
+To deploy {{fleet-server}} on Kubernetes and enroll it into {{fleet}} you need the following details:
+
+* **Policy ID** of the {{fleet}} policy configured with the {{fleet-server}} integration.
+* **Service token**, that you can generate following the [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations) or manually using the [{{es}}-service-tokens command](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md).
+* **{{es}} endpoint URL**, configured in both the {{es}} output associated to the policy and in the Fleet Server as an environment variable.
+* **{{es}} CA certificate file**, configured in both the {{es}} output associated to the policy and in the Fleet Server.
+* {{fleet-server}} **certificate and key** (for **Production** deployment mode only).
+* {{fleet-server}} **CA certificate file** (for **Production** deployment mode only).
+* {{fleet-server}} URL (for **Production** deployment mode only).
+
+If you followed the [{{fleet-server}} and SSL/TLS certificates considerations](#add-fleet-server-kubernetes-cert-prereq) and [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations) you should have everything ready to proceed with the {{fleet-server}} installation.
+
+The suggested deployment method for the {{fleet-server}} consists of:
+
+* A Kubernetes Deployment manifest that relies on two Secrets for its configuration:
+
+ * A Secret named `fleet-server-config` with the main configuration parameters, such as the service token, the {{es}} URL and the policy ID.
+ * A Secret named `fleet-server-ssl` with all needed certificate files and the {{fleet-server}} URL.
+
+* A Kubernetes ClusterIP Service named `fleet-svc` that exposes the {{fleet-server}} on port 443, making it available at URLs like `https://fleet-svc`, `https://fleet-svc.` and `https://fleet-svc..svc`.
+
+Adapt and change the suggested manifests and deployment strategy to your needs, ensuring you feed the {{fleet-server}} with the needed configuration and certificates. For example, you can customize:
+
+* CPU and memory `requests` and `limits`. Refer to [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md) for more information about {{fleet-server}} resources utilization.
+* Scheduling configuration, such as `affinity rules` or `tolerations`, if needed in your environment.
+* Number of replicas, to scale the Fleet Server horizontally.
+* Use an {{es}} CA fingerprint instead of a CA file.
+* Configure other [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md).
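+
+A hedged fragment of what such a Deployment might look like is shown below. The names, image, and values are assumptions based on this guide, not a complete or definitive manifest:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: fleet-server
+spec:
+  replicas: 2                    # scale the Fleet Server horizontally
+  selector:
+    matchLabels:
+      app: fleet-server
+  template:
+    metadata:
+      labels:
+        app: fleet-server
+    spec:
+      containers:
+        - name: fleet-server
+          image: docker.elastic.co/elastic-agent/elastic-agent:<VERSION>
+          resources:
+            requests:
+              memory: "512Mi"
+              cpu: "500m"
+          envFrom:
+            - secretRef:
+                name: fleet-server-config  # service token, ES URL, policy ID
+```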
+
+
+#### Installation Steps [add-fleet-server-kubernetes-install-steps]
+
+1. Create the Secret for the {{fleet-server}} configuration.
+
+ ```shell
+ kubectl create secret generic fleet-server-config \
+ --from-literal=elastic_endpoint='' \
+ --from-literal=elastic_service_token='' \
+ --from-literal=fleet_policy_id=''
+ ```
+
+ When running the command, substitute the following values:
+
+ * `<elasticsearch-url>`: Replace this with the URL of your {{es}} host, for example `'https://monitoring-es-http.default.svc:9200'`.
+ * `<service-token>`: Use the service token provided by {{kib}} in the {{fleet}} UI.
+ * `<policy-id>`: Replace this with the ID of the created policy, for example `'dee949ac-403c-4c83-a489-0122281e4253'`.
+
+ If you prefer to obtain a **yaml manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
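+
+ For readability, the equivalent Secret can also be declared as a manifest using `stringData`, which lets you provide plain-text values that Kubernetes encodes for you (the endpoint and policy ID below reuse the examples above; the token is a placeholder):
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+   name: fleet-server-config
+ type: Opaque
+ stringData:
+   elastic_endpoint: 'https://monitoring-es-http.default.svc:9200'
+   elastic_service_token: '<service-token>'
+   fleet_policy_id: 'dee949ac-403c-4c83-a489-0122281e4253'
+ ```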
+
+2. Create the Secret for the TLS/SSL configuration:
+
+ ::::{tab-set}
+
+ :::{tab-item} Quick start
+
+ The following command assumes you have the {{es}} CA available as a local file.
+
+ ```shell
+ kubectl create secret generic fleet-server-ssl \
+ --from-file=es-ca.crt=<path-to-es-ca-file>
+ ```
+
+ When running the command, substitute the following values:
+
+ * `<path-to-es-ca-file>` with your local file containing the {{es}} CA(s).
+
+ If you prefer to obtain a **yaml manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
+ :::
+
+ :::{tab-item} Production
+ The following command assumes you have the {{es}} CA and the {{fleet-server}} certificate, key and CA available as local files.
+
+ ```shell
+ kubectl create secret generic fleet-server-ssl \
+ --from-file=es-ca.crt=<path-to-es-ca-file> \
+ --from-file=fleet-ca.crt=<path-to-fleet-ca-file> \
+ --from-file=fleet-server.crt=<path-to-fleet-server-cert-file> \
+ --from-file=fleet-server.key=<path-to-fleet-server-key-file> \
+ --from-literal=fleet_url='<fleet-server-url>'
+ ```
+
+ When running the command, substitute the following values:
+
+ * `<path-to-es-ca-file>` with your local file containing the {{es}} CA(s).
+ * `<path-to-fleet-ca-file>` with your local file containing the {{fleet-server}} CA.
+ * `<path-to-fleet-server-cert-file>` with your local file containing the server TLS certificate for the {{fleet-server}}.
+ * `<path-to-fleet-server-key-file>` with your local file containing the server TLS key for the {{fleet-server}}.
+ * `<fleet-server-url>` with the URL that points to the {{fleet-server}}, for example `https://fleet-svc`. This URL will be used by the {{fleet-server}} during its bootstrap, and its hostname must be included in the server certificate’s x509 Subject Alternative Name (SAN) list.
+
+ If you prefer to obtain a **yaml manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
+ :::
+
+ ::::
+
+ If your {{es}} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `es-ca.crt` key from the proposed secret.
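+
+ In **Production** mode, you can quickly confirm that the required hostnames are present in the server certificate before creating the Secret. The file name below assumes the certificate you pass as `fleet-server.crt`; the `-ext` option requires OpenSSL 1.1.1 or later:
+
+ ```shell
+ # Print the Subject Alternative Name extension of the Fleet Server certificate
+ openssl x509 -in fleet-server.crt -noout -ext subjectAltName
+ ```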
+
+3. Save the proposed Deployment manifest locally, for example as `fleet-server-dep.yaml`, and adapt it to your needs:
+
+ ::::{tab-set}
+
+ :::{tab-item} Production
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: fleet-svc
+ spec:
+ type: ClusterIP
+ selector:
+ app: fleet-server
+ ports:
+ - port: 443
+ protocol: TCP
+ targetPort: 8220
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: fleet-server
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: fleet-server
+ template:
+ metadata:
+ labels:
+ app: fleet-server
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - name: elastic-agent
+ image: docker.elastic.co/beats/elastic-agent:9.0.0-beta1
+ env:
+ - name: FLEET_SERVER_ENABLE
+ value: "true"
+ - name: FLEET_SERVER_ELASTICSEARCH_HOST
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: elastic_endpoint
+ - name: FLEET_SERVER_SERVICE_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: elastic_service_token
+ - name: FLEET_SERVER_POLICY_ID
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: fleet_policy_id
+ - name: ELASTICSEARCH_CA
+ value: /mnt/certs/es-ca.crt
+ - name: FLEET_SERVER_CERT
+ value: /mnt/certs/fleet-server.crt
+ - name: FLEET_SERVER_CERT_KEY
+ value: /mnt/certs/fleet-server.key
+ - name: FLEET_CA
+ value: /mnt/certs/fleet-ca.crt
+ - name: FLEET_URL
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-ssl
+ key: fleet_url
+ - name: FLEET_SERVER_TIMEOUT
+ value: '60s'
+ - name: FLEET_SERVER_PORT
+ value: '8220'
+ ports:
+ - containerPort: 8220
+ protocol: TCP
+ resources: {}
+ volumeMounts:
+ - name: certs
+ mountPath: /mnt/certs
+ readOnly: true
+ volumes:
+ - name: certs
+ secret:
+ defaultMode: 420
+ optional: false
+ secretName: fleet-server-ssl
+ ```
+ :::
+
+ :::{tab-item} Quick start
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: fleet-svc
+ spec:
+ type: ClusterIP
+ selector:
+ app: fleet-server
+ ports:
+ - port: 443
+ protocol: TCP
+ targetPort: 8220
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: fleet-server
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: fleet-server
+ template:
+ metadata:
+ labels:
+ app: fleet-server
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - name: elastic-agent
+ image: docker.elastic.co/beats/elastic-agent:9.0.0-beta1
+ env:
+ - name: FLEET_SERVER_ENABLE
+ value: "true"
+ - name: FLEET_SERVER_ELASTICSEARCH_HOST
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: elastic_endpoint
+ - name: FLEET_SERVER_SERVICE_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: elastic_service_token
+ - name: FLEET_SERVER_POLICY_ID
+ valueFrom:
+ secretKeyRef:
+ name: fleet-server-config
+ key: fleet_policy_id
+ - name: ELASTICSEARCH_CA
+ value: /mnt/certs/es-ca.crt
+ ports:
+ - containerPort: 8220
+ protocol: TCP
+ resources: {}
+ volumeMounts:
+ - name: certs
+ mountPath: /mnt/certs
+ readOnly: true
+ volumes:
+ - name: certs
+ secret:
+ defaultMode: 420
+ optional: false
+ secretName: fleet-server-ssl
+ ```
+ :::
+
+ ::::
+
+ Manifest considerations:
+
+ * If your {{es}} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `ELASTICSEARCH_CA` environment variable from the manifest.
+ * Check the `image` version to ensure it’s aligned with the rest of your {{stack}}.
+ * Keep `automountServiceAccountToken` set to `false` to disable the [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md).
+ * As a best practice, always configure `requests` and `limits`. Refer to [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md) for more information about {{fleet-server}} resource utilization.
+ * You can change the listening `port` of the service to any port of your choice, but do not change the `targetPort`, as the {{fleet-server}} Pods will listen on port 8220.
+ * If you want to expose the {{fleet-server}} externally, consider changing the service type to `LoadBalancer`.
+
+4. Deploy the configured manifest to create the {{fleet-server}} and service:
+
+ ```shell
+ kubectl apply -f fleet-server-dep.yaml
+ ```
+
+ ::::{important}
+ Ensure the `Service`, the `Deployment` and all the referenced `Secrets` are created in the **same Namespace**.
+
+ ::::
+
+5. Check the {{fleet-server}} Pod logs for errors and confirm in {{kib}} that the {{fleet-server}} agent appears as `Connected` and `Healthy` in **{{kib}} → {{fleet}}**.
+
+ ```shell
+ kubectl logs fleet-server-69499449c7-blwjg
+ ```
+
+ It can take a couple of minutes for {{fleet-server}} to fully start. If you left the {{kib}} browser window open during [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations), it will show **Connected** when everything has gone well.
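+
+ Because the Pod name includes a random suffix, you can also select the Pod by the `app: fleet-server` label used in the manifests above instead of typing the full name:
+
+ ```shell
+ kubectl logs -l app=fleet-server --tail=100
+ ```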
+
+ ::::{note}
+ In **Production mode**, during the {{fleet-server}} bootstrap process, the {{fleet-server}} might be unable to access its own `FLEET_URL`. This is usually a temporary issue caused by the Kubernetes Service not yet forwarding traffic to the Pod(s).
+
+ If the issue persists consider using `https://localhost:8220` as the `FLEET_URL` for the {{fleet-server}} configuration, and ensure that `localhost` is included in the certificate’s SAN.
+
+ ::::
+
+
+## Expose the {{fleet-server}} to {{agent}}s [add-fleet-server-kubernetes-expose]
+
+There are multiple ways to expose applications in Kubernetes. Exposing {{fleet-server}} may include creating a Kubernetes `service`, an `ingress` resource, and/or DNS records for FQDN resolution.
+
+Considerations when exposing {{fleet-server}}:
+
+* If your environment requires the {{fleet-server}} to be reachable through multiple hostnames or URLs, you can create multiple **{{fleet-server}} Hosts** in **{{fleet}} → Settings**, and create different policies for different groups of agents.
+* Remember that in **Production** mode, the **hostnames** used to access the {{fleet-server}} must be part of the {{fleet-server}} certificate as `x.509 Subject Alternative Names`.
+* **Always align the service listening port to the URL**. If you configure the service to listen on port 8220, use a URL like `https://service-name:8220`; if it listens on `443`, use a URL like `https://service-name`.
+
+Below is an end-to-end example of how to expose the server to external and internal clients using a LoadBalancer service. This example assumes the following:
+
+* The {{fleet-server}} runs in a namespace called `elastic`.
+* External clients will access {{fleet-server}} using a URL like `https://fleet.example.com`, which will be resolved in DNS to the external IP of the Load Balancer.
+* Internal clients will access {{fleet-server}} using the Kubernetes service directly `https://fleet-svc-lb.elastic`.
+* The server certificate has both hostnames (`fleet.example.com` and `fleet-svc-lb.elastic`) in its SAN list.
+
+1. Create the `LoadBalancer` Service
+
+ ```shell
+ kubectl expose deployment fleet-server --name fleet-svc-lb --type LoadBalancer --port 443 --target-port 8220
+ ```
+
+ That command creates a service named `fleet-svc-lb`, listening on port `443` and forwarding traffic to the `fleet-server` Deployment’s Pods on port `8220`. The listening `--port` (and therefore the URL) of the service can be customized, but the `--target-port` must remain the default (`8220`), because that is the port the {{fleet-server}} application listens on.
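+
+ The same service can also be declared as a manifest, which is easier to keep under version control. This sketch assumes the `app: fleet-server` labels from the Deployment manifests above and the `elastic` namespace used in this example:
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: fleet-svc-lb
+   namespace: elastic
+ spec:
+   type: LoadBalancer
+   selector:
+     app: fleet-server
+   ports:
+   - port: 443
+     protocol: TCP
+     targetPort: 8220
+ ```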
+
+2. Add `https://fleet.example.com` and `https://fleet-svc-lb.elastic` as new **{{fleet-server}} Hosts** in **{{fleet}} → Settings**. Align the port of the URLs if you configured something other than `443` in the Load Balancer.
+3. Create a {{fleet}} policy for external clients using the `https://fleet.example.com` {{fleet-server}} URL.
+4. Create a {{fleet}} policy for internal clients using the `https://fleet-svc-lb.elastic` {{fleet-server}} URL.
+5. You are now ready to enroll external and internal agents into the relevant policies. Refer to [Next steps](#add-fleet-server-kubernetes-next) for more details.
+
+
+## Troubleshoot {{fleet-server}} [add-fleet-server-kubernetes-troubleshoot]
+
+
+### Common Problems [add-fleet-server-kubernetes-troubleshoot-common]
+
+The following issues may occur when {{fleet-server}} settings are missing or configured incorrectly:
+
+* {{fleet-server}} is trying to access {{es}} at `localhost:9200` even though the `FLEET_SERVER_ELASTICSEARCH_HOST` environment variable is properly set.
+
+ This problem occurs when the `output` of the policy associated with the {{fleet-server}} is not correctly configured.
+
+* TLS certificate trust issues occur even when the `ELASTICSEARCH_CA` environment variable is properly set during deployment.
+
+ This problem occurs when the `output` of the policy associated with the {{fleet-server}} is not correctly configured. Add the **CA certificate** or **CA trusted fingerprint** to the {{es}} output associated with the {{fleet-server}} policy.
+
+* In **Production mode**, {{fleet-server}} enrollment fails due to `FLEET_URL` not being accessible, showing something similar to:
+
+ ```sh
+ Starting enrollment to URL: https://fleet-svc/
+ 1st enrollment attempt failed, retrying enrolling to URL: https://fleet-svc/ with exponential backoff (init 1s, max 10s)
+ Error: fail to enroll: fail to execute request to fleet-server: dial tcp 34.118.226.212:443: connect: connection refused
+ Error: enrollment failed: exit status 1
+ ```
+
+ If the service and URL are correctly configured, this is usually a temporary issue caused by the Kubernetes Service not forwarding traffic to the Pod, and it should clear after a couple of restarts.
+
+ As a workaround, consider using `https://localhost:8220` as the `FLEET_URL` for the {{fleet-server}} configuration, and ensure that `localhost` is included in the certificate’s SAN.
+
+
+
+## Next steps [add-fleet-server-kubernetes-next]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, refer to [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md), or [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) if your {{agent}}s will also run on Kubernetes.
+
+When you connect {{agent}}s to {{fleet-server}}, remember to use the `--insecure` flag if the **quick start** mode was used, or to provide the {{agent}}s with the CA certificate associated with the {{fleet-server}} certificate if **production** mode was used.
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-mixed.md b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md
new file mode 100644
index 0000000000..a8cd77d01b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md
@@ -0,0 +1,158 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-mixed.html
+---
+
+# Deploy Fleet Server on-premises and Elasticsearch on Cloud [add-fleet-server-mixed]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+Another approach is to deploy a cluster of {{fleet-server}}s on-premises and connect them back to {{ecloud}} with access to {{es}} and {{kib}}. In this [deployment model](/reference/ingestion-tools/fleet/deployment-models.md), you are responsible for high-availability, fault-tolerance, and lifecycle management of {{fleet-server}}.
+
+This approach might be right for you if you would like to limit the control plane traffic out of your data center. For example, you might take this approach if you are a managed service provider or a larger enterprise that segregates its networks.
+
+This approach might *not* be right for you if you don’t want to manage the life cycle of an extra compute resource in your environment for {{fleet-server}} to reside on.
+
+:::{image} images/fleet-server-on-prem-es-cloud.png
+:alt: {{fleet-server}} on-premise and {{es}} on Cloud deployment model
+:::
+
+To deploy a self-managed {{fleet-server}} on-premises to work with a hosted {{ess}}, you need to:
+
+* Satisfy all [compatibility requirements](#add-fleet-server-mixed-compatibility) and [prerequisites](#add-fleet-server-mixed-prereq)
+* Create a [{{fleet-server}} policy](#fleet-server-create-policy)
+* [Add {{fleet-server}}](#fleet-server-add-server) by installing an {{agent}} and enrolling it in an agent policy containing the {{fleet-server}} integration
+
+
+## Compatibility [add-fleet-server-mixed-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later
+
+ * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+ * {{kib}} should be on the same minor version as {{es}}.
+
+* {{ece}} 2.9 or later—allows you to use a hosted {{fleet-server}} on {{ecloud}}.
+
+ * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://<deployment-id>.fleet.<deployment-domain>`.
+ * The deployment template must contain an {{integrations-server}} node.
+
+ For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).
+
+
+
+## Prerequisites [add-fleet-server-mixed-prereq]
+
+Before deploying, you need to:
+
+* Obtain or generate a Certificate Authority (CA) certificate.
+* Ensure components have access to the default ports needed for communication.
+
+
+### CA certificate [add-fleet-server-mixed-cert-prereq]
+
+Before setting up {{fleet-server}} using this approach, you will need a CA certificate to configure Transport Layer Security (TLS) to encrypt traffic between the {{fleet-server}}s and the {{stack}}.
+
+If your organization already uses the {{stack}}, you may already have a CA certificate. If you do not have a CA certificate, you can read more about generating one in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+::::{note}
+This is not required when testing and iterating using the **Quick start** option, but should always be used for production deployments.
+::::
+
+
+
+### Default port assignments [default-port-assignments-mixed]
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. See the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| Elastic Agent → {{fleet-server}} | 8220 |
+| Elastic Agent → {{es}} | 443 |
+| Elastic Agent → Logstash | 5044 |
+| Elastic Agent → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{es}} | 443 |
+
+::::{note}
+If you do not specify the port for {{es}} as 443, the {{agent}} defaults to 9200.
+::::
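+
+For example, when using a hosted {{es}} deployment as the output, include the port explicitly in the host URL. A sketch of an {{es}} output `hosts` entry (the hostname below is illustrative):
+
+```yaml
+hosts:
+  - 'https://my-deployment.es.us-central1.gcp.cloud.es.io:443'
+```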
+
+
+
+## Create a {{fleet-server}} policy [fleet-server-create-policy]
+
+First, create a {{fleet-server}} policy. The {{fleet-server}} policy manages and configures the {{agent}} running on the {{fleet-server}} host to launch a {{fleet-server}} process.
+
+To create a {{fleet-server}} policy:
+
+1. In {{fleet}}, open the **Agent policies** tab.
+2. Click on the **Create agent policy** button, then:
+
+ 1. Provide a meaningful name for the policy that will help you identify this {{fleet-server}} (or cluster) in the future.
+ 2. Ensure you select *Collect system logs and metrics* so the compute system hosting this {{fleet-server}} can be monitored. (This is not required, but is highly recommended.)
+
+3. After creating the {{fleet-server}} policy, navigate to the policy itself and click **Add integration**.
+4. Search for and select the **{{fleet-server}}** integration.
+5. Then click **Add {{fleet-server}}**.
+6. Configure the {{fleet-server}}:
+
+ 1. Expand **Change default**. Because you are deploying this {{fleet-server}} on-premises, you need to enter the *Host* address and *Port* number, `8220`. (In our example the {{fleet-server}} will be installed on the host `10.128.0.46`.)
+ 2. It’s recommended that you also enter the *Max agents* you intend to support with this {{fleet-server}}. You can modify this value later. Setting it allows the {{fleet-server}} to handle the load and frequency of updates sent to agents, ensuring smooth operation in a bursty environment.
+
+
+
+## Add {{fleet-server}}s [fleet-server-add-server]
+
+Now that the policy exists, you can add {{fleet-server}}s.
+
+A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.
+
+To add a {{fleet-server}}:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. Click **Add {{fleet-server}}**.
+3. This will open in-product instructions for adding a {{fleet-server}} using one of two options. Choose **Advanced**.
+
+ :::{image} images/add-fleet-server-advanced.png
+ :alt: In-product instructions for adding a {{fleet-server}} in advanced mode
+ :class: screenshot
+ :::
+
+4. Follow the in-product instructions to add a {{fleet-server}}.
+
+ 1. Select the agent policy that you created for this deployment.
+ 2. Choose **Production** as your deployment mode.
+
+ Production mode is the fully secured mode, where TLS certificates ensure secure communication between {{fleet-server}} and {{es}}.
+
+ 3. Open the **{{fleet-server}} Hosts** dropdown and select **Add new {{fleet-server}} Hosts**. Specify one or more host URLs your {{agent}}s will use to connect to {{fleet-server}}. For example, `https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will install {{fleet-server}}.
+ 4. A **Service Token** is required so the {{fleet-server}} can write data to the connected {{es}} instance. Click **Generate service token** and copy the generated token.
+ 5. Copy the installation instructions provided in {{kib}}, which include some of the known deployment parameters.
+ 6. Replace the value of the `--certificate-authorities` parameter with your [CA certificate](#add-fleet-server-mixed-prereq).
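+
+ A sketch of the resulting command is shown below. Use the exact command generated by {{kib}}; the flags shown are standard `elastic-agent install` options and the bracketed values are placeholders for the parameters gathered in the previous steps:
+
+ ```shell
+ sudo ./elastic-agent install --url=https://192.0.2.1:8220 \
+   --fleet-server-es=<elasticsearch-url> \
+   --fleet-server-service-token=<service-token> \
+   --fleet-server-policy=<policy-id> \
+   --certificate-authorities=<path-to-ca-certificate>
+ ```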
+
+5. If installation is successful, a confirmation indicates that {{fleet-server}} is set up and connected.
+
+After {{fleet-server}} is installed and enrolled in {{fleet}}, the newly created {{fleet-server}} policy is applied. You can see this on the {{fleet-server}} policy page.
+
+The {{fleet-server}} agent will also show up on the main {{fleet}} page as another agent whose life-cycle can be managed (like other agents in the deployment).
+
+You can update your {{fleet-server}} configuration in {{kib}} at any time by going to: **Management** → **{{fleet}}** → **Settings**. From there you can:
+
+* Update the {{fleet-server}} host URL.
+* Configure additional outputs where agents will send data.
+* Specify the location from where agents will download binaries.
+* Specify proxy URLs to use for {{fleet-server}} or {{agent}} outputs.
+
+
+## Next steps [fleet-server-install-agents]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
+
+::::{note}
+For on-premises deployments, you can dedicate a policy to all the agents in the network boundary and configure that policy to include a specific {{fleet-server}} (or a cluster of {{fleet-server}}s).
+
+Read more in [Add a {{fleet-server}} to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-fleet-server-to-policy).
+
+::::
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md
new file mode 100644
index 0000000000..a906cfe72f
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md
@@ -0,0 +1,166 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-on-prem.html
+---
+
+# Deploy on-premises and self-managed [add-fleet-server-on-prem]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+You can deploy {{fleet-server}} on-premises and manage it yourself. In this [deployment model](/reference/ingestion-tools/fleet/deployment-models.md), you are responsible for high-availability, fault-tolerance, and lifecycle management of {{fleet-server}}.
+
+This approach might be right for you if you would like to limit the control plane traffic out of your data center or have requirements for fully air-gapped operations. For example, you might take this approach if you need to satisfy data governance requirements or you want agents to only have access to a private segmented network.
+
+This approach might *not* be right for you if you don’t want to manage the life cycle of your Elastic environment and instead would like that to be handled by Elastic.
+
+When using this approach, it’s recommended that you provision multiple instances of the {{fleet-server}} and use a load balancer to better scale the deployment. You also have the option to use your organization’s certificate to establish a secure connection from {{fleet-server}} to {{es}}.
+
+:::{image} images/fleet-server-on-prem-deployment.png
+:alt: {{fleet-server}} on-premises deployment model
+:::
+
+To deploy a self-managed {{fleet-server}}, you need to:
+
+* Satisfy all [compatibility requirements](#add-fleet-server-on-prem-compatibility) and [prerequisites](#add-fleet-server-on-prem-prereq).
+* [Add a {{fleet-server}}](#add-fleet-server-on-prem-add-server) by installing an {{agent}} and enrolling it in an agent policy containing the {{fleet-server}} integration.
+
+::::{note}
+You can install only a single {{agent}} per host, which means you cannot run {{fleet-server}} and another {{agent}} on the same host unless you deploy a containerized {{fleet-server}}.
+::::
+
+
+
+## Compatibility [add-fleet-server-on-prem-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later.
+
+ * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+ * {{kib}} should be on the same minor version as {{es}}.
+
+* {{ece}} 2.9 or later
+
+ * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://<deployment-id>.fleet.<deployment-domain>`.
+ * The deployment template must contain an {{integrations-server}} node.
+
+ For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).
+
+
+
+## Prerequisites [add-fleet-server-on-prem-prereq]
+
+Before deploying, you need to:
+
+* Obtain or generate a Certificate Authority (CA) certificate.
+* Ensure components have access to the ports needed for communication.
+
+
+### CA certificate [add-fleet-server-on-prem-cert-prereq]
+
+Before setting up {{fleet-server}} using this approach, you will need a CA certificate to configure Transport Layer Security (TLS) to encrypt traffic between the {{fleet-server}}s and the {{stack}}.
+
+If your organization already uses the {{stack}}, you may already have a CA certificate. If you do not have a CA certificate, you can read more about generating one in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+::::{note}
+This is not required when testing and iterating using the **Quick start** option, but should always be used for production deployments.
+::::
+
+
+
+### Default port assignments [default-port-assignments-on-prem]
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. Refer to the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| Elastic Agent → {{fleet-server}} | 8220 |
+| Elastic Agent → {{es}} | 9200 |
+| Elastic Agent → Logstash | 5044 |
+| Elastic Agent → {{kib}} ({{fleet}}) | 5601 |
+| {{fleet-server}} → {{kib}} ({{fleet}}) | 5601 |
+| {{fleet-server}} → {{es}} | 9200 |
+
+::::{note}
+Connectivity to {{kib}} on port 5601 is optional and not required at all times. {{agent}} and {{fleet-server}} may need to connect to {{kib}} if deployed in a container environment where an enrollment token can not be provided during deployment.
+::::
+
+
+
+## Add {{fleet-server}} [add-fleet-server-on-prem-add-server]
+
+A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.
+
+To add a {{fleet-server}}:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. Click **Add {{fleet-server}}**.
+3. This opens in-product instructions to add a {{fleet-server}} using one of two options: **Quick Start** or **Advanced**.
+
+ * Use **Quick Start** if you want {{fleet}} to generate a {{fleet-server}} policy and enrollment token for you. The {{fleet-server}} policy will include a {{fleet-server}} integration plus a system integration for monitoring {{agent}}. This option generates self-signed certificates and is **not** recommended for production use cases.
+
+ :::{image} images/add-fleet-server.png
+ :alt: In-product instructions for adding a {{fleet-server}} in quick start mode
+ :class: screenshot
+ :::
+
+ * Use **Advanced** if you want to either:
+
+ * **Use your own {{fleet-server}} policy.** {{fleet-server}} policies manage and configure the {{agent}} running on {{fleet-server}} hosts to launch a {{fleet-server}} process. You can create a new {{fleet-server}} policy or select an existing one. Alternatively you can [create a {{fleet-server}} policy without using the UI](/reference/ingestion-tools/fleet/create-policy-no-ui.md), and then select the policy here.
+ * **Use your own TLS certificates.** TLS certificates encrypt traffic between {{agent}}s and {{fleet-server}}. To learn how to generate certs, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+ ::::{note}
+ If you are providing your own certificates:
+
+ * Before running the `install` command, make sure you replace the values in angle brackets.
+ * Note that the URL specified by `--url` must match the DNS name used to generate the certificate specified by `--fleet-server-cert`.
+
+ ::::
+
+
+ :::{image} images/add-fleet-server-advanced.png
+ :alt: In-product instructions for adding a {{fleet-server}} in advanced mode
+ :class: screenshot
+ :::
+
+4. Step through the in-product instructions to configure and install {{fleet-server}}.
+
+ ::::{note}
+ * The fields to configure {{fleet-server}} hosts are not available if the hosts are already configured outside of {{fleet}}. For more information, refer to [{{fleet}} settings in {{kib}}](kibana://docs/reference/configuration-reference/fleet-settings.md).
+ * When using the **Advanced** option, it’s recommended to generate a unique service token for each {{fleet-server}}. For other ways to generate service tokens, refer to [`elasticsearch-service-tokens`](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md).
+ * If you’ve configured a non-default port for {{fleet-server}} in the {{fleet-server}} integration, you need to include the `--fleet-server-host` and `--fleet-server-port` options in the `elastic-agent install` command. Refer to the [install command documentation](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) for details.
+
+ ::::
+
+
+ At the **Install Fleet Server to a centralized host** step, the `elastic-agent install` command installs an {{agent}} as a managed service and enrolls it in a {{fleet-server}} policy. For more {{fleet-server}} commands, refer to the [{{agent}} command reference](/reference/ingestion-tools/fleet/agent-command-reference.md).
+
+5. If installation is successful, a confirmation indicates that {{fleet-server}} is set up and connected.
+
+After {{fleet-server}} is installed and enrolled in {{fleet}}, the newly created {{fleet-server}} policy is applied. You can see this on the {{fleet-server}} policy page.
+
+The {{fleet-server}} agent also shows up on the main {{fleet}} page as another agent whose life-cycle can be managed (like other agents in the deployment).
+
+You can update your {{fleet-server}} configuration in {{kib}} at any time by going to: **Management** → **{{fleet}}** → **Settings**. From there you can:
+
+* Update the {{fleet-server}} host URL.
+* Configure additional outputs where agents should send data.
+* Specify the location from where agents should download binaries.
+* Specify proxy URLs to use for {{fleet-server}} or {{agent}} outputs.
+
+
+## Troubleshooting [add-fleet-server-on-prem-troubleshoot]
+
+If you’re unable to add a {{fleet}}-managed agent, click the **Agents** tab and confirm that the agent running {{fleet-server}} is healthy.
+
+
+## Next steps [add-fleet-server-on-prem-next]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
+
+::::{note}
+For on-premises deployments, you can dedicate a policy to all the agents in the network boundary and configure that policy to include a specific {{fleet-server}} (or a cluster of {{fleet-server}}s).
+
+Read more in [Add a {{fleet-server}} to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-fleet-server-to-policy).
+
+::::
diff --git a/reference/ingestion-tools/fleet/add-integration-to-policy.md b/reference/ingestion-tools/fleet/add-integration-to-policy.md
new file mode 100644
index 0000000000..c9c9c28fa7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-integration-to-policy.md
@@ -0,0 +1,43 @@
+---
+navigation_title: "Add an integration to an {{agent}} policy"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add-integration-to-policy.html
+---
+
+# Add an integration to an {{agent}} policy [add-integration-to-policy]
+
+
+An [{{agent}} policy](/reference/ingestion-tools/fleet/agent-policy.md) consists of one or more integrations that are applied to the agents enrolled in that policy. When you add an integration, the policy created for that integration can be shared with multiple {{agent}} policies. This reduces the number of integration policies that you need to actively manage.
+
+To add a new integration to one or more {{agent}} policies:
+
+1. In {{kib}}, go to the **Integrations** page.
+2. The Integrations page shows {{agent}} integrations along with other types, such as {{beats}}. Scroll down and select **Elastic Agent only** to view only integrations that work with {{agent}}.
+3. Search for and select an integration. You can select a category to narrow your search.
+4. Click **Add <integration name>**.
+5. You can opt to install an {{agent}} if you haven’t already, or choose **Add integration only** to proceed.
+6. In Step 1 on the **Add <integration name>** page, you can select the configuration settings specific to the integration.
+7. In Step 2 on the page, you have two options:
+
+ 1. If you’d like to create a new policy for your {{agent}}s, on the **New hosts** tab specify a name for the new agent policy and choose whether or not to collect system logs and metrics. Collecting logs and metrics will add the System integration to the new agent policy.
+ 2. If you already have an {{agent}} policy created, on the **Existing hosts** tab use the drop-down menu to specify one or more agent policies that you’d like to add the integration to.
+
+8. Click **Save and continue** to confirm your settings.
+
+This action installs the integration (if it’s not already installed) and adds it to the {{agent}} policies that you specified. {{fleet}} distributes the new integration policy to all {{agent}}s that are enrolled in the agent policies.
+
+You can update the settings for an installed integration at any time:
+
+1. In {{kib}}, go to the **Integrations** page.
+2. On the **Integration policies** tab, for the integration that you'd like to update, open the **Actions** menu and select **Edit integration**.
+3. On the **Edit <integration name>** page you can update any configuration settings and also update the list of {{agent}} policies to which the integration is added.
+
+ If you clear the **Agent policies** field, the integration will be removed from any {{agent}} policies to which it had been added.
+
+ To identify any integrations that have been "orphaned", that is, not associated with any {{agent}} policies, check the **Agent policies** column on the **Integration policies** tab. Any integrations that are installed but not associated with an {{agent}} policy are labeled as `No agent policies`.
+
+
+If you haven’t deployed any {{agent}}s yet or set up agent policies, start with one of our quick start guides:
+
+* [Get started with logs and metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
+* [Get started with application traces and APM](/solutions/observability/apps/get-started-with-apm.md)
diff --git a/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md b/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md
new file mode 100644
index 0000000000..cb3fb2afa7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md
@@ -0,0 +1,65 @@
+---
+navigation_title: "add_cloudfoundry_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_cloudfoundry_metadata-processor.html
+---
+
+# Add Cloud Foundry metadata [add_cloudfoundry_metadata-processor]
+
+
+The `add_cloudfoundry_metadata` processor annotates each event with relevant metadata from Cloud Foundry applications.
+
+For events to be annotated with Cloud Foundry metadata, they must have a field called `cloudfoundry.app.id` that contains a reference to a Cloud Foundry application, and the configured Cloud Foundry client must be able to retrieve information for the application.
+
+Each event is annotated with:
+
+* Application Name
+* Space ID
+* Space Name
+* Organization ID
+* Organization Name
+
+::::{note}
+Pivotal Application Service and Tanzu Application Service include this metadata in all events from the firehose since version 2.8. In these cases the metadata in the events is used, and the `add_cloudfoundry_metadata` processor doesn’t modify these fields.
+::::
+
+
+For efficient annotation, application metadata retrieved by the Cloud Foundry client is stored in a persistent cache on the filesystem. This is done so the metadata can persist across restarts of {{agent}} and its underlying programs. For control over this cache, use the `cache_duration` and `cache_retry_delay` settings.
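
One way to picture the `cache_duration` and `cache_retry_delay` semantics is the following minimal in-memory Python sketch. It is illustrative only: the real Cloud Foundry client persists its cache to the filesystem, and the names `lookup` and `fetch` are hypothetical.

```python
import time

CACHE_DURATION = 120.0    # seconds to keep an application's metadata
CACHE_RETRY_DELAY = 20.0  # seconds to wait before retrying after an error

cache = {}  # app_id -> (expires_at, metadata_or_none)

def lookup(app_id, fetch, now=None):
    """Return cached metadata for app_id, fetching on a miss (sketch)."""
    now = time.monotonic() if now is None else now
    entry = cache.get(app_id)
    if entry and entry[0] > now:
        return entry[1]  # cached metadata, or a cached failure
    try:
        metadata = fetch(app_id)
        cache[app_id] = (now + CACHE_DURATION, metadata)
        return metadata
    except Exception:
        # Remember the failure briefly so the API isn't hammered on errors.
        cache[app_id] = (now + CACHE_RETRY_DELAY, None)
        return None
```

A successful lookup is served from the cache for `cache_duration` seconds; a failed one is not retried until `cache_retry_delay` has elapsed.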
+
+
+## Example [_example_3]
+
+```yaml
+ - add_cloudfoundry_metadata:
+ api_address: https://api.dev.cfdev.sh
+ client_id: uaa-filebeat
+ client_secret: verysecret
+ ssl:
+ verification_mode: none
+ # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate.
+ #ssl:
+ # certificate_authorities: ["/etc/pki/cf/ca.pem"]
+ # certificate: "/etc/pki/cf/cert.pem"
+ # key: "/etc/pki/cf/cert.key"
+```
+
+
+## Configuration settings [_configuration_settings_2]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `api_address` | No | `http://api.bosh-lite.com` | URL of the Cloud Foundry API. |
+| `doppler_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry Doppler Websocket. |
+| `uaa_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry UAA API. |
+| `rlp_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry RLP Gateway. |
+| `client_id` | Yes | | Client ID to authenticate with Cloud Foundry. |
+| `client_secret` | Yes | | Client Secret to authenticate with Cloud Foundry. |
+| `cache_duration` | No | `120s` | Maximum amount of time to cache an application’s metadata. |
+| `cache_retry_delay` | No | `20s` | Time to wait before trying to obtain an application’s metadata again in case of error. |
+| `ssl` | No |  | SSL configuration to use when connecting to Cloud Foundry. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). |
+
diff --git a/reference/ingestion-tools/fleet/add_docker_metadata-processor.md b/reference/ingestion-tools/fleet/add_docker_metadata-processor.md
new file mode 100644
index 0000000000..54624b19af
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_docker_metadata-processor.md
@@ -0,0 +1,80 @@
+---
+navigation_title: "add_docker_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_docker_metadata-processor.html
+---
+
+# Add Docker metadata [add_docker_metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_docker_metadata` processor annotates each event with relevant metadata from Docker containers. At startup the processor detects a Docker environment and caches the metadata.
+
+For events to be annotated with Docker metadata, the configuration must be valid, and the processor must be able to reach the Docker API.
+
+Each event is annotated with:
+
+* Container ID
+* Name
+* Image
+* Labels
+
+::::{note}
+When running {{agent}} in a container, you need to provide access to Docker’s unix socket in order for the `add_docker_metadata` processor to work. You can do this by mounting the socket inside the container. For example:
+
+`docker run -v /var/run/docker.sock:/var/run/docker.sock ...`
+
+To avoid privilege issues, you may also need to add `--user=root` to the `docker run` flags. Because the user must be part of the Docker group in order to access `/var/run/docker.sock`, root access is required if {{agent}} is running as non-root inside the container.
+
+If the Docker daemon is restarted, the mounted socket will become invalid, and metadata will stop working. When this happens, you can do one of the following:
+
+* Restart {{agent}} every time Docker is restarted
+* Mount the entire `/var/run` directory (instead of just the socket)
+
+::::
+
+
+
+## Example [_example_4]
+
+```yaml
+ - add_docker_metadata:
+ host: "unix:///var/run/docker.sock"
+ #match_fields: ["system.process.cgroup.id"]
+ #match_pids: ["process.pid", "process.parent.pid"]
+ #match_source: true
+ #match_source_index: 4
+ #match_short_id: true
+ #cleanup_timeout: 60
+ #labels.dedot: false
+ # To connect to Docker over TLS you must specify a client and CA certificate.
+ #ssl:
+ # certificate_authority: "/etc/pki/root/ca.pem"
+ # certificate: "/etc/pki/client/cert.pem"
+ # key: "/etc/pki/client/cert.key"
+```
+
+
+## Configuration settings [_configuration_settings_3]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `host` | No | `unix:///var/run/docker.sock` | Docker socket (UNIX or TCP socket). |
+| `ssl` | No |  | SSL configuration to use when connecting to the Docker socket. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). |
+| `match_fields` | No |  | List of fields to match a container ID. At least one of the fields must hold a container ID to get the event enriched. |
+| `match_pids` | No | `["process.pid", "process.parent.pid"]` | List of fields that contain process IDs. If the process is running in Docker, the event will be enriched. |
+| `match_source` | No | `true` | Whether to match the container ID from a log path present in the `log.file.path` field. |
+| `match_short_id` | No | `false` | Whether to match the container short ID from a log path present in the `log.file.path` field. This setting allows you to match directory names that have the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`. |
+| `match_source_index` | No | `4` | Index in the source path split by a forward slash (`/`) to find the container ID. For example, the default, `4`, matches the container ID in `/var/lib/docker/containers/<container_id>/*.log`. |
+| `cleanup_timeout` | No | `60s` | Time of inactivity before container metadata is cleaned up and forgotten. |
+| `labels.dedot` | No | `false` | Whether to replace dots (`.`) in labels with underscores (`_`). |
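
The `match_source_index` rule can be pictured with a short Python sketch. It is illustrative only, not the actual processor code, and it assumes the empty segment before the leading `/` is discarded so that index `4` lands on the container ID in the default Docker logs path:

```python
# Sketch of how match_source_index selects the container ID from log.file.path.
cid = "c" * 64  # hypothetical 64-character container ID
path = f"/var/lib/docker/containers/{cid}/{cid}-json.log"

# Drop the leading slash, then split the path into segments.
parts = path.lstrip("/").split("/")
# parts: ['var', 'lib', 'docker', 'containers', '<container_id>', '<file>']

container_id = parts[4]  # the default match_source_index of 4
```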
+
diff --git a/reference/ingestion-tools/fleet/add_fields-processor.md b/reference/ingestion-tools/fleet/add_fields-processor.md
new file mode 100644
index 0000000000..5b85386d81
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_fields-processor.md
@@ -0,0 +1,59 @@
+---
+navigation_title: "add_fields"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_fields-processor.html
+---
+
+# Add fields [add_fields-processor]
+
+
+The `add_fields` processor adds fields to the event. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. The `add_fields` processor overwrites the target field if it already exists. By default, the fields that you specify are grouped under the `fields` sub-dictionary in the event. To group the fields under a different sub-dictionary, use the `target` setting. To store the fields as top-level fields, set `target: ''`.
+
+
+## Examples [_examples_2]
+
+This configuration:
+
+```yaml
+ - add_fields:
+ target: project
+ fields:
+ name: myproject
+ id: '574734885120952459'
+```
+
+Adds these fields to any event:
+
+```json
+{
+ "project": {
+ "name": "myproject",
+ "id": "574734885120952459"
+ }
+}
+```
+
+This configuration alters the event metadata:
+
+```yaml
+ - add_fields:
+ target: '@metadata'
+ fields:
+ op_type: "index"
+```
+
+When the event is ingested by {{es}}, the document will have `op_type: "index"` set as a metadata field.
+
+
+## Configuration settings [_configuration_settings_4]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `target` | No | `fields` | Sub-dictionary to put all fields into. Set `target` to `@metadata` to add values to the event metadata instead of fields. |
+| `fields` | Yes | | Fields to be added. |
+
diff --git a/reference/ingestion-tools/fleet/add_host_metadata-processor.md b/reference/ingestion-tools/fleet/add_host_metadata-processor.md
new file mode 100644
index 0000000000..ebd64ad687
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_host_metadata-processor.md
@@ -0,0 +1,96 @@
+---
+navigation_title: "add_host_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_host_metadata-processor.html
+---
+
+# Add Host metadata [add_host_metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_host_metadata` processor annotates each event with relevant metadata from the host machine.
+
+::::{note}
+If you are using {{agent}} to monitor an external system, use the [`add_observer_metadata`](/reference/ingestion-tools/fleet/add_observer_metadata-processor.md) processor instead of `add_host_metadata`.
+::::
+
+
+
+## Example [_example_5]
+
+```yaml
+ - add_host_metadata:
+ cache.ttl: 5m
+ geo:
+ name: nyc-dc1-rack1
+ location: 40.7128, -74.0060
+ continent_name: North America
+ country_iso_code: US
+ region_name: New York
+ region_iso_code: NY
+ city_name: New York
+```
+
+The fields added to the event look like this:
+
+```json
+{
+ "host":{
+ "architecture":"x86_64",
+ "name":"example-host",
+ "id":"",
+ "os":{
+ "family":"darwin",
+ "type":"macos",
+ "build":"16G1212",
+ "platform":"darwin",
+ "version":"10.12.6",
+ "kernel":"16.7.0",
+ "name":"Mac OS X"
+ },
+ "ip": ["192.168.0.1", "10.0.0.1"],
+ "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"],
+ "geo": {
+ "continent_name": "North America",
+ "country_iso_code": "US",
+ "region_name": "New York",
+ "region_iso_code": "NY",
+ "city_name": "New York",
+ "name": "nyc-dc1-rack1",
+ "location": "40.7128, -74.0060"
+ }
+ }
+}
+```
+
+
+## Configuration settings [_configuration_settings_5]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+::::{important}
+If `host.*` fields already exist in the event, they are overwritten by default unless you set `replace_fields` to `false` in the processor configuration.
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `netinfo.enabled` | No | `true` | Whether to include IP addresses and MAC addresses as fields `host.ip` and `host.mac`. |
+| `cache.ttl` | No | `5m` | Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether. |
+| `geo.name` | No | | User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar. |
+| `geo.location` | No | | Longitude and latitude in comma-separated format. |
+| `geo.continent_name` | No | | Name of the continent. |
+| `geo.country_name` | No | | Name of the country. |
+| `geo.region_name` | No | | Name of the region. |
+| `geo.city_name` | No | | Name of the city. |
+| `geo.country_iso_code` | No | | ISO country code. |
+| `geo.region_iso_code` | No | | ISO region code. |
+| `replace_fields` | No | `true` | Whether to replace original host fields from the event. If set to `false`, original host fields from the event are not replaced by host fields from `add_host_metadata`. |
+
diff --git a/reference/ingestion-tools/fleet/add_id-processor.md b/reference/ingestion-tools/fleet/add_id-processor.md
new file mode 100644
index 0000000000..b6dc2bb377
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_id-processor.md
@@ -0,0 +1,31 @@
+---
+navigation_title: "add_id"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_id-processor.html
+---
+
+# Generate an ID for an event [add_id-processor]
+
+
+The `add_id` processor generates a unique ID for an event.
+
+
+## Example [_example_6]
+
+```yaml
+ - add_id: ~
+```
+
+
+## Configuration settings [_configuration_settings_6]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `target_field` | No | `@metadata._id` | Field where the generated ID will be stored. |
+| `type` | No | `elasticsearch` | Type of ID to generate. Currently only `elasticsearch` is supported. The `elasticsearch` type uses the same algorithm that {{es}} uses to auto-generate document IDs. |
+
diff --git a/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md b/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md
new file mode 100644
index 0000000000..170225524e
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md
@@ -0,0 +1,225 @@
+---
+navigation_title: "add_kubernetes_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_kubernetes_metadata-processor.html
+---
+
+# Add Kubernetes metadata [add_kubernetes_metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_kubernetes_metadata` processor annotates each event with relevant metadata based on which Kubernetes Pod the event originated from. At startup it detects an `in_cluster` environment and caches the Kubernetes-related metadata.
+
+For events to be annotated with Kubernetes-related metadata, the Kubernetes configuration must be valid.
+
+Each event is annotated with:
+
+* Pod Name
+* Pod UID
+* Namespace
+* Labels
+
+In addition, the node and namespace metadata are added to the Pod metadata.
+
+The `add_kubernetes_metadata` processor has two basic building blocks:
+
+* Indexers
+* Matchers
+
+Indexers use Pod metadata to create unique identifiers for each one of the Pods. These identifiers help to correlate the metadata of the observed Pods with actual events. For example, the `ip_port` indexer can take a Kubernetes Pod and create identifiers for it based on all its `pod_ip:container_port` combinations.
+
+Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the `fields` matcher takes `["metricset.host"]` as a lookup field, it constructs a lookup key with the value of the field `metricset.host`. When one of these lookup keys matches with one of the identifiers, the event is enriched with the metadata of the identified Pod.
+
+For more information about available indexers and matchers, plus some examples, refer to [Indexers and matchers](#kubernetes-indexers-and-matchers).
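
The indexer/matcher handshake described above can be sketched in a few lines of Python. This is a conceptual model only, not the actual {{agent}} implementation, and the Pod data and field names are hypothetical:

```python
pod = {
    "name": "web-1",
    "namespace": "default",
    "ip": "10.1.2.3",
    "container_ports": [80, 8080],
}

# An ip_port-style indexer: one identifier per pod_ip:container_port combination.
index = {
    f"{pod['ip']}:{port}": {"pod": pod["name"], "namespace": pod["namespace"]}
    for port in pod["container_ports"]
}

# A fields-style matcher: build the lookup key from an event field.
event = {"metricset": {"host": "10.1.2.3:8080"}}
key = event["metricset"]["host"]

# When the lookup key matches an identifier, the event is enriched.
if key in index:
    event["kubernetes"] = index[key]
```

After this runs, the event carries the metadata of the Pod whose `ip:port` identifier matched the `metricset.host` value.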
+
+
+## Examples [_examples_3]
+
+This configuration enables the processor when {{agent}} is run as a Pod in Kubernetes.
+
+```yaml
+ - add_kubernetes_metadata:
+ # Indexers and matchers can also be defined manually, for instance:
+ #indexers:
+ # - ip_port:
+ #matchers:
+ # - fields:
+ # lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+This configuration enables the processor on an {{agent}} running as a process on the Kubernetes node:
+
+```yaml
+ - add_kubernetes_metadata:
+ host:
+ # If kube_config is not set, KUBECONFIG environment variable will be checked
+ # and if not present it will fall back to InCluster
+ kube_config: ${HOME}/.kube/config
+ # Indexers and matchers can also be defined manually, for instance:
+ #indexers:
+ # - ip_port:
+ #matchers:
+ # - fields:
+ # lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+This configuration disables the default indexers and matchers, and then enables different indexers and matchers:
+
+```yaml
+ - add_kubernetes_metadata:
+ host:
+ # If kube_config is not set, KUBECONFIG environment variable will be checked
+ # and if not present it will fall back to InCluster
+ kube_config: ~/.kube/config
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - fields:
+ lookup_fields: ["metricset.host"]
+ #labels.dedot: true
+ #annotations.dedot: true
+```
+
+
+## Configuration settings [_configuration_settings_7]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `host` | No | | Node to scope {{agent}} to in case it cannot be accurately detected, as when running {{agent}} in host network mode. |
+| `scope` | No | `node` | Whether the processor should have visibility at the node level (`node`) or at the entire cluster level (`cluster`). |
+| `namespace` | No |  | Namespace to collect the metadata from. If no namespace is specified, collects metadata from all namespaces. |
+| `add_resource_metadata` | No |  | Filters and configuration for adding extra metadata to the event. This setting accepts the following settings:<br><br>• `node` or `namespace`: Labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included, but annotations are not. To change the default behavior, you can set `include_labels`, `exclude_labels`, and `include_annotations`. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and `use_regex_exclude: true` in combination with `exclude_labels`. To turn off enrichment of `node` or `namespace` metadata individually, set `enabled: false`.<br><br>• `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is not added by default. To enable this behavior, set `deployment: true`.<br><br>• `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is not added by default. To enable this behavior, set `cronjob: true`. |
+| `kube_config` | No | `KUBECONFIG` environment variable, if present | Config file to use as the configuration for the Kubernetes client. |
+| `kube_client_options` | No |  | Additional configuration options for the Kubernetes client. Currently client QPS and burst are supported. If this setting is not configured, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) is used. |
+| `cleanup_timeout` | No | `60s` | Time of inactivity before stopping the running configuration for a container. |
+| `sync_period` | No |  | Timeout for listing historical resources. |
+| `labels.dedot` | No | `true` | Whether to replace dots (`.`) in labels with underscores (`_`). |
+| `annotations.dedot` | No | `true` | Whether to replace dots (`.`) in annotations with underscores (`_`). |
+
+An example `add_resource_metadata` configuration:
+
+```yaml
+  add_resource_metadata:
+    namespace:
+      include_labels: ["namespacelabel1"]
+      # use_regex_include: false
+      # use_regex_exclude: false
+      # exclude_labels: ["namespacelabel2"]
+      #labels.dedot: true
+      #annotations.dedot: true
+    node:
+      # use_regex_include: false
+      include_labels: ["nodelabel2"]
+      include_annotations: ["nodeannotation1"]
+      # use_regex_exclude: false
+      # exclude_annotations: ["nodeannotation2"]
+      #labels.dedot: true
+      #annotations.dedot: true
+    deployment: true
+    cronjob: true
+```
+
+An example `kube_client_options` configuration:
+
+```yaml
+  kube_client_options:
+    qps: 5
+    burst: 10
+```
+
+
+## Indexers and matchers [kubernetes-indexers-and-matchers]
+
+The `add_kubernetes_metadata` processor has two basic building blocks:
+
+* Indexers
+* Matchers
+
+
+### Indexers [_indexers]
+
+Indexers use Pod metadata to create unique identifiers for each one of the Pods.
+
+Available indexers are:
+
+`container`
+: Identifies the Pod metadata using the IDs of its containers.
+
+`ip_port`
+: Identifies the Pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the combination of `ip:port` for each of the ports exposed by all containers of the pod. The `ip` is the IP of the pod.
+
+`pod_name`
+: Identifies the Pod metadata using its namespace and its name as `namespace/pod_name`.
+
+`pod_uid`
+: Identifies the Pod metadata using the UID of the Pod.
+
+
+### Matchers [_matchers]
+
+Matchers are used to construct the lookup keys that match the identifiers created by indexers.
+
+Available matchers are:
+
+`field_format`
+: Looks up Pod metadata using a key created with a string format that can include event fields.
+
+ This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+ For example, the following configuration uses the `ip_port` indexer to identify the Pod metadata by combinations of the Pod IP and its exposed ports, and uses the destination IP and port in events as match keys:
+
+ ```yaml
+ - add_kubernetes_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - field_format:
+ format: '%{[destination.ip]}:%{[destination.port]}'
+ ```
+
+
+`fields`
+: Looks up Pod metadata using the values of specific fields as the lookup key. When multiple fields are defined, the first one included in the event is used.
+
+ This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+ For example, the following configuration uses the `ip_port` indexer to identify Pods, and defines a matcher that uses the destination IP or the server IP for the lookup, whichever it finds first in the event:
+
+ ```yaml
+ - add_kubernetes_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - ip_port:
+ matchers:
+ - fields:
+ lookup_fields: ['destination.ip', 'server.ip']
+ ```
+
+
+`logs_path`
+: Looks up Pod metadata using identifiers extracted from the log path stored in the `log.file.path` field.
+
+ This matcher has the following configuration settings:
+
+ `logs_path`
+ : (Optional) Base path of container logs. If not specified, it uses the default logs path of the platform where {{agent}} is running: for Linux, `/var/lib/docker/containers/`; for Windows, `C:\\ProgramData\\Docker\\containers`. To change the default value, the container ID must follow right after the `logs_path`, as in `<logs_path>/<container_id>/`, where `container_id` is a 64-character-long hexadecimal string.
+
+ `resource_type`
+ : (Optional) Type of the resource to obtain the ID of. Valid `resource_type`:
+
+ * `pod`: to make the lookup based on the Pod UID. When `resource_type` is set to `pod`, `logs_path` must be set as well. Supported paths in this case are:
+
+ * `/var/lib/kubelet/pods/`: used to read logs from volumes mounted into the Pod; those logs end up under `/var/lib/kubelet/pods/<pod UID>/volumes/<volume name>/...`. To use `/var/lib/kubelet/pods/` as a `logs_path`, `/var/lib/kubelet/pods` must be mounted into the {{agent}} Pods.
+ * `/var/log/pods/`: note that when using `resource_type: 'pod'`, logs are enriched only with Pod metadata (Pod ID, Pod name, and so on), not container metadata.
+
+ * `container`: to make the lookup based on the container ID. `logs_path` must be set to `/var/log/containers/`. If `resource_type` is not specified, it defaults to `container`.
+
+
+ To use the `logs_path` matcher, the agent’s input path must be a subdirectory of the directory defined in the `logs_path` configuration setting.
+
+ The default configuration is able to look up the metadata using the container ID when the logs are collected from the default Docker logs path (`/var/lib/docker/containers/<container_id>/...` on Linux).
+
+ For example, the following configuration uses the Pod UID when the logs are collected from `/var/lib/kubelet/pods/<pod UID>/...`:
+
+ ```yaml
+ - add_kubernetes_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - pod_uid:
+ matchers:
+ - logs_path:
+ logs_path: '/var/lib/kubelet/pods'
+ resource_type: 'pod'
+ ```
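
As a rough illustration of what the `logs_path` matcher extracts from a collected file path, here is a Python sketch. It is not the actual matcher code; it only assumes a known base path followed by a 64-character hexadecimal container ID:

```python
import re

LOGS_PATH = "/var/lib/docker/containers/"  # default base path on Linux

def container_id(file_path, logs_path=LOGS_PATH):
    """Return the 64-character hex container ID that follows logs_path,
    or None when the path doesn't match (illustrative sketch)."""
    if not file_path.startswith(logs_path):
        return None
    candidate = file_path[len(logs_path):].split("/", 1)[0]
    return candidate if re.fullmatch(r"[0-9a-f]{64}", candidate) else None

cid = "ab" * 32  # a hypothetical 64-character container ID
container_id(f"{LOGS_PATH}{cid}/{cid}-json.log")  # returns the ID
container_id("/var/log/other/app.log")            # returns None
```

The extracted identifier is then compared against the identifiers produced by the configured indexer (for example, `container` or `pod_uid`).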
+
+
diff --git a/reference/ingestion-tools/fleet/add_labels-processor.md b/reference/ingestion-tools/fleet/add_labels-processor.md
new file mode 100644
index 0000000000..63196bf03c
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_labels-processor.md
@@ -0,0 +1,56 @@
+---
+navigation_title: "add_labels"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_labels-processor.html
+---
+
+# Add labels [add_labels-processor]
+
+
+The `add_labels` processor adds a set of key-value pairs to an event. The processor flattens nested configuration objects like arrays or dictionaries into a fully qualified name by merging nested names with a dot (`.`). Array entries create numeric names starting with 0. Labels are always stored under the Elastic Common Schema compliant `labels` sub-dictionary.
+
+
+## Example [_example_7]
+
+This configuration:
+
+```yaml
+ - add_labels:
+ labels:
+ number: 1
+ with.dots: test
+ nested:
+ with.dots: nested
+ array:
+ - do
+ - re
+ - with.field: mi
+```
+
+Adds these fields to every event:
+
+```json
+{
+ "labels": {
+ "number": 1,
+ "with.dots": "test",
+ "nested.with.dots": "nested",
+ "array.0": "do",
+ "array.1": "re",
+ "array.2.with.field": "mi"
+ }
+}
+```
+
+
+## Configuration settings [_configuration_settings_8]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `labels` | Yes | | Dictionaries of labels to be added. |
+
diff --git a/reference/ingestion-tools/fleet/add_locale-processor.md b/reference/ingestion-tools/fleet/add_locale-processor.md
new file mode 100644
index 0000000000..33fed145b6
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_locale-processor.md
@@ -0,0 +1,44 @@
+---
+navigation_title: "add_locale"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_locale-processor.html
+---
+
+# Add the local time zone [add_locale-processor]
+
+
+The `add_locale` processor enriches each event with either the machine’s time zone offset from UTC or the name of the time zone. The processor adds an `event.timezone` value to each event.
+
+
+## Examples [_examples_4]
+
+This configuration adds the processor with the default settings:
+
+```yaml
+ - add_locale: ~
+```
+
+This configuration adds the processor and configures it to add the time zone abbreviation to events:
+
+```yaml
+  - add_locale:
+      format: abbreviation
+```
+
+::::{note}
+The `add_locale` processor differentiates between daylight saving time (DST) and regular time. For example, `CEST` indicates DST, and `CET` is regular time.
+::::
+
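+For illustration, the field this processor adds has the following shape. The value shown is an assumption for a host in the Central European time zone, not output from any specific run:
+
+```json
+{
+  "event": {
+    "timezone": "+02:00"
+  }
+}
+```
+
+With `format: abbreviation`, the same host would instead report `CEST` during DST and `CET` otherwise.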
+
+
+## Configuration settings [_configuration_settings_9]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `format` | No | `offset` | Whether an `offset` or time zone `abbreviation` is added to the event. |
+
diff --git a/reference/ingestion-tools/fleet/add_network_direction-processor.md b/reference/ingestion-tools/fleet/add_network_direction-processor.md
new file mode 100644
index 0000000000..5c6f5de0ba
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_network_direction-processor.md
@@ -0,0 +1,37 @@
+---
+navigation_title: "add_network_direction"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_network_direction-processor.html
+---
+
+# Add network direction [add_network_direction-processor]
+
+
+The `add_network_direction` processor attempts to compute the perimeter-based network direction when given a source and destination IP address and a list of internal networks.
+
+
+## Example [_example_8]
+
+```yaml
+  - add_network_direction:
+      source: source.ip
+      destination: destination.ip
+      target: network.direction
+      internal_networks: [ private ]
+```
+
+
+## Configuration settings [_configuration_settings_10]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `source` | Yes | | Source IP. |
+| `destination` | Yes | | Destination IP. |
+| `target` | Yes | | Target field where the network direction will be written. |
+| `internal_networks` | Yes | | List of internal networks. The value can contain either CIDR blocks or a list of special values enumerated in the network section of [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions). |
+
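+As noted in the table, `internal_networks` also accepts CIDR blocks instead of named ranges. The following sketch (the subnets are illustrative assumptions) treats only two specific ranges as internal:
+
+```yaml
+  - add_network_direction:
+      source: source.ip
+      destination: destination.ip
+      target: network.direction
+      internal_networks: [ '10.0.0.0/8', '192.168.1.0/24' ]
+```
+
+Traffic between two internal addresses is classified as internal; traffic crossing this perimeter is classified as inbound or outbound relative to it.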
diff --git a/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md b/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md
new file mode 100644
index 0000000000..305de98330
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md
@@ -0,0 +1,137 @@
+---
+navigation_title: "add_nomad_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_nomad_metadata-processor.html
+---
+
+# Add Nomad metadata [add_nomad_metadata-processor]
+
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+The `add_nomad_metadata` processor adds fields with relevant metadata for applications deployed in Nomad.
+
+Each event is annotated with the following information:
+
+* Allocation name, identifier, and status
+* Job name and type
+* Namespace where the job is deployed
+* Datacenter and region where the agent running the allocation is located.
+
+
+## Example [_example_9]
+
+```yaml
+ - add_nomad_metadata: ~
+```
+
+
+## Configuration settings [_configuration_settings_11]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `address` | No | `http://127.0.0.1:4646` | URL of the agent API used to request the metadata. |
+| `namespace` | No | | Namespace to watch. If set, only events for allocations in this namespace are annotated. |
+| `region` | No | | Region to watch. If set, only events for allocations in this region are annotated. |
+| `secret_id` | No | | SecretID to use when connecting with the agent API. This is an example of an ACL policy to apply to the token:
+```json
+namespace "*" {
+  policy = "read"
+}
+node {
+  policy = "read"
+}
+agent {
+  policy = "read"
+}
+```
+|
+| `refresh_interval` | No | `30s` | Interval used to update the cached metadata. |
+| `cleanup_timeout` | No | `60s` | Time to wait before cleaning up an allocation’s associated resources after it has been removed. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. |
+| `scope` | No | `node` | Scope of the resources to watch. Specify `node` to get metadata for the allocations in a single agent, or `global` to get metadata for allocations running on any agent. |
+| `node` | No | | When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.
+For example, you can use the following configuration when {{agent}} is collecting events from all the allocations in the cluster:
+```yaml
+- add_nomad_metadata:
+    scope: global
+```
+|
+
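+For example, to annotate only events for allocations in a single namespace (the namespace name is an illustrative assumption), combine the defaults with the `namespace` setting from the table above:
+
+```yaml
+  - add_nomad_metadata:
+      namespace: production
+```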
+
+## Indexers and matchers [_indexers_and_matchers]
+
+Indexers and matchers are used to correlate fields in events with actual metadata. {{agent}} uses this information to know what metadata to include in each event.
+
+
+### Indexers [_indexers_2]
+
+Indexers use allocation metadata to create unique identifiers for each of the allocations.
+
+Available indexers are:
+
+`allocation_name`
+: Identifies allocations by their name and namespace (as `/`)
+
+`allocation_uuid`
+: Identifies allocations by their unique identifier.
+
+
+### Matchers [_matchers_2]
+
+Matchers are used to construct the lookup keys that match the identifiers created by indexers.
+
+
+#### `field_format` [_field_format]
+
+Looks up allocation metadata using a key created with a string format that can include event fields.
+
+This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event.
+
+For example, the following configuration uses the `allocation_name` indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys:
+
+```yaml
+- add_nomad_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - allocation_name:
+ matchers:
+ - field_format:
+ format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}'
+```
+
+
+#### `fields` [_fields]
+
+Looks up allocation metadata using the value of specific fields as the key. When multiple fields are defined, the first one included in the event is used.
+
+This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup.
+
+For example, the following configuration uses the `allocation_uuid` indexer to identify allocations, and defines a matcher that looks for the allocation UUID in several fields, using the first one it finds in the event:
+
+```yaml
+- add_nomad_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - allocation_uuid:
+ matchers:
+ - fields:
+ lookup_fields: ['host.name', 'fields.nomad_alloc_uuid']
+```
+
+
+#### `logs_path` [_logs_path]
+
+Looks up allocation metadata using identifiers extracted from the log path stored in the `log.file.path` field.
+
+This matcher has an optional `logs_path` setting with the base path of the directory containing the logs for the local agent.
+
+The default configuration can look up the metadata using the allocation UUID when the logs are collected under `/var/lib/nomad`.
+
+For example, the following configuration would use the allocation UUID when the logs are collected from `/var/lib/NomadClient001/alloc//alloc/logs/...`:
+
+```yaml
+- add_nomad_metadata:
+ ...
+ default_indexers.enabled: false
+ default_matchers.enabled: false
+ indexers:
+ - allocation_uuid:
+ matchers:
+ - logs_path:
+ logs_path: '/var/lib/NomadClient001'
+```
+
diff --git a/reference/ingestion-tools/fleet/add_observer_metadata-processor.md b/reference/ingestion-tools/fleet/add_observer_metadata-processor.md
new file mode 100644
index 0000000000..a382642d87
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_observer_metadata-processor.md
@@ -0,0 +1,81 @@
+---
+navigation_title: "add_observer_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_observer_metadata-processor.html
+---
+
+# Add Observer metadata [add_observer_metadata-processor]
+
+
+::::{warning}
+This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+::::
+
+
+The `add_observer_metadata` processor annotates each event with relevant metadata from the observer machine.
+
+
+## Example [_example_10]
+
+```yaml
+  - add_observer_metadata:
+      cache.ttl: 5m
+      geo:
+        name: nyc-dc1-rack1
+        location: 40.7128, -74.0060
+        continent_name: North America
+        country_iso_code: US
+        region_name: New York
+        region_iso_code: NY
+        city_name: New York
+```
+
+The fields added to the event look like this:
+
+```json
+{
+ "observer" : {
+ "hostname" : "avce",
+ "type" : "heartbeat",
+ "vendor" : "elastic",
+ "ip" : [
+ "192.168.1.251",
+ "fe80::64b2:c3ff:fe5b:b974",
+ ],
+ "mac" : [
+ "dc:c1:02:6f:1b:ed",
+ ],
+ "geo": {
+ "continent_name": "North America",
+ "country_iso_code": "US",
+ "region_name": "New York",
+ "region_iso_code": "NY",
+ "city_name": "New York",
+ "name": "nyc-dc1-rack1",
+ "location": "40.7128, -74.0060"
+ }
+ }
+}
+```
+
+
+## Configuration settings [_configuration_settings_12]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `netinfo.enabled` | No | `true` | Whether to include IP addresses and MAC addresses as fields `observer.ip` and `observer.mac`. |
+| `cache.ttl` | No | `5m` | Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether. |
+| `geo.name` | No | | User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar. |
+| `geo.location` | No | | Longitude and latitude in comma-separated format. |
+| `geo.continent_name` | No | | Name of the continent. |
+| `geo.country_name` | No | | Name of the country. |
+| `geo.region_name` | No | | Name of the region. |
+| `geo.city_name` | No | | Name of the city. |
+| `geo.country_iso_code` | No | | ISO country code. |
+| `geo.region_iso_code` | No | | ISO region code. |
+
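+If you prefer not to include host addresses in events, you can disable the `netinfo.enabled` setting from the table above. A minimal sketch:
+
+```yaml
+  - add_observer_metadata:
+      netinfo.enabled: false
+```
+
+With this configuration, the `observer.ip` and `observer.mac` fields are omitted.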
diff --git a/reference/ingestion-tools/fleet/add_process_metadata-processor.md b/reference/ingestion-tools/fleet/add_process_metadata-processor.md
new file mode 100644
index 0000000000..49a790566e
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_process_metadata-processor.md
@@ -0,0 +1,77 @@
+---
+navigation_title: "add_process_metadata"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_process_metadata-processor.html
+---
+
+# Add process metadata [add_process_metadata-processor]
+
+
+The `add_process_metadata` processor enriches events with information from running processes, identified by their process ID (PID).
+
+
+## Example [_example_11]
+
+```yaml
+  - add_process_metadata:
+      match_pids: [system.process.ppid]
+      target: system.process.parent
+```
+
+The fields added to the event look as follows:
+
+```json
+"process": {
+ "name": "systemd",
+ "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22",
+ "exe": "/usr/lib/systemd/systemd",
+ "args": ["/usr/lib/systemd/systemd", "--switched-root", "--system", "--deserialize", "22"],
+ "pid": 1,
+ "parent": {
+ "pid": 0
+ },
+ "start_time": "2018-08-22T08:44:50.684Z",
+ "owner": {
+ "name": "root",
+ "id": "0"
+ }
+},
+"container": {
+ "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1"
+},
+```
+
+Optionally, the process environment can be included, too:
+
+```json
+ ...
+ "env": {
+ "HOME": "/",
+ "TERM": "linux",
+ "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64",
+ "LANG": "en_US.UTF-8",
+ }
+ ...
+```
+
+
+## Configuration settings [_configuration_settings_13]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `match_pids` | Yes | | List of fields to look up for a PID. The processor searches the list sequentially until the field is found in the current event, and the PID lookup is then applied to the value of this field. |
+| `target` | No | event root | Destination prefix where the `process` object will be created. |
+| `include_fields` | No | | List of fields to add. By default, adds all available fields except `process.env`. |
+| `ignore_missing` | No | `true` | Whether to ignore missing fields. If `false`, discards events that don’t contain any of the fields specified in `match_pids` and then generates an error. If `true`, missing fields are ignored. |
+| `overwrite_keys` | No | `false` | Whether to overwrite existing keys. If `false` and a target field already exists, it is not overwritten, and an error is logged. If `true`, the target field is overwritten. |
+| `restricted_fields` | No | `false` | Whether to output restricted fields. If `false`, to avoid leaking sensitive data, the `process.env` field is not output. If `true`, the field will be present in the output. |
+| `host_path` | No | root directory (`/`) of host | Host path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, set the `host_path` to overwrite the default. |
+| `cgroup_prefixes` | No | `/kubepods` and `/docker` | Prefix where the container ID is inside cgroup. For different runtime configurations of Kubernetes or Docker, set `cgroup_prefixes` to overwrite the defaults. |
+| `cgroup_regex` | No | | Regular expression with capture group for capturing the container ID from the cgroup path. For example:
+1. `^\/.+\/.+\/.+\/([0-9a-f]{64}).*` matches the container ID of a cgroup like `/kubepods/besteffort/pod665fb997-575b-11ea-bfce-080027421ddf/b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1`
+2. `^\/.+\/.+\/.+\/docker-([0-9a-f]{64}).scope` matches the container ID of a cgroup like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/docker-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope`
+3. `^\/.+\/.+\/.+\/crio-([0-9a-f]{64}).scope` matches the container ID of a cgroup like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/crio-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope`
+If `cgroup_regex` is not set, the container ID is extracted from the cgroup file based on the `cgroup_prefixes` setting.
+|
+| `cgroup_cache_expire_time` | No | `30s` | Time in seconds before cgroup cache elements expire. To disable the cgroup cache, set this to `0`. In some container runtime technologies, like runc, the container’s process is also a process in the host kernel and will be affected by PID rollover/reuse. Set the expire time to a value that is smaller than the PIDs wrap around time to avoid the wrong container ID. |
+
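+To include the process environment described earlier, combine the `restricted_fields` and `include_fields` settings. This is a sketch only; the match field is an assumption from the earlier example, and environment variables can contain sensitive data:
+
+```yaml
+  - add_process_metadata:
+      match_pids: [system.process.ppid]
+      target: system.process.parent
+      restricted_fields: true
+      include_fields: ['process.env']
+```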
diff --git a/reference/ingestion-tools/fleet/add_tags-processor.md b/reference/ingestion-tools/fleet/add_tags-processor.md
new file mode 100644
index 0000000000..ee10babe4b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_tags-processor.md
@@ -0,0 +1,43 @@
+---
+navigation_title: "add_tags"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/add_tags-processor.html
+---
+
+# Add tags [add_tags-processor]
+
+
+The `add_tags` processor adds tags to a list of tags. If the target field already exists, the tags are appended to the existing list of tags.
+
+
+## Example [_example_12]
+
+This configuration:
+
+```yaml
+  - add_tags:
+      tags: [web, production]
+      target: "environment"
+```
+
+Adds the `environment` field to every event:
+
+```json
+{
+ "environment": ["web", "production"]
+}
+```
+
+
+## Configuration settings [_configuration_settings_14]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `tags` | Yes | | List of tags to add. |
+| `target` | No | `tags` | Field the tags will be added to. Setting tags in `@metadata` is not supported. |
+
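+If `target` is omitted, tags are appended to the default `tags` field instead. A minimal sketch:
+
+```yaml
+  - add_tags:
+      tags: [web, production]
+```
+
+This produces `"tags": ["web", "production"]` in each event, appended to any tags that already exist.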
diff --git a/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md b/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md
new file mode 100644
index 0000000000..09789303d2
--- /dev/null
+++ b/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md
@@ -0,0 +1,106 @@
+---
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/advanced-kubernetes-managed-by-fleet.html
+---
+
+# Advanced Elastic Agent configuration managed by Fleet [advanced-kubernetes-managed-by-fleet]
+
+For basic {{agent}} managed by {{fleet}} scenarios, follow the steps in [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md).
+
+On managed {{agent}} installations, it can be useful to configure more advanced options, such as providers, during startup. Refer to [Providers](/reference/ingestion-tools/fleet/providers.md) for more details.
+
+The following steps demonstrate this scenario:
+
+
+## Step 1: Download the {{agent}} manifest [_step_1_download_the_agent_manifest_2]
+
+We recommend following the steps in [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md) with the Kubernetes integration installed in your policy, and then downloading the {{agent}} manifest from the Kibana UI.
+
+:::{image} images/k8skibanaUI.png
+:alt: {{agent}} with K8s Package manifest
+:::
+
+Notes
+: Sample manifests can also be found [here](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml).
+
+
+## Step 2: Create a new configmap [_step_2_create_a_new_configmap]
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: agent-node-datastreams
+ namespace: kube-system
+ labels:
+ k8s-app: elastic-agent
+data:
+ agent.yml: |-
+ providers.kubernetes_leaderelection.enabled: false
+ fleet.enabled: true
+ fleet.access_token: ""
+---
+```
+
+Notes
+: 1. The example above demonstrates how to disable the `kubernetes_leaderelection` provider. The same procedure can be followed for alternative scenarios.
+
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: agent-node-datastreams
+ namespace: kube-system
+ labels:
+ k8s-app: elastic-agent
+data:
+ agent.yml: |-
+ providers.kubernetes:
+ add_resource_metadata:
+ deployment: true
+ cronjob: true
+ fleet.enabled: true
+ fleet.access_token: ""
+---
+```
+
+1. Find more information about [Enrollment Tokens](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md).
+
+
+## Step 3: Configure the DaemonSet [_step_3_configure_daemonset]
+
+Inside the downloaded manifest, update the DaemonSet resource:
+
+```yaml
+containers:
+ - name: elastic-agent
+ image: docker.elastic.co/elastic-agent/elastic-agent:
+ args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
+```
+
+Notes
+: The image tag is a placeholder for the elastic-agent image version that you will download in your manifest, for example `image: docker.elastic.co/elastic-agent/elastic-agent:8.11.0`. The important thing is to update your manifest with the `args` details.
+
+```yaml
+volumeMounts:
+ - name: datastreams
+ mountPath: /etc/elastic-agent/agent.yml
+ readOnly: true
+ subPath: agent.yml
+```
+
+```yaml
+volumes:
+ - name: datastreams
+ configMap:
+ defaultMode: 0640
+ name: agent-node-datastreams
+```
+
+
+## Important Notes [_important_notes]
+
+1. By default, the manifests for {{agent}} managed by {{fleet}} have `hostNetwork: true`. To support multiple installations of {{agent}} on the same node, you should set `hostNetwork: false`. See this relevant [example](https://github.com/elastic/elastic-agent/tree/main/docs/manifests/hostnetwork) as described in [{{agent}} Manifests in order to support Kube-State-Metrics Sharding](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md).
+2. The volume `/usr/share/elastic-agent/state` must remain mounted in [elastic-agent-managed-kubernetes.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml), otherwise the custom ConfigMap provided above will be overwritten.
+
diff --git a/reference/ingestion-tools/fleet/agent-command-reference.md b/reference/ingestion-tools/fleet/agent-command-reference.md
new file mode 100644
index 0000000000..d6cd2ed315
--- /dev/null
+++ b/reference/ingestion-tools/fleet/agent-command-reference.md
@@ -0,0 +1,1196 @@
+---
+navigation_title: "Command reference"
+mapped_pages:
+ - https://www.elastic.co/guide/en/fleet/current/elastic-agent-cmd-options.html
+---
+
+# {{agent}} command reference [elastic-agent-cmd-options]
+
+
+{{agent}} provides commands for running {{agent}}, managing {{fleet-server}}, and doing common tasks. The commands listed here apply to both [{{fleet}}-managed](/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md) and [standalone](/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md) {{agent}}.
+
+::::{admonition} Restrictions
+:class: important
+
+Note the following restrictions for running {{agent}} commands:
+
+* You might need to log in as a root user (or Administrator on Windows) to run the commands described here. After the {{agent}} service is installed and running, make sure you run these commands without prepending them with `./` to avoid invoking the wrong binary.
+* Running {{agent}} commands using the Windows PowerShell ISE is not supported.
+
+::::
+
+
+* [diagnostics](#elastic-agent-diagnostics-command)
+* [enroll](#elastic-agent-enroll-command)
+* [help](#elastic-agent-help-command)
+* [inspect](#elastic-agent-inspect-command)
+* [install](#elastic-agent-install-command)
+* [otel](#elastic-agent-otel-command) [preview]
+* [privileged](#elastic-agent-privileged-command)
+* [restart](#elastic-agent-restart-command)
+* [run](#elastic-agent-run-command)
+* [status](#elastic-agent-status-command)
+* [uninstall](#elastic-agent-uninstall-command)
+* [upgrade](#elastic-agent-upgrade-command)
+* [logs](#elastic-agent-logs-command)
+* [unprivileged](#elastic-agent-unprivileged-command)
+* [version](#elastic-agent-version-command)
+
+
+
+## elastic-agent diagnostics [elastic-agent-diagnostics-command]
+
+Gather diagnostics information from the {{agent}} and the components/units it’s running. This command produces an archive that contains:
+
+* version.txt - version information
+* pre-config.yaml - pre-configuration before variable substitution
+* variables.yaml - current variable contexts from providers
+* computed-config.yaml - configuration after variable substitution
+* components-expected.yaml - expected computed components model from the computed-config.yaml
+* components-actual.yaml - actual running components model as reported by the runtime manager
+* state.yaml - current state information of all running components
+* Components Directory - diagnostic information from each running component:
+
+ * goroutine.txt - goroutine dump
+ * heap.txt - memory allocation of live objects
+ * allocs.txt - a sampling of past memory allocations
+ * threadcreate.txt - traces that led to the creation of new OS threads
+ * block.txt - stack traces that led to blocking on synchronization primitives
+ * mutex.txt - stack traces of holders of contended mutexes
+ * Unit Directory - If a given unit provides specific diagnostics, it will be placed here.
+
+
+Note that **credentials may not be redacted** in the archive; they may appear in plain text in the configuration or policy files inside the archive.
+
+This command is intended for debugging purposes only. The output format and structure of the archive may change between releases.
+
+
+### Synopsis [_synopsis]
+
+```shell
+elastic-agent diagnostics [--file ]
+ [--cpu-profile]
+ [--exclude-events]
+ [--help]
+ [global-flags]
+```
+
+
+### Options [_options]
+
+`--file`
+: Specifies the output archive name. Defaults to `elastic-agent-diagnostics-.zip`, where the timestamp is the current time in UTC.
+
+`--help`
+: Show help for the `diagnostics` command.
+
+`--cpu-profile`
+: Additionally runs a 30-second CPU profile on each running component. This will generate an additional `cpu.pprof` file for each component.
+
+`--p`
+: Alias for `--cpu-profile`.
+
+`--exclude-events`
+: Exclude the events log files from the diagnostics archive.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Example [_example_38]
+
+```shell
+elastic-agent diagnostics
+```
+
+
+
+## elastic-agent enroll [elastic-agent-enroll-command]
+
+Enroll the {{agent}} in {{fleet}}.
+
+Use this command to enroll the {{agent}} in {{fleet}} without installing the agent as a service. You will need to do this if you installed the {{agent}} from a DEB or RPM package and plan to use systemd commands to start and manage the service. This command is also useful for testing {{agent}} prior to installing it.
+
+If you’ve already installed {{agent}}, use this command to modify the settings that {{agent}} runs with.
+
+::::{tip}
+To enroll an {{agent}} *and* install it as a service, use the [`install` command](#elastic-agent-install-command) instead. Installing as a service is the most common scenario.
+::::
+
+
+We recommend that you run the `enroll` (or `install`) command as the root user because some integrations require root privileges to collect sensitive data. This command overwrites the `elastic-agent.yml` file in the agent directory.
+
+This command includes optional flags to set up [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md).
+
+::::{important}
+This command enrolls the {{agent}} in {{fleet}}; it does not start the agent. To start the agent, either [start the service](/reference/ingestion-tools/fleet/start-stop-elastic-agent.md#start-elastic-agent-service), if one exists, or use the [`run` command](#elastic-agent-run-command) to start the agent from a terminal.
+::::
+
+
+
+### Synopsis [_synopsis_2]
+
+To enroll the {{agent}} in {{fleet}}:
+
+```shell
+elastic-agent enroll --url
+ --enrollment-token
+ [--ca-sha256 ]
+ [--certificate-authorities ]
+ [--daemon-timeout ]
+ [--delay-enroll]
+ [--elastic-agent-cert ]
+ [--elastic-agent-cert-key ]
+ [--elastic-agent-cert-key-passphrase ]
+ [--force]
+ [--header ]
+ [--help]
+ [--insecure ]
+ [--proxy-disabled]
+ [--proxy-header ]
+ [--proxy-url ]
+ [--staging ]
+ [--tag ]
+ [global-flags]
+```
+
+To enroll the {{agent}} in {{fleet}} and set up {{fleet-server}}:
+
+```shell
+elastic-agent enroll --fleet-server-es
+ --fleet-server-service-token
+ [--fleet-server-service-token-path ]
+ [--ca-sha256 ]
+ [--certificate-authorities ]
+ [--daemon-timeout ]
+ [--delay-enroll]
+ [--elastic-agent-cert ]
+ [--elastic-agent-cert-key ]
+ [--elastic-agent-cert-key-passphrase ]
+ [--fleet-server-cert ] <1>
+ [--fleet-server-cert-key ]
+ [--fleet-server-cert-key-passphrase ]
+ [--fleet-server-client-auth ]
+ [--fleet-server-es-ca ]
+ [--fleet-server-es-ca-trusted-fingerprint ] <2>
+ [--fleet-server-es-cert ]
+ [--fleet-server-es-cert-key ]
+ [--fleet-server-es-insecure]
+ [--fleet-server-host ]
+ [--fleet-server-policy ]
+ [--fleet-server-port ]
+ [--fleet-server-timeout ]
+ [--force]
+ [--header ]
+ [--help]
+ [--non-interactive]
+ [--proxy-disabled]
+ [--proxy-header ]
+ [--proxy-url ]
+ [--staging ]
+ [--tag ]
+ [--url ] <3>
+ [global-flags]
+```
+
+1. If no `fleet-server-cert*` flags are specified, {{agent}} auto-generates a self-signed certificate with the hostname of the machine. Remote {{agent}}s enrolling into a {{fleet-server}} with self-signed certificates must specify the `--insecure` flag.
+2. Required when using self-signed certificates with {{es}}.
+3. Required when enrolling in a {{fleet-server}} with custom certificates. The URL must match the DNS name used to generate the certificate specified by `--fleet-server-cert`.
+
+
+For more information about custom certificates, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+
+### Options [_options_2]
+
+`--ca-sha256 `
+: Comma-separated list of certificate authority hash pins used for certificate verification.
+
+`--certificate-authorities `
+: Comma-separated list of root certificates used for server verification.
+
+`--daemon-timeout `
+: Timeout waiting for {{agent}} daemon.
+
+`--delay-enroll`
+: Delays enrollment to occur on first start of the {{agent}} service. This setting is useful when you don’t want the {{agent}} to enroll until the next reboot or manual start of the service, for example, when you’re preparing an image that includes {{agent}}.
+
+`--elastic-agent-cert`
+: Certificate to use as the client certificate for the {{agent}}'s connections to {{fleet-server}}.
+
+`--elastic-agent-cert-key`
+: Private key to use for the {{agent}}'s connections to {{fleet-server}}.
+
+`--elastic-agent-cert-key-passphrase`
+: The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must only contain the characters of the passphrase, no newline or extra non-printing characters.
+
+ This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase to use.
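+
+    For example, a sketch of enrolling with mutual TLS, using hypothetical certificate, key, and passphrase file paths:
+
+    ```shell
+    elastic-agent enroll \
+      --url=https://fleet-server:8220 \
+      --enrollment-token=<enrollment-token> \
+      --certificate-authorities=/path/to/ca.crt \
+      --elastic-agent-cert=/path/to/agent.crt \
+      --elastic-agent-cert-key=/path/to/agent.key \
+      --elastic-agent-cert-key-passphrase=/path/to/passphrase
+    ```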
+
+
+`--enrollment-token `
+: Enrollment token to use to enroll {{agent}} into {{fleet}}. You can use the same enrollment token for multiple agents.
+
+`--fleet-server-cert `
+: Certificate to use for exposed {{fleet-server}} HTTPS endpoint.
+
+`--fleet-server-cert-key `
+: Private key to use for exposed {{fleet-server}} HTTPS endpoint.
+
+`--fleet-server-cert-key-passphrase `
+: Path to passphrase file for decrypting {{fleet-server}}'s private key if an encrypted private key is used.
+
+`--fleet-server-client-auth `
+: One of `none`, `optional`, or `required`. Defaults to `none`. This is {{fleet-server}}'s `client_authentication` option for client mTLS connections. If `optional` or `required` is specified, client certificates are verified using the CAs specified in the `--certificate-authorities` flag.
+
+`--fleet-server-es `
+: Start a {{fleet-server}} process when {{agent}} is started, and connect to the specified {{es}} URL.
+
+`--fleet-server-es-ca `
+: Path to certificate authority to use to communicate with {{es}}.
+
+`--fleet-server-es-ca-trusted-fingerprint `
+: The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint will be used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}.
+
+`--fleet-server-es-cert`
+: The path to the client certificate that {{fleet-server}} will use when connecting to {{es}}.
+
+`--fleet-server-es-cert-key`
+: The path to the private key that {{fleet-server}} will use when connecting to {{es}}.
+
+`--fleet-server-es-insecure`
+: Allows {{fleet-server}} to connect to {{es}} in the following situations:
+
+    * When connecting to an HTTP server.
+    * When connecting to an HTTPS server whose certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
+
+    When this flag is used, certificate verification is disabled.
+
+
+`--fleet-server-host `
+: {{fleet-server}} HTTP binding host (overrides the policy).
+
+`--fleet-server-policy `
+: Used when starting a self-managed {{fleet-server}} to allow a specific policy to be used.
+
+`--fleet-server-port `
+: {{fleet-server}} HTTP binding port (overrides the policy).
+
+`--fleet-server-service-token `
+: Service token to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token-path`.
+
+`--fleet-server-service-token-path `
+: Service token file to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token`.
+
+`--fleet-server-timeout `
+: Timeout waiting for {{fleet-server}} to be ready to start enrollment.
+
+`--force`
+: Force overwrite of current configuration without prompting for confirmation. This flag is helpful when using automation software or scripted deployments.
+
+ ::::{note}
+ If the {{agent}} is already installed on the host, using `--force` may result in unpredictable behavior with duplicate {{agent}}s appearing in {{fleet}}.
+ ::::
+
+
+`--header `
+: Headers used in communication with {{es}}.
+
+`--help`
+: Show help for the `enroll` command.
+
+`--insecure`
+: Allow the {{agent}} to connect to {{fleet-server}} over insecure connections. This setting is required in the following situations:
+
+    * When connecting to an HTTP server. The API keys are sent in clear text.
+    * When connecting to an HTTPS server whose certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
+    * When using self-signed certificates generated by {{agent}}.
+
+    We strongly recommend that you use a secure connection.
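+
+    For example, a sketch of enrolling over an insecure connection to a hypothetical local test host:
+
+    ```shell
+    elastic-agent enroll \
+      --url=http://fleet-server:8220 \
+      --enrollment-token=<enrollment-token> \
+      --insecure
+    ```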
+
+
+`--non-interactive`
+: Install {{agent}} in a non-interactive mode. This flag is helpful when using automation software or scripted deployments. If {{agent}} is already installed on the host, the installation will terminate.
+
+`--proxy-disabled`
+: Disable proxy support including environment variables.
+
+`--proxy-header `
+: Proxy headers used with CONNECT request.
+
+`--proxy-url `
+: Configures the proxy URL.
+
+`--staging `
+: Configures agent to download artifacts from a staging build.
+
+`--tag `
+: A comma-separated list of tags to apply to {{fleet}}-managed {{agent}}s. You can use these tags to filter the list of agents in {{fleet}}.
+
+ ::::{note}
+ Currently, there is no way to remove or edit existing tags. To change the tags, you must unenroll the {{agent}}, then re-enroll it using new tags.
+ ::::
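+
+    For example, a sketch that applies two hypothetical tags at enrollment:
+
+    ```shell
+    elastic-agent enroll \
+      --url=https://fleet-server:8220 \
+      --enrollment-token=<enrollment-token> \
+      --tag=staging,linux
+    ```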
+
+
+`--url `
+: {{fleet-server}} URL to use to enroll the {{agent}} into {{fleet}}.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_11]
+
+Enroll the {{agent}} in {{fleet}}:
+
+```shell
+elastic-agent enroll \
+ --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \
+ --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ==
+```
+
+Enroll the {{agent}} in {{fleet}} and set up {{fleet-server}}:
+
+```shell
+elastic-agent enroll --fleet-server-es=http://elasticsearch:9200 \
+ --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \
+ --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6
+```
+
+Start {{agent}} with {{fleet-server}} (running on a custom CA). This example assumes you’ve generated the certificates with the following names:
+
+* `ca.crt`: Root CA certificate
+* `fleet-server.crt`: {{fleet-server}} certificate
+* `fleet-server.key`: {{fleet-server}} private key
+* `elasticsearch-ca.crt`: CA certificate to use to connect to {{es}}
+
+```shell
+elastic-agent enroll \
+ --url=https://fleet-server:8220 \
+ --fleet-server-es=https://elasticsearch:9200 \
+ --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \
+ --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \
+ --certificate-authorities=/path/to/ca.crt \
+ --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \
+ --fleet-server-cert=/path/to/fleet-server.crt \
+ --fleet-server-cert-key=/path/to/fleet-server.key \
+ --fleet-server-port=8220
+```
+
+Then enroll another {{agent}} into the {{fleet-server}} started in the previous example:
+
+```shell
+elastic-agent enroll --url=https://fleet-server:8220 \
+ --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \
+ --certificate-authorities=/path/to/ca.crt
+```
+
+
+
+## elastic-agent help [elastic-agent-help-command]
+
+Show help for a specific command.
+
+
+### Synopsis [_synopsis_3]
+
+```shell
+elastic-agent help [--help] [global-flags]
+```
+
+
+### Options [_options_3]
+
+`command`
+: The name of the command.
+
+`--help`
+: Show help for the `help` command.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Example [_example_39]
+
+```shell
+elastic-agent help enroll
+```
+
+
+
+## elastic-agent inspect [elastic-agent-inspect-command]
+
+Show the current {{agent}} configuration.
+
+If no parameters are specified, shows the full {{agent}} configuration.
+
+
+### Synopsis [_synopsis_4]
+
+```shell
+elastic-agent inspect [--help]
+elastic-agent inspect components [--show-config]
+ [--show-spec]
+ [--help]
+ [id]
+```
+
+
+### Options [_options_4]
+
+`components`
+: Display the current configuration for the component. This command accepts additional flags:
+
+    `--show-config`
+    : Use to display the configuration in all units.
+
+    `--show-spec`
+    : Use to get the input/output runtime specification for a component.
+
+
+`--help`
+: Show help for the `inspect` command.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_12]
+
+```shell
+elastic-agent inspect
+elastic-agent inspect components --show-config
+elastic-agent inspect components log-default
+```
+
+
+
+## elastic-agent privileged [elastic-agent-privileged-command]
+
+Run {{agent}} with full superuser privileges. This is the default running mode for {{agent}}. The `privileged` command allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode.
+
+Refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md) for more detail.
+
+
+### Examples [_examples_13]
+
+```shell
+elastic-agent privileged
+```
+
+
+
+## elastic-agent install [elastic-agent-install-command]
+
+Install {{agent}} permanently on the system and manage it by using the system’s service manager. The agent will start automatically after installation is complete. On Linux (tar package), this command requires a system and service manager like systemd.
+
+::::{important}
+If you installed {{agent}} from a DEB or RPM package, the `install` command skips the installation itself and functions as an alias of the [`enroll` command](#elastic-agent-enroll-command) instead. Note that after upgrading {{agent}} using DEB or RPM, the {{agent}} service needs to be restarted.
+::::
+
+
+You must run this command as the root user (or Administrator on Windows) to write files to the correct locations. This command overwrites the `elastic-agent.yml` file in the agent directory.
+
+The syntax for running this command varies by platform. For platform-specific examples, refer to [*Install {{agent}}s*](/reference/ingestion-tools/fleet/install-elastic-agents.md).
+
+
+### Synopsis [_synopsis_5]
+
+To install the {{agent}} as a service, enroll it in {{fleet}}, and start the `elastic-agent` service:
+
+```shell
+elastic-agent install --url
+ --enrollment-token
+ [--base-path ]
+ [--ca-sha256 ]
+ [--certificate-authorities ]
+ [--daemon-timeout ]
+ [--delay-enroll]
+ [--elastic-agent-cert ]
+ [--elastic-agent-cert-key ]
+ [--elastic-agent-cert-key-passphrase ]
+ [--force]
+ [--header ]
+ [--help]
+ [--insecure ]
+ [--non-interactive]
+ [--privileged]
+ [--proxy-disabled]
+ [--proxy-header ]
+ [--proxy-url ]
+ [--staging ]
+ [--tag ]
+ [--unprivileged]
+ [global-flags]
+```
+
+To install the {{agent}} as a service, enroll it in {{fleet}}, and start a `fleet-server` process alongside the `elastic-agent` service:
+
+```shell
+elastic-agent install --fleet-server-es
+ --fleet-server-service-token
+ [--fleet-server-service-token-path ]
+ [--base-path ]
+ [--ca-sha256 ]
+ [--certificate-authorities ]
+ [--daemon-timeout ]
+ [--delay-enroll]
+ [--elastic-agent-cert ]
+ [--elastic-agent-cert-key ]
+ [--elastic-agent-cert-key-passphrase ]
+ [--fleet-server-cert ] <1>
+ [--fleet-server-cert-key ]
+ [--fleet-server-cert-key-passphrase ]
+ [--fleet-server-client-auth ]
+ [--fleet-server-es-ca ]
+ [--fleet-server-es-ca-trusted-fingerprint ] <2>
+ [--fleet-server-es-cert ]
+ [--fleet-server-es-cert-key ]
+ [--fleet-server-es-insecure]
+ [--fleet-server-host ]
+ [--fleet-server-policy ]
+ [--fleet-server-port ]
+ [--fleet-server-timeout ]
+ [--force]
+ [--header ]
+ [--help]
+ [--non-interactive]
+ [--privileged]
+ [--proxy-disabled]
+ [--proxy-header ]
+ [--proxy-url ]
+ [--staging ]
+ [--tag ]
+ [--unprivileged]
+ [--url ] <3>
+ [global-flags]
+```
+
+1. If no `fleet-server-cert*` flags are specified, {{agent}} auto-generates a self-signed certificate with the hostname of the machine. Remote {{agent}}s enrolling into a {{fleet-server}} with self-signed certificates must specify the `--insecure` flag.
+2. Required when using self-signed certificates with {{es}}.
+3. Required when enrolling in a {{fleet-server}} with custom certificates. The URL must match the DNS name used to generate the certificate specified by `--fleet-server-cert`.
+
+
+For more information about custom certificates, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+
+### Options [_options_5]
+
+`--base-path `
+: Install {{agent}} in a location other than the [default](/reference/ingestion-tools/fleet/installation-layout.md). Specify the custom base path for the install.
+
+ The `--base-path` option is not currently supported with [{{elastic-defend}}](/reference/security/elastic-defend/install-endpoint.md).
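+
+    For example, a sketch that installs under a hypothetical custom base path:
+
+    ```shell
+    elastic-agent install --base-path=/opt/custom \
+      --url=https://fleet-server:8220 \
+      --enrollment-token=<enrollment-token>
+    ```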
+
+
+`--ca-sha256 `
+: Comma-separated list of certificate authority hash pins used for certificate verification.
+
+`--certificate-authorities `
+: Comma-separated list of root certificates used for server verification.
+
+`--daemon-timeout `
+: Timeout waiting for {{agent}} daemon.
+
+`--delay-enroll`
+: Delays enrollment to occur on first start of the {{agent}} service. This setting is useful when you don’t want the {{agent}} to enroll until the next reboot or manual start of the service, for example, when you’re preparing an image that includes {{agent}}.
+
+`--elastic-agent-cert`
+: Certificate to use as the client certificate for the {{agent}}'s connections to {{fleet-server}}.
+
+`--elastic-agent-cert-key`
+: Private key to use for the {{agent}}'s connections to {{fleet-server}}.
+
+`--elastic-agent-cert-key-passphrase`
+: The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must only contain the characters of the passphrase, no newline or extra non-printing characters.
+
+ This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase to use.
+
+
+`--enrollment-token `
+: Enrollment token to use to enroll {{agent}} into {{fleet}}. You can use the same enrollment token for multiple agents.
+
+`--fleet-server-cert `
+: Certificate to use for exposed {{fleet-server}} HTTPS endpoint.
+
+`--fleet-server-cert-key `
+: Private key to use for exposed {{fleet-server}} HTTPS endpoint.
+
+`--fleet-server-cert-key-passphrase `
+: Path to passphrase file for decrypting {{fleet-server}}'s private key if an encrypted private key is used.
+
+`--fleet-server-client-auth `
+: One of `none`, `optional`, or `required`. Defaults to `none`. This is {{fleet-server}}'s `client_authentication` option for client mTLS connections. If `optional` or `required` is specified, client certificates are verified using the CAs specified in the `--certificate-authorities` flag.
+
+`--fleet-server-es `
+: Start a {{fleet-server}} process when {{agent}} is started, and connect to the specified {{es}} URL.
+
+`--fleet-server-es-ca `
+: Path to certificate authority to use to communicate with {{es}}.
+
+`--fleet-server-es-ca-trusted-fingerprint `
+: The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint will be used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}.
+
+`--fleet-server-es-cert`
+: The path to the client certificate that {{fleet-server}} will use when connecting to {{es}}.
+
+`--fleet-server-es-cert-key`
+: The path to the private key that {{fleet-server}} will use when connecting to {{es}}.
+
+`--fleet-server-es-insecure`
+: Allows {{fleet-server}} to connect to {{es}} in the following situations:
+
+    * When connecting to an HTTP server.
+    * When connecting to an HTTPS server whose certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
+
+    When this flag is used, certificate verification is disabled.
+
+
+`--fleet-server-host `
+: {{fleet-server}} HTTP binding host (overrides the policy).
+
+`--fleet-server-policy `
+: Used when starting a self-managed {{fleet-server}} to allow a specific policy to be used.
+
+`--fleet-server-port `
+: {{fleet-server}} HTTP binding port (overrides the policy).
+
+`--fleet-server-service-token `
+: Service token to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token-path`.
+
+`--fleet-server-service-token-path `
+: Service token file to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token`.
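+
+    For example, a sketch that reads the service token from a hypothetical file path instead of passing it on the command line:
+
+    ```shell
+    elastic-agent install --fleet-server-es=https://elasticsearch:9200 \
+      --fleet-server-service-token-path=/path/to/service-token \
+      --fleet-server-policy=<policy-id>
+    ```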
+
+`--fleet-server-timeout `
+: Timeout waiting for {{fleet-server}} to be ready to start enrollment.
+
+`--force`
+: Force overwrite of current configuration without prompting for confirmation. This flag is helpful when using automation software or scripted deployments.
+
+ ::::{note}
+ If the {{agent}} is already installed on the host, using `--force` may result in unpredictable behavior with duplicate {{agent}}s appearing in {{fleet}}.
+ ::::
+
+
+`--header `
+: Headers used in communication with {{es}}.
+
+`--help`
+: Show help for the `install` command.
+
+`--insecure`
+: Allow the {{agent}} to connect to {{fleet-server}} over insecure connections. This setting is required in the following situations:
+
+    * When connecting to an HTTP server. The API keys are sent in clear text.
+    * When connecting to an HTTPS server whose certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
+    * When using self-signed certificates generated by {{agent}}.
+
+    We strongly recommend that you use a secure connection.
+
+
+`--non-interactive`
+: Install {{agent}} in a non-interactive mode. This flag is helpful when using automation software or scripted deployments. If {{agent}} is already installed on the host, the installation will terminate.
+
+`--privileged`
+: Run {{agent}} with full superuser privileges. This is the default running mode for {{agent}}. The `--privileged` option allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode.
+
+    See the `--unprivileged` option and [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md) for more detail.
+
+`--proxy-disabled`
+: Disable proxy support including environment variables.
+
+`--proxy-header `
+: Proxy headers used with CONNECT request.
+
+`--proxy-url `
+: Configures the proxy URL.
+
+`--staging `
+: Configures agent to download artifacts from a staging build.
+
+`--tag `
+: A comma-separated list of tags to apply to {{fleet}}-managed {{agent}}s. You can use these tags to filter the list of agents in {{fleet}}.
+
+ ::::{note}
+ Currently, there is no way to remove or edit existing tags. To change the tags, you must unenroll the {{agent}}, then re-enroll it using new tags.
+ ::::
+
+
+`--unprivileged`
+: Run {{agent}} without full superuser privileges. This option is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems. For details and limitations for running {{agent}} in this mode, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md).
+
+ Note that changing to `unprivileged` mode is prevented if the agent is currently enrolled in a policy that includes an integration that requires administrative access, such as the {{elastic-defend}} integration.
+
+ [preview] To run {{agent}} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, you can specify the user or group, and the password to use.
+
+ For example:
+
+ ```shell
+ elastic-agent install --unprivileged --user="my.path\username" --password="mypassword"
+ ```
+
+ ```shell
+ elastic-agent install --unprivileged --group="my.path\groupname" --password="mypassword"
+ ```
+
+
+`--url `
+: {{fleet-server}} URL to use to enroll the {{agent}} into {{fleet}}.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_14]
+
+Install the {{agent}} as a service, enroll it in {{fleet}}, and start the `elastic-agent` service:
+
+```shell
+elastic-agent install \
+ --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \
+ --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ==
+```
+
+Install the {{agent}} as a service, enroll it in {{fleet}}, and start a `fleet-server` process alongside the `elastic-agent` service:
+
+```shell
+elastic-agent install --fleet-server-es=http://elasticsearch:9200 \
+ --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \
+ --fleet-server-policy=a35fd620-26f6-11ec-8bd9-3374690f57b6
+```
+
+Start {{agent}} with {{fleet-server}} (running on a custom CA). This example assumes you’ve generated the certificates with the following names:
+
+* `ca.crt`: Root CA certificate
+* `fleet-server.crt`: {{fleet-server}} certificate
+* `fleet-server.key`: {{fleet-server}} private key
+* `elasticsearch-ca.crt`: CA certificate to use to connect to {{es}}
+
+```shell
+elastic-agent install \
+ --url=https://fleet-server:8220 \
+ --fleet-server-es=https://elasticsearch:9200 \
+ --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \
+ --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \
+ --certificate-authorities=/path/to/ca.crt \
+ --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \
+ --fleet-server-cert=/path/to/fleet-server.crt \
+ --fleet-server-cert-key=/path/to/fleet-server.key \
+ --fleet-server-port=8220
+```
+
+Then install another {{agent}} and enroll it into the {{fleet-server}} started in the previous example:
+
+```shell
+elastic-agent install --url=https://fleet-server:8220 \
+ --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \
+ --certificate-authorities=/path/to/ca.crt
+```
+
+
+
+## elastic-agent otel [elastic-agent-otel-command]
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+Run {{agent}} as an [OpenTelemetry Collector](/reference/ingestion-tools/fleet/otel-agent.md).
+
+
+### Synopsis [_synopsis_6]
+
+```shell
+elastic-agent otel [flags]
+elastic-agent otel [command]
+```
+
+::::{note}
+You can also run the `./otelcol` command, which calls `./elastic-agent otel` and passes any arguments to it.
+::::
+
+
+
+### Available commands [_available_commands]
+
+`validate`
+: Validates the OpenTelemetry collector configuration without running the collector.
+
+
+### Flags [_flags]
+
+`--config=file:/path/to/first --config=file:path/to/second`
+: Locations of the config file(s). Note that only a single location can be set per flag entry, for example `--config=file:/path/to/first --config=file:path/to/second`.
+
+`--feature-gates flag`
+: Comma-delimited list of feature gate identifiers. Prefix an identifier with `-` to disable the feature; prefix it with `+`, or use no prefix, to enable it.
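+
+    For example, a sketch using hypothetical gate identifiers:
+
+    ```shell
+    ./elastic-agent otel --config otel.yml --feature-gates=+some.experimentalGate,-some.otherGate
+    ```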
+
+`-h, --help`
+: Get help for the `otel` sub-command. Use `elastic-agent otel [command] --help` for more information about a command.
+
+`--set string`
+: Set an arbitrary component config property. The component has to be defined in the configuration file, and the flag takes precedence over the value in the file. Array configuration properties are overridden and maps are joined. For example, `--set=processors::batch::timeout=2s`.
+
+
+### Examples [_examples_15]
+
+Run {{agent}} as an OTel Collector using the supplied `otel.yml` configuration file:
+
+```shell
+./elastic-agent otel --config otel.yml
+```
+
+Change the default verbosity setting in the {{agent}} OTel configuration from `detailed` to `normal`:
+
+```shell
+./elastic-agent otel --config otel.yml --set "exporters::debug::verbosity=normal"
+```
+
+
+
+## elastic-agent restart [elastic-agent-restart-command]
+
+Restart the currently running {{agent}} daemon.
+
+
+### Synopsis [_synopsis_7]
+
+```shell
+elastic-agent restart [--help] [global-flags]
+```
+
+
+### Options [_options_6]
+
+`--help`
+: Show help for the `restart` command.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_16]
+
+```shell
+elastic-agent restart
+```
+
+
+
+## elastic-agent run [elastic-agent-run-command]
+
+Start the `elastic-agent` process.
+
+
+### Synopsis [_synopsis_8]
+
+```shell
+elastic-agent run [global-flags]
+```
+
+
+### Global flags [elastic-agent-global-flags]
+
+These flags are valid whenever you run `elastic-agent` on the command line.
+
+`-c `
+: The configuration file to use. If not specified, {{agent}} uses `{path.config}/elastic-agent.yml`.
+
+`--e`
+: Log to stderr and disable syslog/file output.
+
+`--environment `
+: The environment in which the agent will run.
+
+`--path.config `
+: The directory where {{agent}} looks for its configuration file. The default varies by platform.
+
+`--path.home `
+: The root directory of {{agent}}. `path.home` determines the location of the configuration files and data directory.
+
+ If not specified, {{agent}} uses the current working directory.
+
+
+`--path.logs `
+: Path to the log output for {{agent}}. The default varies by platform.
+
+`--v`
+: Set log level to INFO.
+
+
+### Example [_example_40]
+
+```shell
+elastic-agent run -c myagentconfig.yml
+```
+
+
+
+## elastic-agent status [elastic-agent-status-command]
+
+Returns the current status of the running {{agent}} daemon and of each process in the {{agent}}. The last known status of {{fleet-server}} is also returned. The `output` option controls the level of detail and formatting of the information.
+
+
+### Synopsis [_synopsis_9]
+
+```shell
+elastic-agent status [--output ]
+ [--help]
+ [global-flags]
+```
+
+
+### Options [_options_7]
+
+`--output `
+: Output the status information as `human` (the default), `full`, `json`, or `yaml`. `human` returns limited information when {{agent}} is in the `HEALTHY` state. If any components or units are not in a `HEALTHY` state, full details are displayed for that component or unit. `full`, `json`, and `yaml` always return the full status information. Components map to individual processes running underneath {{agent}}, for example {{filebeat}} or {{endpoint-sec}}. Units map to discrete configuration units within that process, for example {{filebeat}} inputs or {{metricbeat}} modules.
+
+When the output is `json` or `yaml`, status codes are returned as numerical values. The status codes can be mapped using the following table:
+
+| Code | Status |
+| --- | --- |
+| 0 | `STARTING` |
+| 1 | `CONFIGURING` |
+| 2 | `HEALTHY` |
+| 3 | `DEGRADED` |
+| 4 | `FAILED` |
+| 5 | `STOPPING` |
+| 6 | `UPGRADING` |
+| 7 | `ROLLBACK` |
+
+`--help`
+: Show help for the `status` command.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_17]
+
+```shell
+elastic-agent status
+```
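+
+Get machine-readable output, for example to feed the numerical status codes into a script (a usage sketch):
+
+```shell
+elastic-agent status --output json
+```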
+
+
+
+## elastic-agent uninstall [elastic-agent-uninstall-command]
+
+Permanently uninstall {{agent}} from the system.
+
+You must run this command as the root user (or Administrator on Windows) to remove files.
+
+::::{important}
+Be sure to run the `uninstall` command from a directory outside of where {{agent}} is installed.
+
+For example, on a Windows system the install location is `C:\Program Files\Elastic\Agent`. Run the uninstall command from `C:\Program Files\Elastic` or `\tmp`, or even your default home directory:
+
+```shell
+C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall
+```
+
+::::
+
+
+:::::::{tab-set}
+
+::::::{tab-item} macOS
+::::{tip}
+You must run this command as the root user.
+::::
+
+
+```shell
+sudo /Library/Elastic/Agent/elastic-agent uninstall
+```
+::::::
+
+::::::{tab-item} Linux
+::::{tip}
+You must run this command as the root user.
+::::
+
+
+```shell
+sudo /opt/Elastic/Agent/elastic-agent uninstall
+```
+::::::
+
+::::::{tab-item} Windows
+Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
+
+From the PowerShell prompt, run:
+
+```shell
+C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall
+```
+::::::
+
+:::::::
+
+### Synopsis [_synopsis_10]
+
+```shell
+elastic-agent uninstall [--force] [--help] [global-flags]
+```
+
+
+### Options [_options_8]
+
+`--force`
+: Uninstall {{agent}} and do not prompt for confirmation. This flag is helpful when using automation software or scripted deployments.
+
+`--skip-fleet-audit`
+: Skip auditing with the {{fleet-server}}.
+
+`--help`
+: Show help for the `uninstall` command.
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_18]
+
+```shell
+elastic-agent uninstall
+```
+
+
+
+## elastic-agent unprivileged [elastic-agent-unprivileged-command]
+
+Run {{agent}} without full superuser privileges. This is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems. For details and limitations for running {{agent}} in this mode, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md).
+
+Note that changing a running {{agent}} to `unprivileged` mode is prevented if the agent is currently enrolled with a policy that contains the {{elastic-defend}} integration.
+
+[preview] To run {{agent}} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, add either a `--user` or `--group` parameter together with a `--password` parameter.
+
+
+### Examples [_examples_19]
+
+Run {{agent}} without administrative privileges:
+
+```shell
+elastic-agent unprivileged
+```
+
+[preview] Run {{agent}} without administrative privileges, as a pre-existing user:
+
+```shell
+elastic-agent unprivileged --user="my.path\username" --password="mypassword"
+```
+
+[preview] Run {{agent}} without administrative privileges, as a pre-existing group:
+
+```shell
+elastic-agent unprivileged --group="my.path\groupname" --password="mypassword"
+```
+
+
+
+## elastic-agent upgrade [elastic-agent-upgrade-command]
+
+Upgrade the currently running {{agent}} to the specified version. This should only be used with agents running in standalone mode. Agents enrolled in {{fleet}} should be upgraded through {{fleet}}.
+
+
+### Synopsis [_synopsis_11]
+
+```shell
+elastic-agent upgrade [--source-uri ] [--help] [flags]
+```
+
+
+### Options [_options_9]
+
+`version`
+: The version of {{agent}} to upgrade to.
+
+`--source-uri `
+: The source URI to download the new version from. By default, {{agent}} uses the Elastic Artifacts URL.
+
+`--skip-verify`
+: Skip the package verification process. This option is not recommended as it is insecure.
+
+`--pgp-path `
+: Use a locally stored copy of the PGP key to verify the upgrade package.
+
+`--pgp-uri `
+: Use the specified online PGP key to verify the upgrade package.
+
+`--help`
+: Show help for the `upgrade` command.
+
+For details about using the `--skip-verify`, `--pgp-path `, and `--pgp-uri ` package verification options, refer to [Verifying {{agent}} package signatures](/reference/ingestion-tools/fleet/upgrade-standalone.md#upgrade-standalone-verify-package).
+
+For more flags, see [Global flags](#elastic-agent-global-flags).
+
+
+### Examples [_examples_20]
+
+```shell
+elastic-agent upgrade 7.10.1
+```
+
+
+
+## elastic-agent logs [elastic-agent-logs-command]
+
+Show the logs of the running {{agent}}.
+
+
+### Synopsis [_synopsis_12]
+
+```shell
+elastic-agent logs [--follow] [--number ] [--component ] [--no-color] [--help] [global-flags]
+```
+
+
+### Options [_options_10]
+
+`--follow` or `-f`
+: Follow log updates until the command is interrupted (for example with `Ctrl-C`).
+
+`--number ` or `-n `
+: The number of log lines to print. If log following is enabled, this affects the initial output.
+
+`--component ` or `-C