diff --git a/docset.yml b/docset.yml index 5fcde28fb0..0f3befd591 100644 --- a/docset.yml +++ b/docset.yml @@ -3,6 +3,25 @@ exclude: - 'README.md' cross_links: - asciidocalypse + - kibana + - integration-docs + - integrations + - logstash + - elasticsearch + - cloud + - beats + - go-elasticsearch + - elasticsearch-java + - elasticsearch-net + - elasticsearch-php + - elasticsearch-py + - elasticsearch-ruby + - elasticsearch-js + - ecs + - ecs-logging + - search-ui + - cloud-on-k8s + toc: - file: index.md - toc: get-started @@ -12,6 +31,9 @@ toc: - toc: deploy-manage - toc: cloud-account - toc: troubleshoot + - toc: release-notes + - toc: reference + - toc: extend - toc: raw-migrated-files subs: diff --git a/extend/index.md b/extend/index.md new file mode 100644 index 0000000000..92bebe7cd2 --- /dev/null +++ b/extend/index.md @@ -0,0 +1,20 @@ +# Extend and contribute + +This section contains information on how to extend or contribute to our various products. + +## Contributing to Elastic Projects + +You can contribute to various projects, including: + +- [Kibana](kibana://docs/extend/index.md): Enhance our data visualization platform by contributing to Kibana. +- [Logstash](logstash://docs/extend/index.md): Help us improve the data processing pipeline with your contributions to Logstash. +- [Beats](beats://docs/extend/index.md): Add new features or beats to our lightweight data shippers. + +## Creating Integrations + +Extend the capabilities of Elastic by creating integrations that connect Elastic products with other tools and systems. Visit our [Integrations Guide](integrations://docs/extend/index.md) to get started. + +## Elasticsearch Plugins + +Develop custom plugins to add new functionalities to Elasticsearch. Check out our [Elasticsearch Plugins Development Guide](elasticsearch://docs/extend/index.md) for detailed instructions and best practices. + diff --git a/extend/toc.yml b/extend/toc.yml new file mode 100644 index 0000000000..f2ab236796 --- /dev/null +++ b/extend/toc.yml @@ -0,0 +1,2 @@ +toc: + - file: index.md \ No newline at end of file diff --git a/reference/data-analysis/index.md b/reference/data-analysis/index.md new file mode 100644 index 0000000000..e4f03e50bf --- /dev/null +++ b/reference/data-analysis/index.md @@ -0,0 +1,10 @@ +# Data analysis + +% TO-DO: Add links to "What is data analysis?"% + +This section contains reference information for data analysis features, including: + +* [Text analysis components](elasticsearch://docs/reference/data-analysis/text-analysis/index.md) +* [Aggregations](elasticsearch://docs/reference/data-analysis/aggregations/index.md) +* [Machine learning functions](/reference/data-analysis/machine-learning/machine-learning-functions.md) +* [Canvas functions](/reference/data-analysis/kibana/canvas-functions.md) diff --git a/reference/data-analysis/kibana/canvas-functions.md b/reference/data-analysis/kibana/canvas-functions.md new file mode 100644 index 0000000000..1746624c34 --- /dev/null +++ b/reference/data-analysis/kibana/canvas-functions.md @@ -0,0 +1,1850 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/kibana/current/canvas-function-reference.html +--- + +# Canvas function reference [canvas-function-reference] + +Behind the scenes, Canvas is driven by a powerful expression language, with dozens of functions and other capabilities, including table transforms, type casting, and sub-expressions. 
+ +The Canvas expression language also supports [TinyMath functions](/reference/data-analysis/kibana/tinymath-functions.md), which perform complex math calculations. + +A * denotes a required argument. + +A † denotes an argument can be passed multiple times. + +[A](#a_fns) | B | [C](#c_fns) | [D](#d_fns) | [E](#e_fns) | [F](#f_fns) | [G](#g_fns) | [H](#h_fns) | [I](#i_fns) | [J](#j_fns) | [K](#k_fns) | [L](#l_fns) | [M](#m_fns) | [N](#n_fns) | O | [P](#p_fns) | Q | [R](#r_fns) | [S](#s_fns) | [T](#t_fns) | [U](#u_fns) | [V](#v_fns) | W | X | Y | Z + + +## A [a_fns] + + +### `all` [all_fn] + +Returns `true` if all of the conditions are met. See also [`any`](#any_fn). + +**Expression syntax** + +```js +all {neq "foo"} {neq "bar"} {neq "fizz"} +all condition={gt 10} condition={lt 20} +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| math "mean(percent_uptime)" +| formatnumber "0.0%" +| metric "Average uptime" + metricFont={ + font size=48 family="'Open Sans', Helvetica, Arial, sans-serif" + color={ + if {all {gte 0} {lt 0.8}} then="red" else="green" + } + align="center" lHeight=48 + } +| render +``` + +This sets the color of the metric text to `"red"` if the context passed into `metric` is greater than or equal to 0 and less than 0.8. Otherwise, the color is set to `"green"`. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* * †
Alias: `condition` | `boolean` | The conditions to check. | + +**Returns:** `boolean` + + +### `alterColumn` [alterColumn_fn] + +Converts between core types, including `string`, `number`, `null`, `boolean`, and `date`, and renames columns. See also [`mapColumn`](#mapColumn_fn), [`mathColumn`](#mathColumn_fn), and [`staticColumn`](#staticColumn_fn). + +**Expression syntax** + +```js +alterColumn "cost" type="string" +alterColumn column="@timestamp" name="foo" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| alterColumn "time" name="time_in_ms" type="number" +| table +| render +``` + +This renames the `time` column to `time_in_ms` and converts the type of the column’s values from `date` to `number`. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `column` | `string` | The name of the column to alter. | +| `name` | `string` | The resultant column name. Leave blank to not rename. | +| `type` | `string` | The type to convert the column to. Leave blank to not change the type. | + +**Returns:** `datatable` + + +### `any` [any_fn] + +Returns `true` if at least one of the conditions is met. See also [`all`](#all_fn). + +**Expression syntax** + +```js +any {eq "foo"} {eq "bar"} {eq "fizz"} +any condition={lte 10} condition={gt 30} +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| filterrows { + getCell "project" | any {eq "elasticsearch"} {eq "kibana"} {eq "x-pack"} + } +| pointseries color="project" size="max(price)" +| pie +| render +``` + +This filters out any rows that don’t contain `"elasticsearch"`, `"kibana"` or `"x-pack"` in the `project` field. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* * †
Alias: `condition` | `boolean` | The conditions to check. | + +**Returns:** `boolean` + + +### `as` [as_fn] + +Creates a `datatable` with a single value. See also [`getCell`](#getCell_fn). + +**Expression syntax** + +```js +as +as "foo" +as name="bar" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| ply by="project" fn={math "count(username)" | as "num_users"} fn={math "mean(price)" | as "price"} +| pointseries x="project" y="num_users" size="price" color="project" +| plot +| render +``` + +`as` casts any primitive value (`string`, `number`, `date`, `null`) into a `datatable` with a single row and a single column with the given name (or defaults to `"value"` if no name is provided). This is useful when piping a primitive value into a function that only takes `datatable` as an input. + +In the example, `ply` expects each `fn` subexpression to return a `datatable` in order to merge the results of each `fn` back into a `datatable`, but using a `math` aggregation in the subexpressions returns a single `math` value, which is then cast into a `datatable` using `as`. + +**Accepts:** `string`, `boolean`, `number`, `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `name` | `string` | The name to give the column.
Default: `"value"` | + +**Returns:** `datatable` + + +### `asset` [asset_fn] + +Retrieves Canvas workpad asset objects to provide as argument values. Usually images. + +**Expression syntax** + +```js +asset "asset-52f14f2b-fee6-4072-92e8-cd2642665d02" +asset id="asset-498f7429-4d56-42a2-a7e4-8bf08d98d114" +``` + +**Code example** + +```text +image dataurl={asset "asset-c661a7cc-11be-45a1-a401-d7592ea7917a"} mode="contain" +| render +``` + +The image asset stored with the ID `"asset-c661a7cc-11be-45a1-a401-d7592ea7917a"` is passed into the `dataurl` argument of the `image` function to display the stored asset. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `id` | `string` | The ID of the asset to retrieve. | + +**Returns:** `string` + + +### `axisConfig` [axisConfig_fn] + +Configures the axis of a visualization. Only used with [`plot`](#plot_fn). + +**Expression syntax** + +```js +axisConfig show=false +axisConfig position="right" min=0 max=10 tickSize=1 +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| pointseries x="size(cost)" y="project" color="project" +| plot defaultStyle={seriesStyle bars=0.75 horizontalBars=true} + legend=false + xaxis={axisConfig position="top" min=0 max=400 tickSize=100} + yaxis={axisConfig position="right"} +| render +``` + +This sets the `x-axis` to display on the top of the chart and sets the range of values to `0-400` with ticks displayed at `100` intervals. The `y-axis` is configured to display on the `right`. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| `max` | `number`, `string`, `null` | The maximum value displayed in the axis. Must be a number, a date in milliseconds since epoch, or an ISO8601 string. | +| `min` | `number`, `string`, `null` | The minimum value displayed in the axis. Must be a number, a date in milliseconds since epoch, or an ISO8601 string. | +| `position` | `string` | The position of the axis labels. For example, `"top"`, `"bottom"`, `"left"`, or `"right"`.
Default: `"left"` | +| `show` | `boolean` | Show the axis labels?
Default: `true` | +| `tickSize` | `number`, `null` | The increment size between each tick. Use for `number` axes only. | + +**Returns:** `axisConfig` + + +## C [c_fns] + + +### `case` [case_fn] + +Builds a [`case`](#case_fn), including a condition and a result, to pass to the [`switch`](#switch_fn) function. + +**Expression syntax** + +```js +case 0 then="red" +case when=5 then="yellow" +case if={lte 50} then="green" +``` + +**Code example** + +```text +math "random()" +| progress shape="gauge" label={formatnumber "0%"} + font={ + font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" align="center" + color={ + switch {case if={lte 0.5} then="green"} + {case if={all {gt 0.5} {lte 0.75}} then="orange"} + default="red" + } + } + valueColor={ + switch {case if={lte 0.5} then="green"} + {case if={all {gt 0.5} {lte 0.75}} then="orange"} + default="red" + } +| render +``` + +This sets the color of the progress indicator and the color of the label to `"green"` if the value is less than or equal to `0.5`, `"orange"` if the value is greater than `0.5` and less than or equal to `0.75`, and `"red"` if `none` of the case conditions are met. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `when` | `any` | The value compared to the *context* to see if they are equal. The `when` argument is ignored when the `if` argument is also specified. | +| `if` | `boolean` | This value indicates whether the condition is met. The `if` argument overrides the `when` argument when both are provided. | +| `then` * | `any` | The value returned if the condition is met. | + +**Returns:** `case` + + +### `clear` [clear_fn] + +Clears the *context*, and returns `null`. + +**Accepts:** `null` + +**Returns:** `null` + + +### `clog` [clog_fn] + +Outputs the *input* in the console. This function is for debug purposes + +**Expression syntax** + +```js +clog +``` + +**Code example** + +```text +kibana +| demodata +| clog +| filterrows fn={getCell "age" | gt 70} +| clog +| pointseries x="time" y="mean(price)" +| plot defaultStyle={seriesStyle lines=1 fill=1} +| render +``` + +This prints the `datatable` objects in the browser console before and after the `filterrows` function. + +**Accepts:** `any` + +**Returns:** Depends on your input and arguments + + +### `columns` [columns_fn] + +Includes or excludes columns from a `datatable`. When both arguments are specified, the excluded columns will be removed first. + +**Expression syntax** + +```js +columns include="@timestamp, projects, cost" +columns exclude="username, country, age" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| columns include="price, cost, state, project" +| table +| render +``` + +This only keeps the `price`, `cost`, `state`, and `project` columns from the `demodata` data source and removes all other columns. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `include` | `string` | A comma-separated list of column names to keep in the `datatable`. |
| `exclude` | `string` | A comma-separated list of column names to remove from the `datatable`. |

**Returns:** `datatable`


### `compare` [compare_fn]

Compares the *context* to a specified value to determine `true` or `false`. Usually used in combination with `<>` or [`case`](#case_fn). This only works with primitive types, such as `number`, `string`, `boolean`, `null`. See also [`eq`](#eq_fn), [`gt`](#gt_fn), [`gte`](#gte_fn), [`lt`](#lt_fn), [`lte`](#lte_fn), and [`neq`](#neq_fn).

**Expression syntax**

```js
compare "neq" to="elasticsearch"
compare op="lte" to=100
```

**Code example**

```text
kibana
| selectFilter
| demodata
| mapColumn project
    fn={getCell project |
        switch
          {case if={compare eq to=kibana} then=kibana}
          {case if={compare eq to=elasticsearch} then=elasticsearch}
          default="other"
       }
| pointseries size="size(cost)" color="project"
| pie
| render
```

This maps all `project` values that aren’t `"kibana"` or `"elasticsearch"` to `"other"`. Alternatively, you can use the individual comparator functions instead of `compare`.

**Accepts:** `string`, `number`, `boolean`, `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Alias: `op` | `string` | The operator to use in the comparison: `"eq"` (equal to), `"gt"` (greater than), `"gte"` (greater than or equal to), `"lt"` (less than), `"lte"` (less than or equal to), `"ne"` or `"neq"` (not equal to).
Default: `"eq"` | +| `to`
Aliases: `b`, `this` | `any` | The value compared to the *context*. |

**Returns:** `boolean`


### `containerStyle` [containerStyle_fn]

Creates an object used for styling an element’s container, including background, border, and opacity.

**Expression syntax**

```js
containerStyle backgroundColor="red"
containerStyle borderRadius="50px"
containerStyle border="1px solid black"
containerStyle padding="5px"
containerStyle opacity="0.5"
containerStyle overflow="hidden"
containerStyle backgroundImage={asset id=asset-f40d2292-cf9e-4f2c-8c6f-a504a25e949c}
  backgroundRepeat="no-repeat"
  backgroundSize="cover"
```

**Code example**

```text
shape "star" fill="#E61D35" maintainAspect=true
| render containerStyle={
    containerStyle backgroundColor="#F8D546"
      borderRadius="200px"
      border="4px solid #05509F"
      padding="0px"
      opacity="0.9"
      overflow="hidden"
  }
```

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| `backgroundColor` | `string` | A valid CSS background color. |
| `backgroundImage` | `string` | A valid CSS background image. |
| `backgroundRepeat` | `string` | A valid CSS background repeat.
Default: `"no-repeat"` | +| `backgroundSize` | `string` | A valid CSS background size.
Default: `"contain"` | +| `border` | `string` | A valid CSS border. | +| `borderRadius` | `string` | The number of pixels to use when rounding the corners. | +| `opacity` | `number` | A number between 0 and 1 that represents the degree of transparency of the element. | +| `overflow` | `string` | A valid CSS overflow.
Default: `"hidden"` | +| `padding` | `string` | The distance of the content, in pixels, from the border. | + +**Returns:** `containerStyle` + + +### `context` [context_fn] + +Returns whatever you pass into it. This can be useful when you need to use *context* as argument to a function as a sub-expression. + +**Expression syntax** + +```js +context +``` + +**Code example** + +```text +date +| formatdate "LLLL" +| markdown "Last updated: " {context} +| render +``` + +Using the `context` function allows us to pass the output, or *context*, of the previous function as a value to an argument in the next function. Here we get the formatted date string from the previous function and pass it as `content` for the markdown element. + +**Accepts:** `any` + +**Returns:** Depends on your input and arguments + + +### `createTable` [createTable_fn] + +Creates a datatable with a list of columns, and 1 or more empty rows. To populate the rows, use [`mapColumn`](#mapColumn_fn) or [`mathColumn`](#mathColumn_fn). + +**Expression syntax** + +```js +createTable id="a" id="b" +createTable id="a" name="A" id="b" name="B" rowCount=5 +``` + +**Code example** + +```text +var_set +name="logs" value={essql "select count(*) as a from kibana_sample_data_logs"} +name="commerce" value={essql "select count(*) as b from kibana_sample_data_ecommerce"} +| createTable ids="totalA" ids="totalB" +| staticColumn name="totalA" value={var "logs" | getCell "a"} +| alterColumn column="totalA" type="number" +| staticColumn name="totalB" value={var "commerce" | getCell "b"} +| alterColumn column="totalB" type="number" +| mathColumn id="percent" name="percent" expression="totalA / totalB" +| render +``` + +This creates a table based on the results of two `essql` queries, joined into one table. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| `ids` † | `string` | Column ids to generate in positional order. ID represents the key in the row. | +| `names` † | `string` | Column names to generate in positional order. Names are not required to be unique, and default to the ID if not provided. | +| `rowCount` | `number` | The number of empty rows to add to the table, to be assigned a value later
Default: `1` | + +**Returns:** `datatable` + + +### `csv` [csv_fn] + +Creates a `datatable` from CSV input. + +**Expression syntax** + +```js +csv "fruit, stock + kiwi, 10 + Banana, 5" +``` + +**Code example** + +```text +csv "fruit,stock + kiwi,10 + banana,5" +| pointseries color=fruit size=stock +| pie +| render +``` + +This creates a `datatable` with `fruit` and `stock` columns with two rows. This is useful for quickly mocking data. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `data` | `string` | The CSV data to use. | +| `delimiter` | `string` | The data separation character. | +| `newline` | `string` | The row separation character. | + +**Returns:** `datatable` + + +## D [d_fns] + + +### `date` [date_fn] + +Returns the current time, or a time parsed from a specified string, as milliseconds since epoch. + +**Expression syntax** + +```js +date +date value=1558735195 +date "2019-05-24T21:59:55+0000" +date "01/31/2019" format="MM/DD/YYYY" +``` + +**Code example** + +```text +date +| formatdate "LLL" +| markdown {context} + font={font family="Arial, sans-serif" size=30 align="left" + color="#000000" + weight="normal" + underline=false + italic=false} +| render +``` + +Using `date` without passing any arguments will return the current date and time. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `value` | `string` | An optional date string that is parsed into milliseconds since epoch. The date string can be either a valid JavaScript `Date` input or a string to parse using the `format` argument. Must be an ISO8601 string, or you must provide the format. | +| `format` | `string` | The MomentJS format used to parse the specified date string. For more information, see [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). | + +**Returns:** `number` + + +### `demodata` [demodata_fn] + +A sample data set that includes project CI times with usernames, countries, and run phases. + +**Expression syntax** + +```js +demodata +demodata "ci" +demodata type="shirts" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| table +| render +``` + +`demodata` is a mock data set that you can use to start playing around in Canvas. + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `type` | `string` | The name of the demo data set to use.
Default: `"ci"` | + +**Returns:** `datatable` + + +### `do` [do_fn] + +Executes multiple sub-expressions, then returns the original *context*. Use for running functions that produce an action or a side effect without changing the original *context*. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* †
Aliases: `exp`, `expression`, `fn`, `function` | `any` | The sub-expressions to execute. The return values of these sub-expressions are not available in the root pipeline as this function simply returns the original *context*. | + +**Returns:** Depends on your input and arguments + + +### `dropdownControl` [dropdownControl_fn] + +Configures a dropdown filter control element. + +**Expression syntax** + +```js +dropdownControl valueColumn=project filterColumn=project +dropdownControl valueColumn=agent filterColumn=agent.keyword filterGroup=group1 +``` + +**Code example** + +```text +demodata +| dropdownControl valueColumn=project filterColumn=project +| render +``` + +This creates a dropdown filter element. It requires a data source and uses the unique values from the given `valueColumn` (i.e. `project`) and applies the filter to the `project` column. Note: `filterColumn` should point to a keyword type field for Elasticsearch data sources. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| `filterColumn` * | `string` | The column or field that you want to filter. | +| `filterGroup` | `string` | The group name for the filter. | +| `labelColumn` | `string` | The column or field to use as the label in the dropdown control | +| `valueColumn` * | `string` | The column or field from which to extract the unique values for the dropdown control. | + +**Returns:** `render` + + +## E [e_fns] + + +### `embeddable` [embeddable_fn] + +Returns an embeddable with the provided configuration + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `config` | `string` | The base64 encoded embeddable input object | +| `type` * | `string` | The embeddable type | + +**Returns:** `embeddable` + + +### `eq` [eq_fn] + +Returns whether the *context* is equal to the argument. + +**Expression syntax** + +```js +eq true +eq null +eq 10 +eq "foo" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| mapColumn project + fn={getCell project | + switch + {case if={eq kibana} then=kibana} + {case if={eq elasticsearch} then=elasticsearch} + default="other" + } +| pointseries size="size(cost)" color="project" +| pie +| render +``` + +This changes all values in the project column that don’t equal `"kibana"` or `"elasticsearch"` to `"other"`. + +**Accepts:** `boolean`, `number`, `string`, `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. | + +**Returns:** `boolean` + + +### `escount` [escount_fn] + +Query Elasticsearch for the number of hits matching the specified query. + +**Expression syntax** + +```js +escount index="logstash-*" +escount "currency:"EUR"" index="kibana_sample_data_ecommerce" +escount query="response:404" index="kibana_sample_data_logs" +``` + +**Code example** + +```text +kibana +| selectFilter +| escount "Cancelled:true" index="kibana_sample_data_flights" +| math "value" +| progress shape="semicircle" + label={formatnumber 0,0} + font={font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center} + max={filters | escount index="kibana_sample_data_flights"} +| render +``` + +The first `escount` expression retrieves the number of flights that were cancelled. The second `escount` expression retrieves the total number of flights. + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `q`, `query` | `string` | A Lucene query string.
Default: `"-_index:.kibana"` | +| `index`
Alias: `dataView` | `string` | An index or data view. For example, `"logstash-*"`.
Default: `"_all"` | + +**Returns:** `number` + + +### `esdocs` [esdocs_fn] + +Query Elasticsearch for raw documents. Specify the fields you want to retrieve, especially if you are asking for a lot of rows. + +**Expression syntax** + +```js +esdocs index="logstash-*" +esdocs "currency:"EUR"" index="kibana_sample_data_ecommerce" +esdocs query="response:404" index="kibana_sample_data_logs" +esdocs index="kibana_sample_data_flights" count=100 +esdocs index="kibana_sample_data_flights" sort="AvgTicketPrice, asc" +``` + +**Code example** + +```text +kibana +| selectFilter +| esdocs index="kibana_sample_data_ecommerce" + fields="customer_gender, taxful_total_price, order_date" + sort="order_date, asc" + count=10000 +| mapColumn "order_date" + fn={getCell "order_date" | date {context} | rounddate "YYYY-MM-DD"} +| alterColumn "order_date" type="date" +| pointseries x="order_date" y="sum(taxful_total_price)" color="customer_gender" +| plot defaultStyle={seriesStyle lines=3} + palette={palette "#7ECAE3" "#003A4D" gradient=true} +| render +``` + +This retrieves the first 10000 documents data from the `kibana_sample_data_ecommerce` index sorted by `order_date` in ascending order, and only requests the `customer_gender`, `taxful_total_price`, and `order_date` fields. + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `q`, `query` | `string` | A Lucene query string.
Default: `"-_index:.kibana"` | +| `count` | `number` | The number of documents to retrieve. For better performance, use a smaller data set.
Default: `1000` | +| `fields` | `string` | A comma-separated list of fields. For better performance, use fewer fields. | +| `index`
Alias: `dataView` | `string` | An index or data view. For example, `"logstash-*"`.
Default: `"_all"` | +| `metaFields` | `string` | Comma separated list of meta fields. For example, `"_index,_type"`. | +| `sort` | `string` | The sort direction formatted as `"field, direction"`. For example, `"@timestamp, desc"` or `"bytes, asc"`. | + +**Returns:** `datatable` + + +### `essql` [essql_fn] + +Queries Elasticsearch using Elasticsearch SQL. + +**Expression syntax** + +```js +essql query="SELECT * FROM "logstash*"" +essql "SELECT * FROM "apm*"" count=10000 +``` + +**Code example** + +```text +kibana +| selectFilter +| essql query="SELECT Carrier, FlightDelayMin, AvgTicketPrice FROM "kibana_sample_data_flights"" +| table +| render +``` + +This retrieves the `Carrier`, `FlightDelayMin`, and `AvgTicketPrice` fields from the "kibana_sample_data_flights" index. + +**Accepts:** `kibana_context`, `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `q`, `query` | `string` | An Elasticsearch SQL query. | +| `count` | `number` | The number of documents to retrieve. For better performance, use a smaller data set.
Default: `1000` | +| `parameter` †
Alias: `param` | `string`, `number`, `boolean` | A parameter to be passed to the SQL query. | +| `timeField`
Alias: `timefield` | `string` | The time field to use in the time range filter, which is set in the context. |
| `timezone`
Alias: `tz` | `string` | The timezone to use for date operations. Valid ISO8601 formats and UTC offsets both work.
Default: `"UTC"` | + +**Returns:** `datatable` + + +### `exactly` [exactly_fn] + +Creates a filter that matches a given column to an exact value. + +**Expression syntax** + +```js +exactly "state" value="running" +exactly "age" value=50 filterGroup="group2" +exactly column="project" value="beats" +``` + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| `column` *
Aliases: `c`, `field` | `string` | The column or field that you want to filter. | +| `filterGroup` | `string` | The group name for the filter. | +| `value` *
Aliases: `v`, `val` | `string` | The value to match exactly, including white space and capitalization. | + +**Returns:** `filter` + + +## F [f_fns] + + +### `filterrows` [filterrows_fn] + +Filters rows in a `datatable` based on the return value of a sub-expression. + +**Expression syntax** + +```js +filterrows {getCell "project" | eq "kibana"} +filterrows fn={getCell "age" | gt 50} +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| filterrows {getCell "country" | any {eq "IN"} {eq "US"} {eq "CN"}} +| mapColumn "@timestamp" + fn={getCell "@timestamp" | rounddate "YYYY-MM"} +| alterColumn "@timestamp" type="date" +| pointseries x="@timestamp" y="mean(cost)" color="country" +| plot defaultStyle={seriesStyle points="2" lines="1"} + palette={palette "#01A4A4" "#CC6666" "#D0D102" "#616161" "#00A1CB" "#32742C" "#F18D05" "#113F8C" "#61AE24" "#D70060" gradient=false} +| render +``` + +This uses `filterrows` to only keep data from India (`IN`), the United States (`US`), and China (`CN`). + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Aliases: `exp`, `expression`, `fn`, `function` | `boolean` | An expression to pass into each row in the `datatable`. The expression should return a `boolean`. A `true` value preserves the row, and a `false` value removes it. | + +**Returns:** `datatable` + + +### `filters` [filters_fn] + +Aggregates element filters from the workpad for use elsewhere, usually a data source. [`filters`](#filters_fn) is deprecated and will be removed in a future release. Use `kibana | selectFilter` instead. + +**Expression syntax** + +```js +filters +filters group="timefilter1" +filters group="timefilter2" group="dropdownfilter1" ungrouped=true +``` + +**Code example** + +```text +filters group=group2 ungrouped=true +| demodata +| pointseries x="project" y="size(cost)" color="project" +| plot defaultStyle={seriesStyle bars=0.75} legend=false + font={ + font size=14 + family="'Open Sans', Helvetica, Arial, sans-serif" + align="left" + color="#FFFFFF" + weight="lighter" + underline=true + italic=true + } +| render +``` + +`filters` sets the existing filters as context and accepts a `group` parameter to opt into specific filter groups. Setting `ungrouped` to `true` opts out of using global filters. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* †
Alias: `group` | `string` | The name of the filter group to use. | +| `ungrouped`
Aliases: `nogroup`, `nogroups` | `boolean` | Exclude filters that belong to a filter group?
Default: `false` | + +**Returns:** `filter` + + +### `font` [font_fn] + +Create a font style. + +**Expression syntax** + +```js +font size=12 +font family=Arial +font align=middle +font color=pink +font weight=lighter +font underline=true +font italic=false +font lHeight=32 +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| pointseries x="project" y="size(cost)" color="project" +| plot defaultStyle={seriesStyle bars=0.75} legend=false + font={ + font size=14 + family="'Open Sans', Helvetica, Arial, sans-serif" + align="left" + color="#FFFFFF" + weight="lighter" + underline=true + italic=true + } +| render +``` + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| `align` | `string` | The horizontal text alignment.
Default: `${ theme "font.align" default="left" }` | +| `color` | `string` | The text color.
Default: `${ theme "font.color" }` | +| `family` | `string` | An acceptable CSS web font string
Default: `${ theme "font.family" default="'Open Sans', Helvetica, Arial, sans-serif" }` | +| `italic` | `boolean` | Italicize the text?
Default: `${ theme "font.italic" default=false }` | +| `lHeight`
Alias: `lineHeight` | `number`, `null` | The line height in pixels.
Default: `${ theme "font.lHeight" }` | +| `size` | `number` | The font size
Default: `${ theme "font.size" default=14 }` | +| `sizeUnit` | `string` | The font size unit
Default: `"px"` | +| `underline` | `boolean` | Underline the text?
Default: `${ theme "font.underline" default=false }` | +| `weight` | `string` | The font weight. For example, `"normal"`, `"bold"`, `"bolder"`, `"lighter"`, `"100"`, `"200"`, `"300"`, `"400"`, `"500"`, `"600"`, `"700"`, `"800"`, or `"900"`.
Default: `${ theme "font.weight" default="normal" }` | + +**Returns:** `style` + + +### `formatdate` [formatdate_fn] + +Formats an ISO8601 date string or a date in milliseconds since epoch using MomentJS. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). + +**Expression syntax** + +```js +formatdate format="YYYY-MM-DD" +formatdate "MM/DD/YYYY" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| mapColumn "time" fn={getCell time | formatdate "MMM 'YY"} +| pointseries x="time" y="sum(price)" color="state" +| plot defaultStyle={seriesStyle points=5} +| render +``` + +This transforms the dates in the `time` field into strings that look like `"Jan ‘19"`, `"Feb ‘19"`, etc. using a MomentJS format. + +**Accepts:** `number`, `string` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `format` | `string` | A MomentJS format. For example, `"MM/DD/YYYY"`. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). | + +**Returns:** `string` + + +### `formatnumber` [formatnumber_fn] + +Formats a number into a formatted number string using the Numeral pattern. + +**Expression syntax** + +```js +formatnumber format="$0,0.00" +formatnumber "0.0a" +``` + +**Code example** + +```text +kibana +| selectFilter +| demodata +| math "mean(percent_uptime)" +| progress shape="gauge" + label={formatnumber "0%"} + font={font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align="center"} +| render +``` + +The `formatnumber` subexpression receives the same `context` as the `progress` function, which is the output of the `math` function. It formats the value into a percentage. + +**Accepts:** `number` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. | + +**Returns:** `string` + + +## G [g_fns] + + +### `getCell` [getCell_fn] + +Fetches a single cell from a `datatable`. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `c`, `column` | `string` | The name of the column to fetch the value from. If not provided, the value is retrieved from the first column. | +| `row`
Alias: `r` | `number` | The row number, starting at 0.
Default: `0` |

**Returns:** Depends on your input and arguments
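
**Code example**

A minimal illustrative sketch, assuming the `demodata` data set used throughout this page:

```text
kibana
| demodata
| getCell "project" row=0
| markdown {context}
| render
```

This fetches the value of the `project` column in the first row of the table and renders it as Markdown text.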

### `gt` [gt_fn]

Returns whether the *context* is greater than the argument.

**Accepts:** `number`, `string`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |

**Returns:** `boolean`


### `gte` [gte_fn]

Returns whether the *context* is greater than or equal to the argument.

**Accepts:** `number`, `string`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. | + +**Returns:** `boolean` + + +## H [h_fns] + + +### `head` [head_fn] + +Retrieves the first N rows from the `datatable`. See also [`tail`](#tail_fn). + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `count` | `number` | The number of rows to retrieve from the beginning of the `datatable`.
Default: `1` |

**Returns:** `datatable`
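
**Code example**

A minimal illustrative sketch, assuming the `demodata` data set:

```text
kibana
| demodata
| head 10
| table
| render
```

This keeps only the first ten rows of the `demodata` table and displays them.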

## I [i_fns]


### `if` [if_fn]

Performs conditional logic.

**Accepts:** `any`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `condition` | `boolean` | A `true` or `false` indicating whether a condition is met, usually returned by a sub-expression. When unspecified, the original *context* is returned. |
| `else` | `any` | The return value when the condition is `false`. When unspecified and the condition is not met, the original *context* is returned. |
| `then` | `any` | The return value when the condition is `true`. When unspecified and the condition is met, the original *context* is returned. |

**Returns:** Depends on your input and arguments


### `image` [image_fn]

Displays an image. Provide an image asset as a `base64` data URL, or pass in a sub-expression.

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Aliases: `dataurl`, `url` | `string`, `null` | The HTTP(S) URL or `base64` data URL of an image.
Default: `null` | +| `mode` | `string` | `"contain"` shows the entire image, scaled to fit. `"cover"` fills the container with the image, cropping from the sides or bottom as needed. `"stretch"` resizes the height and width of the image to 100% of the container.
Default: `"contain"` | + +**Returns:** `image` + + +## J [j_fns] + + +### `joinRows` [joinRows_fn] + +Concatenates values from rows in a `datatable` into a single string. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `column` | `string` | The column or field from which to extract the values. | +| `distinct` | `boolean` | Extract only unique values?
Default: `true` | +| `quote` | `string` | The quote character to wrap around each extracted value.
Default: `"'"` | +| `separator`
Aliases: `delimiter`, `sep` | `string` | The delimiter to insert between each extracted value.
Default: `","` | + +**Returns:** `string` + + +## K [k_fns] + + +### `kibana` [kibana_fn] + +Gets kibana global context + +**Accepts:** `kibana_context`, `null` + +**Returns:** `kibana_context` + + +## L [l_fns] + + +### `location` [location_fn] + +Find your current location using the Geolocation API of the browser. Performance can vary, but is fairly accurate. See [https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation). Don’t use [`location`](#location_fn) if you plan to generate PDFs as this function requires user input. + +**Accepts:** `null` + +**Returns:** `datatable` + + +### `lt` [lt_fn] + +Returns whether the *context* is less than the argument. + +**Accepts:** `number`, `string` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *

## K [k_fns]


### `kibana` [kibana_fn]

Gets the Kibana global context.

**Accepts:** `kibana_context`, `null`

**Returns:** `kibana_context`


## L [l_fns]


### `location` [location_fn]

Finds your current location using the Geolocation API of the browser. Performance can vary, but is fairly accurate. See [https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/geolocation). Don’t use [`location`](#location_fn) if you plan to generate PDFs as this function requires user input.

**Accepts:** `null`

**Returns:** `datatable`


### `lt` [lt_fn]

Returns whether the *context* is less than the argument.

**Accepts:** `number`, `string`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |

**Returns:** `boolean`


### `lte` [lte_fn]

Returns whether the *context* is less than or equal to the argument.

**Accepts:** `number`, `string`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `value` | `number`, `string` | The value compared to the *context*. |

**Returns:** `boolean`


## M [m_fns]


### `mapCenter` [mapCenter_fn]

Returns an object with the center coordinates and zoom level of the map.

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| `lat` * | `number` | Latitude for the center of the map. |
| `lon` * | `number` | Longitude for the center of the map. |
| `zoom` * | `number` | Zoom level of the map. |

**Returns:** `mapCenter`


### `mapColumn` [mapColumn_fn]

Adds a column calculated as the result of other columns. Changes are made only when you provide arguments. See also [`alterColumn`](#alterColumn_fn) and [`staticColumn`](#staticColumn_fn).

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. | +| `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` | +| `expression` *
Aliases: `exp`, `fn`, `function` | `boolean`, `number`, `string`, `null` | An expression that is executed on every row, provided with a single-row `datatable` context and returning the cell value. | +| `id` | `string`, `null` | An optional id of the resulting column. When no id is provided, the id will be looked up from the existing column by the provided name argument. If no column with this name exists yet, a new column with this name and an identical id will be added to the table.
Default: `null` |

**Returns:** `datatable`
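
**Code example**

A minimal illustrative sketch, assuming the `demodata` columns `price` and `cost`:

```text
kibana
| demodata
| mapColumn "profit" fn={math "price - cost"}
| table
| render
```

This adds a `profit` column whose value is calculated for each row by the `math` sub-expression.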

### `markdown` [markdown_fn]

Adds an element that renders Markdown text. TIP: Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text.

**Accepts:** `datatable`, `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* †
Aliases: `content`, `expression` | `string` | A string of text that contains Markdown. To concatenate, pass the `string` function multiple times.
Default: `""` | +| `font` | `style` | The CSS font properties for the content. For example, "font-family" or "font-weight".
Default: `${font}` | +| `openLinksInNewTab` | `boolean` | A true or false value for opening links in a new tab. The default value is `false`. Setting to `true` opens all links in a new tab.
Default: `false` | + +**Returns:** `render` + + +### `math` [math_fn] + +Interprets a `TinyMath` math expression using a `number` or `datatable` as *context*. The `datatable` columns are available by their column name. If the *context* is a number it is available as `value`. + +**Accepts:** `number`, `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `expression` | `string` | An evaluated `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). |
| `onError` | `string` | If the `TinyMath` evaluation fails or returns `NaN`, the return value is specified by `onError`. When `'throw'`, it throws an exception, terminating expression execution (default). |

**Returns:** Depends on your input and arguments
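
**Code example**

A minimal illustrative sketch, assuming the `demodata` column `percent_uptime`:

```text
kibana
| demodata
| math "mean(percent_uptime)"
| formatnumber "0.0%"
| markdown {context}
| render
```

This reduces the `datatable` to the mean of the `percent_uptime` column and renders it as a formatted percentage.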

### `mathColumn` [mathColumn_fn]

Adds a column by evaluating `TinyMath` on each row. This function is optimized for math and performs better than using a math expression in [`mapColumn`](#mapColumn_fn).

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. |
| *Unnamed*
Alias: `expression` | `string` | An evaluated `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). | +| `castColumns` † | `string` | The column ids that are cast to numbers before the formula is applied. | +| `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` | +| `id` * | `string` | id of the resulting column. Must be unique. | +| `onError` | `string` | In case the `TinyMath` evaluation fails or returns NaN, the return value is specified by onError. When `'throw'`, it will throw an exception, terminating expression execution (default). | + +**Returns:** `datatable` + + +### `metric` [metric_fn] + +Displays a number over a label. + +**Accepts:** `number`, `string`, `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `description`, `label`, `text` | `string` | The text describing the metric.
Default: `""` | +| `labelFont` | `style` | The CSS font properties for the label. For example, `font-family` or `font-weight`.
Default: `${font size=14 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center}` | +| `metricFont` | `style` | The CSS font properties for the metric. For example, `font-family` or `font-weight`.
Default: `${font size=48 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center lHeight=48}` | +| `metricFormat`
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. |

**Returns:** `render`
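
**Code example**

A minimal illustrative sketch, assuming the `demodata` column `username`:

```text
kibana
| demodata
| math "unique(username)"
| metric "Unique users" metricFormat="0,0"
| render
```

This counts the unique usernames and displays the result over a "Unique users" label, formatted with a Numeral pattern.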

## N [n_fns]


### `neq` [neq_fn]

Returns whether the *context* is not equal to the argument.

**Accepts:** `boolean`, `number`, `string`, `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. |

**Returns:** `boolean`


## P [p_fns]


### `palette` [palette_fn]

Creates a color palette.

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* †
Alias: `color` | `string` | The palette colors. Accepts an HTML color name, HEX, HSL, HSLA, RGB, or RGBA. | +| `continuity` | `string` | Default: `"above"` | +| `gradient` | `boolean` | Make a gradient palette where supported?
Default: `false` | +| `range` | `string` | Default: `"percent"` | +| `rangeMax` | `number` | | +| `rangeMin` | `number` | | +| `reverse` | `boolean` | Reverse the palette?
Default: `false` | +| `stop` † | `number` | The palette color stops. When used, it must be associated with each color. | + +**Returns:** `palette` + + +### `pie` [pie_fn] + +Configures a pie chart element. + +**Accepts:** `pointseries` + +| Argument | Type | Description | +| --- | --- | --- | +| `font` | `style` | The CSS font properties for the labels. For example, `font-family` or `font-weight`.
Default: `${font}` | +| `hole` | `number` | Draws a hole in the pie, between `0` and `100`, as a percentage of the pie radius.
Default: `0` | +| `labelRadius` | `number` | The percentage of the container area to use as a radius for the label circle.
Default: `100` | +| `labels` | `boolean` | Display the pie labels?
Default: `true` | +| `legend` | `string`, `boolean` | The legend position. For example, `"nw"`, `"sw"`, `"ne"`, `"se"`, or `false`. When `false`, the legend is hidden.
Default: `false` | +| `palette` | `palette` | A `palette` object for describing the colors to use in this pie chart.
Default: `${palette}` | +| `radius` | `string`, `number` | The radius of the pie as a percentage, between `0` and `1`, of the available space. To automatically set the radius, use `"auto"`.
Default: `"auto"` | +| `seriesStyle` † | `seriesStyle` | A style of a specific series | +| `tilt` | `number` | The percentage of tilt where `1` is fully vertical, and `0` is completely flat.
Default: `1` | + +**Returns:** `render` + + +### `plot` [plot_fn] + +Configures a chart element. + +**Accepts:** `pointseries` + +| Argument | Type | Description | +| --- | --- | --- | +| `defaultStyle` | `seriesStyle` | The default style to use for every series.
Default: `${seriesStyle points=5}` | +| `font` | `style` | The CSS font properties for the labels. For example, `font-family` or `font-weight`.
Default: `${font}` | +| `legend` | `string`, `boolean` | The legend position. For example, `"nw"`, `"sw"`, `"ne"`, `"se"`, or `false`. When `false`, the legend is hidden.
Default: `"ne"` | +| `palette` | `palette` | A `palette` object for describing the colors to use in this chart.
Default: `${palette}` | +| `seriesStyle` † | `seriesStyle` | A style of a specific series | +| `xaxis` | `boolean`, `axisConfig` | The axis configuration. When `false`, the axis is hidden.
Default: `true` | +| `yaxis` | `boolean`, `axisConfig` | The axis configuration. When `false`, the axis is hidden.
Default: `true` | + +**Returns:** `render` + + +### `ply` [ply_fn] + +Subdivides a `datatable` by the unique values of the specified columns, and passes the resulting tables into an expression, then merges the outputs of each expression. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| `by` † | `string` | The column to subdivide the `datatable`. | +| `expression` †
Aliases: `exp`, `fn`, `function` | `datatable` | An expression to pass each resulting `datatable` into. Tips: Expressions must return a `datatable`. Use [`as`](#as_fn) to turn literals into `datatable`s. Multiple expressions must return the same number of rows. If you need to return a different row count, pipe into another instance of [`ply`](#ply_fn). If multiple expressions return columns with the same name, the last one wins. |

**Returns:** `datatable`


### `pointseries` [pointseries_fn]

Turns a `datatable` into a point series model. Currently we differentiate measures from dimensions by looking for a `TinyMath` expression. See [/reference/data-analysis/kibana/tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). If you enter a `TinyMath` expression in your argument, we treat that argument as a measure; otherwise, it is a dimension. Dimensions are combined to create unique keys. Measures are then deduplicated by those keys using the specified `TinyMath` function.

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| `color` | `string` | An expression to use in determining the mark’s color. |
| `size` | `string` | The size of the marks. Only applicable to supported elements. |
| `text` | `string` | The text to show on the mark. Only applicable to supported elements. |
| `x` | `string` | The values along the X-axis. |
| `y` | `string` | The values along the Y-axis. |

**Returns:** `pointseries`
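
**Code example**

A minimal illustrative sketch, assuming the `demodata` columns `time`, `price`, and `state`:

```text
kibana
| demodata
| pointseries x="time" y="mean(price)" color="state"
| plot defaultStyle={seriesStyle lines=1}
| render
```

Here `x` and `color` are treated as dimensions, while `y` contains a `TinyMath` function (`mean`) and is therefore treated as a measure.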

### `progress` [progress_fn]

Configures a progress element.

**Accepts:** `number`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Alias: `shape` | `string` | Select `"gauge"`, `"horizontalBar"`, `"horizontalPill"`, `"semicircle"`, `"unicorn"`, `"verticalBar"`, `"verticalPill"`, or `"wheel"`.
Default: `"gauge"` | +| `barColor` | `string` | The color of the background bar.
Default: `"#f0f0f0"` | +| `barWeight` | `number` | The thickness of the background bar.
Default: `20` | +| `font` | `style` | The CSS font properties for the label. For example, `font-family` or `font-weight`.
Default: `${font size=24 family="'Open Sans', Helvetica, Arial, sans-serif" color="#000000" align=center}` | +| `label` | `boolean`, `string` | To show or hide the label, use `true` or `false`. Alternatively, provide a string to display as a label.
Default: `true` | +| `max` | `number` | The maximum value of the progress element.
Default: `1` | +| `valueColor` | `string` | The color of the progress bar.
Default: `"#1785b0"` | +| `valueWeight` | `number` | The thickness of the progress bar.
Default: `20` |

**Returns:** `render`
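
**Code example**

A minimal illustrative sketch, assuming the `demodata` column `percent_uptime`:

```text
kibana
| demodata
| math "mean(percent_uptime)"
| progress shape="wheel" label={formatnumber "0%"} valueColor="#1785b0"
| render
```

This displays the mean uptime as a wheel-shaped progress indicator with a percentage label.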

## R [r_fns]


### `removeFilter` [removeFilter_fn]

Removes filters from the *context*.

**Accepts:** `kibana_context`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Alias: `group` | `string` | Removes only filters belonging to the provided group. |
| `from` | `string` | Removes only filters owned by the provided id. |
| `ungrouped`
Aliases: `nogroup`, `nogroups` | `boolean` | Remove filters that don’t belong to a filter group?
Default: `false` | + +**Returns:** `kibana_context` + + +### `render` [render_fn] + +Renders the *context* as a specific element and sets element level options, such as background and border styling. + +**Accepts:** `render` + +| Argument | Type | Description | +| --- | --- | --- | +| `as` | `string` | The element type to render. You probably want a specialized function instead, such as [`plot`](#plot_fn) or [`shape`](#shape_fn). | +| `containerStyle` | `containerStyle` | The style for the container, including background, border, and opacity.
Default: `${containerStyle}` | +| `css` | `string` | Any block of custom CSS to be scoped to the element.
Default: `".canvasRenderEl${}"` | + +**Returns:** `render` + + +### `repeatImage` [repeatImage_fn] + +Configures a repeating image element. + +**Accepts:** `number` + +| Argument | Type | Description | +| --- | --- | --- | +| `emptyImage` | `string`, `null` | Fills the difference between the *context* and `max` parameter for the element with this image. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` | +| `image` | `string`, `null` | The image to repeat. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` | +| `max` | `number`, `null` | The maximum number of times the image can repeat.
Default: `1000` | +| `size` | `number` | The maximum height or width of the image, in pixels. When the image is taller than it is wide, this function limits the height.
Default: `100` | + +**Returns:** `render` + + +### `replace` [replace_fn] + +Uses a regular expression to replace parts of a string. + +**Accepts:** `string` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `pattern`, `regex` | `string` | The text or pattern of a JavaScript regular expression. For example, `"[aeiou]"`. You can use capturing groups here. | +| `flags`
Alias: `modifiers` | `string` | Specify flags. See [https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp).
Default: `"g"` | +| `replacement` | `string` | The replacement for the matching parts of string. Capturing groups can be accessed by their index. For example, `"$1"`.
Default: `""` | + +**Returns:** `string` + + +### `revealImage` [revealImage_fn] + +Configures an image reveal element. + +**Accepts:** `number` + +| Argument | Type | Description | +| --- | --- | --- | +| `emptyImage` | `string`, `null` | An optional background image to reveal over. Provide an image asset as a ``base64`` data URL, or pass in a sub-expression.

### `revealImage` [revealImage_fn]

Configures an image reveal element.

**Accepts:** `number`

| Argument | Type | Description |
| --- | --- | --- |
| `emptyImage` | `string`, `null` | An optional background image to reveal over. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
| `image` | `string`, `null` | The image to reveal. Provide an image asset as a `base64` data URL, or pass in a sub-expression.
Default: `null` |
| `origin` | `string` | The position to start the image fill. For example, `"top"`, `"bottom"`, `"left"`, or `"right"`.
Default: `"bottom"` | + +**Returns:** `render` + + +### `rounddate` [rounddate_fn] + +Uses a MomentJS formatting string to round milliseconds since epoch, and returns milliseconds since epoch. + +**Accepts:** `number` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `format` | `string` | The MomentJS format to use for bucketing. For example, `"YYYY-MM"` rounds to months. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). | + +**Returns:** `number` + + +### `rowCount` [rowCount_fn] + +Returns the number of rows. Pairs with [`ply`](#ply_fn) to get the count of unique column values, or combinations of unique column values. + +**Accepts:** `datatable` + +**Returns:** `number` + + +## S [s_fns] + + +### `selectFilter` [selectFilter_fn] + +Selects filters from context + +**Accepts:** `kibana_context` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* †
Alias: `group` | `string` | Select only filters belonging to the provided group. |
| `from` | `string` | Select only filters owned by the provided id. |
| `ungrouped`
Aliases: `nogroup`, `nogroups` | `boolean` | Include filters that don’t belong to a filter group?
Default: `false` | + +**Returns:** `kibana_context` + + +### `seriesStyle` [seriesStyle_fn] + +Creates an object used for describing the properties of a series on a chart. Use [`seriesStyle`](#seriesStyle_fn) inside of a charting function, like [`plot`](#plot_fn) or [`pie`](#pie_fn). + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| `bars` | `number` | The width of bars. | +| `color` | `string` | The line color. | +| `fill` | `number`, `boolean` | Should we fill in the points?
Default: `false` | +| `horizontalBars` | `boolean` | Sets the orientation of the bars in the chart to horizontal. | +| `label` | `string` | The name of the series to style. | +| `lines` | `number` | The width of the line. | +| `points` | `number` | The size of points on line. | +| `stack` | `number`, `null` | Specifies if the series should be stacked. The number is the stack ID. Series with the same stack ID are stacked together. | + +**Returns:** `seriesStyle` + + +### `shape` [shape_fn] + +Creates a shape. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Alias: `shape` | `string` | Pick a shape.
Default: `"square"` | +| `border`
Alias: `stroke` | `string` | An SVG color for the border outlining the shape. | +| `borderWidth`
Alias: `strokeWidth` | `number` | The thickness of the border.
Default: `0` | +| `fill` | `string` | An SVG color to fill the shape.
Default: `"black"` | +| `maintainAspect` | `boolean` | Maintain the shape’s original aspect ratio?
Default: `false` |

**Returns:** Depends on your input and arguments


### `sort` [sort_fn]

Sorts a `datatable` by the specified column.
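
**Expression syntax**

A minimal, illustrative sketch; the `demodata` source and its `price` column are sample data, not requirements:

```js
demodata
| sort "price" reverse=true
| table
| render
```

This sorts the demo rows by `price` in descending order before rendering them as a table.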

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Aliases: `by`, `column` | `string` | The column to sort by. When unspecified, the `datatable` is sorted by the first column. | | `reverse` | `boolean` | Reverses the sorting order. When unspecified, the `datatable` is sorted in ascending order.
Default: `false` |

**Returns:** `datatable`


### `staticColumn` [staticColumn_fn]

Adds a column with the same static value in every row. See also [`alterColumn`](#alterColumn_fn), [`mapColumn`](#mapColumn_fn), and [`mathColumn`](#mathColumn_fn).
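
**Expression syntax**

A minimal, illustrative sketch; `demodata` and the column names are placeholders:

```js
demodata
| staticColumn "mean_price" value={math "mean(price)"}
| table
| render
```

Here the new `mean_price` column holds the same value in every row: the sub-expression rolls the `price` column up into a single mean.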

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Aliases: `column`, `name` | `string` | The name of the new column. | | `value` | `string`, `number`, `boolean`, `null` | The value to insert in each row in the new column. TIP: use a sub-expression to roll up other columns into a static value.
Default: `null` |

**Returns:** `datatable`


### `string` [string_fn]

Concatenates all of the arguments into a single string.
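
**Expression syntax**

An illustrative sketch; the literal values are placeholders:

```js
string "Hello" " " "world"
```

This returns the single string `"Hello world"`.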

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* †
Alias: `value` | `string`, `number`, `boolean` | The values to join together into one string. Include spaces where needed. |

**Returns:** `string`


### `switch` [switch_fn]

Performs conditional logic with multiple conditions. See also [`case`](#case_fn), which builds a `case` to pass to the [`switch`](#switch_fn) function.
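
**Expression syntax**

A minimal, illustrative sketch; the sub-expressions and values are placeholders:

```js
math "random()"
| switch
  case={case if={lte 0.5} then="heads"}
  default="tails"
```

This returns `"heads"` when the random value is less than or equal to `0.5`, and falls back to the `default` value otherwise.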

**Accepts:** `any`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* * †
Alias: `case` | `case` | The conditions to check. | | `default`
Alias: `finally` | `any` | The value returned when no conditions are met. When unspecified and no conditions are met, the original *context* is returned. | + +**Returns:** Depends on your input and arguments + + +## T [t_fns] + + +### `table` [table_fn] + +Configures a table element. + +**Accepts:** `datatable` + +| Argument | Type | Description | +| --- | --- | --- | +| `font` | `style` | The CSS font properties for the contents of the table. For example, `font-family` or `font-weight`.
Default: `${font}` | +| `paginate` | `boolean` | Show pagination controls? When `false`, only the first page is displayed.
Default: `true` | +| `perPage` | `number` | The number of rows to display on each page.
Default: `10` | +| `showHeader` | `boolean` | Show or hide the header row with titles for each column.
Default: `true` |

**Returns:** `render`


### `tail` [tail_fn]

Retrieves the last N rows from the end of a `datatable`. See also [`head`](#head_fn).
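
**Expression syntax**

A minimal, illustrative sketch using the sample `demodata` source:

```js
demodata
| tail 5
| table
| render
```

This keeps only the last five rows of the `datatable` before rendering it.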

**Accepts:** `datatable`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed*
Alias: `count` | `number` | The number of rows to retrieve from the end of the `datatable`.
Default: `1` | + +**Returns:** `datatable` + + +### `timefilter` [timefilter_fn] + +Creates a time filter for querying a source. + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| `column`
Aliases: `c`, `field` | `string` | The column or field that you want to filter.
Default: `"@timestamp"` | +| `filterGroup` | `string` | The group name for the filter | +| `from`
Aliases: `f`, `start` | `string` | The beginning of the range, in ISO8601 or Elasticsearch `datemath` format. | | `to`
Aliases: `end`, `t` | `string` | The end of the range, in ISO8601 or Elasticsearch `datemath` format. |

**Returns:** `filter`


### `timefilterControl` [timefilterControl_fn]

Configures a time filter control element.
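
**Expression syntax**

An illustrative sketch of how a time filter element is typically configured; the column value is a placeholder:

```js
timefilterControl compact=true column="@timestamp"
| render
```

This renders a compact time filter control, shown as a button that triggers a popover, filtering on the `@timestamp` field.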

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| `column`
Aliases: `c`, `field` | `string` | The column or field that you want to filter.
Default: `"@timestamp"` | +| `compact` | `boolean` | Shows the time filter as a button, which triggers a popover.
Default: `true` | +| `filterGroup` | `string` | The group name for the filter. | + +**Returns:** `render` + + +### `timelion` [timelion_fn] + +Uses Timelion to extract one or more time series from many sources. + +**Accepts:** `filter` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed*
Aliases: `q`, `query` | `string` | A Timelion query.
Default: `".es(*)"` | +| `from` | `string` | The Elasticsearch `datemath` string for the beginning of the time range.
Default: `"now-1y"` | +| `interval` | `string` | The bucket interval for the time series.
Default: `"auto"` | +| `timezone` | `string` | The timezone for the time range. See [https://momentjs.com/timezone/](https://momentjs.com/timezone/).
Default: `"UTC"` | +| `to` | `string` | The Elasticsearch `datemath` string for the end of the time range.
Default: `"now"` | + +**Returns:** `datatable` + + +### `timerange` [timerange_fn] + +An object that represents a span of time. + +**Accepts:** `null` + +| Argument | Type | Description | +| --- | --- | --- | +| `from` * | `string` | The start of the time range | +| `to` * | `string` | The end of the time range | + +**Returns:** `timerange` + + +### `to` [to_fn] + +Explicitly casts the type of the *context* from one type to the specified type. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* †
Alias: `type` | `string` | A known data type in the expression language. | + +**Returns:** Depends on your input and arguments + + +## U [u_fns] + + +### `uiSetting` [uiSetting_fn] + +Returns a UI settings parameter value. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `parameter` | `string` | The parameter name. | | `default` | `any` | A default value to use if the parameter is not set. |

**Returns:** Depends on your input and arguments


### `urlparam` [urlparam_fn]

Retrieves a URL parameter to use in an expression. The [`urlparam`](#urlparam_fn) function always returns a `string`. For example, you can retrieve the value `"20"` from the parameter `myVar` from the URL `https://localhost:5601/app/canvas?myVar=20`.

**Accepts:** `null`

| Argument | Type | Description |
| --- | --- | --- |
| *Unnamed* *
Aliases: `param`, `var`, `variable` | `string` | The URL hash parameter to retrieve. | +| `default` | `string` | The string returned when the URL parameter is unspecified.
Default: `""` | + +**Returns:** `string` + + +## V [v_fns] + + +### `var` [var_fn] + +Updates the Kibana global context. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* *
Alias: `name` | `string` | Specify the name of the variable. | + +**Returns:** Depends on your input and arguments + + +### `var_set` [var_set_fn] + +Updates the Kibana global context. + +**Accepts:** `any` + +| Argument | Type | Description | +| --- | --- | --- | +| *Unnamed* * †
Alias: `name` | `string` | Specify the name of the variable. | +| `value` †
Alias: `val` | `any` | Specify the value for the variable. When unspecified, the input context is used. |

**Returns:** Depends on your input and arguments


diff --git a/reference/data-analysis/kibana/tinymath-functions.md b/reference/data-analysis/kibana/tinymath-functions.md
new file mode 100644
index 0000000000..034662ea69
--- /dev/null
+++ b/reference/data-analysis/kibana/tinymath-functions.md
@@ -0,0 +1,692 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/kibana/current/canvas-tinymath-functions.html
---

# TinyMath functions [canvas-tinymath-functions]

TinyMath provides a set of functions that can be used with the Canvas expression language to perform complex math calculations. Read on for detailed information about the functions available in TinyMath, including what parameters each function accepts, the return value of that function, and examples of how each function behaves.

Most of the functions accept arrays and apply JavaScript Math methods to each element of that array. For the functions that accept multiple arrays as parameters, the function generally does the calculation index by index.

Any function can be wrapped by another function as long as the return type of the inner function matches the acceptable parameter type of the outer function.


## abs( a ) [_abs_a]

Calculates the absolute value of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The absolute value of `a`. Returns an array with the absolute values of each element if `a` is an array.

**Example**

```js
abs(-1) // returns 1
abs(2) // returns 2
abs([-1, -2, 3, -4]) // returns [1, 2, 3, 4]
```


## add( …args ) [_add_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array of numbers is passed into the function, the function will calculate the sum by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The sum of all numbers in `args` if `args` contains only numbers. Returns an array of sums of the elements at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths

**Example**

```js
add(1, 2, 3) // returns 6
add([10, 20, 30, 40], 10, 20, 30) // returns [70, 80, 90, 100]
add([1, 2], 3, [4, 5], 6) // returns [(1 + 3 + 4 + 6), (2 + 3 + 5 + 6)] = [14, 16]
```


## cbrt( a ) [_cbrt_a]

Calculates the cube root of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The cube root of `a`. Returns an array with the cube roots of each element if `a` is an array.

**Example**

```js
cbrt(-27) // returns -3
cbrt(94) // returns 4.546835943776344
cbrt([27, 64, 125]) // returns [3, 4, 5]
```


## ceil( a ) [_ceil_a]

Calculates the ceiling of a number, i.e., rounds a number towards positive infinity. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The ceiling of `a`. Returns an array with the ceilings of each element if `a` is an array.

**Example**

```js
ceil(1.2) // returns 2
ceil(-1.8) // returns -1
ceil([1.1, 2.2, 3.3]) // returns [2, 3, 4]
```


## clamp( …a, min, max ) [_clamp_a_min_max]

Restricts a value to a given range and returns the closest available value. If only `min` is provided, values are restricted to only a lower bound.

| Param | Type | Description |
| --- | --- | --- |
| …a | number \| Array.<number> | one or more numbers or arrays of numbers |
| min | number \| Array.<number> | (optional) The minimum value this function will return. |
| max | number \| Array.<number> | (optional) The maximum value this function will return. |

**Returns**: `number` | `Array.<number>`. The closest value between `min` (inclusive) and `max` (inclusive). Returns an array with values greater than or equal to `min` and less than or equal to `max` (if provided) at each index.

**Throws**:

* `'Array length mismatch'` if `a`, `min`, and/or `max` are arrays of different lengths
* `'Min must be less than max'` if `max` is less than `min`

**Example**

```js
clamp(1, 2, 3) // returns 2
clamp([10, 20, 30, 40], 15, 25) // returns [15, 20, 25, 25]
clamp(10, [15, 2, 4, 20], 25) // returns [15, 10, 10, 20]
clamp(35, 10, [20, 30, 40, 50]) // returns [20, 30, 35, 35]
clamp([1, 9], 3, [4, 5]) // returns [clamp([1, 3, 4]), clamp([9, 3, 5])] = [3, 5]
```


## count( a ) [_count_a]

Returns the length of an array. Alias for size.

| Param | Type | Description |
| --- | --- | --- |
| a | Array.<any> | array of any values |

**Returns**: `number`. The length of the array.

**Throws**: `'Must pass an array'` if `a` is not an array.

**Example**

```js
count([]) // returns 0
count([-1, -2, -3, -4]) // returns 4
count(100) // returns 1
```


## cube( a ) [_cube_a]

Calculates the cube of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The cube of `a`. Returns an array with the cubes of each element if `a` is an array.

**Example**

```js
cube(-3) // returns -27
cube([3, 4, 5]) // returns [27, 64, 125]
```


## divide( a, b ) [_divide_a_b]

Divides two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | dividend, a number or an array of numbers |
| b | number \| Array.<number> | divisor, a number or an array of numbers, b != 0 |

**Returns**: `number` | `Array.<number>`. Returns the quotient of `a` and `b` if both are numbers. Returns an array with the quotients applied index-wise to each element if `a` or `b` is an array.

**Throws**:

* `'Array length mismatch'` if `a` and `b` are arrays with different lengths
* `'Cannot divide by 0'` if `b` equals 0 or contains 0

**Example**

```js
divide(6, 3) // returns 2
divide([10, 20, 30, 40], 10) // returns [1, 2, 3, 4]
divide(10, [1, 2, 5, 10]) // returns [10, 5, 2, 1]
divide([14, 42, 65, 108], [2, 7, 5, 12]) // returns [7, 6, 13, 9]
```


## exp( a ) [_exp_a]

Calculates *e^x* where *e* is Euler’s number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. Returns an array with the values of `e^x` evaluated where `x` is each element of `a` if `a` is an array.

**Example**

```js
exp(2) // returns e^2 = 7.3890560989306495
exp([1, 2, 3]) // returns [e^1, e^2, e^3] = [2.718281828459045, 7.3890560989306495, 20.085536923187668]
```


## first( a ) [_first_a]

Returns the first element of an array. If anything other than an array is passed in, the input is returned.

| Param | Type | Description |
| --- | --- | --- |
| a | Array.<any> | array of any values |

**Returns**: `*`. The first element of `a`. Returns `a` if `a` is not an array.

**Example**

```js
first(2) // returns 2
first([1, 2, 3]) // returns 1
```


## fix( a ) [_fix_a]

Calculates the fix of a number, i.e., rounds a number towards 0. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The fix of `a`. Returns an array with the fixes for each element if `a` is an array.

**Example**

```js
fix(1.2) // returns 1
fix(-1.8) // returns -1
fix([1.8, 2.9, -3.7, -4.6]) // returns [1, 2, -3, -4]
```


## floor( a ) [_floor_a]

Calculates the floor of a number, i.e., rounds a number towards negative infinity. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The floor of `a`. Returns an array with the floor of each element if `a` is an array.

**Example**

```js
floor(1.8) // returns 1
floor(-1.2) // returns -2
floor([1.7, 2.8, 3.9]) // returns [1, 2, 3]
```


## last( a ) [_last_a]

Returns the last element of an array. If anything other than an array is passed in, the input is returned.

| Param | Type | Description |
| --- | --- | --- |
| a | Array.<any> | array of any values |

**Returns**: `*`. The last element of `a`. Returns `a` if `a` is not an array.

**Example**

```js
last(2) // returns 2
last([1, 2, 3]) // returns 3
```


## log( a, b ) [_log_a_b]

Calculates the logarithm of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers, `a` must be greater than 0 |
| b | Object | (optional) base for the logarithm. If no value is provided, the default base is e, and the natural log is calculated. |

**Returns**: `number` | `Array.<number>`. The logarithm of `a`. Returns an array with the logarithms of each element if `a` is an array.

**Throws**:

* `'Base out of range'` if `b` <= 0
* `'Must be greater than 0'` if `a` < 0

**Example**

```js
log(1) // returns 0
log(64, 8) // returns 2
log(42, 5) // returns 2.322344707681546
log([2, 4, 8, 16, 32], 2) // returns [1, 2, 3, 4, 5]
```


## log10( a ) [_log10_a]

Calculates the logarithm base 10 of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers, `a` must be greater than 0 |

**Returns**: `number` | `Array.<number>`. The logarithm of `a`. Returns an array with the logarithms base 10 of each element if `a` is an array.

**Throws**: `'Must be greater than 0'` if `a` < 0

**Example**

```js
log10(10) // returns 1
log10(100) // returns 2
log10(80) // returns 1.9030899869919433
log10([10, 100, 1000, 10000, 100000]) // returns [1, 2, 3, 4, 5]
```


## max( …args ) [_max_args]

Finds the maximum value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the maximum by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The maximum value of all numbers if `args` contains only numbers. Returns an array with the maximum values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths

**Example**

```js
max(1, 2, 3) // returns 3
max([10, 20, 30, 40], 15) // returns [15, 20, 30, 40]
max([1, 9], 4, [3, 5]) // returns [max([1, 4, 3]), max([9, 4, 5])] = [4, 9]
```


## mean( …args ) [_mean_args]

Finds the mean value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mean by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The mean value of all numbers if `args` contains only numbers. Returns an array with the mean values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Example**

```js
mean(1, 2, 3) // returns 2
mean([10, 20, 30, 40], 20) // returns [15, 20, 25, 30]
mean([1, 9], 5, [3, 4]) // returns [mean([1, 5, 3]), mean([9, 5, 4])] = [3, 6]
```


## median( …args ) [_median_args]

Finds the median value(s) of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the median by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The median value of all numbers if `args` contains only numbers. Returns an array with the median values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Example**

```js
median(1, 1, 2, 3) // returns 1.5
median(1, 1, 2, 2, 3) // returns 2
median([10, 20, 30, 40], 10, 20, 30) // returns [15, 20, 25, 25]
median([1, 9], 2, 4, [3, 5]) // returns [median([1, 2, 4, 3]), median([9, 2, 4, 5])] = [2.5, 4.5]
```


## min( …args ) [_min_args]

Finds the minimum value of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the minimum by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The minimum value of all numbers if `args` contains only numbers. Returns an array with the minimum values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Throws**: `'Array length mismatch'` if `args` contains arrays of different lengths.

**Example**

```js
min(1, 2, 3) // returns 1
min([10, 20, 30, 40], 25) // returns [10, 20, 25, 25]
min([1, 9], 4, [3, 5]) // returns [min([1, 4, 3]), min([9, 4, 5])] = [1, 4]
```


## mod( a, b ) [_mod_a_b]

Remainder after dividing two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | dividend, a number or an array of numbers |
| b | number \| Array.<number> | divisor, a number or an array of numbers, b != 0 |

**Returns**: `number` | `Array.<number>`. The remainder of `a` divided by `b` if both are numbers. Returns an array with the remainders applied index-wise to each element if `a` or `b` is an array.

**Throws**:

* `'Array length mismatch'` if `a` and `b` are arrays with different lengths
* `'Cannot divide by 0'` if `b` equals 0 or contains 0

**Example**

```js
mod(10, 7) // returns 3
mod([11, 22, 33, 44], 10) // returns [1, 2, 3, 4]
mod(100, [3, 7, 11, 23]) // returns [1, 2, 1, 8]
mod([14, 42, 65, 108], [5, 4, 14, 2]) // returns [4, 2, 9, 0]
```


## mode( …args ) [_mode_args]

Finds the mode value(s) of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mode by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<Array.<number>>`. An array of mode value(s) of all numbers if `args` contains only numbers. Returns an array of arrays with mode value(s) of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Example**

```js
mode(1, 1, 2, 3) // returns [1]
mode(1, 1, 2, 2, 3) // returns [1,2]
mode([10, 20, 30, 40], 10, 20, 30) // returns [[10], [20], [30], [10, 20, 30, 40]]
mode([1, 9], 1, 4, [3, 5]) // returns [mode([1, 1, 4, 3]), mode([9, 1, 4, 5])] = [[1], [1, 4, 5, 9]]
```


## multiply( a, b ) [_multiply_a_b]

Multiplies two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |
| b | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The product of `a` and `b` if both are numbers. Returns an array with the products applied index-wise to each element if `a` or `b` is an array.

**Throws**: `'Array length mismatch'` if `a` and `b` are arrays with different lengths

**Example**

```js
multiply(6, 3) // returns 18
multiply([10, 20, 30, 40], 10) // returns [100, 200, 300, 400]
multiply(10, [1, 2, 5, 10]) // returns [10, 20, 50, 100]
multiply([1, 2, 3, 4], [2, 7, 5, 12]) // returns [2, 14, 15, 48]
```


## pow( a, b ) [_pow_a_b]

Raises a number to the power of `b`. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |
| b | number | the power that `a` is raised to |

**Returns**: `number` | `Array.<number>`. `a` raised to the power of `b`. Returns an array with each element raised to the power of `b` if `a` is an array.

**Throws**: `'Missing exponent'` if `b` is not provided

**Example**

```js
pow(2,3) // returns 8
pow([1, 2, 3], 4) // returns [1, 16, 81]
```


## random( a, b ) [_random_a_b]

Generates a random number within the given range where the lower bound is inclusive and the upper bound is exclusive. If no numbers are passed in, it will return a number between 0 and 1. If only one number is passed in, it will return a number between 0 and the number passed in.

| Param | Type | Description |
| --- | --- | --- |
| a | number | (optional) must be greater than 0 if `b` is not provided |
| b | number | (optional) must be greater than `a` |

**Returns**: `number`. A random number between 0 and 1 if no numbers are passed in. Returns a random number between 0 and `a` if only one number is passed in. Returns a random number between `a` and `b` if two numbers are passed in.

**Throws**: `'Min must be greater than max'` if `a` < 0 when only `a` is passed in or if `a` > `b` when both `a` and `b` are passed in

**Example**

```js
random() // returns a random number between 0 (inclusive) and 1 (exclusive)
random(10) // returns a random number between 0 (inclusive) and 10 (exclusive)
random(-10,10) // returns a random number between -10 (inclusive) and 10 (exclusive)
```


## range( …args ) [_range_args]

Finds the range of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the range by index.

| Param | Type | Description |
| --- | --- | --- |
| …args | number \| Array.<number> | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.<number>`. The range value of all numbers if `args` contains only numbers. Returns an array with the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

**Example**

```js
range(1, 2, 3) // returns 2
range([10, 20, 30, 40], 15) // returns [5, 5, 15, 25]
range([1, 9], 4, [3, 5]) // returns [range([1, 4, 3]), range([9, 4, 5])] = [3, 5]
```


## round( a, b ) [_round_a_b]

Rounds a number towards the nearest integer by default, or decimal place (if passed in as `b`). For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |
| b | number | (optional) number of decimal places, default value: 0 |

**Returns**: `number` | `Array.<number>`. The rounded value of `a`. Returns an array with the rounded values of each element if `a` is an array.

**Example**

```js
round(1.2) // returns 1
round(-10.51) // returns -11
round(-10.1, 2) // returns -10.1
round(10.93745987, 4) // returns 10.9375
round([2.9234, 5.1234, 3.5234, 4.49234324], 2) // returns [2.92, 5.12, 3.52, 4.49]
```


## size( a ) [_size_a]

Returns the length of an array. Alias for count.

| Param | Type | Description |
| --- | --- | --- |
| a | Array.<any> | array of any values |

**Returns**: `number`. The length of the array.

**Throws**: `'Must pass an array'` if `a` is not an array

**Example**

```js
size([]) // returns 0
size([-1, -2, -3, -4]) // returns 4
size(100) // returns 1
```


## sqrt( a ) [_sqrt_a]

Calculates the square root of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The square root of `a`. Returns an array with the square roots of each element if `a` is an array.

**Throws**: `'Unable find the square root of a negative number'` if `a` < 0

**Example**

```js
sqrt(9) // returns 3
sqrt(30) // returns 5.477225575051661
sqrt([9, 16, 25]) // returns [3, 4, 5]
```


## square( a ) [_square_a]

Calculates the square of a number. For arrays, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The square of `a`. Returns an array with the squares of each element if `a` is an array.

**Example**

```js
square(-3) // returns 9
square([3, 4, 5]) // returns [9, 16, 25]
```


## subtract( a, b ) [_subtract_a_b]

Subtracts two numbers. If at least one array of numbers is passed into the function, the function will be applied index-wise to each element.

| Param | Type | Description |
| --- | --- | --- |
| a | number \| Array.<number> | a number or an array of numbers |
| b | number \| Array.<number> | a number or an array of numbers |

**Returns**: `number` | `Array.<number>`. The difference of `a` and `b` if both are numbers, or an array of differences applied index-wise to each element.

**Throws**: `'Array length mismatch'` if `a` and `b` are arrays with different lengths

**Example**

```js
subtract(6, 3) // returns 3
subtract([10, 20, 30, 40], 10) // returns [0, 10, 20, 30]
subtract(10, [1, 2, 5, 10]) // returns [9, 8, 5, 0]
subtract([14, 42, 65, 108], [2, 7, 5, 12]) // returns [12, 35, 60, 96]
```


## sum( …args ) [_sum_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array is passed, the function will sum the scalar numbers together with every element of each array. Sum accepts arrays of different lengths.

**Returns**: `number`. The sum of one or more numbers/arrays of numbers, including all values in arrays

**Example**

```js
sum(1, 2, 3) // returns 6
sum([10, 20, 30, 40], 10, 20, 30) // returns 160
sum([1, 2], 3, [4, 5], 6) // returns sum(1, 2, 3, 4, 5, 6) = 21
sum([10, 20, 30, 40], 10, [1, 2, 3], 22) // returns sum(10, 20, 30, 40, 10, 1, 2, 3, 22) = 138
```


## unique( a ) [_unique_a]

Counts the number of unique values in an array.

**Returns**: `number`. The number of unique values in the array. Returns 1 if `a` is not an array.

**Example**

```js
unique(100) // returns 1
unique([]) // returns 0
unique([1, 2, 3, 4]) // returns 4
unique([1, 2, 3, 4, 2, 2, 2, 3, 4, 2, 4, 5, 2, 1, 4, 2]) // returns 5
```

diff --git a/reference/data-analysis/machine-learning/machine-learning-functions.md b/reference/data-analysis/machine-learning/machine-learning-functions.md
new file mode 100644
index 0000000000..d5966b9648
--- /dev/null
+++ b/reference/data-analysis/machine-learning/machine-learning-functions.md
@@ -0,0 +1,24 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/machine-learning/current/ml-functions.html
---

# Function reference [ml-functions]

The {{ml-features}} include analysis functions that provide a wide variety of flexible ways to analyze data for anomalies.

When you create {{anomaly-jobs}}, you specify one or more detectors, which define the type of analysis that needs to be done. If you are creating your job by using {{ml}} APIs, you specify the functions in detector configuration objects. If you are creating your job in {{kib}}, you specify the functions differently depending on whether you are creating single metric, multi-metric, or advanced jobs.

Most functions detect anomalies in both low and high values. In statistical terminology, they apply a two-sided test. Some functions offer low and high variations (for example, `count`, `low_count`, and `high_count`). These variations apply one-sided tests, detecting anomalies only when the values are low or high, depending on which alternative is used.

You can specify a `summary_count_field_name` with any function except `metric`. When you use `summary_count_field_name`, the {{ml}} features expect the input data to be pre-aggregated. The value of the `summary_count_field_name` field must contain the count of raw events that were summarized. In {{kib}}, use the **summary_count_field_name** in advanced {{anomaly-jobs}}. Analyzing aggregated input data provides a significant boost in performance. For more information, see [Aggregating data for faster performance](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-aggregation.md).

If your data is sparse, there may be gaps in the data, which means you might have empty buckets. You might want to treat these as anomalies or you might want these gaps to be ignored. Your decision depends on your use case and what is important to you. It also depends on which functions you use. The `sum` and `count` functions are strongly affected by empty buckets. For this reason, there are `non_null_sum` and `non_zero_count` functions, which are tolerant to sparse data. These functions effectively ignore empty buckets.
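
For example, a minimal job that tolerates sparse data can be created like the following sketch; the job name is illustrative, and the request shape mirrors the examples in the count functions reference:

```console
PUT _ml/anomaly_detectors/sparse_data_example
{
  "analysis_config": {
    "detectors": [{
      "function" : "non_zero_count"
    }]
  },
  "data_description": {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
```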

* [Count functions](/reference/data-analysis/machine-learning/ml-count-functions.md)
* [Geographic functions](/reference/data-analysis/machine-learning/ml-geo-functions.md)
* [Information content functions](/reference/data-analysis/machine-learning/ml-info-functions.md)
* [Metric functions](/reference/data-analysis/machine-learning/ml-metric-functions.md)
* [Rare functions](/reference/data-analysis/machine-learning/ml-rare-functions.md)
* [Sum functions](/reference/data-analysis/machine-learning/ml-sum-functions.md)
* [Time functions](/reference/data-analysis/machine-learning/ml-time-functions.md)
diff --git a/reference/data-analysis/machine-learning/ml-count-functions.md b/reference/data-analysis/machine-learning/ml-count-functions.md
new file mode 100644
index 0000000000..f3ce220070
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-count-functions.md
@@ -0,0 +1,224 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/machine-learning/current/ml-count-functions.html
---

# Count functions [ml-count-functions]

Count functions detect anomalies when the number of events in a bucket is anomalous.

Use `non_zero_count` functions if your data is sparse and you want to ignore cases where the bucket count is zero.

Use `distinct_count` functions to determine when the number of distinct values in one field is unusual, as opposed to the total count.

Use high-sided functions if you want to monitor unusually high event rates. Use low-sided functions if you want to look at drops in event rate.

The {{ml-features}} include the following count functions:

* [`count`, `high_count`, `low_count`](ml-count-functions.md#ml-count)
* [`non_zero_count`, `high_non_zero_count`, `low_non_zero_count`](ml-count-functions.md#ml-nonzero-count)
* [`distinct_count`, `high_distinct_count`, `low_distinct_count`](ml-count-functions.md#ml-distinct-count)


## Count, high_count, low_count [ml-count]

The `count` function detects anomalies when the number of events in a bucket is anomalous.

The `high_count` function detects anomalies when the count of events in a bucket is unusually high.

The `low_count` function detects anomalies when the count of events in a bucket is unusually low.

These functions support the following properties:

* `by_field_name` (optional)
* `over_field_name` (optional)
* `partition_field_name` (optional)

For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).

```console
PUT _ml/anomaly_detectors/example1
{
  "analysis_config": {
    "detectors": [{
      "function" : "count"
    }]
  },
  "data_description": {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
```

This example is probably the simplest possible analysis. It identifies time buckets during which the overall count of events is higher or lower than usual.

When you use this function in a detector in your {{anomaly-job}}, it models the event rate and detects when the event rate is unusual compared to its past behavior.

```console
PUT _ml/anomaly_detectors/example2
{
  "analysis_config": {
    "detectors": [{
      "function" : "high_count",
      "by_field_name" : "error_code",
      "over_field_name": "user"
    }]
  },
  "data_description": {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
```

If you use this `high_count` function in a detector in your {{anomaly-job}}, it models the event rate for each error code.
It detects users that generate an unusually high count of error codes compared to other users. + +```console +PUT _ml/anomaly_detectors/example3 +{ + "analysis_config": { + "detectors": [{ + "function" : "low_count", + "by_field_name" : "status_code" + }] + }, + "data_description": { + "time_field":"timestamp", + "time_format": "epoch_ms" + } +} +``` + +In this example, the function detects when the count of events for a status code is lower than usual. + +When you use this function in a detector in your {{anomaly-job}}, it models the event rate for each status code and detects when a status code has an unusually low count compared to its past behavior. + +```console +PUT _ml/anomaly_detectors/example4 +{ + "analysis_config": { + "summary_count_field_name" : "events_per_min", + "detectors": [{ + "function" : "count" + }] + }, + "data_description": { + "time_field":"timestamp", + "time_format": "epoch_ms" + } +} +``` + +If you are analyzing an aggregated `events_per_min` field, do not use a sum function (for example, `sum(events_per_min)`). Instead, use the count function and the `summary_count_field_name` property. For more information, see [Aggregating data for faster performance](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-aggregation.md). + + +## Non_zero_count, high_non_zero_count, low_non_zero_count [ml-nonzero-count] + +The `non_zero_count` function detects anomalies when the number of events in a bucket is anomalous, but it ignores cases where the bucket count is zero. Use this function if you know your data is sparse or has gaps and the gaps are not important. + +The `high_non_zero_count` function detects anomalies when the number of events in a bucket is unusually high and it ignores cases where the bucket count is zero. + +The `low_non_zero_count` function detects anomalies when the number of events in a bucket is unusually low and it ignores cases where the bucket count is zero. + +These functions support the following properties: + +* `by_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +For example, if you have the following number of events per bucket: + +::::{admonition} +1,22,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,43,31,0,0,0,0,0,0,0,0,0,0,0,0,2,1 + +:::: + + +The `non_zero_count` function models only the following data: + +::::{admonition} +1,22,2,43,31,2,1 + +:::: + + +```console +PUT _ml/anomaly_detectors/example5 +{ + "analysis_config": { + "detectors": [{ + "function" : "high_non_zero_count", + "by_field_name" : "signaturename" + }] + }, + "data_description": { + "time_field":"timestamp", + "time_format": "epoch_ms" + } +} +``` + +If you use this `high_non_zero_count` function in a detector in your {{anomaly-job}}, it models the count of events for the `signaturename` field. It ignores any buckets where the count is zero and detects when a `signaturename` value has an unusually high count of events compared to its past behavior. + +::::{note} +Population analysis (using an `over_field_name` property value) is not supported for the `non_zero_count`, `high_non_zero_count`, and `low_non_zero_count` functions. If you want to do population analysis and your data is sparse, use the `count` functions, which are optimized for that scenario. 
+:::: + + + +## Distinct_count, high_distinct_count, low_distinct_count [ml-distinct-count] + +The `distinct_count` function detects anomalies where the number of distinct values in one field is unusual. + +The `high_distinct_count` function detects unusually high numbers of distinct values in one field. + +The `low_distinct_count` function detects unusually low numbers of distinct values in one field. + +These functions support the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```console +PUT _ml/anomaly_detectors/example6 +{ + "analysis_config": { + "detectors": [{ + "function" : "distinct_count", + "field_name" : "user" + }] + }, + "data_description": { + "time_field":"timestamp", + "time_format": "epoch_ms" + } +} +``` + +This `distinct_count` function detects when a system has an unusual number of logged in users. When you use this function in a detector in your {{anomaly-job}}, it models the distinct count of users. It also detects when the distinct number of users is unusual compared to the past. + +```console +PUT _ml/anomaly_detectors/example7 +{ + "analysis_config": { + "detectors": [{ + "function" : "high_distinct_count", + "field_name" : "dst_port", + "over_field_name": "src_ip" + }] + }, + "data_description": { + "time_field":"timestamp", + "time_format": "epoch_ms" + } +} +``` + +This example detects instances of port scanning. When you use this function in a detector in your {{anomaly-job}}, it models the distinct count of ports. It also detects the `src_ip` values that connect to an unusually high number of different `dst_ports` values compared to other `src_ip` values. + diff --git a/reference/data-analysis/machine-learning/ml-geo-functions.md b/reference/data-analysis/machine-learning/ml-geo-functions.md new file mode 100644 index 0000000000..011b632d29 --- /dev/null +++ b/reference/data-analysis/machine-learning/ml-geo-functions.md @@ -0,0 +1,70 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ml-geo-functions.html +--- + +# Geographic functions [ml-geo-functions] + +The geographic functions detect anomalies in the geographic location of the input data. + +The {{ml-features}} include the following geographic function: `lat_long`. + +::::{note} +You cannot create forecasts for {{anomaly-jobs}} that contain geographic functions. You also cannot add rules with conditions to detectors that use geographic functions. +:::: + + + +## Lat_long [ml-lat-long] + +The `lat_long` function detects anomalies in the geographic location of the input data. + +This function supports the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). 

```console
PUT _ml/anomaly_detectors/example1
{
  "analysis_config": {
    "detectors": [{
      "function" : "lat_long",
      "field_name" : "transaction_coordinates",
      "by_field_name" : "credit_card_number"
    }]
  },
  "data_description": {
    "time_field":"timestamp",
    "time_format": "epoch_ms"
  }
}
```

If you use this `lat_long` function in a detector in your {{anomaly-job}}, it detects anomalies where the geographic location of a credit card transaction is unusual for a particular customer’s credit card. An anomaly might indicate fraud.

A "typical" value indicates a centroid of a cluster of previously observed locations that is closest to the "actual" location at that time. For example, there may be one centroid near the person’s home that is associated with the cluster of local grocery stores and restaurants, and another centroid near the person’s work associated with the cluster of lunch and coffee places.

::::{important}
The `field_name` that you supply must be a single string that contains two comma-separated numbers of the form `latitude,longitude`, a `geo_point` field, a `geo_shape` field that contains point values, or a `geo_centroid` aggregation. The `latitude` and `longitude` must be in the range -180 to 180 and represent a point on the surface of the Earth.
::::


For example, JSON data might contain the following transaction coordinates:

```js
{
  "time": 1460464275,
  "transaction_coordinates": "40.7,-74.0",
  "credit_card_number": "1234123412341234"
}
```

In {{es}}, location data is likely to be stored in `geo_point` fields. For more information, see [`geo_point` data type](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-point.md). This data type is supported natively in {{ml-features}}. Specifically, when pulling data from a `geo_point` field, a {{dfeed}} will transform the data into the appropriate `lat,lon` string format before sending it to the {{anomaly-job}}.

For more information, see [Altering data in your {{dfeed}} with runtime fields](/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md).

diff --git a/reference/data-analysis/machine-learning/ml-info-functions.md b/reference/data-analysis/machine-learning/ml-info-functions.md
new file mode 100644
index 0000000000..5907a3a45d
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-info-functions.md
@@ -0,0 +1,64 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/machine-learning/current/ml-info-functions.html
---

# Information content functions [ml-info-functions]

The information content functions detect anomalies in the amount of information that is contained in strings within a bucket. These functions can be used as a more sophisticated method to identify incidences of data exfiltration or C2C activity, when analyzing the size in bytes of the data might not be sufficient.

The {{ml-features}} include the following information content functions:

* `info_content`, `high_info_content`, `low_info_content`


## Info_content, High_info_content, Low_info_content [ml-info-content]

The `info_content` function detects anomalies in the amount of information that is contained in strings in a bucket.

If you want to monitor for unusually high amounts of information, use `high_info_content`. If you want to look at drops in information content, use `low_info_content`.
+ +These functions support the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "info_content", + "field_name" : "subdomain", + "over_field_name" : "highest_registered_domain" +} +``` + +If you use this `info_content` function in a detector in your {{anomaly-job}}, it models information that is present in the `subdomain` string. It detects anomalies where the information content is unusual compared to the other `highest_registered_domain` values. An anomaly could indicate an abuse of the DNS protocol, such as malicious command and control activity. + +::::{note} +In this example, both high and low values are considered anomalous. In many use cases, the `high_info_content` function is often a more appropriate choice. +:::: + + +```js +{ + "function" : "high_info_content", + "field_name" : "query", + "over_field_name" : "src_ip" +} +``` + +If you use this `high_info_content` function in a detector in your {{anomaly-job}}, it models information content that is held in the DNS query string. It detects `src_ip` values where the information content is unusually high compared to other `src_ip` values. This example is similar to the example for the `info_content` function, but it reports anomalies only where the amount of information content is higher than expected. + +```js +{ + "function" : "low_info_content", + "field_name" : "message", + "by_field_name" : "logfilename" +} +``` + +If you use this `low_info_content` function in a detector in your {{anomaly-job}}, it models information content that is present in the message string for each `logfilename`. It detects anomalies where the information content is low compared to its past behavior. For example, this function detects unusually low amounts of information in a collection of rolling log files. Low information might indicate that a process has entered an infinite loop or that logging features have been disabled. + diff --git a/reference/data-analysis/machine-learning/ml-metric-functions.md b/reference/data-analysis/machine-learning/ml-metric-functions.md new file mode 100644 index 0000000000..84139a6fde --- /dev/null +++ b/reference/data-analysis/machine-learning/ml-metric-functions.md @@ -0,0 +1,240 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ml-metric-functions.html +--- + +# Metric functions [ml-metric-functions] + +The metric functions include functions such as mean, min and max. These values are calculated for each bucket. Field values that cannot be converted to double precision floating point numbers are ignored. + +The {{ml-features}} include the following metric functions: + +* [`min`](ml-metric-functions.md#ml-metric-min) +* [`max`](ml-metric-functions.md#ml-metric-max) +* [`median`, `high_median`, `low_median`](ml-metric-functions.md#ml-metric-median) +* [`mean`, `high_mean`, `low_mean`](ml-metric-functions.md#ml-metric-mean) +* [`metric`](ml-metric-functions.md#ml-metric-metric) +* [`varp`, `high_varp`, `low_varp`](ml-metric-functions.md#ml-metric-varp) + +::::{note} +You cannot add rules with conditions to detectors that use the `metric` function. +:::: + + + +## Min [ml-metric-min] + +The `min` function detects anomalies in the arithmetic minimum of a value. 
The minimum value is calculated for each bucket. + +High- and low-sided functions are not applicable. + +This function supports the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "min", + "field_name" : "amt", + "by_field_name" : "product" +} +``` + +If you use this `min` function in a detector in your {{anomaly-job}}, it detects where the smallest transaction is lower than previously observed. You can use this function to detect items for sale at unintentionally low prices due to data entry mistakes. It models the minimum amount for each product over time. + + +## Max [ml-metric-max] + +The `max` function detects anomalies in the arithmetic maximum of a value. The maximum value is calculated for each bucket. + +High- and low-sided functions are not applicable. + +This function supports the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "max", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `max` function in a detector in your {{anomaly-job}}, it detects where the longest `responsetime` is longer than previously observed. You can use this function to detect applications that have `responsetime` values that are unusually lengthy. It models the maximum `responsetime` for each application over time and detects when the longest `responsetime` is unusually long compared to previous applications. + +```js +{ + "function" : "max", + "field_name" : "responsetime", + "by_field_name" : "application" +}, +{ + "function" : "high_mean", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +The analysis in the previous example can be performed alongside `high_mean` functions by application. By combining detectors and using the same influencer this job can detect both unusually long individual response times and average response times for each bucket. + + +## Median, high_median, low_median [ml-metric-median] + +The `median` function detects anomalies in the statistical median of a value. The median value is calculated for each bucket. + +If you want to monitor unusually high median values, use the `high_median` function. + +If you are just interested in unusually low median values, use the `low_median` function. + +These functions support the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "median", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `median` function in a detector in your {{anomaly-job}}, it models the median `responsetime` for each application over time. It detects when the median `responsetime` is unusual compared to previous `responsetime` values. 
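
For example, a detector that uses the one-sided `high_median` variant looks similar; the field names here are illustrative:

```js
{
  "function" : "high_median",
  "field_name" : "responsetime",
  "by_field_name" : "application"
}
```

If you use this `high_median` function in a detector in your {{anomaly-job}}, it detects when the median `responsetime` is unusually high compared to previous `responsetime` values.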
+ + +## Mean, high_mean, low_mean [ml-metric-mean] + +The `mean` function detects anomalies in the arithmetic mean of a value. The mean value is calculated for each bucket. + +If you want to monitor unusually high average values, use the `high_mean` function. + +If you are just interested in unusually low average values, use the `low_mean` function. + +These functions support the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "mean", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusual compared to previous `responsetime` values. + +```js +{ + "function" : "high_mean", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `high_mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusually high compared to previous `responsetime` values. + +```js +{ + "function" : "low_mean", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `low_mean` function in a detector in your {{anomaly-job}}, it models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusually low compared to previous `responsetime` values. + + +## Metric [ml-metric-metric] + +The `metric` function combines `min`, `max`, and `mean` functions. You can use it as a shorthand for a combined analysis. If you do not specify a function in a detector, this is the default function. + +High- and low-sided functions are not applicable. You cannot use this function when a `summary_count_field_name` is specified. + +This function supports the following properties: + +* `field_name` (required) +* `by_field_name` (optional) +* `over_field_name` (optional) +* `partition_field_name` (optional) + +For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job). + +```js +{ + "function" : "metric", + "field_name" : "responsetime", + "by_field_name" : "application" +} +``` + +If you use this `metric` function in a detector in your {{anomaly-job}}, it models the mean, min, and max `responsetime` for each application over time. It detects when the mean, min, or max `responsetime` is unusual compared to previous `responsetime` values. + + +## Varp, high_varp, low_varp [ml-metric-varp] + +The `varp` function detects anomalies in the variance of a value which is a measure of the variability and spread in the data. + +If you want to monitor unusually high variance, use the `high_varp` function. + +If you are just interested in unusually low variance, use the `low_varp` function. 
+
+
+## Varp, high_varp, low_varp [ml-metric-varp]
+
+The `varp` function detects anomalies in the variance of a value, which is a measure of the variability and spread in the data.
+
+If you want to monitor unusually high variance, use the `high_varp` function.
+
+If you are just interested in unusually low variance, use the `low_varp` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "varp",
+  "field_name" : "responsetime",
+  "by_field_name" : "application"
+}
+```
+
+If you use this `varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusual compared to past application behavior.
+
+```js
+{
+  "function" : "high_varp",
+  "field_name" : "responsetime",
+  "by_field_name" : "application"
+}
+```
+
+If you use this `high_varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusually high compared to past application behavior.
+
+```js
+{
+  "function" : "low_varp",
+  "field_name" : "responsetime",
+  "by_field_name" : "application"
+}
+```
+
+If you use this `low_varp` function in a detector in your {{anomaly-job}}, it models the variance in values of `responsetime` for each application over time. It detects when the variance in `responsetime` is unusually low compared to past application behavior.
+
diff --git a/reference/data-analysis/machine-learning/ml-rare-functions.md b/reference/data-analysis/machine-learning/ml-rare-functions.md
new file mode 100644
index 0000000000..fda99190a9
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-rare-functions.md
@@ -0,0 +1,91 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ml-rare-functions.html
+---
+
+# Rare functions [ml-rare-functions]
+
+The rare functions detect values that occur rarely in time or rarely for a population.
+
+The `rare` analysis detects anomalies according to the number of distinct rare values. This differs from `freq_rare`, which detects anomalies according to the number of times (frequency) rare values occur.
+
+::::{note}
+* The `rare` and `freq_rare` functions should not be used in conjunction with `exclude_frequent`.
+* You cannot create forecasts for {{anomaly-jobs}} that contain `rare` or `freq_rare` functions.
+* You cannot add rules with conditions to detectors that use `rare` or `freq_rare` functions.
+* Shorter bucket spans (less than 1 hour, for example) are recommended when looking for rare events. The functions model whether something happens in a bucket at least once. With longer bucket spans, it is more likely that entities will be seen in a bucket and therefore they appear less rare. Picking the ideal bucket span depends on the characteristics of the data, with shorter bucket spans typically being measured in minutes, not hours.
+* To model rare data, a learning period of at least 20 buckets is required for typical data.
+
+::::
+
+
+The {{ml-features}} include the following rare functions:
+
+* [`rare`](ml-rare-functions.md#ml-rare)
+* [`freq_rare`](ml-rare-functions.md#ml-freq-rare)
+
+
+## Rare [ml-rare]
+
+The `rare` function detects values that occur rarely in time or rarely for a population. It detects anomalies according to the number of distinct rare values.
+
+This function supports the following properties:
+
+* `by_field_name` (required)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "rare",
+  "by_field_name" : "status"
+}
+```
+
+If you use this `rare` function in a detector in your {{anomaly-job}}, it detects values that are rare in time. It models status codes that occur over time and detects when rare status codes occur compared to the past. For example, you can detect status codes in a web access log that have never (or rarely) occurred before.
+
+```js
+{
+  "function" : "rare",
+  "by_field_name" : "status",
+  "over_field_name" : "clientip"
+}
+```
+
+If you use this `rare` function in a detector in your {{anomaly-job}}, it detects values that are rare in a population. It models status code and client IP interactions that occur. It defines a rare status code as one that occurs for few client IP values compared to the population. It detects client IP values that experience one or more distinct rare status codes compared to the population. For example, in a web access log, a `clientip` that experiences the highest number of different rare status codes compared to the population is regarded as highly anomalous. This analysis is based on the number of different status code values, not the count of occurrences.
+
+::::{note}
+To define a status code as rare, the {{ml-features}} look at the number of distinct status codes that occur, not the number of times the status code occurs. If a single client IP experiences a single unique status code, this is rare, even if it occurs for that client IP in every bucket.
+::::
+
+
+
+## Freq_rare [ml-freq-rare]
+
+The `freq_rare` function detects values that occur rarely for a population. It detects anomalies according to the number of times (frequency) that rare values occur.
+
+This function supports the following properties:
+
+* `by_field_name` (required)
+* `over_field_name` (required)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "freq_rare",
+  "by_field_name" : "uri",
+  "over_field_name" : "clientip"
+}
+```
+
+If you use this `freq_rare` function in a detector in your {{anomaly-job}}, it detects values that are frequently rare in a population. It models URI paths and client IP interactions that occur. It defines a rare URI path as one that is visited by few client IP values compared to the population. It detects the client IP values that experience many interactions with rare URI paths compared to the population. For example, in a web access log, a client IP that visits one or more rare URI paths many times compared to the population is regarded as highly anomalous. This analysis is based on the count of interactions with rare URI paths, not the number of different URI path values.
+
+::::{note}
+A URI path is defined as rare in the same way as the status codes above: the analytics consider the number of distinct values that occur, not the number of times the URI path occurs. If a single client IP visits a single unique URI path, this is rare, even if it occurs for that client IP in every bucket.
+::::
+
+
diff --git a/reference/data-analysis/machine-learning/ml-sum-functions.md b/reference/data-analysis/machine-learning/ml-sum-functions.md
new file mode 100644
index 0000000000..489b663c96
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-sum-functions.md
@@ -0,0 +1,91 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ml-sum-functions.html
+---
+
+# Sum functions [ml-sum-functions]
+
+The sum functions detect anomalies when the sum of a field in a bucket is anomalous.
+
+If you want to monitor unusually high totals, use high-sided functions.
+
+If you want to look at drops in totals, use low-sided functions.
+
+If your data is sparse, use `non_null_sum` functions. Buckets without values are ignored; buckets with a zero value are analyzed.
+
+The {{ml-features}} include the following sum functions:
+
+* [`sum`, `high_sum`, `low_sum`](ml-sum-functions.md#ml-sum)
+* [`non_null_sum`, `high_non_null_sum`, `low_non_null_sum`](ml-sum-functions.md#ml-nonnull-sum)
+
+
+## Sum, high_sum, low_sum [ml-sum]
+
+The `sum` function detects anomalies where the sum of a field in a bucket is anomalous.
+
+If you want to monitor unusually high sum values, use the `high_sum` function.
+
+If you want to monitor unusually low sum values, use the `low_sum` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "sum",
+  "field_name" : "expenses",
+  "by_field_name" : "costcenter",
+  "over_field_name" : "employee"
+}
+```
+
+If you use this `sum` function in a detector in your {{anomaly-job}}, it models total expenses per employee for each cost center. For each time bucket, it detects when an employee’s expenses are unusual for a cost center compared to other employees.
+
+```js
+{
+  "function" : "high_sum",
+  "field_name" : "cs_bytes",
+  "over_field_name" : "cs_host"
+}
+```
+
+If you use this `high_sum` function in a detector in your {{anomaly-job}}, it models total `cs_bytes`. It detects `cs_hosts` that transfer unusually high volumes compared to other `cs_hosts`. This example looks for volumes of data transferred from a client to a server on the internet that are unusual compared to other clients. This scenario could be useful to detect data exfiltration or to find users that are abusing internet privileges.
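+
+Conversely, a `low_sum` detector over the same fields (a hypothetical variant of the example above, not taken from a shipped configuration) would detect `cs_hosts` that transfer unusually low volumes compared to other `cs_hosts`:
+
+```js
+{
+  "function" : "low_sum",
+  "field_name" : "cs_bytes",
+  "over_field_name" : "cs_host"
+}
+```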
+
+
+## Non_null_sum, high_non_null_sum, low_non_null_sum [ml-nonnull-sum]
+
+The `non_null_sum` function is useful if your data is sparse. Buckets without values are ignored and buckets with a zero value are analyzed.
+
+If you want to monitor unusually high totals, use the `high_non_null_sum` function.
+
+If you want to look at drops in totals, use the `low_non_null_sum` function.
+
+These functions support the following properties:
+
+* `field_name` (required)
+* `by_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+::::{note}
+Population analysis (that is to say, use of the `over_field_name` property) is not applicable for these functions.
+::::
+
+
+```js
+{
+  "function" : "high_non_null_sum",
+  "field_name" : "amount_approved",
+  "by_field_name" : "employee"
+}
+```
+
+If you use this `high_non_null_sum` function in a detector in your {{anomaly-job}}, it models the total `amount_approved` for each employee. It ignores any buckets where the amount is null. It detects employees who approve unusually high amounts compared to their past behavior.
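+
+To catch the opposite case, a hypothetical `low_non_null_sum` detector with the same properties would detect employees who approve unusually low amounts compared to their past behavior, again ignoring empty buckets:
+
+```js
+{
+  "function" : "low_non_null_sum",
+  "field_name" : "amount_approved",
+  "by_field_name" : "employee"
+}
+```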
+
diff --git a/reference/data-analysis/machine-learning/ml-time-functions.md b/reference/data-analysis/machine-learning/ml-time-functions.md
new file mode 100644
index 0000000000..d05cd43052
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ml-time-functions.md
@@ -0,0 +1,76 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ml-time-functions.html
+---
+
+# Time functions [ml-time-functions]
+
+The time functions detect events that happen at unusual times, either of the day or of the week. These functions can be used to find unusual patterns of behavior, typically associated with suspicious user activity.
+
+The {{ml-features}} include the following time functions:
+
+* [`time_of_day`](ml-time-functions.md#ml-time-of-day)
+* [`time_of_week`](ml-time-functions.md#ml-time-of-week)
+
+::::{note}
+* You cannot create forecasts for {{anomaly-jobs}} that contain time functions.
+* The `time_of_day` function is not aware of the difference between days, for instance work days and weekends. When modeling different days, use the `time_of_week` function. In general, the `time_of_week` function is more suited to modeling the behavior of people rather than machines, as people vary their behavior according to the day of the week.
+* Shorter bucket spans (for example, 10 minutes) are recommended when performing a `time_of_day` or `time_of_week` analysis. The times of the events being modeled are not affected by the bucket span, but a shorter bucket span enables quicker alerting on unusual events.
+* Unusual events are flagged based on the previous pattern of the data, not on what we might think of as unusual based on human experience. So, if events typically occur between 3 a.m. and 5 a.m., an event occurring at 3 p.m. is flagged as unusual.
+* When Daylight Saving Time starts or stops, regular events can be flagged as anomalous. This situation occurs because the actual time of the event (as measured against a UTC baseline) has changed. This situation is treated as a step change in behavior and the new times will be learned quickly.
+
+::::
+
+
+
+## Time_of_day [ml-time-of-day]
+
+The `time_of_day` function detects when events occur that are outside normal usage patterns. For example, it detects unusual activity in the middle of the night.
+
+The function expects daily behavior to be similar. If you expect the behavior of your data to differ on Saturdays compared to Wednesdays, the `time_of_week` function is more appropriate.
+
+This function supports the following properties:
+
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "time_of_day",
+  "by_field_name" : "process"
+}
+```
+
+If you use this `time_of_day` function in a detector in your {{anomaly-job}}, it models when events occur throughout a day for each process. It detects when an event occurs for a process at an unusual time of day compared to its past behavior.
+
+
+## Time_of_week [ml-time-of-week]
+
+The `time_of_week` function detects when events occur that are outside normal usage patterns. For example, it detects login events on the weekend.
+
+::::{important}
+The `time_of_week` function models time in epoch seconds modulo the duration of a week in seconds. This means that the `typical` and `actual` values are the number of seconds after a whole number of weeks since midnight on 1 January 1970 UTC, which was a Thursday. For example, a value of `475` is 475 seconds after midnight on Thursday in UTC.
+::::
+
+
+This function supports the following properties:
+
+* `by_field_name` (optional)
+* `over_field_name` (optional)
+* `partition_field_name` (optional)
+
+For more information about those properties, see the [create {{anomaly-jobs}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job).
+
+```js
+{
+  "function" : "time_of_week",
+  "by_field_name" : "eventcode",
+  "over_field_name" : "workstation"
+}
+```
+
+If you use this `time_of_week` function in a detector in your {{anomaly-job}}, it models when events occur throughout the week for each `eventcode`. It detects when a workstation event occurs at an unusual time during the week for that `eventcode` compared to other workstations. It detects events for a particular workstation that are outside the normal usage pattern.
+
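+To make the week-modulo encoding described in the important admonition above concrete, here is a minimal sketch of the arithmetic (illustrative only; `eventTimestampMs` is an assumed input, and none of this is part of the job configuration):
+
+```js
+// time_of_week encodes an event time as epoch seconds modulo one week.
+// Epoch 0 is midnight on Thursday, 1 January 1970 UTC, so a result of 0
+// corresponds to midnight on Thursday and 475 to 00:07:55 on Thursday.
+const SECONDS_PER_WEEK = 7 * 24 * 60 * 60; // 604800
+const epochSeconds = Math.floor(eventTimestampMs / 1000); // assumed input: event time in ms since the epoch
+const timeOfWeek = epochSeconds % SECONDS_PER_WEEK;
+```
+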
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md
new file mode 100644
index 0000000000..053620845a
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md
@@ -0,0 +1,41 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-apache.html
+---
+
+# Apache {{anomaly-detect}} configurations [ootb-ml-jobs-apache]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use the Apache integration in {{fleet}} or you use {{filebeat}} to ship access logs from your [Apache](https://httpd.apache.org/) HTTP servers to {{es}}. The jobs assume that you use fields and data types from the Elastic Common Schema (ECS).
+
+
+## Apache access logs [apache-access-logs]
+
+These {{anomaly-jobs}} find unusual activity in HTTP access logs.
+
+For more details, see the {{dfeed}} and job definitions in [GitHub](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json). Note that these jobs are available in {{kib}} only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L11).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_apache | Detects low request rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L215) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L370) |
+| source_ip_request_rate_apache | Detects unusual source IPs - high request rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L176) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L349) |
+| source_ip_url_count_apache | Detects unusual source IPs - high distinct count of URLs. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L136) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L328) |
+| status_code_rate_apache | Detects unusual status code rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L90) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L307) |
+| visitor_rate_apache | Detects unusual visitor rates. | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L47) | [code](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L260) |
+
+
+## Apache access logs ({{filebeat}}) [apache-access-logs-filebeat]
+
+These legacy {{anomaly-jobs}} find unusual activity in HTTP access logs. For the latest versions, install the Apache integration in {{fleet}}; see [Apache access logs](ootb-ml-jobs-apache.md#apache-access-logs).
+
+For more details, see the {{dfeed}} and job definitions in [GitHub](https://github.com/elastic/kibana/tree/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml).
+
+These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/manifest.json#L8).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| low_request_rate_ecs | Detects low request rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/low_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_low_request_rate_ecs.json) |
+| source_ip_request_rate_ecs | Detects unusual source IPs - high request rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/source_ip_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_source_ip_request_rate_ecs.json) |
+| source_ip_url_count_ecs | Detects unusual source IPs - high distinct count of URLs (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/source_ip_url_count_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_source_ip_url_count_ecs.json) |
+| status_code_rate_ecs | Detects unusual status code rates (ECS). | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/status_code_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_status_code_rate_ecs.json) |
+| visitor_rate_ecs | Detects unusual visitor rates (ECS).
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/visitor_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apache_ecs/ml/datafeed_visitor_rate_ecs.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md
new file mode 100644
index 0000000000..1db0ac0231
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-apm.html
+---
+
+# APM {{anomaly-detect}} configurations [ootb-ml-jobs-apm]
+
+This {{anomaly-job}} appears in the {{apm-app}} and the {{ml-app}} app when you have data from APM Agents or an APM Server in your cluster. It is available only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/manifest.json).
+
+For more information about {{anomaly-detect}} in the {{apm-app}}, refer to [{{ml-cap}} integration](/solutions/observability/apps/integrate-with-machine-learning.md).
+
+
+## Transactions [apm-transaction-jobs]
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| apm_tx_metrics | Detects anomalies in transaction latency, throughput, and error percentage for metric data. | [code](https://github.com/elastic/kibana/blob/main/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/ml/apm_tx_metrics.json) | [code](https://github.com/elastic/kibana/blob/main/x-pack/plugins/ml/server/models/data_recognizer/modules/apm_transaction/ml/datafeed_apm_tx_metrics.json) |
+
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
new file mode 100644
index 0000000000..ce1f7d8b92
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
@@ -0,0 +1,21 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-auditbeat.html
+---
+
+# {{auditbeat}} {{anomaly-detect}} configurations [ootb-ml-jobs-auditbeat]
+
+These {{anomaly-job}} wizards appear in {{kib}} if you use [{{auditbeat}}](beats://docs/reference/auditbeat/auditbeat.md) to audit process activity on your systems. For more details, see the {{dfeed}} and job definitions in GitHub.
+
+
+## Auditbeat docker processes [auditbeat-process-docker-ecs]
+
+Detect unusual processes in docker containers from auditd data (ECS).
+
+These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/manifest.json#L8).
+ +| Name | Description | Job (JSON)| Datafeed | +| --- | --- | --- | --- | +| docker_high_count_process_events_ecs | Detect unusual increases in process execution rates in docker containers (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/docker_high_count_process_events_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/datafeed_docker_high_count_process_events_ecs.json) | +| docker_rare_process_activity_ecs | Detect rare process executions in docker containers (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/docker_rare_process_activity_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/auditbeat_process_docker_ecs/ml/datafeed_docker_rare_process_activity_ecs.json) | + diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md new file mode 100644 index 0000000000..245d55022d --- /dev/null +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md @@ -0,0 +1,27 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-logs-ui.html +--- + +# Logs {{anomaly-detect}} configurations [ootb-ml-jobs-logs-ui] + +These {{anomaly-jobs}} appear by default in the [{{logs-app}}](/solutions/observability/logs/explore-logs.md) in {{kib}}. For more information about their usage, refer to [Categorize log entries](/solutions/observability/logs/categorize-log-entries.md) and [Inspect log anomalies](/solutions/observability/logs/inspect-log-anomalies.md). + + +## Log analysis [logs-ui-analysis] + +Detect anomalies in log entries via the Logs UI. + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| log_entry_rate | Detects anomalies in the log entry ingestion rate | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_analysis/ml/log_entry_rate.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_analysis/ml/datafeed_log_entry_rate.json) | + + +## Log entry categories [logs-ui-categories] + +Detect anomalies in count of log entries by category. 
+ +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| log_entry_categories_count | Detects anomalies in count of log entries by category | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_categories/ml/log_entry_categories_count.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/logs_ui_categories/ml/datafeed_log_entry_categories_count.json) | + diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md new file mode 100644 index 0000000000..ceb58cf5f6 --- /dev/null +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md @@ -0,0 +1,22 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-metricbeat.html +--- + +# {{metricbeat}} {{anomaly-detect}} configurations [ootb-ml-jobs-metricbeat] + +These {{anomaly-job}} wizards appear in {{kib}} if you use the [{{metricbeat}} system module](beats://docs/reference/metricbeat/metricbeat-module-system.md) to monitor your servers. For more details, see the {{dfeed}} and job definitions in GitHub. + + +## {{metricbeat}} system [metricbeat-system-ecs] + +Detect anomalies in {{metricbeat}} System data (ECS). + +These configurations are only available if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/manifest.json#L8). + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| high_mean_cpu_iowait_ecs | Detect unusual increases in cpu time spent in iowait (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/high_mean_cpu_iowait_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_high_mean_cpu_iowait_ecs.json) | +| max_disk_utilization_ecs | Detect unusual increases in disk utilization (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/max_disk_utilization_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_max_disk_utilization_ecs.json) | +| metricbeat_outages_ecs | Detect unusual decreases in metricbeat documents (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/metricbeat_outages_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metricbeat_system_ecs/ml/datafeed_metricbeat_outages_ecs.json) | + diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md new file mode 100644 index 0000000000..e41df52ccc --- /dev/null +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md @@ -0,0 +1,31 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-metrics-ui.html +--- + +# Metrics {{anomaly-detect}} configurations [ootb-ml-jobs-metrics-ui] + +These {{anomaly-jobs}} can be created in the 
[{{infrastructure-app}}](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) in {{kib}}. For more information about their usage, refer to [Inspect metric anomalies](/solutions/observability/infra-and-hosts/detect-metric-anomalies.md). + + +## Metrics hosts [metrics-ui-hosts] + +Detect anomalous memory and network behavior on hosts. + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| hosts_memory_usage | Identify unusual spikes in memory usage across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_memory_usage.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_memory_usage.json) | +| hosts_network_in | Identify unusual spikes in inbound traffic across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_network_in.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_network_in.json) | +| hosts_network_out | Identify unusual spikes in outbound traffic across hosts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/hosts_network_out.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_hosts/ml/datafeed_hosts_network_out.json) | + + +## Metrics Kubernetes [metrics-ui-k8s] + +Detect anomalous memory and network behavior on Kubernetes pods. + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| k8s_memory_usage | Identify unusual spikes in memory usage across Kubernetes pods. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_memory_usage.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_memory_usage.json) | +| k8s_network_in | Identify unusual spikes in inbound traffic across Kubernetes pods. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_network_in.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_network_in.json) | +| k8s_network_out | Identify unusual spikes in outbound traffic across Kubernetes pods. 
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/k8s_network_out.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/metrics_ui_k8s/ml/datafeed_k8s_network_out.json) | + diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md new file mode 100644 index 0000000000..9f3c3412cb --- /dev/null +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md @@ -0,0 +1,39 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-nginx.html +--- + +# Nginx {{anomaly-detect}} configurations [ootb-ml-jobs-nginx] + +These {{anomaly-job}} wizards appear in {{kib}} if you use the Nginx integration in {{fleet}} or you use {{filebeat}} to ship access logs from your [Nginx](http://nginx.org/) HTTP servers to {{es}}. The jobs assume that you use fields and data types from the Elastic Common Schema (ECS). + + +## Nginx access logs [nginx-access-logs] + +Find unusual activity in HTTP access logs. + +These jobs are available in {{kib}} only if data exists that matches the query specified in the [manifest file](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json). + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| low_request_rate_nginx | Detect low request rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L215) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L370) | +| source_ip_request_rate_nginx | Detect unusual source IPs - high request rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L176) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L349) | +| source_ip_url_count_nginx | Detect unusual source IPs - high distinct count of URLs | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L136) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L328) | +| status_code_rate_nginx | Detect unusual status code rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L90) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L307) | +| visitor_rate_nginx | Detect unusual visitor rates | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L47) | [code](https://github.com/elastic/integrations/blob/main/packages/nginx/kibana/ml_module/nginx-Logs-ml.json#L260) | + + +## Nginx access logs ({{filebeat}}) [nginx-access-logs-filebeat] + +These legacy {{anomaly-jobs}} find unusual activity in HTTP access logs. For the latest versions, install the Nginx integration in {{fleet}}; see [Nginx access logs](ootb-ml-jobs-nginx.md#nginx-access-logs). + +These jobs exist in {{kib}} only if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/manifest.json). 
+ +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| low_request_rate_ecs | Detect low request rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/low_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_low_request_rate_ecs.json) | +| source_ip_request_rate_ecs | Detect unusual source IPs - high request rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/source_ip_request_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_source_ip_request_rate_ecs.json) | +| source_ip_url_count_ecs | Detect unusual source IPs - high distinct count of URLs (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/source_ip_url_count_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_source_ip_url_count_ecs.json) | +| status_code_rate_ecs | Detect unusual status code rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/status_code_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_status_code_rate_ecs.json) | +| visitor_rate_ecs | Detect unusual visitor rates (ECS) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/visitor_rate_ecs.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/nginx_ecs/ml/datafeed_visitor_rate_ecs.json) | + diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md new file mode 100644 index 0000000000..b35701b203 --- /dev/null +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md @@ -0,0 +1,217 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-siem.html +--- + +# Security {{anomaly-detect}} configurations [ootb-ml-jobs-siem] + +These {{anomaly-jobs}} automatically detect file system and network anomalies on your hosts. They appear in the **Anomaly Detection** interface of the {{security-app}} in {{kib}} when you have data that matches their configuration. For more information, refer to [Anomaly detection with machine learning](/solutions/security/advanced-entity-analytics/anomaly-detection.md). + + +## Security: Authentication [security-authentication] + +Detect anomalous activity in your ECS-compatible authentication logs. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. 
+
+By default, when you create these jobs in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json#L7), then select it in the job wizard.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| auth_high_count_logon_events | Looks for an unusually large spike in successful authentication events. This can be due to password spraying, user enumeration, or brute force activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events.json) |
+| auth_high_count_logon_events_for_a_source_ip | Looks for an unusually large spike in successful authentication events from a particular source IP address. This can be due to password spraying, user enumeration, or brute force activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events_for_a_source_ip.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events_for_a_source_ip.json) |
+| auth_high_count_logon_fails | Looks for an unusually large spike in authentication failure events. This can be due to password spraying, user enumeration, or brute force activity and may be a precursor to account takeover or credentialed access. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_fails.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_fails.json) |
+| auth_rare_hour_for_a_user | Looks for a user logging in at a time of day that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different time zones. In addition, unauthorized user activity often takes place during non-business hours. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_hour_for_a_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_hour_for_a_user.json) |
+| auth_rare_source_ip_for_a_user | Looks for a user logging in from an IP address that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different locations. An unusual source IP address for a username could also be due to lateral movement when a compromised account is used to pivot between hosts.
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_source_ip_for_a_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_source_ip_for_a_user.json) |
+| auth_rare_user | Looks for an unusual user name in the authentication logs. An unusual user name is one way of detecting credentialed access by means of a new or dormant user account. A normally inactive user account (for example, one belonging to a user who has left the organization) that becomes active may indicate credentialed access using a compromised account password. Threat actors will sometimes also create new users as a means of persisting in a compromised web application. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_user.json) |
+| suspicious_login_activity | Detects an unusually high number of authentication attempts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/suspicious_login_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_suspicious_login_activity.json) |
+
+
+## Security: CloudTrail [security-cloudtrail-jobs]
+
+Detect suspicious activity recorded in your CloudTrail logs.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_cloudtrail/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_distinct_count_error_message | Looks for a spike in the rate of an error message, which may simply indicate an impending service failure, but such spikes can also be byproducts of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/high_distinct_count_error_message.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_high_distinct_count_error_message.json) |
+| rare_error_code | Looks for unusual errors. Rare and unusual errors may simply indicate an impending service failure, but they can also be byproducts of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor.
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_error_code.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_error_code.json) |
+| rare_method_for_a_city | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a geolocation (city) that is unusual. This can be the result of compromised credentials or keys. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_city.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_city.json) |
+| rare_method_for_a_country | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a geolocation (country) that is unusual. This can be the result of compromised credentials or keys. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_country.json) |
+| rare_method_for_a_username | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a user context that does not normally call the method. This can be the result of compromised credentials or keys as someone uses a valid account to persist, move laterally, or exfiltrate data. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_username.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_username.json) |
+
+
+## Security: Host [security-host-jobs]
+
+Anomaly detection jobs for host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+To access the host traffic anomalies dashboard in {{kib}}, go to: `Security -> Dashboards -> Host Traffic Anomalies`.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_count_events_for_a_host_name | Looks for a sudden spike in host-based traffic. This can be due to a range of security issues, such as a compromised system, DDoS attacks, malware infections, privilege escalation, or data exfiltration.
| [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/high_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_high_count_events_for_a_host_name.json) |
+| low_count_events_for_a_host_name | Looks for a sudden drop in host-based traffic. This can be due to a range of security issues, such as a compromised system, a failed service, or a network misconfiguration. | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/low_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_low_count_events_for_a_host_name.json) |
+
+
+## Security: Linux [security-linux-jobs]
+
+Anomaly detection jobs for Linux host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| v3_linux_anomalous_network_activity | Looks for unusual processes using the network, which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_activity.json) |
+| v3_linux_anomalous_network_port_activity | Looks for unusual destination port activity that could indicate command-and-control, persistence mechanism, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_port_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_port_activity.json) |
+| v3_linux_anomalous_process_all_hosts | Looks for processes that are unusual to all Linux hosts. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_process_all_hosts.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_process_all_hosts.json) |
+| v3_linux_anomalous_user_name | Rare and unusual users that are not normally active may indicate unauthorized changes or activity by an unauthorized user, which may be credentialed access or lateral movement.
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_user_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_user_name.json) | +| v3_linux_network_configuration_discovery | Looks for commands related to system network configuration discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network configuration discovery to increase their understanding of connected networks and hosts. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_configuration_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_datafeed_linux_network_configuration_discovery.json) | +| v3_linux_network_connection_discovery | Looks for commands related to system network connection discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network connection discovery to increase their understanding of connected services and systems. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_connection_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_datafeed_linux_network_connection_discovery.json) | +| v3_linux_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_process.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_process.json) | +| v3_linux_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_user.json) | +| v3_linux_rare_sudo_user | Looks for sudo activity from an unusual user context. Unusual user context changes can be due to privilege escalation. 
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_sudo_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_sudo_user.json) |
+| v3_linux_rare_user_compiler | Looks for compiler activity by a user context that does not normally run compilers. This can indicate ad-hoc software changes or unauthorized software deployment. This can also be due to local privilege elevation via locally run exploits or malware activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_user_compiler.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_user_compiler.json) |
+| v3_linux_system_information_discovery | Looks for commands related to system information discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system information discovery to gather detailed information about system configuration and software versions. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_information_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_information_discovery.json) |
+| v3_linux_system_process_discovery | Looks for commands related to system process discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system process discovery to increase their understanding of software applications running on a target host or network. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_process_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_process_discovery.json) |
+| v3_linux_system_user_discovery | Looks for commands related to system user or owner discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system owner or user discovery to identify currently active or primary users of a system. This may be a precursor to additional discovery, credential dumping, or privilege elevation activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_user_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_user_discovery.json) |
+| v3_rare_process_by_host_linux | Looks for processes that are unusual to a particular Linux host.
Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_rare_process_by_host_linux.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_rare_process_by_host_linux.json) | + + +## Security: Network [security-network-jobs] + +Detect anomalous network activity in your ECS-compatible network logs. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +By default, when you create these jobs in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json#L7) then select it in the job wizard. + +| Name | Description | Job (JSON) | Datafeed | +| --- | --- | --- | --- | +| high_count_by_destination_country | Looks for an unusually large spike in network activity to one destination country in the network logs. This could be due to unusually large amounts of reconnaissance or enumeration traffic. Data exfiltration activity may also produce such a surge in traffic to a destination country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_by_destination_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_by_destination_country.json) | +| high_count_network_denies | Looks for an unusually large spike in network traffic that was denied by network ACLs or firewall rules. Such a burst of denied traffic is usually either 1) a misconfigured application or firewall or 2) suspicious or malicious activity. Unsuccessful attempts at network transit, in order to connect to command-and-control (C2), or engage in data exfiltration, may produce a burst of failed connections. This could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_denies.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_denies.json) | +| high_count_network_events | Looks for an unusually large spike in network traffic. 
Such a burst of traffic, if not caused by a surge in business activity, can be due to suspicious or malicious activity. Large-scale data exfiltration may produce a burst of network traffic; this could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_events.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_events.json) |
+| rare_destination_country | Looks for an unusual destination country name in the network logs. This can be due to initial access, persistence, command-and-control, or exfiltration activity. For example, when a user clicks on a link in a phishing email or opens a malicious document, a request may be sent to download and run a payload from a server in a country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/rare_destination_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_rare_destination_country.json) |
+
+
+## Security: {{packetbeat}} [security-packetbeat-jobs]
+
+Detect suspicious network activity in {{packetbeat}} data.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| packetbeat_dns_tunneling | Looks for unusual DNS activity that could indicate command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/packetbeat_dns_tunneling.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/datafeed_packetbeat_dns_tunneling.json) |
+| packetbeat_rare_dns_question | Looks for unusual DNS activity that could indicate command-and-control activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/packetbeat_rare_dns_question.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/datafeed_packetbeat_rare_dns_question.json) |
+| packetbeat_rare_server_domain | Looks for unusual HTTP or TLS destination domain activity that could indicate execution, persistence, command-and-control, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/packetbeat_rare_server_domain.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/datafeed_packetbeat_rare_server_domain.json) |
+| packetbeat_rare_urls | Looks for unusual web browsing URL activity that could indicate execution, persistence, command-and-control, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/packetbeat_rare_urls.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/datafeed_packetbeat_rare_urls.json) |
+| packetbeat_rare_user_agent | Looks for unusual HTTP user agent activity that could indicate execution, persistence, command-and-control, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/packetbeat_rare_user_agent.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/ml/datafeed_packetbeat_rare_user_agent.json) |
+
+
+## Security: Windows [security-windows-jobs]
+
+Anomaly detection jobs for Windows host-based threat hunting and detection.
+
+In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://docs/reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query.
+
+If there are additional requirements such as installing the Windows System Monitor (Sysmon) or auditing process creation in the Windows security event log, they are listed for each job.
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| v3_rare_process_by_host_windows | Looks for processes that are unusual to a particular Windows host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_rare_process_by_host_windows.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_rare_process_by_host_windows.json) |
+| v3_windows_anomalous_network_activity | Looks for unusual processes using the network which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_network_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_network_activity.json) |
+| v3_windows_anomalous_path_activity | Looks for activity in unusual paths that may indicate execution of malware or persistence mechanisms. Windows payloads often execute from user profile paths. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_path_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_path_activity.json) |
+| v3_windows_anomalous_process_all_hosts | Looks for processes that are unusual to all Windows hosts. Such unusual processes may indicate execution of unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_all_hosts.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_all_hosts.json) |
+| v3_windows_anomalous_process_creation | Looks for unusual process relationships which may indicate execution of malware or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_creation.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_creation.json) |
+| v3_windows_anomalous_script | Looks for unusual PowerShell scripts that may indicate execution of malware or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_script.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_script.json) |
+| v3_windows_anomalous_service | Looks for rare and unusual Windows service names which may indicate execution of unauthorized services, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_service.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_service.json) |
+| v3_windows_anomalous_user_name | Rare and unusual users that are not normally active may indicate unauthorized changes or activity by an unauthorized user, which may be credentialed access or lateral movement. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_user_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_user_name.json) |
+| v3_windows_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets.
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_process.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_process.json) | +| v3_windows_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_user.json) | +| v3_windows_rare_user_runas_event | Unusual user context switches can be due to privilege escalation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_runas_event.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_runas_event.json) | +| v3_windows_rare_user_type10_remote_login | Unusual RDP (remote desktop protocol) user logins can indicate account takeover or credentialed access. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_type10_remote_login.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_type10_remote_login.json) | + + +## Security: Elastic Integrations [security-integrations-jobs] + +[Elastic Integrations](integration-docs://docs/reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, {{transforms}}, and in this case, {{ml}} capabilities for Security. + +The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment. + +* [Data Exfiltration Detection](integration-docs://docs/reference/ded.md) +* [Domain Generation Algorithm Detection](integration-docs://docs/reference/dga.md) +* [Lateral Movement Detection](integration-docs://docs/reference/lmd.md) +* [Living off the Land Attack Detection](integration-docs://docs/reference/problemchild.md) + +**Domain Generation Algorithm (DGA) Detection** + +{{ml-cap}} solution package to detect domain generation algorithm (DGA) activity in your network data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. + +To download, refer to the [documentation](integration-docs://docs/reference/dga.md). + +| Name | Description | +| --- | --- | +| dga_high_sum_probability | Detect domain generation algorithm (DGA) activity in your network data. | + +The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/dga/kibana/ml_module/dga-ml.json). + +**Living off the Land Attack (LotL) Detection** + +{{ml-cap}} solution package to detect Living off the Land (LotL) attacks in your environment. 
Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. This package is also known as ProblemChild.
+
+To download, refer to the [documentation](integration-docs://docs/reference/problemchild.md).
+
+| Name | Description |
+| --- | --- |
+| problem_child_rare_process_by_host | Looks for a process that has been classified as malicious on a host that does not commonly manifest malicious process activity. |
+| problem_child_high_sum_by_host | Looks for a set of one or more malicious child processes on a single host. |
+| problem_child_rare_process_by_user | Looks for a process that has been classified as malicious where the user context is unusual and does not commonly manifest malicious process activity. |
+| problem_child_rare_process_by_parent | Looks for rare malicious child processes spawned by a parent process. |
+| problem_child_high_sum_by_user | Looks for a set of one or more malicious processes started by the same user. |
+| problem_child_high_sum_by_parent | Looks for a set of one or more malicious child processes spawned by the same parent process. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/problemchild/kibana/ml_module/problemchild-ml.json).
+
+**Data Exfiltration Detection (DED)**
+
+{{ml-cap}} package to detect data exfiltration in your network and file data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/ded.md).
+
+| Name | Description |
+| --- | --- |
+| ded_high_sent_bytes_destination_geo_country_iso_code | Detects data exfiltration to an unusual geo-location (by country ISO code). |
+| ded_high_sent_bytes_destination_ip | Detects data exfiltration to an unusual geo-location (by IP address). |
+| ded_high_sent_bytes_destination_port | Detects data exfiltration to an unusual destination port. |
+| ded_high_sent_bytes_destination_region_name | Detects data exfiltration to an unusual geo-location (by region name). |
+| ded_high_bytes_written_to_external_device | Detects data exfiltration activity by identifying high bytes written to an external device. |
+| ded_rare_process_writing_to_external_device | Detects data exfiltration activity by identifying a file write started by a rare process to an external device. |
+| ded_high_bytes_written_to_external_device_airdrop | Detects data exfiltration activity by identifying high bytes written to an external device via AirDrop. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/ded/kibana/ml_module/ded-ml.json).
+
+**Lateral Movement Detection (LMD)**
+
+{{ml-cap}} package to detect lateral movement based on file transfer activity and Windows RDP events. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription.
+
+To download, refer to the [documentation](integration-docs://docs/reference/lmd.md).
+
+| Name | Description |
+| --- | --- |
+| lmd_high_count_remote_file_transfer | Detects unusually high file transfers to a remote host in the network. |
+| lmd_high_file_size_remote_file_transfer | Detects unusually large files shared with a remote host in the network. |
+| lmd_rare_file_extension_remote_transfer | Detects unusual file extensions in file transfers to a remote host in the network. |
+| lmd_rare_file_path_remote_transfer | Detects unusual folders and directories to which a file is transferred. |
+| lmd_high_mean_rdp_session_duration | Detects unusually high mean RDP session duration. |
+| lmd_high_var_rdp_session_duration | Detects unusually high variance in RDP session duration. |
+| lmd_high_sum_rdp_number_of_processes | Detects an unusually high number of processes started in a single RDP session. |
+| lmd_unusual_time_weekday_rdp_session_start | Detects an RDP session started at an unusual time or on an unusual weekday. |
+| lmd_high_rdp_distinct_count_source_ip_for_destination | Detects a high count of source IPs making an RDP connection with a single destination IP. |
+| lmd_high_rdp_distinct_count_destination_ip_for_source | Detects a high count of destination IPs establishing an RDP connection with a single source IP. |
+| lmd_high_mean_rdp_process_args | Detects an unusually high number of process arguments in an RDP session. |
+
+The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/lmd/kibana/ml_module/lmd-ml.json).
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md
new file mode 100644
index 0000000000..f0d2e2264d
--- /dev/null
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md
@@ -0,0 +1,20 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs-uptime.html
+---
+
+# Uptime {{anomaly-detect}} configurations [ootb-ml-jobs-uptime]
+
+If you have appropriate {{heartbeat}} data in {{es}}, you can enable this {{anomaly-job}} in the [{{uptime-app}}](/solutions/observability/apps/synthetic-monitoring.md#monitoring-uptime) in {{kib}}. For more usage information, refer to [Inspect uptime duration anomalies](/solutions/observability/apps/inspect-uptime-duration-anomalies.md).
+
+## Uptime: {{heartbeat}} [uptime-heartbeat]
+
+Detect latency issues in heartbeat monitors.
+
+These configurations are available in {{kib}} only if data exists that matches the recognizer query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/manifest.json).
+
+| Name | Description | Job (JSON) | Datafeed |
+| --- | --- | --- | --- |
+| high_latency_by_geo | Identify periods of increased latency across geographical regions. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/ml/high_latency_by_geo.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/uptime_heartbeat/ml/datafeed_high_latency_by_geo.json) |
diff --git a/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md b/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md
new file mode 100644
index 0000000000..4e3302e945
--- /dev/null
+++ b/reference/data-analysis/machine-learning/supplied-anomaly-detection-configurations.md
@@ -0,0 +1,30 @@
+---
+navigation_title: "Supplied configurations"
+mapped_pages:
+  - https://www.elastic.co/guide/en/machine-learning/current/ootb-ml-jobs.html
+---
+
+# Supplied {{anomaly-detect}} configurations [ootb-ml-jobs]
+
+{{anomaly-jobs-cap}} contain the configuration information and metadata necessary to perform an analytics task. {{kib}} can recognize certain types of data and provide specialized wizards for that context. This page lists the categories of {{anomaly-jobs}} that are ready to use via {{kib}} in **Machine learning**. Refer to [Create {{anomaly-jobs}}](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md) to learn more about creating a job by using supplied configurations. Supplied configurations for Logs and Metrics can also be created from the related solution UI in {{kib}}.
+
+* [Apache](/reference/data-analysis/machine-learning/ootb-ml-jobs-apache.md)
+* [APM](/reference/data-analysis/machine-learning/ootb-ml-jobs-apm.md)
+* [{{auditbeat}}](/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md)
+* [Logs](/reference/data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md)
+* [{{metricbeat}}](/reference/data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md)
+* [Metrics](/reference/data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md)
+* [Nginx](/reference/data-analysis/machine-learning/ootb-ml-jobs-nginx.md)
+* [Security](/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md)
+* [Uptime](/reference/data-analysis/machine-learning/ootb-ml-jobs-uptime.md)
+
+::::{note}
+The configurations are only available if data exists that matches the queries specified in the manifest files. These recognizer queries are linked in the descriptions of the individual configurations.
+::::
+
+## Model memory considerations [ootb-ml-model-memory]
+
+By default, these jobs have `model_memory_limit` values that are deemed appropriate for typical user environments and data characteristics. If your environment or your data is atypical and your jobs reach a memory status value of `soft_limit` or `hard_limit`, you might need to update the model memory limits. For more information, see [Working with {{anomaly-detect}} at scale](/explore-analyze/machine-learning/anomaly-detection/anomaly-detection-scale.md#set-model-memory-limit).
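+
+For example, to raise the limit on one of these jobs you can close it, update `model_memory_limit`, and reopen it. The following is a minimal sketch using the official Python client; the endpoint, API key, job ID, and new limit are placeholder values:
+
+```python
+from elasticsearch import Elasticsearch
+
+es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholders
+
+job_id = "v3_linux_anomalous_network_activity"  # any supplied job that hit its limit
+
+es.ml.close_job(job_id=job_id)  # the limit can only be raised while the job is closed
+es.ml.update_job(job_id=job_id, analysis_limits={"model_memory_limit": "256mb"})
+es.ml.open_job(job_id=job_id)
+```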
diff --git a/reference/data-analysis/observability/index.md b/reference/data-analysis/observability/index.md
new file mode 100644
index 0000000000..5f4d4483e9
--- /dev/null
+++ b/reference/data-analysis/observability/index.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/observability/current/metrics-reference.html
+---
+
+# Metrics reference [metrics-reference]
+
+Learn about the key metrics displayed in the Infrastructure app and how they are calculated.
+
+* [Host metrics](/reference/data-analysis/observability/observability-host-metrics-serverless.md)
+* [Container metrics](/reference/data-analysis/observability/observability-container-metrics-serverless.md)
+* [Kubernetes pod metrics](/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md)
+* [AWS metrics](/reference/data-analysis/observability/observability-aws-metrics-serverless.md)
diff --git a/reference/data-analysis/observability/metrics-reference-serverless.md b/reference/data-analysis/observability/metrics-reference-serverless.md
new file mode 100644
index 0000000000..70b5380366
--- /dev/null
+++ b/reference/data-analysis/observability/metrics-reference-serverless.md
@@ -0,0 +1,18 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html
+---
+
+# Metrics reference [observability-metrics-reference]
+
+Learn about the key metrics displayed in the Infrastructure UI and how they are calculated.
+
+* [Host metrics](/reference/data-analysis/observability/observability-host-metrics-serverless.md)
+* [Kubernetes pod metrics](/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md)
+* [Container metrics](/reference/data-analysis/observability/observability-container-metrics-serverless.md)
+* [AWS metrics](/reference/data-analysis/observability/observability-aws-metrics-serverless.md)
diff --git a/reference/data-analysis/observability/observability-aws-metrics-serverless.md b/reference/data-analysis/observability/observability-aws-metrics-serverless.md
new file mode 100644
index 0000000000..1d7a4ba631
--- /dev/null
+++ b/reference/data-analysis/observability/observability-aws-metrics-serverless.md
@@ -0,0 +1,66 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/observability-aws-metrics.html
+---
+
+# AWS metrics [observability-aws-metrics]
+
+::::{important}
+Using this module generates additional AWS charges for GetMetricData API requests.
+::::
+
+## Monitor EC2 instances [monitor-ec2-instances]
+
+To analyze EC2 instance metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `aws.ec2.cpu.total.pct`. |
+| **Inbound Traffic** | Average of `aws.ec2.network.in.bytes_per_sec`. |
+| **Outbound Traffic** | Average of `aws.ec2.network.out.bytes_per_sec`. |
+| **Disk Reads (Bytes)** | Average of `aws.ec2.diskio.read.bytes_per_sec`. |
+| **Disk Writes (Bytes)** | Average of `aws.ec2.diskio.write.bytes_per_sec`. |
+
+## Monitor S3 buckets [monitor-s3-buckets]
+
+To analyze S3 bucket metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **Bucket Size** | Average of `aws.s3_daily_storage.bucket.size.bytes`. |
+| **Total Requests** | Average of `aws.s3_request.requests.total`. |
+| **Number of Objects** | Average of `aws.s3_daily_storage.number_of_objects`. |
+| **Downloads (Bytes)** | Average of `aws.s3_request.downloaded.bytes`. |
+| **Uploads (Bytes)** | Average of `aws.s3_request.uploaded.bytes`. |
+
+## Monitor SQS queues [monitor-sqs-queues]
+
+To analyze SQS queue metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **Messages Available** | Max of `aws.sqs.messages.visible`. |
+| **Messages Delayed** | Max of `aws.sqs.messages.delayed`. |
+| **Messages Added** | Max of `aws.sqs.messages.sent`. |
+| **Messages Returned Empty** | Max of `aws.sqs.messages.not_visible`. |
+| **Oldest Message** | Max of `aws.sqs.oldest_message_age.sec`. |
+
+## Monitor RDS databases [monitor-rds-databases]
+
+To analyze RDS database metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `aws.rds.cpu.total.pct`. |
+| **Connections** | Average of `aws.rds.database_connections`. |
+| **Queries Executed** | Average of `aws.rds.queries`. |
+| **Active Transactions** | Average of `aws.rds.transactions.active`. |
+| **Latency** | Average of `aws.rds.latency.dml`. |
+
+For information about the fields used by the Infrastructure UI to display AWS services metrics, see the [Infrastructure app fields](/reference/observability/serverless/infrastructure-app-fields.md).
diff --git a/reference/data-analysis/observability/observability-container-metrics-serverless.md b/reference/data-analysis/observability/observability-container-metrics-serverless.md
new file mode 100644
index 0000000000..033030bd46
--- /dev/null
+++ b/reference/data-analysis/observability/observability-container-metrics-serverless.md
@@ -0,0 +1,65 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/observability-container-metrics.html
+---
+
+# Container metrics [observability-container-metrics]
+
+Learn about key container metrics displayed in the Infrastructure UI:
+
+* [Docker](#key-metrics-docker)
+* [Kubernetes](#key-metrics-kubernetes)
+
+## Docker container metrics [key-metrics-docker]
+
+These are the key metrics displayed for Docker containers.
+
+### CPU usage metrics [key-metrics-docker-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average CPU for the container.<br><br>**Field Calculation:** `average(docker.cpu.total.pct)` |
+
+### Memory metrics [key-metrics-docker-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Usage (%)** | Average memory usage for the container.<br><br>**Field Calculation:** `average(docker.memory.usage.pct)` |
+
+### Network metrics [key-metrics-docker-network]
+
+| Metric | Description |
+| --- | --- |
+| **Inbound Traffic (RX)** | Derivative of the maximum of `docker.network.in.bytes` scaled to a 1 second rate.<br><br>**Field Calculation:** `average(docker.network.inbound.bytes) * 8 / (max(metricset.period, kql='docker.network.inbound.bytes: *') / 1000)` |
+| **Outbound Traffic (TX)** | Derivative of the maximum of `docker.network.out.bytes` scaled to a 1 second rate.<br><br>**Field Calculation:** `average(docker.network.outbound.bytes) * 8 / (max(metricset.period, kql='docker.network.outbound.bytes: *') / 1000)` |
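+
+The two network formulas above convert a raw byte counter, sampled every `metricset.period` milliseconds, into a bits-per-second rate. As a worked example, the same arithmetic in Python (the sample values are hypothetical):
+
+```python
+inbound_bytes = 1_250_000  # average(docker.network.inbound.bytes) for the bucket
+period_ms = 10_000         # max(metricset.period): collection interval in milliseconds
+
+bits_per_second = inbound_bytes * 8 / (period_ms / 1000)
+print(bits_per_second)  # 1000000.0, i.e. 1 Mbps
+```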
+
+### Disk metrics [observability-container-metrics-disk-metrics]
+
+| Metric | Description |
+| --- | --- |
+| **Disk Read IOPS** | Average count of read operations from the device per second.<br><br>**Field Calculation:** `counter_rate(max(docker.diskio.read.ops), kql='docker.diskio.read.ops: *')` |
+| **Disk Write IOPS** | Average count of write operations from the device per second.<br><br>**Field Calculation:** `counter_rate(max(docker.diskio.write.ops), kql='docker.diskio.write.ops: *')` |
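+
+`counter_rate` turns an ever-increasing counter such as `docker.diskio.read.ops` into a per-second rate, treating any decrease as a counter reset. A rough Python sketch of that behavior (the reset handling here is a simplifying assumption, not the exact Lens implementation):
+
+```python
+def counter_rate(samples, interval_seconds):
+    """Per-second rate of a monotonically increasing counter."""
+    rates = []
+    for prev, curr in zip(samples, samples[1:]):
+        delta = curr - prev if curr >= prev else curr  # after a reset, count from zero
+        rates.append(delta / interval_seconds)
+    return rates
+
+# Hypothetical max(docker.diskio.read.ops) per 10-second bucket, with one reset:
+print(counter_rate([100, 180, 260, 40], 10))  # [8.0, 8.0, 4.0]
+```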
+
+## Kubernetes container metrics [key-metrics-kubernetes]
+
+These are the key metrics displayed for Kubernetes (containerd) containers.
+
+### CPU usage metrics [key-metrics-kubernetes-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average CPU for the container.<br><br>**Field Calculation:** `average(kubernetes.container.cpu.usage.limit.pct)` |
+
+### Memory metrics [key-metrics-kubernetes-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Usage (%)** | Average memory usage for the container.<br><br>**Field Calculation:** `average(kubernetes.container.memory.usage.limit.pct)` |
diff --git a/reference/data-analysis/observability/observability-host-metrics-serverless.md b/reference/data-analysis/observability/observability-host-metrics-serverless.md
new file mode 100644
index 0000000000..2c6d4245c8
--- /dev/null
+++ b/reference/data-analysis/observability/observability-host-metrics-serverless.md
@@ -0,0 +1,94 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/observability-host-metrics.html
+---
+
+# Host metrics [observability-host-metrics]
+
+Learn about key host metrics displayed in the Infrastructure UI:
+
+* [Hosts](#key-metrics-hosts)
+* [CPU usage](#key-metrics-cpu)
+* [Memory](#key-metrics-memory)
+* [Log](#key-metrics-log)
+* [Network](#key-metrics-network)
+* [Disk](#observability-host-metrics-disk-metrics)
+* [Legacy](#legacy-metrics)
+
+## Hosts metrics [key-metrics-hosts]
+
+| Metric | Description |
+| --- | --- |
+| **Hosts** | Number of hosts returned by your search criteria.<br><br>**Field Calculation**: `count(system.cpu.cores)` |
+
+## CPU usage metrics [key-metrics-cpu]
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (%)** | Average of percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. Includes both time spent on user space and kernel space. 100% means all CPUs of the host are busy.<br><br>**Field Calculation**: `average(system.cpu.total.norm.pct)`<br><br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+| **CPU Usage - iowait (%)** | The percentage of CPU time spent in wait (on disk).<br><br>**Field Calculation**: `average(system.cpu.iowait.pct) / max(system.cpu.cores)` |
+| **CPU Usage - irq (%)** | The percentage of CPU time spent servicing and handling hardware interrupts.<br><br>**Field Calculation**: `average(system.cpu.irq.pct) / max(system.cpu.cores)` |
+| **CPU Usage - nice (%)** | The percentage of CPU time spent on low-priority processes.<br><br>**Field Calculation**: `average(system.cpu.nice.pct) / max(system.cpu.cores)` |
+| **CPU Usage - softirq (%)** | The percentage of CPU time spent servicing and handling software interrupts.<br><br>**Field Calculation**: `average(system.cpu.softirq.pct) / max(system.cpu.cores)` |
+| **CPU Usage - steal (%)** | The percentage of CPU time spent in involuntary wait by the virtual CPU while the hypervisor was servicing another processor. Available only on Unix.<br><br>**Field Calculation**: `average(system.cpu.steal.pct) / max(system.cpu.cores)` |
+| **CPU Usage - system (%)** | The percentage of CPU time spent in kernel space.<br><br>**Field Calculation**: `average(system.cpu.system.pct) / max(system.cpu.cores)` |
+| **CPU Usage - user (%)** | The percentage of CPU time spent in user space. On multi-core systems, you can have percentages that are greater than 100%. For example, if 3 cores are at 60% use, then `system.cpu.user.pct` will be 180%.<br><br>**Field Calculation**: `average(system.cpu.user.pct) / max(system.cpu.cores)` |
+| **Load (1m)** | 1 minute load average.<br><br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br><br>**Field Calculation**: `average(system.load.1)` |
+| **Load (5m)** | 5 minute load average.<br><br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br><br>**Field Calculation**: `average(system.load.5)` |
+| **Load (15m)** | 15 minute load average.<br><br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br><br>**Field Calculation**: `average(system.load.15)` |
+| **Normalized Load** | 1 minute load average normalized by the number of CPU cores.<br><br>Load average gives an indication of the number of threads that are runnable (either busy running on CPU, waiting to run, or waiting for a blocking IO operation to complete).<br><br>100% means the 1 minute load average is equal to the number of CPU cores of the host.<br><br>Taking the example of a 32 CPU cores host, if the 1 minute load average is 32, the value reported here is 100%. If the 1 minute load average is 48, the value reported here is 150%.<br><br>**Field Calculation**: `average(system.load.1) / max(system.load.cores)` |
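+
+Most of the per-state calculations above divide an un-normalized percentage by the core count. Reusing the `system.cpu.user.pct` example from the table (three cores at 60% use), the arithmetic looks like this in Python:
+
+```python
+cpu_user_pct = 1.80  # system.cpu.user.pct: three cores at 60% each report 180%
+cpu_cores = 3        # max(system.cpu.cores)
+
+print(f"{cpu_user_pct / cpu_cores:.0%}")  # 60%
+```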
+
+## Memory metrics [key-metrics-memory]
+
+| Metric | Description |
+| --- | --- |
+| **Memory Cache** | Memory (page) cache.<br><br>**Field Calculation**: `average(system.memory.used.bytes) - average(system.memory.actual.used.bytes)` |
+| **Memory Free** | Total available memory.<br><br>**Field Calculation**: `max(system.memory.total) - average(system.memory.actual.used.bytes)` |
+| **Memory Free (excluding cache)** | Total available memory excluding the page cache.<br><br>**Field Calculation**: `system.memory.free` |
+| **Memory Total** | Total memory capacity.<br><br>**Field Calculation**: `average(system.memory.total)` |
+| **Memory Usage (%)** | Percentage of main memory usage excluding page cache.<br><br>This includes resident memory for all processes plus memory used by the kernel structures and code apart from the page cache.<br><br>A high level indicates a situation of memory saturation for the host. For example, 100% means the main memory is entirely filled with memory that can’t be reclaimed, except by swapping out.<br><br>**Field Calculation**: `average(system.memory.actual.used.pct)` |
+| **Memory Used** | Main memory usage excluding page cache.<br><br>**Field Calculation**: `average(system.memory.actual.used.bytes)` |
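+
+The cache and free figures are derived rather than collected directly: the page cache is the gap between `system.memory.used.bytes` and `system.memory.actual.used.bytes`. A quick check of the two formulas with hypothetical values:
+
+```python
+memory_total = 16_000_000_000        # max(system.memory.total)
+memory_used = 12_000_000_000         # average(system.memory.used.bytes), includes page cache
+memory_actual_used = 9_000_000_000   # average(system.memory.actual.used.bytes)
+
+page_cache = memory_used - memory_actual_used    # Memory Cache: 3 GB
+memory_free = memory_total - memory_actual_used  # Memory Free: 7 GB
+print(page_cache, memory_free)
+```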
+
+## Log metrics [key-metrics-log]
+
+| Metric | Description |
+| --- | --- |
+| **Log Rate** | Derivative of the cumulative sum of the document count scaled to a 1 second rate. This metric relies on the same indices as the logs.<br><br>**Field Calculation**: `cumulative_sum(doc_count)` |
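+
+The same calculation can be reproduced directly against the log indices with a date histogram, a cumulative sum of the document count, and a derivative. A minimal sketch with the Python client; the index pattern, endpoint, and credentials are placeholder assumptions:
+
+```python
+from elasticsearch import Elasticsearch
+
+es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholders
+
+resp = es.search(
+    index="logs-*",  # assumes the same indices as your logs
+    size=0,
+    aggs={
+        "per_minute": {
+            "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"},
+            "aggs": {
+                "cumulative_docs": {"cumulative_sum": {"buckets_path": "_count"}},
+                "log_rate": {"derivative": {"buckets_path": "cumulative_docs", "unit": "1s"}},
+            },
+        }
+    },
+)
+# The derivative is absent for the first bucket, so skip it.
+for bucket in resp["aggregations"]["per_minute"]["buckets"][1:]:
+    print(bucket["key_as_string"], bucket["log_rate"]["normalized_value"])
+```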
+
+## Network metrics [key-metrics-network]
+
+| Metric | Description |
+| --- | --- |
+| **Network Inbound (RX)** | Number of bytes that have been received per second on the public interfaces of the hosts.<br><br>**Field Calculation**: `sum(host.network.ingress.bytes) * 8 / 1000`<br><br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+| **Network Outbound (TX)** | Number of bytes that have been sent per second on the public interfaces of the hosts.<br><br>**Field Calculation**: `sum(host.network.egress.bytes) * 8 / 1000`<br><br>For legacy metric calculations, refer to [Legacy metrics](#legacy-metrics). |
+
+## Disk metrics [observability-host-metrics-disk-metrics]
+
+| Metric | Description |
+| --- | --- |
+| **Disk Latency** | Time spent to service disk requests.<br><br>**Field Calculation**: `average(system.diskio.read.time + system.diskio.write.time) / (system.diskio.read.count + system.diskio.write.count)` |
+| **Disk Read IOPS** | Average count of read operations from the device per second.<br><br>**Field Calculation**: `counter_rate(max(system.diskio.read.count), kql='system.diskio.read.count: *')` |
+| **Disk Read Throughput** | Average number of bytes read from the device per second.<br><br>**Field Calculation**: `counter_rate(max(system.diskio.read.bytes), kql='system.diskio.read.bytes: *')` |
+| **Disk Usage - Available (%)** | Percentage of disk space available.<br><br>**Field Calculation**: `1 - average(system.filesystem.used.pct)` |
+| **Disk Usage - Max (%)** | Percentage of disk space used. A high percentage indicates that a partition on a disk is running out of space.<br><br>**Field Calculation**: `max(system.filesystem.used.pct)` |
+| **Disk Write IOPS** | Average count of write operations from the device per second.<br><br>**Field Calculation**: `counter_rate(max(system.diskio.write.count), kql='system.diskio.write.count: *')` |
+| **Disk Write Throughput** | Average number of bytes written from the device per second.<br><br>**Field Calculation**: `counter_rate(max(system.diskio.write.bytes), kql='system.diskio.write.bytes: *')` |
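+
+Disk Latency divides the total time spent on I/O by the number of completed operations, yielding milliseconds per operation. A worked example of the formula with hypothetical bucket values:
+
+```python
+read_time_ms, write_time_ms = 1_200, 1_800  # system.diskio.read.time / write.time
+read_count, write_count = 300, 200          # system.diskio.read.count / write.count
+
+latency = (read_time_ms + write_time_ms) / (read_count + write_count)
+print(latency)  # 6.0 ms per I/O operation
+```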
+
+## Legacy metrics [legacy-metrics]
+
+Over time, we may change the formula used to calculate a specific metric. To avoid affecting your existing rules, instead of changing the actual metric definition, we create a new metric and refer to the old one as "legacy."
+
+The UI and any new rules you create will use the new metric definition. However, any alerts that use the old definition will refer to the metric as "legacy."
+
+| Metric | Description |
+| --- | --- |
+| **CPU Usage (legacy)** | Percentage of CPU time spent in states other than Idle and IOWait, normalized by the number of CPU cores. This includes both time spent on user space and kernel space. 100% means all CPUs of the host are busy.<br><br>**Field Calculation**: `(average(system.cpu.user.pct) + average(system.cpu.system.pct)) / max(system.cpu.cores)` |
+| **Network Inbound (RX) (legacy)** | Number of bytes that have been received per second on the public interfaces of the hosts.<br><br>**Field Calculation**: `average(host.network.ingress.bytes) * 8 / (max(metricset.period, kql='host.network.ingress.bytes: *') / 1000)` |
+| **Network Outbound (TX) (legacy)** | Number of bytes that have been sent per second on the public interfaces of the hosts.<br><br>**Field Calculation**: `average(host.network.egress.bytes) * 8 / (max(metricset.period, kql='host.network.egress.bytes: *') / 1000)` |
diff --git a/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md b/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md
new file mode 100644
index 0000000000..e5f32d0269
--- /dev/null
+++ b/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md
@@ -0,0 +1,17 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/observability-kubernetes-pod-metrics.html
+---
+
+# Kubernetes pod metrics [observability-kubernetes-pod-metrics]
+
+To analyze Kubernetes pod metrics, you can select view filters based on the following predefined metrics, or you can add [custom metrics](/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md#custom-metrics).
+
+| | |
+| --- | --- |
+| **CPU Usage** | Average of `kubernetes.pod.cpu.usage.node.pct`. |
+| **Memory Usage** | Average of `kubernetes.pod.memory.usage.node.pct`. |
+| **Inbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.rx.bytes` scaled to a 1 second rate. |
+| **Outbound Traffic** | Derivative of the maximum of `kubernetes.pod.network.tx.bytes` scaled to a 1 second rate. |
+
+For information about the fields used by the Infrastructure UI to display Kubernetes pod metrics, see the [Infrastructure app fields](/reference/observability/serverless/infrastructure-app-fields.md).
diff --git a/reference/ecs.md b/reference/ecs.md
new file mode 100644
index 0000000000..a8cc7766f5
--- /dev/null
+++ b/reference/ecs.md
@@ -0,0 +1,9 @@
+---
+navigation_title: ECS
+---
+# Elastic Common Schema
+
+Elastic Common Schema (ECS) defines a common set of fields for ingesting data into Elasticsearch.
+For field details and usage information, refer to [](ecs://docs/reference/index.md).
+
+ECS loggers are plugins for your favorite logging libraries, which help you format your logs into ECS-compatible JSON. Check out [](ecs-logging://docs/reference/intro.md).
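+
+For example, with the Python ECS logger (`pip install ecs-logging`), a standard library logger can be switched to ECS JSON output by swapping in the ECS formatter. A minimal sketch:
+
+```python
+import logging
+
+import ecs_logging
+
+logger = logging.getLogger("my-app")
+handler = logging.StreamHandler()
+handler.setFormatter(ecs_logging.StdlibFormatter())  # one ECS JSON object per record
+logger.addHandler(handler)
+logger.setLevel(logging.INFO)
+
+logger.info("order processed", extra={"http.request.method": "POST"})
+```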
diff --git a/reference/elasticsearch.md b/reference/elasticsearch.md
new file mode 100644
index 0000000000..5e23cb0854
--- /dev/null
+++ b/reference/elasticsearch.md
@@ -0,0 +1,18 @@
+# Elasticsearch and index management
+
+% TO-DO: Add links to "Elasticsearch basics"%
+
+This section contains reference information for Elasticsearch and index management features, including:
+
+* Settings
+* Security roles and privileges
+* Index lifecycle actions
+* Mappings
+* Command line tools
+* Curator
+* Clients
+
+% TO-DO: Add links to "query language and scripting language sections"%
+
+Elasticsearch also provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.
+Refer to [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch) and [Elasticsearch Serverless API](https://www.elastic.co/docs/api/doc/elasticsearch-serverless).
\ No newline at end of file diff --git a/reference/elasticsearch/clients/index.md b/reference/elasticsearch/clients/index.md new file mode 100644 index 0000000000..0d98eb88b0 --- /dev/null +++ b/reference/elasticsearch/clients/index.md @@ -0,0 +1,34 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/elasticsearch/client/index.html + - https://www.elastic.co/guide/en/serverless/current/elasticsearch-clients.html +navigation_title: Clients +--- + +# Elasticsearch clients [elasticsearch-clients] + +This section contains documentation for all the official Elasticsearch clients: + +* Eland +* Go +* Java +* JavaScript +* .NET +* PHP +* Python +* Ruby +* Rust + +You can use the following language clients with {{es-serverless}}: + +* [Go](go-elasticsearch://docs/reference/getting-started-serverless.md) +* [Java](elasticsearch-java://docs/reference/getting-started-serverless.md) +* [.NET](elasticsearch-net://docs/reference/getting-started.md) +* [Node.JS](elasticsearch-js://docs/reference/getting-started.md) +* [PHP](elasticsearch-php://docs/reference/getting-started.md) +* [Python](elasticsearch-py://docs/reference/getting-started.md) +* [Ruby](elasticsearch-ruby://docs/reference/getting-started.md) + +::::{tip} +Learn how to [connect to your {{es-serverless}} endpoint](/solutions/search/serverless-elasticsearch-get-started.md). +:::: \ No newline at end of file diff --git a/reference/glossary/index.md b/reference/glossary/index.md new file mode 100644 index 0000000000..f0c145faf0 --- /dev/null +++ b/reference/glossary/index.md @@ -0,0 +1,835 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/elastic-stack-glossary/current/index.html + - https://www.elastic.co/guide/en/elastic-stack-glossary/current/terms.html + - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-glossary.html + - https://www.elastic.co/guide/en/ecs/current/ecs-glossary.html +--- + +# Glossary [terms] + +$$$glossary-metadata$$$ @metadata +: A special field for storing content that you don't want to include in output [events](/reference/glossary/index.md#glossary-event). For example, the `@metadata` field is useful for creating transient fields for use in [conditional](/reference/glossary/index.md#glossary-conditional) statements. + + +## A [a-glos] + +$$$glossary-action$$$ action +: 1. The rule-specific response that occurs when an alerting rule fires. A rule can have multiple actions. See [Connectors and actions](kibana://docs/reference/connectors-kibana.md). +2. In {{elastic-sec}}, actions send notifications via other systems when a detection alert is created, such as email, Slack, PagerDuty, and {{webhook}}. + + +$$$glossary-admin-console$$$ administration console +: A component of {{ece}} that provides the API server for the [Cloud UI](/reference/glossary/index.md#glossary-cloud-ui). Also syncs cluster and allocator data from ZooKeeper to {{es}}. + +$$$glossary-advanced-settings$$$ Advanced Settings +: Enables you to control the appearance and behavior of {{kib}} by setting the date format, default index, and other attributes. Part of {{kib}} Stack Management. See [Advanced Settings](kibana://docs/reference/advanced-settings.md). + +$$$glossary-agent-policy$$$ Agent policy +: A collection of inputs and settings that defines the data to be collected by {{agent}}. An agent policy can be applied to a single agent or shared by a group of agents; this makes it easier to manage many agents at scale. See [{{agent}} policies](/reference/ingestion-tools/fleet/agent-policy.md). 
+ +$$$glossary-alias$$$ alias +: Secondary name for a group of [data streams](/reference/glossary/index.md#glossary-data-stream) or [indices](/reference/glossary/index.md#glossary-index). Most {{es}} APIs accept an alias in place of a data stream or index. See [Aliases](/manage-data/data-store/aliases.md). + +$$$glossary-allocator-affinity$$$ allocator affinity +: Controls how {{stack}} deployments are distributed across the available set of allocators in your {{ece}} installation. + +$$$glossary-allocator-tag$$$ allocator tag +: In {{ece}}, characterizes hardware resources for {{stack}} deployments. Used by [instance configurations](/reference/glossary/index.md#glossary-instance-configuration) to determine which instances of the {{stack}} should be placed on what hardware. + +$$$glossary-allocator$$$ allocator +: Manages hosts that contain {{es}} and {{kib}} nodes. Controls the lifecycle of these nodes by creating new [containers](/reference/glossary/index.md#glossary-container) and managing the nodes within these containers when requested. Used to scale the capacity of your {{ece}} installation. + +$$$glossary-analysis$$$ analysis +: Process of converting unstructured [text](/reference/glossary/index.md#glossary-text) into a format optimized for search. See [Text analysis](/manage-data/data-store/text-analysis.md). + +$$$glossary-annotation$$$ annotation +: A way to augment a data display with descriptive domain knowledge. + +$$$glossary-anomaly-detection-job$$$ {{anomaly-job}} +: {{anomaly-jobs-cap}} contain the configuration information and metadata necessary to perform an analytics task. See [{{ml-jobs-cap}}](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job). + +$$$glossary-api-key$$$ API key +: Unique identifier for authentication in {{es}}. When [transport layer security (TLS)](/deploy-manage/deploy/self-managed/installing-elasticsearch.md) is enabled, all requests must be authenticated using an API key or a username and password. + +$$$glossary-apm-agent$$$ APM agent +: An open-source library, written in the same language as your service, which [instruments](/reference/glossary/index.md#glossary-instrumentation) your code and collects performance data and errors at runtime. + +$$$glossary-apm-server$$$ APM Server +: An open-source application that receives data from [APM agents](/reference/glossary/index.md#glossary-apm-agent) and sends it to {{es}}. + +$$$glossary-app$$$ app +: A top-level {{kib}} component that is accessed through the side navigation. Apps include core {{kib}} components such as Discover and Dashboard, solutions like {{observability}} and Security, and special-purpose tools like Maps and {{stack-manage-app}}. + +$$$glossary-auto-follow-pattern$$$ auto-follow pattern +: [Index pattern](/reference/glossary/index.md#glossary-index-pattern) that automatically configures new [indices](/reference/glossary/index.md#glossary-index) as [follower indices](/reference/glossary/index.md#glossary-follower-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). See [Manage auto-follow patterns](/deploy-manage/tools/cross-cluster-replication/manage-auto-follow-patterns.md). + +$$$glossary-zone$$$ availability zone +: Contains resources available to a {{ece}} installation that are isolated from other availability zones to safeguard against failure. Could be a rack, a server zone or some other logical constraint that creates a failure boundary. 
In a highly available cluster, the nodes of a cluster are spread across two or three availability zones to ensure that the cluster can survive the failure of an entire availability zone. Also see [Fault Tolerance (High Availability)](/deploy-manage/deploy/cloud-enterprise/ece-ha.md).
+
+
+## B [b-glos]
+
+$$$glossary-basemap$$$ basemap
+: The background detail necessary to orient the location of a map.
+
+$$$glossary-beats-runner$$$ beats runner
+: Used to send {{filebeat}} and {{metricbeat}} information to the logging cluster.
+
+$$$glossary-bucket-aggregation$$$ bucket aggregation
+: An aggregation that creates buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type), which determines whether or not a document in the current context falls into the bucket.
+
+$$$glossary-ml-bucket$$$ bucket
+: 1. A set of documents in {{kib}} that have certain characteristics in common. For example, matching documents might be bucketed by color, distance, or date range.
+2. The {{ml-features}} also use the concept of a bucket to divide the time series into batches for processing. The *bucket span* is part of the configuration information for {{anomaly-jobs}}. It defines the time interval that is used to summarize and model the data. This is typically from 5 minutes to 1 hour, and it depends on your data characteristics. When you set the bucket span, take into account the granularity at which you want to analyze, the frequency of the input data, the typical duration of the anomalies, and the frequency at which alerting is required.
+
+
+## C [c-glos]
+
+$$$glossary-canvas-language$$$ Canvas expression language
+: A pipeline-based expression language for manipulating and visualizing data. Includes dozens of functions and other capabilities, such as table transforms, type casting, and sub-expressions. Supports TinyMath functions for complex math calculations. See [Canvas function reference](/reference/data-analysis/kibana/canvas-functions.md).
+
+$$$glossary-canvas$$$ Canvas
+: Enables you to create presentations and infographics that pull live data directly from {{es}}. See [Canvas](/explore-analyze/visualize/canvas.md).
+
+$$$glossary-certainty$$$ certainty
+: Specifies how many documents must contain a pair of terms before it is considered a useful connection in a graph.
+
+$$$CA$$$ CA
+: Certificate authority. An entity that issues digital certificates to verify identities over a network.
+
+$$$glossary-client-forwarder$$$ client forwarder
+: Used for secure internal communications between various components of {{ece}} and ZooKeeper.
+
+$$$glossary-cloud-ui$$$ Cloud UI
+: Provides web-based access to manage your {{ece}} installation, supported by the [administration console](/reference/glossary/index.md#glossary-admin-console).
+
+$$$glossary-cluster$$$ cluster
+: 1. A group of one or more connected {{es}} [nodes](/reference/glossary/index.md#glossary-node). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
+2. A layer type and display option in the **Maps** application. Clusters display a cluster symbol across a grid on the map, one symbol per grid cluster. The cluster location is the weighted centroid for all documents in the grid cell.
+3. In {{eck}}, it can refer to either an [Elasticsearch cluster](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) or a Kubernetes cluster depending on the context.
+

$$$glossary-codec-plugin$$$ codec plugin
: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that changes the data representation of an [event](/reference/glossary/index.md#glossary-event). Codecs are essentially stream filters that can operate as part of an input or output. Codecs enable you to separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).

$$$glossary-cold-phase$$$ cold phase
: Third possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the cold phase, data is no longer updated and seldom [queried](/reference/glossary/index.md#glossary-query). The data still needs to be searchable, but it's okay if those queries are slower. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-cold-tier$$$ cold tier
: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed occasionally and not normally updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).

$$$glossary-component-template$$$ component template
: Building block for creating [index templates](/reference/glossary/index.md#glossary-index-template). A component template can specify [mappings](/reference/glossary/index.md#glossary-mapping), [index settings](elasticsearch://docs/reference/elasticsearch/index-settings/index.md), and [aliases](/reference/glossary/index.md#glossary-alias). See [index templates](/manage-data/data-store/templates.md).

$$$glossary-condition$$$ condition
: Specifies the circumstances that must be met to trigger an alerting [rule](/reference/glossary/index.md#glossary-rule).

$$$glossary-conditional$$$ conditional
: A control flow that executes certain actions based on whether a statement (also called a condition) is true or false. {{ls}} supports `if`, `else if`, and `else` statements. You can use conditional statements to apply filters and send events to a specific output based on conditions that you specify.

$$$glossary-connector$$$ connector
: A configuration that enables integration with an external system (the destination for an action). See [Connectors and actions](kibana://docs/reference/connectors-kibana.md).

$$$glossary-console$$$ Console
: In {{kib}}, a tool for interacting with the {{es}} REST API. You can send requests to {{es}}, view responses, view API documentation, and get your request history. See [Console](/explore-analyze/query-filter/tools/console.md).

    In {{ess}}, provides web-based access to manage your {{ecloud}} deployments.


$$$glossary-constructor$$$ constructor
: Directs [allocators](/reference/glossary/index.md#glossary-allocator) to manage containers of {{es}} and {{kib}} nodes and maximizes the utilization of allocators. Monitors plan change requests from the Cloud UI and determines how to transform the existing cluster. In a highly available installation, places cluster nodes within different availability zones to ensure that the cluster can survive the failure of an entire availability zone.

$$$glossary-container$$$ container
: Includes an instance of {{ece}} software and its dependencies. Used to provision similar environments, to assign a guaranteed share of host resources to nodes, and to simplify operational effort in {{ece}}.
+

$$$glossary-content-tier$$$ content tier
: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that handle the [indexing](/reference/glossary/index.md#glossary-index) and [query](/reference/glossary/index.md#glossary-query) load for content, such as a product catalog. See [Data tiers](/manage-data/lifecycle/data-tiers.md).

$$$glossary-coordinator$$$ coordinator
: Consists of a logical grouping of some {{ece}} services and acts as a distributed coordination system and resource scheduler.

$$$glossary-ccr$$$ {{ccr}} (CCR)
: Replicates [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index) from [remote clusters](/reference/glossary/index.md#glossary-remote-cluster) in a [local cluster](/reference/glossary/index.md#glossary-local-cluster). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).

$$$glossary-ccs$$$ {{ccs}} (CCS)
: Searches [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index) on [remote clusters](/reference/glossary/index.md#glossary-remote-cluster) from a [local cluster](/reference/glossary/index.md#glossary-local-cluster). See [Search across clusters](/solutions/search/cross-cluster-search.md).

$$$CRD$$$CRD
: [Custom resource definition](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-CustomResourceDefinition). {{eck}} extends the Kubernetes API with CRDs to allow users to deploy and manage Elasticsearch, Kibana, APM Server, Enterprise Search, Beats, Elastic Agent, Elastic Maps Server, and Logstash resources just as they would with built-in Kubernetes resources.

$$$glossary-custom-rule$$$ custom rule
: A set of conditions and actions that change the behavior of {{anomaly-jobs}}. You can also use filters to further limit the scope of the rules. See [Custom rules](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-rules). {{kib}} refers to custom rules as job rules.


## D [d-glos]

$$$glossary-dashboard$$$ dashboard
: A collection of [visualizations](/reference/glossary/index.md#glossary-visualization), [saved searches](/reference/glossary/index.md#glossary-saved-search), and [maps](/reference/glossary/index.md#glossary-map) that provide insights into your data from multiple perspectives.

$$$glossary-data-center$$$ data center
: Check [availability zone](/reference/glossary/index.md#glossary-zone).

$$$glossary-dataframe-job$$$ data frame analytics job
: Data frame analytics jobs contain the configuration information and metadata necessary to perform {{ml}} analytics tasks on a source index and store the outcome in a destination index. See [{{dfanalytics-cap}} overview](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-overview.md).

$$$glossary-data-source$$$ data source
: A file, database, or service that provides the underlying data for a map, Canvas element, or visualization.

$$$glossary-data-stream$$$ data stream
: A named resource used to manage [time series data](/reference/glossary/index.md#glossary-time-series-data). A data stream stores data across multiple backing [indices](/reference/glossary/index.md#glossary-index). See [Data streams](/manage-data/data-store/data-streams.md).
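
As a brief sketch of how a data stream is usually set up, assuming hypothetical template and stream names:

```console
PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "data_stream": {}
}

POST my-logs-web/_doc
{
  "@timestamp": "2025-01-01T00:00:00Z",
  "message": "example event"
}
```

Indexing the first document that matches the template's pattern creates the `my-logs-web` data stream and its first backing index; later writes go to the current backing index behind the single stream name.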
+

$$$glossary-data-tier$$$ data tier
: Collection of [nodes](/reference/glossary/index.md#glossary-node) with the same [data role](elasticsearch://docs/reference/elasticsearch/configuration-reference/node-settings.md) that typically share the same hardware profile. Data tiers include the [content tier](/reference/glossary/index.md#glossary-content-tier), [hot tier](/reference/glossary/index.md#glossary-hot-tier), [warm tier](/reference/glossary/index.md#glossary-warm-tier), [cold tier](/reference/glossary/index.md#glossary-cold-tier), and [frozen tier](/reference/glossary/index.md#glossary-frozen-tier). See [Data tiers](/manage-data/lifecycle/data-tiers.md).

$$$glossary-data-view$$$ data view
: An object that enables you to select the data that you want to use in {{kib}} and define the properties of the fields. A data view can point to one or more [data streams](/reference/glossary/index.md#glossary-data-stream), [indices](/reference/glossary/index.md#glossary-index), or [aliases](/reference/glossary/index.md#glossary-alias). For example, a data view can point to your log data from yesterday, or all indices that contain your data.

$$$glossary-ml-datafeed$$$ datafeed
: {{anomaly-jobs-cap}} can analyze data either as a one-off batch or continuously in real time. {{dfeeds-cap}} retrieve data from {{es}} for analysis.

$$$glossary-dataset$$$ dataset
: A collection of data that has the same structure. The name of a dataset typically signifies its source. See [data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md).

$$$glossary-delete-phase$$$ delete phase
: Last possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the delete phase, an [index](/reference/glossary/index.md#glossary-index) is no longer needed and can safely be deleted. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-deployment-template$$$ deployment template
: A reusable configuration of Elastic products and solutions used to create an {{ecloud}} [deployment](/reference/glossary/index.md#glossary-deployment).

$$$glossary-deployment$$$ deployment
: One or more products from the {{stack}} configured to work together and run on {{ecloud}}.

$$$glossary-detection-alert$$$ detection alert
: {{elastic-sec}} produced alerts. Detection alerts are never received from external systems. When a rule's conditions are met, {{elastic-sec}} writes a detection alert to an {{es}} alerts index.

$$$glossary-detection-rule$$$ detection rule
: Background tasks in {{elastic-sec}} that run periodically and produce alerts when suspicious activity is detected.

$$$glossary-ml-detector$$$ detector
: As part of the configuration information that is associated with {{anomaly-jobs}}, detectors define the type of analysis that needs to be done. They also specify which fields to analyze. You can have more than one detector in a job, which is more efficient than running multiple jobs against the same data.

$$$glossary-director$$$ director
: Manages the [ZooKeeper](/reference/glossary/index.md#glossary-zookeeper) datastore. This role is often shared with the [coordinator](/reference/glossary/index.md#glossary-coordinator), though in production deployments it can be separated.

$$$glossary-discover$$$ Discover
: Enables you to search and filter your data to zoom in on the information that you are interested in.
+

$$$glossary-distributed-tracing$$$ distributed tracing
: The end-to-end collection of performance data throughout your microservices architecture.

$$$glossary-document$$$ document
: JSON object containing data stored in {{es}}. See [Documents and indices](/manage-data/data-store/index-basics.md).

$$$glossary-drilldown$$$ drilldown
: A navigation path that retains context (time range and filters) from the source to the destination, so you can view the data from a new perspective. A dashboard that shows the overall status of multiple data centers might have a drilldown to a dashboard for a single data center. See [Drilldowns](/explore-analyze/dashboards.md).


## E [e-glos]

$$$glossary-edge$$$ edge
: A connection between nodes in a graph that shows that they are related. The line weight indicates the strength of the relationship. See [Graph](/explore-analyze/visualize/graph.md).

$$$glossary-elastic-agent$$$ {{agent}}
: A single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. See [{{agent}} overview](/reference/ingestion-tools/fleet/index.md).

$$$glossary-ece$$$ {{ece}} (ECE)
: The official enterprise offering to host and manage the {{stack}} yourself at scale. Can be installed on a public cloud platform, such as AWS, GCP, or Microsoft Azure, on your own private cloud, or on bare metal.

$$$glossary-eck$$$ {{eck}} (ECK)
: Built on the Kubernetes Operator pattern, ECK extends the basic Kubernetes orchestration capabilities to support the setup and management of Elastic products and solutions on Kubernetes.

$$$glossary-ecs$$$ Elastic Common Schema (ECS)
: A document schema for Elasticsearch, for use cases such as logging and metrics. ECS defines a common set of fields and their datatypes, and gives guidance on their correct usage. ECS is used to improve uniformity of event data coming from different sources.

$$$EKS$$$ Elastic Kubernetes Service (EKS)
: A managed Kubernetes service provided by Amazon Web Services (AWS).

$$$glossary-ems$$$ Elastic Maps Service (EMS)
: A service that provides basemap tiles, shape files, and other key features that are essential for visualizing geospatial data.

$$$glossary-epr$$$ Elastic Package Registry (EPR)
: A service hosted by Elastic that stores Elastic package definitions in a central location. See the [EPR GitHub repository](https://github.com/elastic/package-registry).

$$$glossary-elastic-security-indices$$$ {{elastic-sec}} indices
: Indices containing host and network source events (such as `packetbeat-*`, `log-*`, and `winlogbeat-*`). When you [create a new rule in {{elastic-sec}}](/solutions/security/detect-and-alert/create-detection-rule.md), the default index pattern corresponds to the values defined in the `securitySolution:defaultIndex` advanced setting.

$$$glossary-elastic-stack$$$ {{stack}}
: Also known as the *ELK Stack*, the {{stack}} is the combination of various Elastic products that integrate for a scalable and flexible way to manage your data.

$$$glossary-elasticsearch-service$$$ {{ess}}
: The official hosted {{stack}} offering, from the makers of {{es}}. Available as a software-as-a-service (SaaS) offering on different cloud platforms, such as AWS, GCP, and Microsoft Azure.
+

$$$glossary-element$$$ element
: A [Canvas](/reference/glossary/index.md#glossary-canvas) workpad object that displays an image, text, or visualization.

$$$glossary-endpoint-exception$$$ endpoint exception
: [Exceptions](/reference/glossary/index.md#glossary-exception) added to both rules and Endpoint agents on hosts. Endpoint exceptions can only be added when:

    * Endpoint agents are installed on the hosts.
    * The {{elastic-endpoint}} Security rule is activated.


$$$glossary-eql$$$ Event Query Language (EQL)
: [Query](/reference/glossary/index.md#glossary-query) language for event-based time series data, such as logs, metrics, and traces. EQL supports matching for event sequences. See [EQL](/explore-analyze/query-filter/languages/eql.md).

$$$glossary-event$$$ event
: A single unit of information, containing a timestamp plus additional data. An event arrives via an input, and is subsequently parsed, timestamped, and passed through the {{ls}} [pipeline](/reference/glossary/index.md#glossary-pipeline).

$$$glossary-exception$$$ exception
: In {{elastic-sec}}, exceptions are added to rules to prevent specific source event field values from generating alerts.

$$$glossary-external-alert$$$ external alert
: Alerts {{elastic-sec}} receives from external systems, such as Suricata.


## F [f-glos]

$$$glossary-feature-controls$$$ Feature Controls
: Enables administrators to customize which features are available in each [space](/reference/glossary/index.md#glossary-space). See [Feature Controls](/deploy-manage/manage-spaces.md#spaces-control-feature-visibility).

$$$glossary-feature-importance$$$ feature importance
: In supervised {{ml}} methods such as {{regression}} and {{classification}}, feature importance indicates the degree to which a specific feature affects a prediction. See [{{regression-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md#dfa-regression-feature-importance) and [{{classification-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md#dfa-classification-feature-importance).

$$$glossary-feature-influence$$$ feature influence
: In {{oldetection}}, feature influence scores indicate which features of a data point contribute to its outlier behavior. See [Feature influence](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md#dfa-feature-influence).

$$$glossary-feature-state$$$ feature state
: The indices and data streams used to store configurations, history, and other data for an Elastic feature, such as {{es}} security or {{kib}}. A feature state typically includes one or more [system indices or data streams](/reference/glossary/index.md#glossary-system-index). It may also include regular indices and data streams used by the feature. You can use [snapshots](/reference/glossary/index.md#glossary-snapshot) to back up and restore feature states. See [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state).

$$$glossary-field-reference$$$ field reference
: A reference to an event [field](/reference/glossary/index.md#glossary-field). This reference may appear in an output block or filter block in the {{ls}} config file. Field references are typically wrapped in square (`[]`) brackets, for example `[fieldname]`. If you are referring to a top-level field, you can omit the `[]` and simply use the field name.
To refer to a nested field, you specify the full path to that field: `[top-level field][nested field]`.

$$$glossary-field$$$ field
: 1. Key-value pair in a [document](/reference/glossary/index.md#glossary-document). See [Mapping](/manage-data/data-store/mapping.md).
2. In {{ls}}, this term refers to an [event](/reference/glossary/index.md#glossary-event) property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. {{ls}} uses the term "fields" to refer to these properties.


$$$glossary-filter-plugin$$$ filter plugin
: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that performs intermediary processing on an [event](/reference/glossary/index.md#glossary-event). Typically, filters act upon event data after it has been ingested via inputs, by mutating, enriching, and/or modifying the data according to configuration rules. Filters are often applied conditionally depending on the characteristics of the event. Popular filter plugins include grok, mutate, drop, clone, and geoip. Filter stages are optional.

$$$glossary-filter$$$ filter
: [Query](/reference/glossary/index.md#glossary-query) that does not score matching documents. See [filter context](/explore-analyze/query-filter/languages/querydsl.md).

$$$glossary-fleet-server$$$ {{fleet-server}}
: {{fleet-server}} is a component used to centrally manage {{agent}}s. It serves as a control plane for updating agent policies, collecting status information, and coordinating actions across agents.

$$$glossary-fleet$$$ Fleet
: Fleet provides a way to centrally manage {{agent}}s at scale. There are two parts: the Fleet app in {{kib}} provides a web-based UI to add and remotely manage agents, while the {{fleet-server}} provides the backend service that manages agents. See [{{agent}} overview](/reference/ingestion-tools/fleet/index.md).

$$$glossary-flush$$$ flush
: Writes data from the [transaction log](elasticsearch://docs/reference/elasticsearch/index-settings/translog.md) to disk for permanent storage.

$$$glossary-follower-index$$$ follower index
: Target [index](/reference/glossary/index.md#glossary-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). A follower index exists in a [local cluster](/reference/glossary/index.md#glossary-local-cluster) and replicates a [leader index](/reference/glossary/index.md#glossary-leader-index). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).

$$$glossary-force-merge$$$ force merge
: Manually triggers a [merge](/reference/glossary/index.md#glossary-merge) to reduce the number of [segments](/reference/glossary/index.md#glossary-segment) in an index's [shards](/reference/glossary/index.md#glossary-shard).

$$$glossary-frozen-phase$$$ frozen phase
: Fourth possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the frozen phase, an [index](/reference/glossary/index.md#glossary-index) is no longer updated and [queried](/reference/glossary/index.md#glossary-query) rarely. The information still needs to be searchable, but it's okay if those queries are extremely slow. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).
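
The hot, warm, cold, frozen, and delete phases described in the surrounding entries are wired together by an [index lifecycle policy](/reference/glossary/index.md#glossary-index-lifecycle-policy). A minimal sketch with only two phases; the policy name and thresholds are hypothetical:

```console
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Indices governed by this policy start in the hot phase, roll over when either threshold is met, and are deleted 90 days later.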
+

$$$glossary-frozen-tier$$$ frozen tier
: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed rarely and not normally updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).


## G [g-glos]

$$$GCS$$$GCS
: Google Cloud Storage. Block storage service provided by Google Cloud Platform (GCP).

$$$GKE$$$GKE
: [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/). Managed Kubernetes service provided by Google Cloud Platform (GCP).

$$$glossary-gem$$$ gem
: A self-contained package of code that's hosted on [RubyGems.org](https://rubygems.org). {{ls}} [plugins](/reference/glossary/index.md#glossary-plugin) are packaged as Ruby Gems. You can use the {{ls}} [plugin manager](/reference/glossary/index.md#glossary-plugin-manager) to manage {{ls}} gems.

$$$glossary-geo-point$$$ geo-point
: A field type in {{es}}. A geo-point field accepts latitude-longitude pairs for storing point locations. The latitude-longitude format can be from a string, geohash, array, well-known text, or object. See [geo-point](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-point.md).

$$$glossary-geo-shape$$$ geo-shape
: A field type in {{es}}. A geo-shape field accepts arbitrary geographic primitives, like polygons, lines, or rectangles (and more). You can populate a geo-shape field from GeoJSON or well-known text. See [geo-shape](elasticsearch://docs/reference/elasticsearch/mapping-reference/geo-shape.md).

$$$glossary-geojson$$$ GeoJSON
: A format for representing geospatial data. GeoJSON is also a file type, commonly used in the **Maps** application to upload a file of geospatial data. See [GeoJSON data](/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md).

$$$glossary-graph$$$ graph
: A data structure and visualization that shows interconnections between a set of entities. Each entity is represented by a node. Connections between nodes are represented by [edges](/reference/glossary/index.md#glossary-edge). See [Graph](/explore-analyze/visualize/graph.md).

$$$glossary-grok-debugger$$$ Grok Debugger
: A tool for building and debugging grok patterns. Grok is good for parsing syslog, Apache, and other webserver logs. See [Debugging grok expressions](/explore-analyze/query-filter/tools/grok-debugger.md).


## H [h-glos]

$$$glossary-hardware-profile$$$ hardware profile
: In {{ecloud}}, a built-in [deployment template](/reference/glossary/index.md#glossary-deployment-template) that supports a specific use case for the {{stack}}, such as a compute optimized deployment that provides high vCPU for search-heavy use cases.

$$$glossary-heat-map$$$ heat map
: A layer type in the **Maps** application. Heat maps cluster locations to show higher (or lower) densities. Heat maps describe a visualization with color-coded cells or regions to analyze patterns across multiple dimensions. See [Heat map layer](/explore-analyze/visualize/maps/heatmap-layer.md).

$$$glossary-hidden-index$$$ hidden data stream or index
: [Data stream](/reference/glossary/index.md#glossary-data-stream) or [index](/reference/glossary/index.md#glossary-index) excluded from most [index patterns](/reference/glossary/index.md#glossary-index-pattern) by default. See [Hidden data streams and indices](elasticsearch://docs/reference/elasticsearch/rest-apis/api-conventions.md#multi-hidden).
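
Hidden data streams and indices can still be matched explicitly. A small sketch, with a hypothetical index pattern:

```console
GET /my-index-*/_search?expand_wildcards=open,hidden
{
  "query": {
    "match_all": {}
  }
}
```

Without `expand_wildcards=open,hidden`, the same pattern would resolve only to regular open targets and skip hidden ones.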
+

$$$glossary-host-runner$$$ host runner (runner)
: In {{ece}}, a local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to the host exist and are able to run, and creates or recreates the containers if necessary.

$$$glossary-hot-phase$$$ hot phase
: First possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the hot phase, an [index](/reference/glossary/index.md#glossary-index) is actively updated and queried. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-hot-thread$$$ hot thread
: A Java thread that has high CPU usage and executes for a longer than normal period of time.

$$$glossary-hot-tier$$$ hot tier
: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that handle the [indexing](/reference/glossary/index.md#glossary-index) load for time series data, such as logs or metrics. This tier holds your most recent, most frequently accessed data. See [Data tiers](/manage-data/lifecycle/data-tiers.md).


## I [i-glos]

$$$glossary-id$$$ ID
: Identifier for a [document](/reference/glossary/index.md#glossary-document). Document IDs must be unique within an [index](/reference/glossary/index.md#glossary-index). See the [`_id` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-id-field.md).

$$$glossary-index-lifecycle-policy$$$ index lifecycle policy
: Specifies how an [index](/reference/glossary/index.md#glossary-index) moves between phases in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle) and what actions to perform during each phase. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-index-lifecycle$$$ index lifecycle
: Five phases an [index](/reference/glossary/index.md#glossary-index) can transition through: [hot](/reference/glossary/index.md#glossary-hot-phase), [warm](/reference/glossary/index.md#glossary-warm-phase), [cold](/reference/glossary/index.md#glossary-cold-phase), [frozen](/reference/glossary/index.md#glossary-frozen-phase), and [delete](/reference/glossary/index.md#glossary-delete-phase). See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-index-pattern$$$ index pattern
: In {{es}}, a string containing a wildcard (`*`) pattern that can match multiple [data streams](/reference/glossary/index.md#glossary-data-stream), [indices](/reference/glossary/index.md#glossary-index), or [aliases](/reference/glossary/index.md#glossary-alias). See [Multi-target syntax](elasticsearch://docs/reference/elasticsearch/rest-apis/api-conventions.md).

$$$glossary-index-template$$$ index template
: Automatically configures the [mappings](/reference/glossary/index.md#glossary-mapping), [index settings](elasticsearch://docs/reference/elasticsearch/index-settings/index.md), and [aliases](/reference/glossary/index.md#glossary-alias) of new [indices](/reference/glossary/index.md#glossary-index) that match its [index pattern](/reference/glossary/index.md#glossary-index-pattern). You can also use index templates to create [data streams](/reference/glossary/index.md#glossary-data-stream). See [Index templates](/manage-data/data-store/templates.md).

$$$glossary-index$$$ index
: 1.
Collection of JSON [documents](/reference/glossary/index.md#glossary-document). See [Documents and indices](/manage-data/data-store/index-basics.md).
2. To add one or more JSON documents to {{es}}. This process is called indexing.


$$$glossary-indexer$$$ indexer
: A {{ls}} instance that is tasked with interfacing with an {{es}} cluster in order to index [event](/reference/glossary/index.md#glossary-event) data.

$$$glossary-indicator-index$$$ indicator index
: Indices containing suspect field values in {{elastic-sec}}. [Indicator match rules](/solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule) use these indices to compare their field values with source event values contained in [{{elastic-sec}} indices](/reference/glossary/index.md#glossary-elastic-security-indices).

$$$glossary-inference-aggregation$$$ inference aggregation
: A pipeline aggregation that references a [trained model](/reference/glossary/index.md#glossary-trained-model) in an aggregation to infer on the results field of the parent bucket aggregation. It enables you to use supervised {{ml}} at search time.

$$$glossary-inference-processor$$$ inference processor
: A processor specified in an ingest pipeline that uses a [trained model](/reference/glossary/index.md#glossary-trained-model) to infer against the data that is being ingested in the pipeline.

$$$glossary-inference$$$ inference
: A {{ml}} feature that enables you to use supervised learning processes – like {{classification}}, {{regression}}, or [{{nlp}}](/reference/glossary/index.md#glossary-nlp) – in a continuous fashion by using [trained models](/reference/glossary/index.md#glossary-trained-model) against incoming data.

$$$glossary-influencer$$$ influencer
: Influencers are entities that might have contributed to an anomaly in a specific bucket in an {{anomaly-job}}. For more information, see [Influencers](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-influencers).

$$$glossary-ingestion$$$ ingestion
: The process of collecting and sending data from various data sources to {{es}}.

$$$glossary-input-plugin$$$ input plugin
: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that reads [event](/reference/glossary/index.md#glossary-event) data from a specific source. Input plugins are the first stage in the {{ls}} event processing [pipeline](/reference/glossary/index.md#glossary-pipeline). Popular input plugins include file, syslog, redis, and beats.

$$$glossary-instance-configuration$$$ instance configuration
: In {{ecloud}}, enables the instances of the {{stack}} to run on suitable hardware resources by filtering on [allocator tags](/reference/glossary/index.md#glossary-allocator-tag). Used as building blocks for [deployment templates](/reference/glossary/index.md#glossary-deployment-template).

$$$glossary-instance-type$$$ instance type
: In {{ecloud}}, categories for [instances](/reference/glossary/index.md#glossary-instance) representing an Elastic feature or cluster node types, such as `master`, `ml` or `data`.

$$$glossary-instance$$$ instance
: A product from the {{stack}} that is running in an {{ecloud}} deployment, such as an {{es}} node or a {{kib}} instance. When you choose more [availability zones](/reference/glossary/index.md#glossary-zone), the system automatically creates more instances for you.

$$$glossary-instrumentation$$$ instrumentation
: Extending application code to track where your application is spending time.
Code is considered instrumented when it collects and reports this performance data to APM.

$$$glossary-integration-policy$$$ integration policy
: An instance of an [integration](/reference/glossary/index.md#glossary-integration) that is configured for a specific use case, such as collecting logs from a specific file.

$$$glossary-integration$$$ integration
: An easy way for external systems to connect to the {{stack}}. Whether it's collecting data or protecting systems from security threats, integrations provide out-of-the-box assets to make setup easy, many with just a single click.


## J [j-glos]

$$$glossary-ml-job$$$$$$glossary-job$$$ job
: {{ml-cap}} jobs contain the configuration information and metadata necessary to perform an analytics task. There are two types: [{{anomaly-jobs}}](/reference/glossary/index.md#glossary-anomaly-detection-job) and [data frame analytics jobs](/reference/glossary/index.md#glossary-dataframe-job). See also [{{rollup-job}}](/reference/glossary/index.md#glossary-rollup-job).


## K [k-glos]

$$$k8s$$$K8s
: Shortened form (numeronym) of "Kubernetes" derived from replacing "ubernete" with "8".

$$$glossary-kibana-privilege$$$ {{kib}} privilege
: Enable administrators to grant users read-only, read-write, or no access to individual features within [spaces](/reference/glossary/index.md#glossary-space) in {{kib}}. See [{{kib}} privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).

$$$glossary-kql$$$ {{kib}} Query Language (KQL)
: The default language for querying in {{kib}}. KQL provides support for scripted fields. See [Kibana Query Language](/explore-analyze/query-filter/languages/kql.md).

$$$glossary-kibana$$$ {{kib}}
: A user interface that lets you visualize your {{es}} data and navigate the {{stack}}.


## L [l-glos]

$$$glossary-labs$$$ labs
: An in-progress or experimental feature in **Canvas** or **Dashboard** that you can try out and provide feedback on. When enabled, you'll see **Labs** in the toolbar.

$$$glossary-leader-index$$$ leader index
: Source [index](/reference/glossary/index.md#glossary-index) for [{{ccr}}](/reference/glossary/index.md#glossary-ccr). A leader index exists on a [remote cluster](/reference/glossary/index.md#glossary-remote-cluster) and is replicated to [follower indices](/reference/glossary/index.md#glossary-follower-index). See [{{ccr-cap}}](/deploy-manage/tools/cross-cluster-replication.md).

$$$glossary-lens$$$ Lens
: Enables you to build visualizations by dragging and dropping data fields. Lens makes smart visualization suggestions for your data, allowing you to switch between visualization types. See [Lens](/explore-analyze/dashboards.md).

$$$glossary-local-cluster$$$ local cluster
: [Cluster](/reference/glossary/index.md#glossary-cluster) that pulls data from a [remote cluster](/reference/glossary/index.md#glossary-remote-cluster) in [{{ccs}}](/reference/glossary/index.md#glossary-ccs) or [{{ccr}}](/reference/glossary/index.md#glossary-ccr). See [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md).

$$$glossary-lucene$$$ Lucene query syntax
: The query syntax for {{kib}}'s legacy query language. The Lucene query syntax is available under the options menu in the query bar and from [Advanced Settings](/reference/glossary/index.md#glossary-advanced-settings).
+


## M [m-glos]

$$$glossary-ml-nodes$$$ machine learning node
: A {{ml}} node is a node that has `xpack.ml.enabled` set to `true` and `ml` in `node.roles`. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. See [Machine learning nodes](elasticsearch://docs/reference/elasticsearch/configuration-reference/node-settings.md#ml-node).

$$$glossary-map$$$ map
: A representation of geographic data using symbols and labels. See [Maps](/explore-analyze/visualize/maps.md).

$$$glossary-mapping$$$ mapping
: Defines how a [document](/reference/glossary/index.md#glossary-document), its [fields](/reference/glossary/index.md#glossary-field), and its metadata are stored in {{es}}. Similar to a schema definition. See [Mapping](/manage-data/data-store/mapping.md).

$$$glossary-master-node$$$ master node
: Handles write requests for the cluster and publishes changes to other nodes in an ordered fashion. Each cluster has a single master node, which is chosen automatically by the cluster and is replaced if the current master node fails. Also see [node](/reference/glossary/index.md#glossary-node).

$$$glossary-merge$$$ merge
: Process of combining a [shard](/reference/glossary/index.md#glossary-shard)'s smaller Lucene [segments](/reference/glossary/index.md#glossary-segment) into a larger one. {{es}} manages merges automatically.

$$$glossary-message-broker$$$ message broker
: Also referred to as a *message buffer* or *message queue*, a message broker is external software (such as Redis, Kafka, or RabbitMQ) that stores messages from the {{ls}} shipper instance as an intermediate store, waiting to be processed by the {{ls}} indexer instance.

$$$glossary-metric-aggregation$$$ metric aggregation
: An aggregation that calculates and tracks metrics for a set of documents.

$$$glossary-module$$$ module
: Out-of-the-box configurations for common data sources to simplify the collection, parsing, and visualization of logs and metrics.

$$$glossary-monitor$$$ monitor
: A network endpoint which is monitored to track the performance and availability of applications and services.

$$$glossary-multi-field$$$ multi-field
: A [field](/reference/glossary/index.md#glossary-field) that's [mapped](/reference/glossary/index.md#glossary-mapping) in multiple ways. See the [`fields` mapping parameter](elasticsearch://docs/reference/elasticsearch/mapping-reference/multi-fields.md).

$$$glossary-multifactor$$$ multifactor authentication (MFA)
: A security process that requires you to provide two or more verification methods to gain access to web-based user interfaces.


## N [n-glos]

$$$glossary-namespace$$$ namespace
: A user-configurable arbitrary data grouping, such as an environment (`dev`, `prod`, or `qa`), a team, or a strategic business unit.

$$$glossary-nlp$$$ natural language processing (NLP)
: A {{ml}} feature that enables you to perform operations such as language identification, named entity recognition (NER), text classification, or text embedding. See [NLP overview](/explore-analyze/machine-learning/nlp/ml-nlp-overview.md).

$$$glossary-no-op$$$ no-op
: In {{ecloud}}, the application of a rolling update on your deployment without actually applying any configuration changes. This type of update can be useful to resolve certain health warnings.

$$$glossary-node$$$ node
: 1. A single {{es}} server. One or more nodes can form a [cluster](/reference/glossary/index.md#glossary-cluster).
See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).
2. In {{eck}}, it can refer to either an [Elasticsearch Node](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.md) or a [Kubernetes Node](https://kubernetes.io/docs/concepts/architecture/nodes/) depending on the context. ECK maps an Elasticsearch node to a Kubernetes Pod, which can get scheduled onto any available Kubernetes node that can satisfy the [resource requirements](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md) and [node constraints](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) defined in the [pod template](/deploy-manage/deploy/cloud-on-k8s/customize-pods.md).

$$$NodeSet$$$NodeSet
: A set of Elasticsearch nodes that share the same Elasticsearch configuration and a Kubernetes Pod template. Multiple NodeSets can be defined in the Elasticsearch CRD to achieve a cluster topology consisting of groups of Elasticsearch nodes with different node roles, resource requirements, and hardware configurations (Kubernetes node constraints).

## O [o-glos]

$$$glossary-observability$$$ Observability
: Unifying your logs, metrics, uptime data, and application traces to provide granular insights and context into the behavior of services running in your environments.

$$$OpenShift$$$OpenShift
: A Kubernetes [platform](https://www.openshift.com/) by RedHat.

$$$Operator$$$operator
: A design pattern in Kubernetes for [managing custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/). {{eck}} implements the operator pattern to manage Elasticsearch, Kibana, and APM Server resources on Kubernetes.

$$$glossary-output-plugin$$$ output plugin
: A {{ls}} [plugin](/reference/glossary/index.md#glossary-plugin) that writes [event](/reference/glossary/index.md#glossary-event) data to a specific destination. Outputs are the final stage in the event [pipeline](/reference/glossary/index.md#glossary-pipeline). Popular output plugins include elasticsearch, file, graphite, and statsd.


## P [p-glos]

$$$glossary-painless-lab$$$ Painless Lab
: An interactive code editor that lets you test and debug Painless scripts in real time. See [Painless Lab](/explore-analyze/scripting/painless-lab.md).

$$$glossary-panel$$$ panel
: A [dashboard](/reference/glossary/index.md#glossary-dashboard) component that contains a query element or visualization, such as a chart, table, or list.

$$$PDB$$$PDB
: A [pod disruption budget](https://kubernetes.io/docs/reference/glossary/?all=true#term-pod-disruption-budget) in {{eck}}.

$$$glossary-pipeline$$$ pipeline
: A term used to describe the flow of [events](/reference/glossary/index.md#glossary-event) through the {{ls}} workflow. A pipeline typically consists of a series of input, filter, and output stages. [Input](/reference/glossary/index.md#glossary-input-plugin) stages get data from a source and generate events; [filter](/reference/glossary/index.md#glossary-filter-plugin) stages, which are optional, modify the event data; and [output](/reference/glossary/index.md#glossary-output-plugin) stages write the data to a destination. Inputs and outputs support [codecs](/reference/glossary/index.md#glossary-codec-plugin) that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
+

$$$glossary-plan$$$ plan
: Specifies the configuration and topology of an {{es}} or {{kib}} cluster, such as capacity, availability, and {{es}} version. When changing a plan, the [constructor](/reference/glossary/index.md#glossary-constructor) determines how to transform the existing cluster into the pending plan.

$$$glossary-plugin-manager$$$ plugin manager
: Accessed via the `bin/logstash-plugin` script, the plugin manager enables you to manage the lifecycle of [plugins](/reference/glossary/index.md#glossary-plugin) in your {{ls}} deployment. You can install, remove, and upgrade plugins by using the plugin manager Command Line Interface (CLI).

$$$glossary-plugin$$$ plugin
: A self-contained software package that implements one of the stages in the {{ls}} event processing [pipeline](/reference/glossary/index.md#glossary-pipeline). The list of available plugins includes [input plugins](/reference/glossary/index.md#glossary-input-plugin), [output plugins](/reference/glossary/index.md#glossary-output-plugin), [codec plugins](/reference/glossary/index.md#glossary-codec-plugin), and [filter plugins](/reference/glossary/index.md#glossary-filter-plugin). The plugins are implemented as Ruby [gems](/reference/glossary/index.md#glossary-gem) and hosted on [RubyGems.org](https://rubygems.org). You define the stages of an event processing [pipeline](/reference/glossary/index.md#glossary-pipeline) by configuring plugins.

$$$glossary-primary-shard$$$ primary shard
: Lucene instance containing some or all data for an [index](/reference/glossary/index.md#glossary-index). When you index a [document](/reference/glossary/index.md#glossary-document), {{es}} adds the document to primary shards before [replica shards](/reference/glossary/index.md#glossary-replica-shard). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).

$$$glossary-proxy$$$ proxy
: A highly available, TLS-enabled proxy layer that routes user requests, mapping cluster IDs that are passed in request URLs for the container to the cluster nodes handling the user requests.

$$$PVC$$$PVC
: A [persistent volume claim](https://kubernetes.io/docs/reference/glossary/?all=true#term-persistent-volume-claim) in {{eck}}.

## Q [q-glos]

$$$QoS$$$QoS
: Quality of service in {{eck}}. When a Kubernetes cluster is under heavy load, the Kubernetes scheduler makes pod eviction decisions based on the [QoS class of individual pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/). [*Manage compute resources*](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md) explains how to define QoS classes for Elasticsearch, Kibana, and APM Server pods.

$$$glossary-query-profiler$$$ Query Profiler
: A tool that enables you to inspect and analyze search queries to diagnose and debug poorly performing queries. See [Query Profiler](/explore-analyze/query-filter/tools/search-profiler.md).

$$$glossary-query$$$ query
: Request for information about your data. You can think of a query as a question, written in a way {{es}} understands. See [Search your data](/solutions/search/querying-for-search.md).


## R [r-glos]

$$$RBAC$$$RBAC
: Role-based Access Control. In {{eck}}, it is a security mechanism in Kubernetes where access to cluster resources is restricted to principals having the appropriate role.
Check [https://kubernetes.io/docs/reference/access-authn-authz/rbac/](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for more information.

$$$glossary-real-user-monitoring$$$ Real user monitoring (RUM)
: Performance monitoring, metrics, and error tracking of web applications.

$$$glossary-recovery$$$ recovery
: Process of syncing a [replica shard](/reference/glossary/index.md#glossary-replica-shard) from a [primary shard](/reference/glossary/index.md#glossary-primary-shard). Upon completion, the replica shard is available for searches.

$$$glossary-reindex$$$ reindex
: Copies documents from a source to a destination. The source and destination can be a [data stream](/reference/glossary/index.md#glossary-data-stream), [index](/reference/glossary/index.md#glossary-index), or [alias](/reference/glossary/index.md#glossary-alias).

$$$glossary-remote-cluster$$$ remote cluster
: A separate [cluster](/reference/glossary/index.md#glossary-cluster), often in a different data center or locale, that contains [indices](/reference/glossary/index.md#glossary-index) that can be replicated or searched by the [local cluster](/reference/glossary/index.md#glossary-local-cluster). The connection to a remote cluster is unidirectional. See [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md).

$$$glossary-replica-shard$$$ replica shard
: Copy of a [primary shard](/reference/glossary/index.md#glossary-primary-shard). Replica shards can improve search performance and resiliency by distributing data across multiple [nodes](/reference/glossary/index.md#glossary-node). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).

$$$glossary-roles-token$$$ roles token
: Enables a host to join an existing {{ece}} installation and grants permission to hosts to hold certain roles, such as the [allocator](/reference/glossary/index.md#glossary-allocator) role. Used when installing {{ece}} on additional hosts, a roles token helps secure {{ece}} by making sure that only authorized hosts become part of the installation.

$$$glossary-rollover$$$ rollover
: Creates a new write index when the current one reaches a certain size, number of docs, or age. A rollover can target a [data stream](/reference/glossary/index.md#glossary-data-stream) or an [alias](/reference/glossary/index.md#glossary-alias) with a write index.

$$$glossary-rollup-index$$$ rollup index
: Special type of [index](/reference/glossary/index.md#glossary-index) for storing historical data at reduced granularity. Documents are summarized and indexed into a rollup index by a [rollup job](/reference/glossary/index.md#glossary-rollup-job). See [Rolling up historical data](/manage-data/lifecycle/rollup.md).

$$$glossary-rollup-job$$$ {{rollup-job}}
: Background task that runs continuously to summarize documents in an [index](/reference/glossary/index.md#glossary-index) and index the summaries into a separate rollup index. The job configuration controls what data is rolled up and how often. See [Rolling up historical data](/manage-data/lifecycle/rollup.md).

$$$glossary-rollup$$$ rollup
: Summarizes high-granularity data into a more compressed format to maintain access to historical data in a cost-effective way. See [Roll up your data](/manage-data/lifecycle/rollup.md).

$$$glossary-routing$$$ routing
: Process of sending and retrieving data from a specific [primary shard](/reference/glossary/index.md#glossary-primary-shard).
{{es}} uses a hashed routing value to choose this shard. You can provide a routing value in [indexing](/reference/glossary/index.md#glossary-index) and search requests to take advantage of caching. See the [`_routing` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-routing-field.md).

$$$glossary-rule$$$ rule
: A set of [conditions](/reference/glossary/index.md#glossary-condition), schedules, and [actions](/reference/glossary/index.md#glossary-action) that enable notifications. See [{{rules-ui}}](/reference/glossary/index.md#glossary-rules).

$$$glossary-rules$$$ Rules
: A comprehensive view of all your alerting rules. Enables you to access and manage rules for all {{kib}} apps from one place. See [{{rules-ui}}](/explore-analyze/alerts-cases.md).

$$$glossary-runner$$$ runner
: A local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to it exist and are able to run, and creates or recreates the containers if necessary.

$$$glossary-runtime-fields$$$ runtime field
: [Field](/reference/glossary/index.md#glossary-field) that is evaluated at query time. You access runtime fields from the search API like any other field, and {{es}} sees runtime fields no differently. See [Runtime fields](/manage-data/data-store/mapping/runtime-fields.md) and the brief search example below.


## S [s-glos]

$$$glossary-saved-object$$$ saved object
: A representation of a dashboard, visualization, map, data view, or Canvas workpad that can be stored and reloaded.

$$$glossary-saved-search$$$ saved search
: The query text, filters, and time filter that make up a search, saved for later retrieval and reuse.

$$$glossary-scripted-field$$$ scripted field
: A field that computes data on the fly from the data in {{es}} indices. Scripted field data is shown in Discover and used in visualizations.

$$$glossary-search-session$$$ search session
: A group of one or more queries that are executed asynchronously. The results of the session are stored for a period of time, so you can recall the query. Search sessions are user specific.

$$$glossary-search-template$$$ search template
: A stored search you can run with different variables. See [Search templates](/solutions/search/search-templates.md).

$$$glossary-searchable-snapshot-index$$$ searchable snapshot index
: [Index](/reference/glossary/index.md#glossary-index) whose data is stored in a [snapshot](/reference/glossary/index.md#glossary-snapshot). Searchable snapshot indices do not need [replica shards](/reference/glossary/index.md#glossary-replica-shard) for resilience, since their data is reliably stored outside the cluster. See [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md).

$$$glossary-searchable-snapshot$$$ searchable snapshot
: [Snapshot](/reference/glossary/index.md#glossary-snapshot) of an [index](/reference/glossary/index.md#glossary-index) mounted as a [searchable snapshot index](/reference/glossary/index.md#glossary-searchable-snapshot-index). You can search this index like a regular index. See [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md).

$$$glossary-segment$$$ segment
: Data file in a [shard](/reference/glossary/index.md#glossary-shard)'s Lucene instance. {{es}} manages Lucene segments automatically.

$$$glossary-services-forwarder$$$ services forwarder
: Routes data internally in an {{ece}} installation.
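
As referenced in the *runtime field* entry above, a minimal search sketch that defines one; the index, fields, and script are hypothetical:

```console
GET my-index/_search
{
  "runtime_mappings": {
    "price_with_tax": {
      "type": "double",
      "script": {
        "source": "emit(doc['price'].value * 1.2)"
      }
    }
  },
  "query": {
    "match": { "message": "error" }
  },
  "fields": ["price_with_tax"]
}
```

The `price_with_tax` value is computed at query time from the mapped `price` field and returned through the `fields` option; nothing extra is stored in the index.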
+

$$$glossary-shard$$$ shard
: Lucene instance containing some or all data for an [index](/reference/glossary/index.md#glossary-index). {{es}} automatically creates and manages these Lucene instances. There are two types of shards: [primary](/reference/glossary/index.md#glossary-primary-shard) and [replica](/reference/glossary/index.md#glossary-replica-shard). See [Clusters, nodes, and shards](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md).

$$$glossary-shareable$$$ shareable
: A Canvas workpad that can be embedded on any webpage. Shareables enable you to display Canvas visualizations on internal wiki pages or public websites.

$$$glossary-shipper$$$ shipper
: An instance of {{ls}} that sends events to another instance of {{ls}}, or some other application.

$$$glossary-shrink$$$ shrink
: Reduces the number of [primary shards](/reference/glossary/index.md#glossary-primary-shard) in an index.

$$$glossary-snapshot-lifecycle-policy$$$ snapshot lifecycle policy
: Specifies how frequently to perform automatic backups of a cluster and how long to retain the resulting [snapshots](/reference/glossary/index.md#glossary-snapshot). See [Automate snapshots with {{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) and the brief example below.

$$$glossary-snapshot-repository$$$ snapshot repository
: Location where [snapshots](/reference/glossary/index.md#glossary-snapshot) are stored. A snapshot repository can be a shared filesystem or a remote repository, such as Azure or Google Cloud Storage. See [Snapshot and restore](/deploy-manage/tools/snapshot-and-restore.md).

$$$glossary-snapshot$$$ snapshot
: Backup taken of a running [cluster](/reference/glossary/index.md#glossary-cluster). You can take snapshots of the entire cluster or only specific [data streams](/reference/glossary/index.md#glossary-data-stream) and [indices](/reference/glossary/index.md#glossary-index). See [Snapshot and restore](/deploy-manage/tools/snapshot-and-restore.md).

$$$glossary-solution$$$ solution
: In {{ecloud}}, deployments with specialized [templates](/reference/glossary/index.md#glossary-deployment-template) that are pre-configured with sensible defaults and settings for common use cases.

$$$glossary-source_field$$$ source field
: Original JSON object provided during [indexing](/reference/glossary/index.md#glossary-index). See the [`_source` field](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-source-field.md).

$$$glossary-space$$$ space
: A place for organizing [dashboards](/reference/glossary/index.md#glossary-dashboard), [visualizations](/reference/glossary/index.md#glossary-visualization), and other [saved objects](/reference/glossary/index.md#glossary-saved-object) by category. For example, you might have different spaces for each team, use case, or individual. See [Spaces](/deploy-manage/manage-spaces.md).

$$$glossary-span$$$ span
: Information about the execution of a specific code path. [Spans](/solutions/observability/apps/spans.md) measure from the start to the end of an activity and can have a parent/child relationship with other spans.

$$$glossary-split$$$ split
: Adds more [primary shards](/reference/glossary/index.md#glossary-primary-shard) to an [index](/reference/glossary/index.md#glossary-index).

$$$glossary-stack-alert$$$ stack rule
: The general purpose rule types {{kib}} provides out of the box. Refer to [Stack rules](/explore-analyze/alerts-cases/alerts/rule-types.md#stack-rules).
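
The *snapshot repository* and *snapshot lifecycle policy* entries above fit together as follows; the repository location, policy name, and schedule are hypothetical, and the `fs` repository type assumes the location is registered under `path.repo` on every node:

```console
PUT _snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups"
  }
}

PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "retention": {
    "expire_after": "30d"
  }
}
```

The policy takes a snapshot every night at 1:30 a.m. and prunes snapshots older than 30 days.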
+

$$$glossary-standalone$$$ standalone
: This mode allows manual configuration and management of {{agent}}s locally on the systems where they are installed. See [Install standalone {{agent}}s](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md).

$$$glossary-stunnel$$$ stunnel
: Securely tunnels all traffic in an {{ece}} installation.

$$$glossary-system-index$$$ system index
: [Index](/reference/glossary/index.md#glossary-index) containing configurations and other data used internally by the {{stack}}. System index names start with a dot (`.`), such as `.security`. Do not directly access or change system indices.


## T [t-glos]

$$$glossary-tag$$$ tag
: A keyword or label that you assign to {{kib}} saved objects, such as dashboards and visualizations, so you can classify them in a way that is meaningful to you. Tags make it easier for you to manage your content. See [Tags](/explore-analyze/find-and-organize/tags.md).

$$$glossary-term-join$$$ term join
: A shared key that combines vector features with the results of an {{es}} terms aggregation. Term joins augment vector features with properties for data-driven styling and rich tooltip content in maps.

$$$glossary-term$$$ term
: See [token](/reference/glossary/index.md#glossary-token).

$$$glossary-text$$$ text
: Unstructured content, such as a product description or log message. You typically [analyze](/reference/glossary/index.md#glossary-analysis) text for better search. See [Text analysis](/manage-data/data-store/text-analysis.md).

$$$glossary-time-filter$$$ time filter
: A {{kib}} control that constrains the search results to a particular time period.

$$$glossary-time-series-data-stream$$$ time series data stream
: A type of [data stream](/reference/glossary/index.md#glossary-data-stream) optimized for indexing metrics [time series data](/reference/glossary/index.md#glossary-time-series-data). A TSDS allows for reduced storage size and for a sequence of metrics data points to be considered efficiently as a whole. See [Time series data stream](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md).

$$$glossary-time-series-data$$$ time series data
: A series of data points, such as logs, metrics, and events, that is indexed in time order. Time series data can be indexed in a [data stream](/reference/glossary/index.md#glossary-data-stream), where it can be accessed as a single named resource with the data stored across multiple backing indices. A [time series data stream](/reference/glossary/index.md#glossary-time-series-data-stream) is optimized for indexing metrics data.

$$$glossary-timelion$$$ Timelion
: A tool for building a time series visualization that analyzes data in time order. See [Timelion](/explore-analyze/dashboards.md).

$$$glossary-token$$$ token
: A chunk of unstructured [text](/reference/glossary/index.md#glossary-text) that's been optimized for search. In most cases, tokens are individual words. Tokens are also called terms. See [Text analysis](/manage-data/data-store/text-analysis.md).

$$$glossary-tokenization$$$ tokenization
: Process of breaking unstructured text down into smaller, searchable chunks called [tokens](/reference/glossary/index.md#glossary-token). See [Tokenization](/manage-data/data-store/text-analysis.md#tokenization).

$$$glossary-trace$$$ trace
: Defines the amount of time an application spends on a request. Traces are made up of a collection of transactions and spans that have a common root.

$$$glossary-tracks$$$ tracks
: A layer type in the **Maps** application. This layer converts a series of point locations into a line, often representing a path or route.

$$$glossary-trained-model$$$ trained model
: A {{ml}} model that is trained and tested against a labeled data set and can be referenced in an ingest pipeline or in a pipeline aggregation to perform {{classification}}, {{reganalysis}}, or [{{nlp}}](/reference/glossary/index.md#glossary-nlp) on new data.

$$$glossary-transaction$$$ transaction
: A special kind of [span](/reference/glossary/index.md#glossary-span) that has additional attributes associated with it. [Transactions](/solutions/observability/apps/transactions.md) describe an event captured by an Elastic [APM agent](/reference/glossary/index.md#glossary-apm-agent) instrumenting a service.

$$$glossary-tsvb$$$ TSVB
: A time series data visualizer that allows you to combine an infinite number of aggregations to display complex data. See [TSVB](/explore-analyze/dashboards.md).


## U [u-glos]

$$$glossary-upgrade-assistant$$$ Upgrade Assistant
: A tool that helps you prepare for an upgrade to the next major version of {{es}}. The assistant identifies the deprecated settings in your cluster and indices and guides you through resolving issues, including reindexing. See [Upgrade Assistant](/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md).

$$$glossary-uptime$$$ Uptime
: A metric of system reliability used to monitor the status of network endpoints via HTTP/S, TCP, and ICMP.


## V [v-glos]

$$$glossary-vcpu$$$ vCPU
: vCPU stands for virtual central processing unit. In {{ecloud}}, vCPUs are virtual compute units assigned to your nodes. The value depends on the size and hardware profile of the instance. The instance may be eligible for vCPU boosting depending on its size.

$$$glossary-vector$$$ vector data
: Points, lines, and polygons used to represent a map.

$$$glossary-vega$$$ Vega
: A declarative language used to create interactive visualizations. See [Vega](/explore-analyze/dashboards.md).

$$$glossary-visualization$$$ visualization
: A graphical representation of query results in {{kib}} (e.g., a histogram, line graph, pie chart, or heat map).


## W [w-glos]

$$$glossary-warm-phase$$$ warm phase
: Second possible phase in the [index lifecycle](/reference/glossary/index.md#glossary-index-lifecycle). In the warm phase, an [index](/reference/glossary/index.md#glossary-index) is generally optimized for search and no longer updated. See [Index lifecycle](/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md).

$$$glossary-warm-tier$$$ warm tier
: [Data tier](/reference/glossary/index.md#glossary-data-tier) that contains [nodes](/reference/glossary/index.md#glossary-node) that hold time series data that is accessed less frequently and rarely needs to be updated. See [Data tiers](/manage-data/lifecycle/data-tiers.md).

$$$glossary-watcher$$$ Watcher
: The original suite of alerting features. See [Watcher](/explore-analyze/alerts-cases/watcher.md).

$$$glossary-wms$$$ Web Map Service (WMS)
: A layer type in the **Maps** application. Add a WMS source to provide authoritative geographic context to your map. See the [OpenGIS Web Map Service](https://www.ogc.org/standards/wms).

$$$glossary-worker$$$ worker
: The filter thread model used by {{ls}}, where each worker receives an [event](/reference/glossary/index.md#glossary-event) and applies all filters, in order, before emitting the event to the output queue. This allows scalability across CPUs because many filters are CPU intensive.

$$$glossary-workpad$$$ workpad
: A workspace where you build presentations of your live data in [Canvas](/reference/glossary/index.md#glossary-canvas). See [Create a workpad](/explore-analyze/visualize/canvas.md).


## X [x-glos]


## Y [y-glos]


## Z [z-glos]

$$$glossary-zookeeper$$$ ZooKeeper
: A coordination service for distributed systems used by {{ece}} to store the state of the installation. Responsible for discovery of hosts, resource allocation, leader election after failure, and high-priority notifications.
diff --git a/reference/ingestion-tools.md b/reference/ingestion-tools.md
new file mode 100644
index 0000000000..03879ee6be
--- /dev/null
+++ b/reference/ingestion-tools.md
@@ -0,0 +1,16 @@
# Ingestion tools

% TO-DO: Add links to "What are Ingestion tools?"%

This section contains reference information for ingestion tools, including:

* Fleet and agent
* APM
* Beats
* Enrich processor reference
* Logstash
* Elastic Serverless forwarder for AWS
* Search connectors
* ES Hadoop

This document is intended for programmers who want to interact with the ingestion tools and doesn't contain information about the API libraries.
\ No newline at end of file
diff --git a/reference/ingestion-tools/apm/apm-agents.md b/reference/ingestion-tools/apm/apm-agents.md
new file mode 100644
index 0000000000..2a793b03d0
--- /dev/null
+++ b/reference/ingestion-tools/apm/apm-agents.md
@@ -0,0 +1,16 @@
# APM agents

% TO-DO: Add links to "APM basics"%

This section contains reference information for APM Agents features, including:

* Android
* .NET
* Go
* Java
* Node.js
* PHP
* Python
* Ruby
* RUM JavaScript
* iOS
\ No newline at end of file
diff --git a/reference/ingestion-tools/cloud-enterprise/apm-settings.md b/reference/ingestion-tools/cloud-enterprise/apm-settings.md
new file mode 100644
index 0000000000..25e9bf76d6
--- /dev/null
+++ b/reference/ingestion-tools/cloud-enterprise/apm-settings.md
@@ -0,0 +1,97 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-apm-settings.html#ece_logging_settings_legacy
---

# APM settings for Elastic Cloud Enterprise [ece-manage-apm-settings]

Starting in {{stack}} version 8.0, how you change APM settings and the settings that are available to you depend on how you spin up Elastic APM. There are two modes:

{{fleet}}-managed APM integration
: New deployments created in {{stack}} version 8.0 and later will be managed by {{fleet}}.

    * This mode requires SSL/TLS configuration. Check [TLS configuration for {{fleet}}-managed mode](#ece-edit-apm-fleet-tls) for details.
    * Check [APM integration input settings](/solutions/observability/apps/configure-apm-server.md) for all other Elastic APM configuration options in this mode.


Standalone APM Server (legacy)
: Deployments created prior to {{stack}} version 8.0 are in legacy mode. Upgrading to or past {{stack}} 8.0 does not remove you from legacy mode.

    Check [Edit standalone APM settings (legacy)](#ece-edit-apm-standalone-settings-ece) for information on how to configure Elastic APM in this mode.


To learn more about the differences between these modes, or to switch from Standalone APM Server (legacy) mode to {{fleet}}-managed, check [Switch to the Elastic APM integration](/solutions/observability/apps/switch-to-elastic-apm-integration.md).


## TLS configuration for {{fleet}}-managed mode [ece-edit-apm-fleet-tls]

Users running {{stack}} versions 7.16 or 7.17 need to manually configure TLS. This step is not necessary for {{stack}} versions ≥ 8.0.

Pick one of the following options:

1. Upload and configure a publicly signed {{es}} TLS certificate. Check [Encrypt traffic in clusters with a self-managed Fleet Server](/reference/ingestion-tools/fleet/secure-connections.md) for details.
2. Change the {{es}} hosts where {{agent}}s send data from the default public URL to the internal URL. In {{kib}}, navigate to **Fleet** and select the **Elastic Cloud agent policy**. Click **Fleet settings** and update the {{es}} hosts URL. For example, if the current URL is `https://123abc.us-central1.gcp.foundit.no:9244`, change it to `http://123abc.containerhost:9244`.


## Edit standalone APM settings (legacy) [ece-edit-apm-standalone-settings-ece]

Elastic Cloud Enterprise supports most of the legacy APM settings. Through a YAML editor in the console, you can append your APM Server properties to the `apm-server.yml` file. Your changes to the configuration file are read on startup.

::::{important}
Be aware that some settings could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest APM settings and syntax](/solutions/observability/apps/configure-apm-server.md).
::::


To change APM settings:

1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md).
2. On the **Deployments** page, select your deployment.

    Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.

3. From your deployment menu, go to the **Edit** page.
4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.)
5. Update the user settings.
6. Select **Save changes**.

::::{note}
If a setting is not supported by Elastic Cloud Enterprise, you get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported.
::::



## Example: Enable RUM and increase the rate limit (legacy) [ece_example_enable_rum_and_increase_the_rate_limit_legacy]

When capturing the user interaction with clients with real user monitoring (RUM), particularly for situations with concurrent clients, you can increase the number of times each IP address can send a request to the RUM endpoint. Version 6.5 includes an additional setting for the LRU cache.

For APM Server with RUM agent version 2.x or 3.x:

```sh
apm-server:
  rum:
    enabled: true
    event_rate:
      limit: 3000
      lru_size: 5000
```


## Example: Disable RUM (legacy) [ece_example_disable_rum_legacy]

If you know that you won’t be tracking RUM data, you can disable the endpoint proactively.

```sh
apm-server:
  rum:
    enabled: false
```


## Example: Adjust the event limits configuration (legacy) [ece_example_adjust_the_event_limits_configuration_legacy]

If the size of the HTTP request frequently exceeds the maximum, you might need to change the limit on the APM Server and adjust the relevant settings in the agent.

```sh
apm-server:
  max_event_size: 407200
```
diff --git a/reference/ingestion-tools/cloud/apm-settings.md b/reference/ingestion-tools/cloud/apm-settings.md
new file mode 100644
index 0000000000..91c872df5a
--- /dev/null
+++ b/reference/ingestion-tools/cloud/apm-settings.md
@@ -0,0 +1,374 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/cloud/current/ec-manage-apm-settings.html#ec-apm-settings
---

# APM settings for Elastic Cloud [ec-manage-apm-settings]

Change how Elastic APM runs by providing your own user settings. Starting in {{stack}} version 8.0, how you change APM settings and the settings that are available to you depend on how you spin up Elastic APM. There are two modes:

{{fleet}}-managed APM integration
: New deployments created in {{stack}} version 8.0 and later will be managed by {{fleet}}.

    Check [APM configuration reference](/solutions/observability/apps/configure-apm-server.md) for information on how to configure Elastic APM in this mode.


Standalone APM Server (legacy)
: Deployments created prior to {{stack}} version 8.0 are in legacy mode. Upgrading to or past {{stack}} 8.0 will not remove you from legacy mode.

    Check [Edit standalone APM settings (legacy)](#ec-edit-apm-standalone-settings) and [Supported standalone APM settings (legacy)](#ec-apm-settings) for information on how to configure Elastic APM in this mode.


To learn more about the differences between these modes, or to switch from Standalone APM Server (legacy) mode to {{fleet}}-managed, check [Switch to the Elastic APM integration](/solutions/observability/apps/switch-to-elastic-apm-integration.md).

## Edit standalone APM settings (legacy) [ec-edit-apm-standalone-settings]

User settings are appended to the `apm-server.yml` configuration file for your instance and provide custom configuration options.

To add user settings:

1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments.

    On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.

3. From your deployment menu, go to the **Edit** page.
4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.)
5. Update the user settings.
6. Select **Save changes**.

::::{note}
If a setting is not supported by Elasticsearch Service, you will get an error message when you try to save.
::::



## Supported standalone APM settings (legacy) [ec-apm-settings]

Elasticsearch Service supports the following settings when running APM in standalone mode (legacy).

::::{tip}
Some settings that could break your cluster if set incorrectly are blocklisted. The following settings are generally safe in cloud environments.
For detailed information about APM settings, check the [APM documentation](/solutions/observability/apps/configure-apm-server.md).
::::


### Version 8.0+ [ec_version_8_0_3]

This stack version removes support for some previously supported settings. These are all of the supported settings for this version:

`apm-server.agent.config.cache.expiration`
: When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`.

`apm-server.aggregation.transactions.*`
: This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions.

The following `apm-server.auth.anonymous.*` settings can be configured to restrict anonymous access to specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching `allow_agent` and services matching `allow_service` are allowed. See below for details on default values for these.

`apm-server.auth.anonymous.allow_agent`
: Allow anonymous access only for specified agents.

`apm-server.auth.anonymous.allow_service`
: Allow anonymous access only for specified service names. By default, all service names are allowed. This replaces the config option `apm-server.rum.allow_service_names`, previously available for `7.x` deployments.

`apm-server.auth.anonymous.rate_limit.event_limit`
: Defines the maximum number of events allowed per IP per second. Defaults to 300. The overall maximum event throughput for anonymous access is (event_limit * ip_limit). This replaces the config option `apm-server.rum.event_rate.limit`, previously available for `7.x` deployments.

`apm-server.auth.anonymous.rate_limit.ip_limit`
: Rate limiting is defined per unique client IP address, for a limited number of IP addresses. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. This replaces the config option `apm-server.rum.event_rate.lru_size`, previously available for `7.x` deployments.

`apm-server.auth.api_key.enabled`
: Enables agent authorization using Elasticsearch API Keys. This replaces the config option `apm-server.api_key.enabled`, previously available for `7.x` deployments.

`apm-server.auth.api_key.limit`
: Restricts how many unique API keys are allowed per minute. Should be set to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This replaces the config option `apm-server.api_key.limit`, previously available for `7.x` deployments.

`apm-server.capture_personal_data`
: When set to `true`, the server captures the IP of the instrumented service and its User Agent. Enabled by default.

`apm-server.default_service_environment`
: If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent.

`apm-server.max_event_size`
: Specifies the maximum allowed size of an event for processing by the server, in bytes.
Defaults to `307200`.

`apm-server.rum.allow_headers`
: A list of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept".

`apm-server.rum.allow_origins`
: A list of permitted origins for real user monitoring. User-agents will send an origin header that will be validated against this list. An origin is made of a protocol scheme, host, and port, without the URL path. Allowed origins in this setting can have a wildcard `*` to match anything (for example: `http://*.example.com`). If an item in the list is a single `*`, all origins will be allowed.

`apm-server.rum.enabled`
: Enable Real User Monitoring (RUM) support. By default RUM is enabled. RUM does not support token-based authorization. Enabled RUM endpoints will not require any authorization configured for other endpoints.

`apm-server.rum.exclude_from_grouping`
: A regexp to be matched against a stacktrace frame’s `file_name`. If the regexp matches, the stacktrace frame is not used for calculating error groups. The default pattern excludes stacktrace frames that have a filename starting with `/webpack`.

`apm-server.rum.library_pattern`
: A regexp to be matched against a stacktrace frame’s `file_name` and `abs_path` attributes. If the regexp matches, the stacktrace frame is considered to be a library frame.

`apm-server.rum.source_mapping.enabled`
: If a source map has previously been uploaded, source mapping is automatically applied to all error and transaction documents sent to the RUM endpoint. Source mapping is enabled by default when RUM is enabled.

`apm-server.rum.source_mapping.cache.expiration`
: The `cache.expiration` determines how long a source map should be cached in memory. Note that values configured without a time unit will be interpreted as seconds.

`apm-server.sampling.tail.enabled`
: Set to `true` to enable tail-based sampling. Disabled by default.

`apm-server.sampling.tail.policies`
: Criteria used to match a root transaction to a sample rate.

`apm-server.sampling.tail.interval`
: Synchronization interval for multiple APM Servers. Should be in the order of tens of seconds or low minutes.

`logging.level`
: Sets the minimum log level. The default log level is error. Available log levels are: error, warning, info, or debug.

`logging.selectors`
: Enable debug output for selected components. To enable all selectors use ["*"]. Other available selectors are "beat", "publish", or "service". Multiple selectors can be chained.

`logging.metrics.enabled`
: If enabled, apm-server periodically logs its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics are logged on shutdown. The default is false.

`logging.metrics.period`
: The period after which to log the internal metrics. The default is 30s.

`max_procs`
: Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system.

`output.elasticsearch.flush_interval`
: The maximum duration to accumulate events for a bulk request before being flushed to Elasticsearch. The value must have a duration suffix. The default is 1s.

`output.elasticsearch.flush_bytes`
: The bulk request size threshold, in bytes, before flushing to Elasticsearch. The value must have a suffix. The default is 5MB.
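
To see how several of these options fit together, here is a minimal sketch of an `apm-server.yml` user-settings snippet combining some of the 8.0+ settings described above. The specific values and the origin pattern are illustrative assumptions, not recommended defaults:

```sh
apm-server:
  auth:
    anonymous:
      rate_limit:
        event_limit: 300  # maximum events per IP per second
        ip_limit: 1000    # number of unique client IPs tracked for rate limiting
  rum:
    enabled: true  # anonymous auth is enabled automatically when RUM is enabled
    allow_origins: ["http://*.example.com"]  # hypothetical origin pattern
logging:
  level: info
  metrics:
    enabled: true
    period: 30s
```

Because unsupported settings produce an error when you try to save, applying a small block like this one setting at a time makes it easier to identify which key is rejected.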


### Version 7.17+ [ec_version_7_17]

This stack version includes all of the settings from 7.16 and the following:

Allow anonymous access only for specified agents and/or services. This is primarily intended to allow limited access for untrusted agents, such as Real User Monitoring. Anonymous auth is automatically enabled when RUM is enabled. Otherwise, anonymous auth is disabled. When anonymous auth is enabled, only agents matching `allow_agent` and services matching `allow_service` are allowed. See below for details on default values for these.

`apm-server.auth.anonymous.allow_agent`
: Allow anonymous access only for specified agents.

`apm-server.auth.anonymous.allow_service`
: Allow anonymous access only for specified service names. By default, all service names are allowed. This will replace the config option `apm-server.rum.allow_service_names` from `8.0` on.

`apm-server.auth.anonymous.rate_limit.event_limit`
: Defines the maximum number of events allowed per IP per second. Defaults to 300. The overall maximum event throughput for anonymous access is (event_limit * ip_limit). This will replace the config option `apm-server.rum.event_rate.limit` from `8.0` on.

`apm-server.auth.anonymous.rate_limit.ip_limit`
: Rate limiting is defined per unique client IP address, for a limited number of IP addresses. Sites with many concurrent clients should consider increasing this limit. Defaults to 1000. This will replace the config option `apm-server.rum.event_rate.lru_size` from `8.0` on.

`apm-server.auth.api_key.enabled`
: Enables agent authorization using Elasticsearch API Keys. This will replace the config option `apm-server.api_key.enabled` from `8.0` on.

`apm-server.auth.api_key.limit`
: Restricts how many unique API keys are allowed per minute. Should be set to at least the number of different API keys configured in your monitored services. Every unique API key triggers one request to Elasticsearch. This will replace the config option `apm-server.api_key.limit` from `8.0` on.


### Supported versions before 8.x [ec_supported_versions_before_8_x_3]

`apm-server.aggregation.transactions.*`
: This functionality is experimental and may be changed or removed completely in a future release. When enabled, APM Server produces transaction histogram metrics that are used to power the APM app. Shifting this responsibility from APM app to APM Server results in improved query performance and removes the need to store unsampled transactions.

`apm-server.default_service_environment`
: If specified, APM Server will record this value in events which have no service environment defined, and add it to agent configuration queries to Kibana when none is specified in the request from the agent.

`apm-server.rum.allow_service_names`
: A list of service names to allow, to limit service-specific indices and data streams created for unauthenticated RUM events. If the list is empty, any service name is allowed.

`apm-server.ilm.setup.mapping`
: ILM policies now support configurable index suffixes. You can append the `policy_name` with an `index_suffix` based on the `event_type`, which can be one of `span`, `transaction`, `error`, or `metric`.

`apm-server.rum.allow_headers`
: List of Access-Control-Allow-Headers to allow RUM requests, in addition to "Content-Type", "Content-Encoding", and "Accept".

`setup.template.append_fields`
: A list of fields to be added to the Elasticsearch template and Kibana data view (formerly *index pattern*).
+ +`apm-server.api_key.enabled` +: Enabled by default. For any requests where APM Server accepts a `secret_token` in the authorization header, it now alternatively accepts an API Key. + +`apm-server.api_key.limit` +: Configure how many unique API keys are allowed per minute. Should be set to at least the amount of different API keys used in monitored services. Default value is 100. + +`apm-server.ilm.setup.enabled` +: When enabled, APM Server creates aliases, event type specific settings and ILM policies. If disabled, event type specific templates need to be managed manually. + +`apm-server.ilm.setup.overwrite` +: Set to `true` to apply custom policies and to properly overwrite templates when switching between using ILM and not using ILM. + +`apm-server.ilm.setup.require_policy` +: Set to `false` when policies are set up outside of APM Server but referenced in this configuration. + +`apm-server.ilm.setup.policies` +: Array of ILM policies. Each entry has a `name` and a `policy`. + +`apm-server.ilm.setup.mapping` +: Array of mappings of ILM policies to event types. Each entry has a `policy_name` and an `event_type`, which can be one of `span`, `transaction`, `error`, or `metric`. + +`apm-server.rum.source_mapping.enabled` +: When events are monitored using the RUM agent, APM Server tries to apply source mapping by default. This configuration option allows you to disable source mapping on stack traces. + +`apm-server.rum.source_mapping.cache.expiration` +: Sets how long a source map should be cached before being refetched from Elasticsearch. Default value is 5m. + +`output.elasticsearch.pipeline` +: APM comes with a default pipeline definition. This allows overriding it. To disable, you can set `pipeline: _none` + +`apm-server.agent.config.cache.expiration` +: When using APM agent configuration, determines cache expiration from information fetched from Kibana. Defaults to `30s`. + +`apm-server.ilm.enabled` +: Enables index lifecycle management (ILM) for the indices created by the APM Server. Defaults to `false`. If you’re updating an existing APM Server, you must also set `setup.template.overwrite: true`. If you don’t, the index template will not be overridden and ILM changes will not take effect. + +`apm-server.max_event_size` +: Specifies the maximum allowed size of an event for processing by the server, in bytes. Defaults to `307200`. + +`output.elasticsearch.pipelines` +: Adds an array for pipeline selector configurations that support conditionals, format string-based field access, and name mappings used to [parse data using ingest node pipelines](/solutions/observability/apps/application-performance-monitoring-apm.md). + +`apm-server.register.ingest.pipeline.enabled` +: Loads the pipeline definitions to Elasticsearch when the APM Server starts up. Defaults to `false`. + +`apm-server.register.ingest.pipeline.overwrite` +: Overwrites the existing pipeline definitions in Elasticsearch. Defaults to `true`. + +`apm-server.rum.event_rate.lru_size` +: Defines the number of unique IP addresses that can be tracked in the LRU cache, which keeps a rate limit for each of the most recently seen IP addresses. Defaults to `1000`. + +`apm-server.rum.event_rate.limit` +: Sets the rate limit per second for each IP address for events sent to the APM Server v2 RUM endpoint. Defaults to `300`. + +`apm-server.rum.enabled` +: Enables/disables Real User Monitoring (RUM) support. Defaults to `true` (enabled). + +`apm-server.rum.allow_origins` +: Specifies a list of permitted origins from user agents. 
The default is `*`, which allows everything.

`apm-server.rum.library_pattern`
: Differentiates library frames against specific attributes. Refer to "Configure Real User Monitoring (RUM)" in the [Observability Guide](/solutions/observability.md) to learn more. The default value is `"node_modules|bower_components|~"`.

`apm-server.rum.exclude_from_grouping`
: Configures the RegExp to be matched against a stacktrace frame’s `file_name`.

`apm-server.rum.rate_limit`
: Sets the rate limit per second for each IP address for requests sent to the RUM endpoint. Defaults to `10`.

`apm-server.capture_personal_data`
: When set to `true`, the server captures the IP address of the instrumented service and its User Agent. Enabled by default.

`apm-server.frontend.enabled`
: Enables/disables frontend support.

`apm-server.frontend.allow_origins`
: Specifies the comma-separated list of permitted origins from user agents. The default is `*`, which allows everything.

`apm-server.frontend.library_pattern`
: Differentiates library frames against [specific attributes](https://www.elastic.co/guide/en/apm/server/6.3/configuration-frontend.html). The default value is `"node_modules|bower_components|~"`.

`apm-server.frontend.exclude_from_grouping`
: Configures the RegExp to be matched against a stacktrace frame’s `file_name`.

`apm-server.frontend.rate_limit`
: Sets the rate limit per second per IP address for requests sent to the frontend endpoint. Defaults to `10`.

`max_procs`
: Max number of CPUs used simultaneously. Defaults to the number of logical CPUs available.

`setup.template.enabled`
: Set to false to disable loading of Elasticsearch templates used for APM indices. If set to false, you must load the template manually.

`setup.template.name`
: Name of the template. Defaults to `apm-server`.

`setup.template.pattern`
: The template pattern to apply to the default index settings. Default is `apm-*`.

`setup.template.settings.index.number_of_shards`
: Specifies the number of shards for the Elasticsearch template.

`setup.template.settings.index.number_of_replicas`
: Specifies the number of replicas for the Elasticsearch template.

`output.elasticsearch.bulk_max_size`
: Maximum number of events to bulk together in a single Elasticsearch bulk API request. By default, this number changes based on the size of the instance:

    | Instance size | Default max events |
    | --- | --- |
    | 512MB | 267 |
    | 1GB | 381 |
    | 2GB | 533 |
    | 4GB | 762 |
    | 8GB | 1067 |


`output.elasticsearch.indices`
: Array of index selector rules supporting conditionals and formatted string.

`output.elasticsearch.index`
: The index to write the events to. If changed, `setup.template.name` and `setup.template.pattern` must be changed accordingly.

`output.elasticsearch.worker`
: Maximum number of concurrent workers publishing events to Elasticsearch.
By default, this number changes based on the size of the instance:

    | Instance size | Default max concurrent workers |
    | --- | --- |
    | 512MB | 5 |
    | 1GB | 7 |
    | 2GB | 10 |
    | 4GB | 14 |
    | 8GB | 20 |


`queue.mem.events`
: Maximum number of events to concurrently store in the internal queue. By default, this number changes based on the size of the instance:

    | Instance size | Default max events |
    | --- | --- |
    | 512MB | 2000 |
    | 1GB | 4000 |
    | 2GB | 8000 |
    | 4GB | 16000 |
    | 8GB | 32000 |


`queue.mem.flush.min_events`
: Minimum number of events to have before pushing them to Elasticsearch. By default, this number changes based on the size of the instance.

`queue.mem.flush.timeout`
: Maximum duration before sending the events to the output if the `min_events` is not crossed.


### Logging settings [ec_logging_settings]

`logging.level`
: Specifies the minimum log level. One of *debug*, *info*, *warning*, or *error*. Defaults to *info*.

`logging.selectors`
: The list of debugging-only selector tags used by different APM Server components. Use `["*"]` to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing.

`logging.metrics.enabled`
: If enabled, APM Server periodically logs its internal metrics that have changed in the last period. Defaults to *true*.

`logging.metrics.period`
: The period after which to log the internal metrics. Defaults to *30s*.

::::{note}
To change logging settings you must first [enable deployment logging](/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md).
::::




diff --git a/reference/ingestion-tools/fleet/_agent_configuration_encryption.md b/reference/ingestion-tools/fleet/_agent_configuration_encryption.md
new file mode 100644
index 0000000000..102138eabd
--- /dev/null
+++ b/reference/ingestion-tools/fleet/_agent_configuration_encryption.md
@@ -0,0 +1,26 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/_elastic_agent_configuration_encryption.html
---

# {{agent}} configuration encryption [_agent_configuration_encryption]

It is important for you to understand the {{agent}} security model and how it handles sensitive values in integration configurations. At a high level, {{agent}} receives configuration data from {{fleet-server}} over an encrypted connection and persists the encrypted configuration on disk. This persistence allows agents to continue to operate even if they are unable to connect to the {{fleet-server}}.

The entire Fleet Agent Policy is encrypted at rest, but is recoverable if you have access to both the encrypted configuration data and the associated key. The key material is stored in an OS-dependent manner as described in the following sections.


## Darwin (macOS) [_darwin_macos]

Key material is stored in the system keychain. The value is stored as is without any additional transformations.


## Windows [_windows]

Configuration data is encrypted with [DPAPI](https://learn.microsoft.com/en-us/dotnet/standard/security/how-to-use-data-protection) `CryptProtectData` with `CRYPTPROTECT_LOCAL_MACHINE`. Additional entropy is derived from crypto/rand bytes stored in the `.seed` file. Configuration data is stored as separate files, where the name of the file is a SHA256 hash of the key, and the content of the file is encrypted using DPAPI. The security of key data relies on file system permissions.
Only the Administrator should be able to access the file.


## Linux [_linux]

The encryption key is derived from crypto/rand bytes stored in the `.seed` file after PBKDF2 transformation. Configuration data is stored as separate files, where the name of the file is a SHA256 hash of the key, and the content of the file is AES256-GCM encrypted. The security of the key material largely relies on file system permissions.

diff --git a/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md b/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md
new file mode 100644
index 0000000000..2775044934
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-cloud-metadata-processor.md
@@ -0,0 +1,182 @@
---
navigation_title: "add_cloud_metadata"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/add-cloud-metadata-processor.html
---

# Add cloud metadata [add-cloud-metadata-processor]


::::{tip}
Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
::::


The `add_cloud_metadata` processor enriches each event with instance metadata from the machine’s hosting provider. At startup the processor queries a list of hosting providers and caches the instance metadata.

The following providers are supported:

* Amazon Web Services (AWS)
* Digital Ocean
* Google Compute Engine (GCE)
* [Tencent Cloud](https://www.qcloud.com/?lang=en) (QCloud)
* Alibaba Cloud (ECS)
* Huawei Cloud (ECS)
* Azure Virtual Machine
* Openstack Nova

The Alibaba Cloud and Tencent providers are disabled by default, because they require access to a remote host. Use the `providers` setting to select a list of default providers to query.


## Example [_example_2]

This configuration enables the processor:

```yaml
  - add_cloud_metadata: ~
```

The metadata that is added to events varies by hosting provider. For examples, refer to [Provider-specific metadata examples](#provider-specific-examples).


## Configuration settings [_configuration_settings]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `timeout` | No | `3s` | Maximum amount of time to wait for a successful response when detecting the hosting provider. If a timeout occurs, no instance metadata is added to the events. This makes it possible to enable this processor for all your deployments (in the cloud or on-premises). |
| `providers` | No | | List of provider names to use. If `providers` is not configured, all providers that do not access a remote endpoint are enabled by default. The list of providers may alternatively be configured with the environment variable `BEATS_ADD_CLOUD_METADATA_PROVIDERS`, by setting it to a comma-separated list of provider names.

The list of supported provider names includes:

* `alibaba` or `ecs` for the Alibaba Cloud provider (disabled by default).
* `azure` for Azure Virtual Machine (enabled by default).
* `digitalocean` for Digital Ocean (enabled by default).
* `aws` or `ec2` for Amazon Web Services (enabled by default).
* `gcp` for Google Compute Engine (enabled by default).
* `openstack` or `nova` for Openstack Nova (enabled by default).
* `openstack-ssl` or `nova-ssl` for Openstack Nova when SSL metadata APIs are enabled (enabled by default).
* `tencent` or `qcloud` for Tencent Cloud (disabled by default).
* `huawei` for Huawei Cloud (enabled by default).
|
| `overwrite` | No | `false` | Whether to overwrite existing cloud fields. If `true`, the processor overwrites existing `cloud.*` fields. |

The `add_cloud_metadata` processor supports SSL options to configure the HTTP client used to query cloud metadata.

For more information, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options).


## Provider-specific metadata examples [provider-specific-examples]

The following sections show examples for each of the supported providers.


### AWS [_aws]

```json
{
  "cloud": {
    "account.id": "123456789012",
    "availability_zone": "us-east-1c",
    "instance.id": "i-4e123456",
    "machine.type": "t2.medium",
    "image.id": "ami-abcd1234",
    "provider": "aws",
    "region": "us-east-1"
  }
}
```


### Digital Ocean [_digital_ocean]

```json
{
  "cloud": {
    "instance.id": "1234567",
    "provider": "digitalocean",
    "region": "nyc2"
  }
}
```


### GCP [_gcp]

```json
{
  "cloud": {
    "availability_zone": "us-east1-b",
    "instance.id": "1234556778987654321",
    "machine.type": "f1-micro",
    "project.id": "my-dev",
    "provider": "gcp"
  }
}
```


### Tencent Cloud [_tencent_cloud]

```json
{
  "cloud": {
    "availability_zone": "gz-azone2",
    "instance.id": "ins-qcloudv5",
    "provider": "qcloud",
    "region": "china-south-gz"
  }
}
```


### Huawei Cloud [_huawei_cloud]

```json
{
  "cloud": {
    "availability_zone": "cn-east-2b",
    "instance.id": "37da9890-8289-4c58-ba34-a8271c4a8216",
    "provider": "huawei",
    "region": "cn-east-2"
  }
}
```


### Alibaba Cloud [_alibaba_cloud]

This metadata is only available when VPC is selected as the network type of the ECS instance.

```json
{
  "cloud": {
    "availability_zone": "cn-shenzhen",
    "instance.id": "i-wz9g2hqiikg0aliyun2b",
    "provider": "ecs",
    "region": "cn-shenzhen-a"
  }
}
```


### Azure Virtual Machine [_azure_virtual_machine]

```json
{
  "cloud": {
    "provider": "azure",
    "instance.id": "04ab04c3-63de-4709-a9f9-9ab8c0411d5e",
    "instance.name": "test-az-vm",
    "machine.type": "Standard_D3_v2",
    "region": "eastus2"
  }
}
```


### Openstack Nova [_openstack_nova]

```json
{
  "cloud": {
    "instance.name": "test-998d932195.mycloud.tld",
    "instance.id": "i-00011a84",
    "availability_zone": "xxxx-az-c",
    "provider": "openstack",
    "machine.type": "m2.large"
  }
}
```

diff --git a/reference/ingestion-tools/fleet/add-fleet-server-cloud.md b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md
new file mode 100644
index 0000000000..ec01dde2d8
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md
@@ -0,0 +1,83 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-cloud.html
---

# Deploy on Elastic Cloud [add-fleet-server-cloud]

To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.

{{fleet-server}} can be provisioned and hosted on {{ecloud}}. When the Cloud deployment is created, a highly available set of {{fleet-server}}s is provisioned automatically.


This approach might be right for you if you want to reduce on-prem compute resources and you’d like Elastic to take care of provisioning and lifecycle management of your deployment.

With this approach, multiple {{fleet-server}}s are automatically provisioned to satisfy the chosen instance size (instance sizes are modified to satisfy the scale requirement). You can also choose the resources allocated to each {{fleet-server}} and whether you want each {{fleet-server}} to be deployed in multiple availability zones. If you choose multiple availability zones to address your fault-tolerance requirements, those instances are also utilized to balance the load.

This approach might *not* be right for you if you have restrictions on connectivity to the internet.

:::{image} images/fleet-server-cloud-deployment.png
:alt: {{fleet-server}} Cloud deployment model
:::


## Compatibility and prerequisites [fleet-server-compatibility]

{{fleet-server}} is compatible with the following Elastic products:

* {{stack}} 7.13 or later.

    * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
    * {{kib}} should be on the same minor version as {{es}}.

* {{ece}} 2.10 or later

    * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://.fleet.`.
    * The deployment template must contain an {{integrations-server}} node.

    For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).


::::{note}
The TLS certificates used to secure connections between {{agent}} and {{fleet-server}} are managed by {{ecloud}}. You do not need to create a private key or generate certificates.
::::


When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. See the following table for default port assignments:

| Component communication | Default port |
| --- | --- |
| Elastic Agent → {{fleet-server}} | 443 |
| Elastic Agent → {{es}} | 443 |
| Elastic Agent → Logstash | 5044 |
| Elastic Agent → {{kib}} ({{fleet}}) | 443 |
| {{fleet-server}} → {{kib}} ({{fleet}}) | 443 |
| {{fleet-server}} → {{es}} | 443 |

::::{note}
If you do not specify the port for {{es}} as 443, the {{agent}} defaults to 9200.
::::



## Setup [add-fleet-server-cloud-set-up]

To confirm that an {{integrations-server}} is available in your deployment:

1. Open {{fleet}}.
2. On the **Agent policies** tab, look for the **{{ecloud}} agent policy**. This policy is managed by {{ecloud}}, and contains a {{fleet-server}} integration and an Elastic APM integration. You cannot modify the policy. Confirm that the agent status is **Healthy**.

:::::{tip}
Don’t see the agent? Make sure your deployment includes an {{integrations-server}} instance. This instance is required to use {{fleet}}.

:::{image} images/integrations-server-hosted-container.png
:alt: Hosted {{integrations-server}}
:class: screenshot
:::

:::::



## Next steps [add-fleet-server-cloud-next]

Now you’re ready to add {{agent}}s to your host systems.
To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md
new file mode 100644
index 0000000000..bfefc80a12
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-kubernetes.md
@@ -0,0 +1,564 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-kubernetes.html
---

# Deploy Fleet Server on Kubernetes [add-fleet-server-kubernetes]

::::{note}
If your {{stack}} is orchestrated by [ECK](/deploy-manage/deploy/cloud-on-k8s.md), we recommend deploying {{fleet-server}} through the operator. That simplifies the process, as the operator automatically handles most of the resource configuration and setup steps.

Refer to [Run Fleet-managed {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md) for more information.

::::


::::{important}
This guide assumes familiarity with Kubernetes concepts and resources, such as `Deployments`, `Pods`, `Secrets`, or `Services`, as well as configuring applications in Kubernetes environments.

::::


To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.

You can deploy {{fleet-server}} on Kubernetes and manage it yourself. In this deployment model, you are responsible for high availability, fault tolerance, and lifecycle management of the {{fleet-server}}.

To deploy a {{fleet-server}} on Kubernetes and register it into {{fleet}} you will need the following details:

* The **Policy ID** of a {{fleet}} policy configured with the {{fleet-server}} integration.
* A **Service token**, used to authenticate {{fleet-server}} with Elasticsearch.
* For outgoing traffic:

    * The **{{es}} endpoint URL** that the {{fleet-server}} should connect to, configured also in the {{es}} output associated with the policy.
    * When a private or intermediate Certificate Authority (CA) is used to sign the {{es}} certificate, the **{{es}} CA file** or the **CA fingerprint**, configured also in the {{es}} output associated with the policy.

* For incoming connections:

    * A **TLS/SSL certificate and key** for the {{fleet-server}} HTTPS endpoint, used to encrypt the traffic from the {{agent}}s. This certificate has to be valid for the **{{fleet-server}} Host URL** that {{agent}}s use when connecting to the {{fleet-server}}.

* Extra TLS/SSL certificates and configuration parameters in case of requiring [mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md) (not covered in this document).

This document walks you through the complete setup process, organized into the following sections:

* [Compatibility requirements](#add-fleet-server-kubernetes-compatibility)
* [{{fleet-server}} and SSL/TLS certificates considerations](#add-fleet-server-kubernetes-cert-prereq)
* [{{fleet}} preparations](#add-fleet-server-kubernetes-add-server)
* [{{fleet-server}} installation](#add-fleet-server-kubernetes-install)
* [Troubleshoot {{fleet-server}}](#add-fleet-server-kubernetes-troubleshoot)
* [Next steps](#add-fleet-server-kubernetes-next)


## Compatibility [add-fleet-server-kubernetes-compatibility]

{{fleet-server}} is compatible with the following Elastic products:

* {{stack}} 7.13 or later.

    * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
    * {{kib}} should be on the same minor version as {{es}}.



## Prerequisites [add-fleet-server-kubernetes-prereq]

Before deploying {{fleet-server}}, you need to:

* Prepare the SSL/TLS configuration, server certificate, [{{fleet-server}} host settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-server-hosts-setting), and needed Certificate Authorities (CAs).
* Ensure components have access to the ports needed for communication.


### {{fleet-server}} and SSL/TLS certificates considerations [add-fleet-server-kubernetes-cert-prereq]

This section shows the minimum requirements in terms of Transport Layer Security (TLS) certificates for the {{fleet-server}}, assuming no mutual TLS (mTLS) is needed. Refer to [One-way and mutual TLS certifications flow](/reference/ingestion-tools/fleet/tls-overview.md) and [{{agent}} deployment models with mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md) for more information about the configuration needs of both approaches.

There are two main traffic flows for {{fleet-server}}, each with different TLS requirements:


#### [{{agent}} → {{fleet-server}}] inbound traffic flow [add-fleet-server-kubernetes-cert-inbound]

In this flow {{fleet-server}} acts as the server and {{agent}} acts as the client. Therefore, {{fleet-server}} requires a TLS certificate and key, and {{agent}} will need to trust the CA certificate used to sign the {{fleet-server}} certificate.

::::{note}
A {{fleet-server}} certificate is not required when installing the server using the **Quick start** mode, but should always be used for **production** deployments. In **Quick start** mode, the {{fleet-server}} uses a self-signed certificate and the {{agent}}s have to be enrolled with the `--insecure` option.

::::


If your organization already uses the {{stack}}, you may have a CA certificate that could be used to generate the new cert for the {{fleet-server}}. If you do not have a CA certificate, refer to [Generate a custom certificate and private key for {{fleet-server}}](/reference/ingestion-tools/fleet/secure-connections.md#generate-fleet-server-certs) for an example to generate a CA and a server certificate using the `elasticsearch-certutil` tool.

::::{important}
Before creating the certificate, you need to know and plan in advance the [hostname / URL](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-server-hosts-setting) that the {{agent}} clients will use to access the {{fleet-server}}. This is important because the **hostname** part of the URL needs to be included in the server certificate as an `x.509 Subject Alternative Name (SAN)`. If you plan to make your {{fleet-server}} accessible through **multiple hostnames** or **FQDNs**, add all of them to the server certificate, and keep in mind that the **{{fleet-server}} also needs to access the {{fleet}} URL during its bootstrap process**.

::::



#### [{{fleet-server}} → {{es}} output] outbound traffic flow [add-fleet-server-kubernetes-cert-outbound]

In this flow, {{fleet-server}} acts as the client and {{es}} acts as the HTTPS server. For the communication to succeed, {{fleet-server}} needs to trust the CA certificate used to sign the {{es}} certificate.
If your {{es}} cluster uses certificates signed by a corporate CA or multiple intermediate CAs, you will need to use them during the {{fleet-server}} setup.

::::{note}
If your {{es}} cluster is on Elastic Cloud or if it uses a certificate signed by a public and known CA, you won’t need the {{es}} CA during the setup.

::::


In summary, you need:

* A **server certificate and key**, valid for the {{fleet-server}} URL. The CA used to sign this certificate will be needed by the {{agent}} clients and the {{fleet-server}} itself.
* The **CA certificate** (or certificates) associated with your {{es}} cluster, except if you are sure your {{es}} certificate is fully trusted publicly.


### Default port assignments [default-port-assignments-kubernetes]

When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. Refer to the following table for default port assignments:

| Component communication | Default port |
| --- | --- |
| {{agent}} → {{fleet-server}} | 8220 |
| {{fleet-server}} → {{es}} | 9200 |
| {{fleet-server}} → {{kib}} (optional, for {{fleet}} setup) | 5601 |
| {{agent}} → {{es}} | 9200 |
| {{agent}} → Logstash | 5044 |
| {{agent}} → {{kib}} (optional, for {{fleet}} setup) | 5601 |

In Kubernetes environments, you can adapt these ports without modifying the listening ports of the {{fleet-server}} or other applications, as traffic is managed by Kubernetes `Services`. This guide includes an example where {{agent}}s connect to the {{fleet-server}} through port `443` instead of the default `8220`.


## Add {{fleet-server}} [add-fleet-server-kubernetes-add-server]

A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.


### {{fleet}} preparations [add-fleet-server-kubernetes-preparations]

::::{tip}
If you already have a {{fleet}} policy with the {{fleet-server}} integration, you know its ID, and you know how to generate an [{{es}} service token](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md) for the {{fleet-server}}, skip directly to [{{fleet-server}} installation](#add-fleet-server-kubernetes-install).

Also note that the `service token` required by the {{fleet-server}} is different from the `enrollment tokens` used by {{agent}}s to enroll to {{fleet}}.

::::


1. In {{kib}}, open **{{fleet}} → Settings** and ensure the **Elasticsearch output** that will be used by the {{fleet-server}} policy is correctly configured, paying special attention to the following:

    * The **hosts** field includes a valid URL that will be reachable by the {{fleet-server}} Pod(s).
    * If your {{es}} cluster uses certificates signed by private or intermediate CAs not publicly trusted, you have added the trust information in the **Elasticsearch CA trusted fingerprint** field or in the **advanced configuration** section through the `ssl.certificate_authorities` setting. For an example, refer to the [Secure Connections](/reference/ingestion-tools/fleet/secure-connections.md#_encrypt_traffic_between_agents_fleet_server_and_es) documentation.

    ::::{important}
    This validation step is critical. The {{es}} host URL and CA information have to be added **in both the {{es}} output and the environment variables** provided to the {{fleet-server}}.
    It’s a common mistake to ignore the output settings believing that the environment variables will prevail; in fact, the environment variables are only used during the bootstrap of the {{fleet-server}}.

    If the URL that {{fleet-server}} will use to access {{es}} is different from the {{es}} URL used by other clients, you may want to create a dedicated **{{es}} output** for {{fleet-server}}.

    ::::

2. Go to **{{fleet}} → Agent Policies** and select **Create agent policy** to create a policy for the {{fleet-server}}:

    * Set a **name** for the policy, for example `Fleet Server Policy Kubernetes`.
    * Do **not** select the option **Collect system logs and metrics**. This option adds the System integration to the {{agent}} policy. Because {{fleet-server}} will run as a Kubernetes Pod without any visibility to the Kubernetes node, there won’t be a system to monitor.
    * Select the **output** that the {{fleet-server}} needs to use to contact {{es}}. This should be the output that you verified in the previous step.
    * Optionally, you can set the **inactivity timeout** and **inactive agent unenrollment timeout** parameters to automatically unenroll and invalidate API keys after the {{fleet-server}} agents become inactive. This is especially useful in Kubernetes environments, where {{fleet-server}} Pods are ephemeral, and new {{agent}}s appear in the {{fleet}} UI after Pod recreations.

3. Open the created policy, and from the **Integrations** tab select **Add integration**:

    * Search for and select the {{fleet-server}} integration.
    * Select **Add {{fleet-server}}** to add the integration to the {{agent}} policy.

    At this point you can configure the integration settings per [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md).

    * When done, select **Save and continue**. Do not add an {{agent}} at this stage.

4. Open the configured policy, which now includes the {{fleet-server}} integration, and select **Actions** → **Add {{fleet-server}}**. In the next dialog:

    * Confirm that the **policy for {{fleet-server}}** is properly selected.
    * **Choose a deployment mode for security**:

        * If you select **Quick start**, the {{fleet-server}} generates a self-signed TLS certificate, and subsequent agents should be enrolled using the `--insecure` flag.
        * If you select **Production**, you provide a TLS certificate, key and CA to the {{fleet-server}} during the deployment, and subsequent agents will need to trust the certificate’s CA.

    * Add your **{{fleet-server}} Host** information. This is the URL that clients ({{agent}}s) will use to connect to the {{fleet-server}}:

        * In **Production** mode, the {{fleet-server}} certificate must include the hostname part of the URL as an `x509 SAN`, and the {{fleet-server}} itself will need to access that URL during its bootstrap process.
        * On Kubernetes environments this could be the name of the `Kubernetes service` or reverse proxy that exposes the {{fleet-server}} Pods.
        * In the provided example we use `https://fleet-svc.` as the URL, which corresponds to the Kubernetes service DNS resolution.

    * Select **generate service token** to create a token for the {{fleet-server}}.
    * From **Install {{fleet-server}} to a centralized host → Linux**, take note of the values of the following settings that will be needed for the {{fleet-server}} installation:

        * Service token (specified by the `--fleet-server-service-token` parameter).
+        * {{fleet}} policy ID (specified by the `--fleet-server-policy` parameter).
+        * {{es}} URL (specified by the `--fleet-server-es` parameter).
+
+5. Keep the {{kib}} browser window open and continue with the [{{fleet-server}} installation](#add-fleet-server-kubernetes-install).
+
+    When the {{fleet-server}} installation has succeeded, the **Confirm Connection** UI will show a **Connected** status.
+
+
+### {{fleet-server}} installation [add-fleet-server-kubernetes-install]
+
+
+#### Installation overview [add-fleet-server-kubernetes-install-overview]
+
+To deploy {{fleet-server}} on Kubernetes and enroll it into {{fleet}}, you need the following details:
+
+* **Policy ID** of the {{fleet}} policy configured with the {{fleet-server}} integration.
+* **Service token**, which you can generate by following the [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations) or manually using the [{{es}}-service-tokens command](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md).
+* **{{es}} endpoint URL**, configured in both the {{es}} output associated with the policy and in the {{fleet-server}} as an environment variable.
+* **{{es}} CA certificate file**, configured in both the {{es}} output associated with the policy and in the {{fleet-server}}.
+* {{fleet-server}} **certificate and key** (for **Production** deployment mode only).
+* {{fleet-server}} **CA certificate file** (for **Production** deployment mode only).
+* {{fleet-server}} URL (for **Production** deployment mode only).
+
+If you followed the [{{fleet-server}} and SSL/TLS certificates considerations](#add-fleet-server-kubernetes-cert-prereq) and [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations), you should have everything ready to proceed with the {{fleet-server}} installation.
+
+The suggested deployment method for the {{fleet-server}} consists of:
+
+* A Kubernetes Deployment manifest that relies on two Secrets for its configuration:
+
+    * A Secret named `fleet-server-config` with the main configuration parameters, such as the service token, the {{es}} URL, and the policy ID.
+    * A Secret named `fleet-server-ssl` with all needed certificate files and the {{fleet-server}} URL.
+
+* A Kubernetes ClusterIP Service named `fleet-svc` that exposes the {{fleet-server}} on port 443, making it available at URLs like `https://fleet-svc`, `https://fleet-svc.<namespace>`, and `https://fleet-svc.<namespace>.svc`.
+
+Adapt the suggested manifests and deployment strategy to your needs, ensuring you feed the {{fleet-server}} with the needed configuration and certificates. For example, you can customize:
+
+* CPU and memory `requests` and `limits`. Refer to [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md) for more information about {{fleet-server}} resource utilization.
+* Scheduling configuration, such as `affinity rules` or `tolerations`, if needed in your environment.
+* Number of replicas, to scale the {{fleet-server}} horizontally.
+* The use of an {{es}} CA fingerprint instead of a CA file.
+* Other [environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md).
+
+
+#### Installation Steps [add-fleet-server-kubernetes-install-steps]
+
+1. Create the Secret for the {{fleet-server}} configuration.
+
+    ```shell
+    kubectl create secret generic fleet-server-config \
+      --from-literal=elastic_endpoint='<elasticsearch-host>' \
+      --from-literal=elastic_service_token='<service-token>' \
+      --from-literal=fleet_policy_id='<policy-id>'
+    ```
+
+    When running the command, substitute the following values:
+
+    * `<elasticsearch-host>`: Replace this with the URL of your {{es}} host, for example `'https://monitoring-es-http.default.svc:9200'`.
+    * `<service-token>`: Use the service token provided by {{kib}} in the {{fleet}} UI.
+    * `<policy-id>`: Replace this with the ID of the created policy, for example `'dee949ac-403c-4c83-a489-0122281e4253'`.
+
+    If you prefer to obtain a **YAML manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
+
+2. Create the Secret for the TLS/SSL configuration:
+
+    ::::{tab-set}
+
+    :::{tab-item} Quick start
+
+    The following command assumes you have the {{es}} CA available as a local file.
+
+    ```shell
+    kubectl create secret generic fleet-server-ssl \
+      --from-file=es-ca.crt=<path-to-es-ca-file>
+    ```
+
+    When running the command, substitute the following values:
+
+    * `<path-to-es-ca-file>` with your local file containing the {{es}} CA(s).
+
+    If you prefer to obtain a **YAML manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
+    :::
+
+    :::{tab-item} Production
+    The following command assumes you have the {{es}} CA and the {{fleet-server}} certificate, key, and CA available as local files.
+
+    ```shell
+    kubectl create secret generic fleet-server-ssl \
+      --from-file=es-ca.crt=<path-to-es-ca-file> \
+      --from-file=fleet-ca.crt=<path-to-fleet-ca-file> \
+      --from-file=fleet-server.crt=<path-to-fleet-server-cert> \
+      --from-file=fleet-server.key=<path-to-fleet-server-key> \
+      --from-literal=fleet_url='<fleet-server-url>'
+    ```
+
+    When running the command, substitute the following values:
+
+    * `<path-to-es-ca-file>` with your local file containing the {{es}} CA(s).
+    * `<path-to-fleet-ca-file>` with your local file containing the {{fleet-server}} CA.
+    * `<path-to-fleet-server-cert>` with your local file containing the server TLS certificate for the {{fleet-server}}.
+    * `<path-to-fleet-server-key>` with your local file containing the server TLS key for the {{fleet-server}}.
+    * `<fleet-server-url>` with the URL that points to the {{fleet-server}}, for example `https://fleet-svc`. This URL will be used by the {{fleet-server}} during its bootstrap, and its hostname must be included in the server certificate’s x509 Subject Alternative Name (SAN) list.
+
+    If you prefer to obtain a **YAML manifest** of the Secret to create, append `--dry-run=client -o=yaml` to the command and save the output to a file.
+    :::
+
+    ::::
+
+    If your {{es}} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `es-ca.crt` key from the proposed secret.
+
+3. Save the proposed Deployment manifest locally, for example as `fleet-server-dep.yaml`, and adapt it to your needs:
+
+    ::::{tab-set}
+
+    :::{tab-item} Production
+
+    ```yaml
+    apiVersion: v1
+    kind: Service
+    metadata:
+      name: fleet-svc
+    spec:
+      type: ClusterIP
+      selector:
+        app: fleet-server
+      ports:
+      - port: 443
+        protocol: TCP
+        targetPort: 8220
+    ---
+    apiVersion: apps/v1
+    kind: Deployment
+    metadata:
+      name: fleet-server
+    spec:
+      replicas: 1
+      selector:
+        matchLabels:
+          app: fleet-server
+      template:
+        metadata:
+          labels:
+            app: fleet-server
+        spec:
+          automountServiceAccountToken: false
+          containers:
+          - name: elastic-agent
+            image: docker.elastic.co/beats/elastic-agent:9.0.0-beta1
+            env:
+            - name: FLEET_SERVER_ENABLE
+              value: "true"
+            - name: FLEET_SERVER_ELASTICSEARCH_HOST
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: elastic_endpoint
+            - name: FLEET_SERVER_SERVICE_TOKEN
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: elastic_service_token
+            - name: FLEET_SERVER_POLICY_ID
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: fleet_policy_id
+            - name: ELASTICSEARCH_CA
+              value: /mnt/certs/es-ca.crt
+            - name: FLEET_SERVER_CERT
+              value: /mnt/certs/fleet-server.crt
+            - name: FLEET_SERVER_CERT_KEY
+              value: /mnt/certs/fleet-server.key
+            - name: FLEET_CA
+              value: /mnt/certs/fleet-ca.crt
+            - name: FLEET_URL
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-ssl
+                  key: fleet_url
+            - name: FLEET_SERVER_TIMEOUT
+              value: '60s'
+            - name: FLEET_SERVER_PORT
+              value: '8220'
+            ports:
+            - containerPort: 8220
+              protocol: TCP
+            resources: {}
+            volumeMounts:
+            - name: certs
+              mountPath: /mnt/certs
+              readOnly: true
+          volumes:
+          - name: certs
+            secret:
+              defaultMode: 420
+              optional: false
+              secretName: fleet-server-ssl
+    ```
+    :::
+
+    :::{tab-item} Quick start
+
+    ```yaml
+    apiVersion: v1
+    kind: Service
+    metadata:
+      name: fleet-svc
+    spec:
+      type: ClusterIP
+      selector:
+        app: fleet-server
+      ports:
+      - port: 443
+        protocol: TCP
+        targetPort: 8220
+    ---
+    apiVersion: apps/v1
+    kind: Deployment
+    metadata:
+      name: fleet-server
+    spec:
+      replicas: 1
+      selector:
+        matchLabels:
+          app: fleet-server
+      template:
+        metadata:
+          labels:
+            app: fleet-server
+        spec:
+          automountServiceAccountToken: false
+          containers:
+          - name: elastic-agent
+            image: docker.elastic.co/beats/elastic-agent:9.0.0-beta1
+            env:
+            - name: FLEET_SERVER_ENABLE
+              value: "true"
+            - name: FLEET_SERVER_ELASTICSEARCH_HOST
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: elastic_endpoint
+            - name: FLEET_SERVER_SERVICE_TOKEN
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: elastic_service_token
+            - name: FLEET_SERVER_POLICY_ID
+              valueFrom:
+                secretKeyRef:
+                  name: fleet-server-config
+                  key: fleet_policy_id
+            - name: ELASTICSEARCH_CA
+              value: /mnt/certs/es-ca.crt
+            ports:
+            - containerPort: 8220
+              protocol: TCP
+            resources: {}
+            volumeMounts:
+            - name: certs
+              mountPath: /mnt/certs
+              readOnly: true
+          volumes:
+          - name: certs
+            secret:
+              defaultMode: 420
+              optional: false
+              secretName: fleet-server-ssl
+    ```
+    :::
+
+    ::::
+
+    Manifest considerations:
+
+    * If your {{es}} cluster runs on Elastic Cloud or if it uses a publicly trusted CA, remove the `ELASTICSEARCH_CA` environment variable from the manifest.
+    * Check the `image` version to ensure it’s aligned with the rest of your {{stack}}.
+    * Keep `automountServiceAccountToken` set to `false` to disable the [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md).
+    * As a best practice, always configure `requests` and `limits`. Refer to [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md) for more information about resource utilization of the {{fleet-server}}.
+    * You can change the listening `port` of the service to any port of your choice, but do not change the `targetPort`, as the {{fleet-server}} Pods will listen on port 8220.
+    * If you want to expose the {{fleet-server}} externally, consider changing the service type to `LoadBalancer`.
+
+4. Deploy the configured manifest to create the {{fleet-server}} and service:
+
+    ```shell
+    kubectl apply -f fleet-server-dep.yaml
+    ```
+
+    ::::{important}
+    Ensure the `Service`, the `Deployment`, and all the referenced `Secrets` are created in the **same Namespace**.
+
+    ::::
+
+5. Check the {{fleet-server}} Pod logs for errors and confirm in {{kib}} that the {{fleet-server}} agent appears as `Connected` and `Healthy` in **{{kib}} → {{fleet}}**.
+
+    ```shell
+    kubectl logs fleet-server-69499449c7-blwjg
+    ```
+
+    It can take a couple of minutes for {{fleet-server}} to fully start. If you left the {{kib}} browser window open during [{{fleet}} preparations](#add-fleet-server-kubernetes-preparations), it will show **Connected** when everything has gone well.
+
+    ::::{note}
+    In **Production mode**, during the {{fleet-server}} bootstrap process, the {{fleet-server}} might be unable to access its own `FLEET_URL`. This is usually a temporary issue caused by the Kubernetes Service not forwarding traffic to the Pod(s).
+
+    If the issue persists, consider using `https://localhost:8220` as the `FLEET_URL` for the {{fleet-server}} configuration, and ensure that `localhost` is included in the certificate’s SAN.
+
+    ::::
+
+
+## Expose the {{fleet-server}} to {{agent}}s [add-fleet-server-kubernetes-expose]
+
+There are multiple ways to expose applications in Kubernetes. Exposing the {{fleet-server}} may include the creation of a Kubernetes `Service`, an `Ingress` resource, and/or DNS records for FQDN resolution.
+
+Considerations when exposing {{fleet-server}}:
+
+* If your environment requires the {{fleet-server}} to be reachable through multiple hostnames or URLs, you can create multiple **{{fleet-server}} Hosts** in **{{fleet}} → Settings**, and create different policies for different groups of agents.
+* Remember that in **Production** mode, the **hostnames** used to access the {{fleet-server}} must be part of the {{fleet-server}} certificate as `x.509 Subject Alternative Names`.
+* **Always align the service listening port with the URL**. If you configure the service to listen on port 8220, use a URL like `https://service-name:8220`; if it listens on port `443`, use a URL like `https://service-name`.
+
+Below is an end-to-end example of how to expose the server to external and internal clients using a LoadBalancer service. For this example, we assume the following:
+
+* The {{fleet-server}} runs in a namespace called `elastic`.
+* External clients will access {{fleet-server}} using a URL like `https://fleet.example.com`, which will be resolved in DNS to the external IP of the Load Balancer.
+* Internal clients will access {{fleet-server}} using the Kubernetes service directly: `https://fleet-svc-lb.elastic`.
+* The server certificate has both hostnames (`fleet.example.com` and `fleet-svc-lb.elastic`) in its SAN list.
+
+1. Create the `LoadBalancer` Service:
+
+    ```shell
+    kubectl expose deployment fleet-server --name fleet-svc-lb --type LoadBalancer --port 443 --target-port 8220
+    ```
+
+    This command creates a service named `fleet-svc-lb`, listening on port `443` and forwarding the traffic to the `fleet-server` deployment’s Pods on port `8220`. The listening `--port` (and the resulting URL) of the service can be customized, but the `--target-port` must remain on the default port (`8220`), because it’s the port used by the {{fleet-server}} application.
+
+2. Add `https://fleet.example.com` and `https://fleet-svc-lb.elastic` as new **{{fleet-server}} Hosts** in **{{fleet}} → Settings**. Align the port of the URLs if you configured something different from `443` in the Load Balancer.
+3. Create a {{fleet}} policy for external clients using the `https://fleet.example.com` {{fleet-server}} URL.
+4. Create a {{fleet}} policy for internal clients using the `https://fleet-svc-lb.elastic` {{fleet-server}} URL.
+5. You are now ready to enroll external and internal agents to the relevant policies. Refer to [Next steps](#add-fleet-server-kubernetes-next) for more details.
+
+
+## Troubleshoot {{fleet-server}} [add-fleet-server-kubernetes-troubleshoot]
+
+
+### Common Problems [add-fleet-server-kubernetes-troubleshoot-common]
+
+The following issues may occur when {{fleet-server}} settings are missing or configured incorrectly:
+
+* {{fleet-server}} is trying to access {{es}} at `localhost:9200` even though the `FLEET_SERVER_ELASTICSEARCH_HOST` environment variable is properly set.
+
+    This problem occurs when the `output` of the policy associated with the {{fleet-server}} is not correctly configured.
+
+* TLS certificate trust issues occur even when the `ELASTICSEARCH_CA` environment variable is properly set during deployment.
+
+    This problem occurs when the `output` of the policy associated with the {{fleet-server}} is not correctly configured. Add the **CA certificate** or **CA trusted fingerprint** to the {{es}} output associated with the {{fleet-server}} policy.
+
+* In **Production mode**, {{fleet-server}} enrollment fails due to `FLEET_URL` not being accessible, showing something similar to:
+
+    ```sh
+    Starting enrollment to URL: https://fleet-svc/
+    1st enrollment attempt failed, retrying enrolling to URL: https://fleet-svc/ with exponential backoff (init 1s, max 10s)
+    Error: fail to enroll: fail to execute request to fleet-server: dial tcp 34.118.226.212:443: connect: connection refused
+    Error: enrollment failed: exit status 1
+    ```
+
+    If the service and URL are correctly configured, this is usually a temporary issue caused by the Kubernetes Service not forwarding traffic to the Pod, and it should be cleared after a couple of restarts.
+
+    As a workaround, consider using `https://localhost:8220` as the `FLEET_URL` for the {{fleet-server}} configuration, and ensure that `localhost` is included in the certificate’s SAN.
+
+
+## Next steps [add-fleet-server-kubernetes-next]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, refer to [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md), or [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) if your {{agent}}s will also run on Kubernetes.
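+
+For quick reference, the enrollment command on the {{agent}} side differs depending on the deployment mode you chose. The following is a minimal sketch; the URLs, the enrollment token placeholder, and the CA path are illustrative values to adapt to your environment:
+
+```shell
+# Quick start mode: the Fleet Server uses a self-signed certificate,
+# so agents must skip certificate verification with --insecure.
+sudo elastic-agent install \
+  --url=https://fleet-svc:443 \
+  --enrollment-token=<enrollment-token> \
+  --insecure
+
+# Production mode: agents verify the Fleet Server certificate,
+# so provide the CA that signed it.
+sudo elastic-agent install \
+  --url=https://fleet.example.com:443 \
+  --enrollment-token=<enrollment-token> \
+  --certificate-authorities=/path/to/fleet-ca.crt
+```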
As illustrated in the sketch above, when you connect {{agent}}s to {{fleet-server}}, remember to use the `--insecure` flag if **quick start** mode was used, or to provide the {{agent}}s with the CA certificate associated with the {{fleet-server}} certificate if **production** mode was used.
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-mixed.md b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md
new file mode 100644
index 0000000000..a8cd77d01b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md
@@ -0,0 +1,158 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-mixed.html
+---
+
+# Deploy Fleet Server on-premises and Elasticsearch on Cloud [add-fleet-server-mixed]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+Another approach is to deploy a cluster of {{fleet-server}}s on-premises and connect them back to {{ecloud}} with access to {{es}} and {{kib}}. In this [deployment model](/reference/ingestion-tools/fleet/deployment-models.md), you are responsible for high availability, fault tolerance, and lifecycle management of {{fleet-server}}.
+
+This approach might be right for you if you would like to limit the control plane traffic out of your data center. For example, you might take this approach if you are a managed service provider or a larger enterprise that segregates its networks.
+
+This approach might *not* be right for you if you don’t want to manage the lifecycle of an extra compute resource in your environment for {{fleet-server}} to reside on.
+
+:::{image} images/fleet-server-on-prem-es-cloud.png
+:alt: {{fleet-server}} on-premises and {{es}} on Cloud deployment model
+:::
+
+To deploy a self-managed {{fleet-server}} on-premises to work with a hosted {{ess}}, you need to:
+
+* Satisfy all [compatibility requirements](#add-fleet-server-mixed-compatibility) and [prerequisites](#add-fleet-server-mixed-prereq).
+* Create a [{{fleet-server}} policy](#fleet-server-create-policy).
+* [Add {{fleet-server}}](#fleet-server-add-server) by installing an {{agent}} and enrolling it in an agent policy containing the {{fleet-server}} integration.
+
+
+## Compatibility [add-fleet-server-mixed-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later.
+
+    * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+    * {{kib}} should be on the same minor version as {{es}}.
+
+* {{ece}} 2.9 or later, which allows you to use a hosted {{fleet-server}} on {{ecloud}}.
+
+    * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://<deployment-id>.fleet.<cname>`.
+    * The deployment template must contain an {{integrations-server}} node.
+
+    For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).
+
+
+## Prerequisites [add-fleet-server-mixed-prereq]
+
+Before deploying, you need to:
+
+* Obtain or generate a Certificate Authority (CA) certificate (see the sketch after this list).
+* Ensure components have access to the default ports needed for communication.
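+
+If you do not already have a CA, one possible way to generate a CA and a {{fleet-server}} certificate for testing is the `elasticsearch-certutil` tool that ships with {{es}}. This is a minimal sketch; the output file names and the `--dns` value are assumptions to adapt to your environment:
+
+```shell
+# Generate a CA certificate and key in PEM format (produces elastic-stack-ca.zip).
+./bin/elasticsearch-certutil ca --pem
+unzip elastic-stack-ca.zip
+
+# Generate a server certificate for Fleet Server, signed by that CA.
+# The --dns value must match the hostname agents will use to reach Fleet Server.
+./bin/elasticsearch-certutil cert \
+  --name fleet-server \
+  --ca-cert ca/ca.crt \
+  --ca-key ca/ca.key \
+  --dns fleet-server.example.com \
+  --pem
+```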
+
+
+### CA certificate [add-fleet-server-mixed-cert-prereq]
+
+Before setting up {{fleet-server}} using this approach, you will need a CA certificate to configure Transport Layer Security (TLS) to encrypt traffic between the {{fleet-server}}s and the {{stack}}.
+
+If your organization already uses the {{stack}}, you may already have a CA certificate. If you do not have a CA certificate, you can read more about generating one in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+::::{note}
+This is not required when testing and iterating using the **Quick start** option, but should always be used for production deployments.
+::::
+
+
+
+### Default port assignments [default-port-assignments-mixed]
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports. You may need to allow access to these ports. Refer to the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| {{agent}} → {{fleet-server}} | 8220 |
+| {{agent}} → {{es}} | 443 |
+| {{agent}} → Logstash | 5044 |
+| {{agent}} → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{kib}} ({{fleet}}) | 443 |
+| {{fleet-server}} → {{es}} | 443 |
+
+::::{note}
+If you do not specify the port for {{es}} as 443, the {{agent}} defaults to 9200.
+::::
+
+
+
+## Create a {{fleet-server}} policy [fleet-server-create-policy]
+
+First, create a {{fleet-server}} policy. The {{fleet-server}} policy manages and configures the {{agent}} running on the {{fleet-server}} host to launch a {{fleet-server}} process.
+
+To create a {{fleet-server}} policy:
+
+1. In {{fleet}}, open the **Agent policies** tab.
+2. Click the **Create agent policy** button, then:
+
+    1. Provide a meaningful name for the policy that will help you identify this {{fleet-server}} (or cluster) in the future.
+    2. Ensure you select *Collect system logs and metrics* so the compute system hosting this {{fleet-server}} can be monitored. (This is not required, but is highly recommended.)
+
+3. After creating the {{fleet-server}} policy, navigate to the policy itself and click **Add integration**.
+4. Search for and select the **{{fleet-server}}** integration.
+5. Then click **Add {{fleet-server}}**.
+6. Configure the {{fleet-server}}:
+
+    1. Expand **Change default**. Because you are deploying this {{fleet-server}} on-premises, you need to enter the *Host* address and *Port* number, `8220`. (In our example the {{fleet-server}} will be installed on the host `10.128.0.46`.)
+    2. It’s recommended that you also enter the *Max agents* you intend to support with this {{fleet-server}}. This can also be modified at a later stage. Setting it allows the {{fleet-server}} to handle the load and frequency of updates being sent to the agents and ensures smooth operation in bursty environments.
+
+
+
+## Add {{fleet-server}}s [fleet-server-add-server]
+
+Now that the policy exists, you can add {{fleet-server}}s.
+
+A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.
+
+To add a {{fleet-server}}:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. Click **Add {{fleet-server}}**.
+3. This will open in-product instructions for adding a {{fleet-server}} using one of two options. Choose **Advanced**.
+
+    :::{image} images/add-fleet-server-advanced.png
+    :alt: In-product instructions for adding a {{fleet-server}} in advanced mode
+    :class: screenshot
+    :::
+
+4. Follow the in-product instructions to add a {{fleet-server}}.
+
+    1. Select the agent policy that you created for this deployment.
+    2. Choose **Production** as your deployment mode.
+
+        Production mode is the fully secured mode where TLS certificates ensure a secure communication between {{fleet-server}} and {{es}}.
+
+    3. Open the **{{fleet-server}} Hosts** dropdown and select **Add new {{fleet-server}} Hosts**. Specify one or more host URLs your {{agent}}s will use to connect to {{fleet-server}}. For example, `https://192.0.2.1:8220`, where `192.0.2.1` is the host IP where you will install {{fleet-server}}.
+    4. A **Service Token** is required so the {{fleet-server}} can write data to the connected {{es}} instance. Click **Generate service token** and copy the generated token.
+    5. Copy the installation instructions provided in {{kib}}, which include some of the known deployment parameters.
+    6. Replace the value of the `--certificate-authorities` parameter with your [CA certificate](#add-fleet-server-mixed-prereq).
+
+5. If installation is successful, a confirmation indicates that {{fleet-server}} is set up and connected.
+
+After {{fleet-server}} is installed and enrolled in {{fleet}}, the newly created {{fleet-server}} policy is applied. You can see this on the {{fleet-server}} policy page.
+
+The {{fleet-server}} agent will also show up on the main {{fleet}} page as another agent whose lifecycle can be managed (like other agents in the deployment).
+
+You can update your {{fleet-server}} configuration in {{kib}} at any time by going to **Management** → **{{fleet}}** → **Settings**. From there you can:
+
+* Update the {{fleet-server}} host URL.
+* Configure additional outputs where agents will send data.
+* Specify the location from where agents will download binaries.
+* Specify proxy URLs to use for {{fleet-server}} or {{agent}} outputs.
+
+
+## Next steps [fleet-server-install-agents]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
+
+::::{note}
+For on-premises deployments, you can dedicate a policy to all the agents in the network boundary and configure that policy to include a specific {{fleet-server}} (or a cluster of {{fleet-server}}s).
+
+Read more in [Add a {{fleet-server}} to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-fleet-server-to-policy).
+
+::::
diff --git a/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md
new file mode 100644
index 0000000000..a906cfe72f
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md
@@ -0,0 +1,166 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add-fleet-server-on-prem.html
+---
+
+# Deploy on-premises and self-managed [add-fleet-server-on-prem]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+You can deploy {{fleet-server}} on-premises and manage it yourself. In this [deployment model](/reference/ingestion-tools/fleet/deployment-models.md), you are responsible for high availability, fault tolerance, and lifecycle management of {{fleet-server}}.
+
+This approach might be right for you if you would like to limit the control plane traffic out of your data center or have requirements for fully air-gapped operations. For example, you might take this approach if you need to satisfy data governance requirements or you want agents to only have access to a private segmented network.
+
+This approach might *not* be right for you if you don’t want to manage the lifecycle of your Elastic environment and instead would like that to be handled by Elastic.
+
+When using this approach, it’s recommended that you provision multiple instances of the {{fleet-server}} and use a load balancer to better scale the deployment. You also have the option to use your organization’s certificate to establish a secure connection from {{fleet-server}} to {{es}}.
+
+:::{image} images/fleet-server-on-prem-deployment.png
+:alt: {{fleet-server}} on-premises deployment model
+:::
+
+To deploy a self-managed {{fleet-server}}, you need to:
+
+* Satisfy all [compatibility requirements](#add-fleet-server-on-prem-compatibility) and [prerequisites](#add-fleet-server-on-prem-prereq).
+* [Add a {{fleet-server}}](#add-fleet-server-on-prem-add-server) by installing an {{agent}} and enrolling it in an agent policy containing the {{fleet-server}} integration.
+
+::::{note}
+You can install only a single {{agent}} per host, which means you cannot run {{fleet-server}} and another {{agent}} on the same host unless you deploy a containerized {{fleet-server}}.
+::::
+
+
+
+## Compatibility [add-fleet-server-on-prem-compatibility]
+
+{{fleet-server}} is compatible with the following Elastic products:
+
+* {{stack}} 7.13 or later.
+
+    * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
+    * {{kib}} should be on the same minor version as {{es}}.
+
+* {{ece}} 2.9 or later.
+
+    * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://<deployment-id>.fleet.<cname>`.
+    * The deployment template must contain an {{integrations-server}} node.
+
+    For more information about hosting {{fleet-server}} on {{ece}}, refer to [Manage your {{integrations-server}}](/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md).
+
+
+
+## Prerequisites [add-fleet-server-on-prem-prereq]
+
+Before deploying, you need to:
+
+* Obtain or generate a Certificate Authority (CA) certificate.
+* Ensure components have access to the ports needed for communication.
+
+
+### CA certificate [add-fleet-server-on-prem-cert-prereq]
+
+Before setting up {{fleet-server}} using this approach, you will need a CA certificate to configure Transport Layer Security (TLS) to encrypt traffic between the {{fleet-server}}s and the {{stack}}.
+
+If your organization already uses the {{stack}}, you may already have a CA certificate. If you do not have a CA certificate, you can read more about generating one in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+::::{note}
+This is not required when testing and iterating using the **Quick start** option, but should always be used for production deployments.
+::::
+
+
+
+### Default port assignments [default-port-assignments-on-prem]
+
+When {{es}} or {{fleet-server}} are deployed, components communicate over well-defined, pre-allocated ports.
You may need to allow access to these ports. Refer to the following table for default port assignments:
+
+| Component communication | Default port |
+| --- | --- |
+| {{agent}} → {{fleet-server}} | 8220 |
+| {{agent}} → {{es}} | 9200 |
+| {{agent}} → Logstash | 5044 |
+| {{agent}} → {{kib}} ({{fleet}}) | 5601 |
+| {{fleet-server}} → {{kib}} ({{fleet}}) | 5601 |
+| {{fleet-server}} → {{es}} | 9200 |
+
+::::{note}
+Connectivity to {{kib}} on port 5601 is optional and not always required. {{agent}} and {{fleet-server}} may need to connect to {{kib}} if deployed in a container environment where an enrollment token cannot be provided during deployment.
+::::
+
+
+
+## Add {{fleet-server}} [add-fleet-server-on-prem-add-server]
+
+A {{fleet-server}} is an {{agent}} that is enrolled in a {{fleet-server}} policy. The policy configures the agent to operate in a special mode to serve as a {{fleet-server}} in your deployment.
+
+To add a {{fleet-server}}:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. Click **Add {{fleet-server}}**.
+3. This opens in-product instructions to add a {{fleet-server}} using one of two options: **Quick Start** or **Advanced**.
+
+    * Use **Quick Start** if you want {{fleet}} to generate a {{fleet-server}} policy and enrollment token for you. The {{fleet-server}} policy will include a {{fleet-server}} integration plus a system integration for monitoring {{agent}}. This option generates self-signed certificates and is **not** recommended for production use cases.
+
+        :::{image} images/add-fleet-server.png
+        :alt: In-product instructions for adding a {{fleet-server}} in quick start mode
+        :class: screenshot
+        :::
+
+    * Use **Advanced** if you want to either:
+
+        * **Use your own {{fleet-server}} policy.** {{fleet-server}} policies manage and configure the {{agent}} running on {{fleet-server}} hosts to launch a {{fleet-server}} process. You can create a new {{fleet-server}} policy or select an existing one. Alternatively, you can [create a {{fleet-server}} policy without using the UI](/reference/ingestion-tools/fleet/create-policy-no-ui.md), and then select the policy here.
+        * **Use your own TLS certificates.** TLS certificates encrypt traffic between {{agent}}s and {{fleet-server}}. To learn how to generate certs, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+        ::::{note}
+        If you are providing your own certificates:
+
+        * Before running the `install` command, make sure you replace the values in angle brackets.
+        * Note that the URL specified by `--url` must match the DNS name used to generate the certificate specified by `--fleet-server-cert`.
+
+        ::::
+
+        :::{image} images/add-fleet-server-advanced.png
+        :alt: In-product instructions for adding a {{fleet-server}} in advanced mode
+        :class: screenshot
+        :::
+
+4. Step through the in-product instructions to configure and install {{fleet-server}}.
+
+    ::::{note}
+    * The fields to configure {{fleet-server}} hosts are not available if the hosts are already configured outside of {{fleet}}. For more information, refer to [{{fleet}} settings in {{kib}}](kibana://docs/reference/configuration-reference/fleet-settings.md).
+    * When using the **Advanced** option, it’s recommended to generate a unique service token for each {{fleet-server}}. For other ways to generate service tokens, refer to [`elasticsearch-service-tokens`](elasticsearch://docs/reference/elasticsearch/command-line-tools/service-tokens-command.md).
+    * If you’ve configured a non-default port for {{fleet-server}} in the {{fleet-server}} integration, you need to include the `--fleet-server-host` and `--fleet-server-port` options in the `elastic-agent install` command. Refer to the [install command documentation](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) for details.
+
+    ::::
+
+    At the **Install Fleet Server to a centralized host** step, the `elastic-agent install` command installs an {{agent}} as a managed service and enrolls it in a {{fleet-server}} policy. For more {{fleet-server}} commands, refer to the [{{agent}} command reference](/reference/ingestion-tools/fleet/agent-command-reference.md).
+
+5. If installation is successful, a confirmation indicates that {{fleet-server}} is set up and connected.
+
+After {{fleet-server}} is installed and enrolled in {{fleet}}, the newly created {{fleet-server}} policy is applied. You can see this on the {{fleet-server}} policy page.
+
+The {{fleet-server}} agent also shows up on the main {{fleet}} page as another agent whose lifecycle can be managed (like other agents in the deployment).
+
+You can update your {{fleet-server}} configuration in {{kib}} at any time by going to **Management** → **{{fleet}}** → **Settings**. From there you can:
+
+* Update the {{fleet-server}} host URL.
+* Configure additional outputs where agents should send data.
+* Specify the location from where agents should download binaries.
+* Specify proxy URLs to use for {{fleet-server}} or {{agent}} outputs.
+
+
+## Troubleshooting [add-fleet-server-on-prem-troubleshoot]
+
+If you’re unable to add a {{fleet}}-managed agent, click the **Agents** tab and confirm that the agent running {{fleet-server}} is healthy.
+
+
+## Next steps [add-fleet-server-on-prem-next]
+
+Now you’re ready to add {{agent}}s to your host systems. To learn how, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
+
+::::{note}
+For on-premises deployments, you can dedicate a policy to all the agents in the network boundary and configure that policy to include a specific {{fleet-server}} (or a cluster of {{fleet-server}}s).
+
+Read more in [Add a {{fleet-server}} to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-fleet-server-to-policy).
+
+::::
diff --git a/reference/ingestion-tools/fleet/add-integration-to-policy.md b/reference/ingestion-tools/fleet/add-integration-to-policy.md
new file mode 100644
index 0000000000..c9c9c28fa7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add-integration-to-policy.md
@@ -0,0 +1,43 @@
+---
+navigation_title: "Add an integration to an {{agent}} policy"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add-integration-to-policy.html
+---
+
+# Add an integration to an {{agent}} policy [add-integration-to-policy]
+
+
+An [{{agent}} policy](/reference/ingestion-tools/fleet/agent-policy.md) consists of one or more integrations that are applied to the agents enrolled in that policy. When you add an integration, the policy created for that integration can be shared with multiple {{agent}} policies. This reduces the number of integration policies that you need to actively manage.
+
+To add a new integration to one or more {{agent}} policies:
+
+1. In {{kib}}, go to the **Integrations** page.
+2. The Integrations page shows {{agent}} integrations along with other types, such as {{beats}}.
Scroll down and select **Elastic Agent only** to view only integrations that work with {{agent}}.
+3. Search for and select an integration. You can select a category to narrow your search.
+4. Click **Add <integration name>**.
+5. You can opt to install an {{agent}} if you haven’t already, or choose **Add integration only** to proceed.
+6. In Step 1 on the **Add <integration name>** page, you can select the configuration settings specific to the integration.
+7. In Step 2 on the page, you have two options:
+
+    1. If you’d like to create a new policy for your {{agent}}s, on the **New hosts** tab specify a name for the new agent policy and choose whether or not to collect system logs and metrics. Collecting logs and metrics will add the System integration to the new agent policy.
+    2. If you already have an {{agent}} policy created, on the **Existing hosts** tab use the drop-down menu to specify one or more agent policies that you’d like to add the integration to.
+
+8. Click **Save and continue** to confirm your settings.
+
+This action installs the integration (if it’s not already installed) and adds it to the {{agent}} policies that you specified. {{fleet}} distributes the new integration policy to all {{agent}}s that are enrolled in the agent policies.
+
+You can update the settings for an installed integration at any time:
+
+1. In {{kib}}, go to the **Integrations** page.
+2. On the **Integration policies** tab, for the integration that you’d like to update, open the **Actions** menu and select **Edit integration**.
+3. On the **Edit <integration name>** page you can update any configuration settings and also update the list of {{agent}} policies to which the integration is added.
+
+    If you clear the **Agent policies** field, the integration will be removed from any {{agent}} policies to which it had been added.
+
+    To identify any integrations that have been "orphaned", that is, not associated with any {{agent}} policies, check the **Agent policies** column on the **Integration policies** tab. Any integrations that are installed but not associated with an {{agent}} policy are labeled as `No agent policies`.
+
+
+If you haven’t deployed any {{agent}}s yet or set up agent policies, start with one of our quick start guides:
+
+* [Get started with logs and metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
+* [Get started with application traces and APM](/solutions/observability/apps/get-started-with-apm.md)
diff --git a/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md b/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md
new file mode 100644
index 0000000000..cb3fb2afa7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md
@@ -0,0 +1,65 @@
+---
+navigation_title: "add_cloudfoundry_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add_cloudfoundry_metadata-processor.html
+---
+
+# Add Cloud Foundry metadata [add_cloudfoundry_metadata-processor]
+
+
+The `add_cloudfoundry_metadata` processor annotates each event with relevant metadata from Cloud Foundry applications.
+
+For events to be annotated with Cloud Foundry metadata, they must have a field called `cloudfoundry.app.id` that contains a reference to a Cloud Foundry application, and the configured Cloud Foundry client must be able to retrieve information for the application.
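+
+For illustration, the minimum shape of an incoming event that the processor can act on looks like the following; the application ID is a hypothetical value:
+
+```json
+{
+  "cloudfoundry": {
+    "app": {
+      "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479"
+    }
+  }
+}
+```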
+
+Each event is annotated with:
+
+* Application Name
+* Space ID
+* Space Name
+* Organization ID
+* Organization Name
+
+::::{note}
+Pivotal Application Service and Tanzu Application Service include this metadata in all events from the firehose since version 2.8. In these cases, the metadata in the events is used, and the `add_cloudfoundry_metadata` processor doesn’t modify these fields.
+::::
+
+
+For efficient annotation, application metadata retrieved by the Cloud Foundry client is stored in a persistent cache on the filesystem. This is done so the metadata can persist across restarts of {{agent}} and its underlying programs. For control over this cache, use the `cache_duration` and `cache_retry_delay` settings.
+
+
+## Example [_example_3]
+
+```yaml
+  - add_cloudfoundry_metadata:
+      api_address: https://api.dev.cfdev.sh
+      client_id: uaa-filebeat
+      client_secret: verysecret
+      ssl:
+        verification_mode: none
+      # To connect to Cloud Foundry over verified TLS you can specify a client and CA certificate.
+      #ssl:
+      #  certificate_authorities: ["/etc/pki/cf/ca.pem"]
+      #  certificate: "/etc/pki/cf/cert.pem"
+      #  key: "/etc/pki/cf/cert.key"
+```
+
+
+## Configuration settings [_configuration_settings_2]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `api_address` | No | `http://api.bosh-lite.com` | URL of the Cloud Foundry API. |
+| `doppler_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry Doppler Websocket. |
+| `uaa_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry UAA API. |
+| `rlp_address` | No | `${api_address}/v2/info` | URL of the Cloud Foundry RLP Gateway. |
+| `client_id` | Yes | | Client ID to authenticate with Cloud Foundry. |
+| `client_secret` | Yes | | Client Secret to authenticate with Cloud Foundry. |
+| `cache_duration` | No | `120s` | Maximum amount of time to cache an application’s metadata. |
+| `cache_retry_delay` | No | `20s` | Time to wait before trying to obtain an application’s metadata again in case of error. |
+| `ssl` | No | | SSL configuration to use when connecting to Cloud Foundry. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). |
+
diff --git a/reference/ingestion-tools/fleet/add_docker_metadata-processor.md b/reference/ingestion-tools/fleet/add_docker_metadata-processor.md
new file mode 100644
index 0000000000..54624b19af
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_docker_metadata-processor.md
@@ -0,0 +1,80 @@
+---
+navigation_title: "add_docker_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add_docker_metadata-processor.html
+---
+
+# Add Docker metadata [add_docker_metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_docker_metadata` processor annotates each event with relevant metadata from Docker containers. At startup the processor detects a Docker environment and caches the metadata.
+
+For events to be annotated with Docker metadata, the configuration must be valid, and the processor must be able to reach the Docker API.
+
+Each event is annotated with:
+
+* Container ID
+* Name
+* Image
+* Labels
+
+::::{note}
+When running {{agent}} in a container, you need to provide access to Docker’s unix socket in order for the `add_docker_metadata` processor to work. You can do this by mounting the socket inside the container. For example:
+
+`docker run -v /var/run/docker.sock:/var/run/docker.sock ...`
+
+To avoid privilege issues, you may also need to add `--user=root` to the `docker run` flags. Because the user must be part of the Docker group in order to access `/var/run/docker.sock`, root access is required if {{agent}} is running as non-root inside the container.
+
+If the Docker daemon is restarted, the mounted socket will become invalid, and metadata will stop working. When this happens, you can do one of the following:
+
+* Restart {{agent}} every time Docker is restarted
+* Mount the entire `/var/run` directory (instead of just the socket)
+
+::::
+
+
+
+## Example [_example_4]
+
+```yaml
+  - add_docker_metadata:
+      host: "unix:///var/run/docker.sock"
+      #match_fields: ["system.process.cgroup.id"]
+      #match_pids: ["process.pid", "process.parent.pid"]
+      #match_source: true
+      #match_source_index: 4
+      #match_short_id: true
+      #cleanup_timeout: 60
+      #labels.dedot: false
+      # To connect to Docker over TLS you must specify a client and CA certificate.
+      #ssl:
+      #  certificate_authority: "/etc/pki/root/ca.pem"
+      #  certificate: "/etc/pki/client/cert.pem"
+      #  key: "/etc/pki/client/cert.key"
+```
+
+
+## Configuration settings [_configuration_settings_3]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `host` | No | `unix:///var/run/docker.sock` | Docker socket (UNIX or TCP socket). |
+| `ssl` | No | | SSL configuration to use when connecting to the Docker socket. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). |
+| `match_fields` | No | | List of fields to match a container ID. At least one of the fields must hold a container ID for the event to be enriched. |
+| `match_pids` | No | `["process.pid", "process.parent.pid"]` | List of fields that contain process IDs. If the process is running in Docker, the event will be enriched. |
+| `match_source` | No | `true` | Whether to match the container ID from a log path present in the `log.file.path` field. |
+| `match_short_id` | No | `false` | Whether to match the container short ID from a log path present in the `log.file.path` field. This setting allows you to match directory names that have the first 12 characters of the container ID. For example, `/var/log/containers/b7e3460e2b21/*.log`. |
+| `match_source_index` | No | `4` | Index in the source path split by a forward slash (`/`) to find the container ID. For example, the default, `4`, matches the container ID in `/var/lib/docker/containers/<container_id>/*.log`. |
+| `cleanup_timeout` | No | `60s` | Time of inactivity before container metadata is cleaned up and forgotten. |
+| `labels.dedot` | No | `false` | Whether to replace dots (`.`) in labels with underscores (`_`). |
+
diff --git a/reference/ingestion-tools/fleet/add_fields-processor.md b/reference/ingestion-tools/fleet/add_fields-processor.md
new file mode 100644
index 0000000000..5b85386d81
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_fields-processor.md
@@ -0,0 +1,59 @@
+---
+navigation_title: "add_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add_fields-processor.html
+---
+
+# Add fields [add_fields-processor]
+
+
+The `add_fields` processor adds fields to the event. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. The `add_fields` processor overwrites the target field if it already exists. By default, the fields that you specify are grouped under the `fields` sub-dictionary in the event. To group the fields under a different sub-dictionary, use the `target` setting. To store the fields as top-level fields, set `target: ''`.
+
+
+## Examples [_examples_2]
+
+This configuration:
+
+```yaml
+  - add_fields:
+      target: project
+      fields:
+        name: myproject
+        id: '574734885120952459'
+```
+
+Adds these fields to any event:
+
+```json
+{
+  "project": {
+    "name": "myproject",
+    "id": "574734885120952459"
+  }
+}
+```
+
+This configuration alters the event metadata:
+
+```yaml
+  - add_fields:
+      target: '@metadata'
+      fields:
+        op_type: "index"
+```
+
+When the event is ingested by {{es}}, the document will have `op_type: "index"` set as a metadata field.
+
+
+## Configuration settings [_configuration_settings_4]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `target` | No | `fields` | Sub-dictionary to put all fields into. Set `target` to `@metadata` to add values to the event metadata instead of fields. |
+| `fields` | Yes | | Fields to be added. |
+
diff --git a/reference/ingestion-tools/fleet/add_host_metadata-processor.md b/reference/ingestion-tools/fleet/add_host_metadata-processor.md
new file mode 100644
index 0000000000..ebd64ad687
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_host_metadata-processor.md
@@ -0,0 +1,96 @@
+---
+navigation_title: "add_host_metadata"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add_host_metadata-processor.html
+---
+
+# Add Host metadata [add_host_metadata-processor]
+
+
+::::{tip}
+Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly.
+::::
+
+
+The `add_host_metadata` processor annotates each event with relevant metadata from the host machine.
+
+::::{note}
+If you are using {{agent}} to monitor an external system, use the [`add_observer_metadata`](/reference/ingestion-tools/fleet/add_observer_metadata-processor.md) processor instead of `add_host_metadata`.
+::::
+
+
+
+## Example [_example_5]
+
+```yaml
+  - add_host_metadata:
+      cache.ttl: 5m
+      geo:
+        name: nyc-dc1-rack1
+        location: 40.7128, -74.0060
+        continent_name: North America
+        country_iso_code: US
+        region_name: New York
+        region_iso_code: NY
+        city_name: New York
+```
+
+The fields added to the event look like this:
+
+```json
+{
+  "host":{
+    "architecture":"x86_64",
+    "name":"example-host",
+    "id":"",
+    "os":{
+      "family":"darwin",
+      "type":"macos",
+      "build":"16G1212",
+      "platform":"darwin",
+      "version":"10.12.6",
+      "kernel":"16.7.0",
+      "name":"Mac OS X"
+    },
+    "ip": ["192.168.0.1", "10.0.0.1"],
+    "mac": ["00:25:96:12:34:56", "72:00:06:ff:79:f1"],
+    "geo": {
+      "continent_name": "North America",
+      "country_iso_code": "US",
+      "region_name": "New York",
+      "region_iso_code": "NY",
+      "city_name": "New York",
+      "name": "nyc-dc1-rack1",
+      "location": "40.7128, -74.0060"
+    }
+  }
+}
+```
+
+
+## Configuration settings [_configuration_settings_5]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+::::{important}
+If `host.*` fields already exist in the event, they are overwritten by default unless you set `replace_fields` to `false` in the processor configuration.
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `netinfo.enabled` | No | `true` | Whether to include IP addresses and MAC addresses as fields `host.ip` and `host.mac`. |
+| `cache.ttl` | No | `5m` | Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether. |
+| `geo.name` | No | | User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar. |
+| `geo.location` | No | | Longitude and latitude in comma-separated format. |
+| `geo.continent_name` | No | | Name of the continent. |
+| `geo.country_name` | No | | Name of the country. |
+| `geo.region_name` | No | | Name of the region. |
+| `geo.city_name` | No | | Name of the city. |
+| `geo.country_iso_code` | No | | ISO country code. |
+| `geo.region_iso_code` | No | | ISO region code. |
+| `replace_fields` | No | `true` | Whether to replace original host fields from the event. If set to `false`, original host fields from the event are not replaced by host fields from `add_host_metadata`. |
+
diff --git a/reference/ingestion-tools/fleet/add_id-processor.md b/reference/ingestion-tools/fleet/add_id-processor.md
new file mode 100644
index 0000000000..b6dc2bb377
--- /dev/null
+++ b/reference/ingestion-tools/fleet/add_id-processor.md
@@ -0,0 +1,31 @@
+---
+navigation_title: "add_id"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/add_id-processor.html
+---
+
+# Generate an ID for an event [add_id-processor]
+
+
+The `add_id` processor generates a unique ID for an event.
+
+
+## Example [_example_6]
+
+```yaml
+  - add_id: ~
+```
+
+
+## Configuration settings [_configuration_settings_6]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}.
For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `target_field` | No | `@metadata._id` | Field where the generated ID will be stored. | +| `type` | No | `elasticsearch` | Type of ID to generate. Currently only `elasticsearch` is supported. The `elasticsearch` type uses the same algorithm that {{es}} uses to auto-generate document IDs. | + diff --git a/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md b/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md new file mode 100644 index 0000000000..170225524e --- /dev/null +++ b/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md @@ -0,0 +1,225 @@ +--- +navigation_title: "add_kubernetes_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_kubernetes_metadata-processor.html +--- + +# Add Kubernetes metadata [add_kubernetes_metadata-processor] + + +::::{tip} +Inputs that collect logs and metrics use this processor by default, so you do not need to configure it explicitly. +:::: + + +The `add_kubernetes_metadata` processor annotates each event with relevant metadata based on which Kubernetes Pod the event originated from. At startup it detects an `in_cluster` environment and caches the Kubernetes-related metadata. + +For events to be annotated with Kubernetes-related metadata, the Kubernetes configuration must be valid. + +Each event is annotated with: + +* Pod Name +* Pod UID +* Namespace +* Labels + +In addition, the node and namespace metadata are added to the Pod metadata. + +The `add_kubernetes_metadata` processor has two basic building blocks: + +* Indexers +* Matchers + +Indexers use Pod metadata to create unique identifiers for each one of the Pods. These identifiers help to correlate the metadata of the observed Pods with actual events. For example, the `ip_port` indexer can take a Kubernetes Pod and create identifiers for it based on all its `pod_ip:container_port` combinations. + +Matchers use information in events to construct lookup keys that match the identifiers created by the indexers. For example, when the `fields` matcher takes `["metricset.host"]` as a lookup field, it constructs a lookup key with the value of the field `metricset.host`. When one of these lookup keys matches with one of the identifiers, the event is enriched with the metadata of the identified Pod. + +For more information about available indexers and matchers, plus some examples, refer to [Indexers and matchers](#kubernetes-indexers-and-matchers). + + +## Examples [_examples_3] + +This configuration enables the processor when {{agent}} is run as a Pod in Kubernetes. 

```yaml
  - add_kubernetes_metadata:
      # Defining indexers and matchers manually is required, for instance:
      #indexers:
      #  - ip_port:
      #matchers:
      #  - fields:
      #      lookup_fields: ["metricset.host"]
      #labels.dedot: true
      #annotations.dedot: true
```

This configuration enables the processor on an {{agent}} running as a process on the Kubernetes node:

```yaml
  - add_kubernetes_metadata:
      host:
      # If kube_config is not set, the KUBECONFIG environment variable is checked,
      # and if not present it falls back to InCluster
      kube_config: ~/.kube/config
      # Defining indexers and matchers manually is required, for instance:
      #indexers:
      #  - ip_port:
      #matchers:
      #  - fields:
      #      lookup_fields: ["metricset.host"]
      #labels.dedot: true
      #annotations.dedot: true
```

This configuration disables the default indexers and matchers, and then enables different indexers and matchers:

```yaml
  - add_kubernetes_metadata:
      host:
      # If kube_config is not set, the KUBECONFIG environment variable is checked,
      # and if not present it falls back to InCluster
      kube_config: ~/.kube/config
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - ip_port:
      matchers:
        - fields:
            lookup_fields: ["metricset.host"]
      #labels.dedot: true
      #annotations.dedot: true
```


## Configuration settings [_configuration_settings_7]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `host` | No | | Node to scope {{agent}} to in case it cannot be accurately detected, as when running {{agent}} in host network mode. |
| `scope` | No | `node` | Whether the processor should have visibility at the node level (`node`) or at the entire cluster level (`cluster`). |
| `namespace` | No | | Namespace to collect the metadata from. If no namespace is specified, metadata is collected from all namespaces. |
| `add_resource_metadata` | No | | Filters and configuration for adding extra metadata to the event. This setting accepts the following settings:

* `node` or `namespace`: Labels and annotations filters for the extra metadata coming from node and namespace. By default, all labels are included, but annotations are not. To change the default behavior, you can set `include_labels`, `exclude_labels`, and `include_annotations`. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Wildcards are supported in these settings: set `use_regex_include: true` together with `include_labels`, or `use_regex_exclude: true` together with `exclude_labels`. To turn off enrichment of `node` or `namespace` metadata individually, set `enabled: false`.
* `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is not added by default. To enable this behavior, set `deployment: true`.
* `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is not added by default. To enable this behavior, set `cronjob: true`.

::::{dropdown} Expand this to see an example
```yaml
add_resource_metadata:
  namespace:
    include_labels: ["namespacelabel1"]
    # use_regex_include: false
    # use_regex_exclude: false
    # exclude_labels: ["namespacelabel2"]
    #labels.dedot: true
    #annotations.dedot: true
  node:
    # use_regex_include: false
    include_labels: ["nodelabel2"]
    include_annotations: ["nodeannotation1"]
    # use_regex_exclude: false
    # exclude_annotations: ["nodeannotation2"]
    #labels.dedot: true
    #annotations.dedot: true
  deployment: true
  cronjob: true
```

::::

| +| `kube_config` | No | `KUBECONFIG` environment variable, if present | Config file to use as the configuration for the Kubernetes client. | +| `kube_client_options` | No | | Additional configuration options for the Kubernetes client. Currently, client QPS and burst are supported. If this setting is not configured, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) are used.

::::{dropdown} Expand this to see an example
```yaml
kube_client_options:
  qps: 5
  burst: 10
```

::::

| +| `cleanup_timeout` | No | `60s` | Time of inactivity before stopping the running configuration for a container. | +| `sync_period` | No | | Timeout for listing historical resources. | +| `labels.dedot` | No | `true` | Whether to replace dots (`.`) in labels with underscores (`_`).
`annotations.dedot` | + + +## Indexers and matchers [kubernetes-indexers-and-matchers] + +The `add_kubernetes_metadata` processor has two basic building blocks: + +* Indexers +* Matchers + + +### Indexers [_indexers] + +Indexers use Pod metadata to create unique identifiers for each one of the Pods. + +Available indexers are: + +`container` +: Identifies the Pod metadata using the IDs of its containers. + +`ip_port` +: Identifies the Pod metadata using combinations of its IP and its exposed ports. When using this indexer, metadata is identified using the combination of `ip:port` for each of the ports exposed by all containers of the pod. The `ip` is the IP of the pod. + +`pod_name` +: Identifies the Pod metadata using its namespace and its name as `namespace/pod_name`. + +`pod_uid` +: Identifies the Pod metadata using the UID of the Pod. + + +### Matchers [_matchers] + +Matchers are used to construct the lookup keys that match with the identifiers created by indexes. + +Available matchers are: + +`field_format` +: Looks up Pod metadata using a key created with a string format that can include event fields. + + This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event. + + For example, the following configuration uses the `ip_port` indexer to identify the Pod metadata by combinations of the Pod IP and its exposed ports, and uses the destination IP and port in events as match keys: + + ```yaml + - add_kubernetes_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - ip_port: + matchers: + - field_format: + format: '%{[destination.ip]}:%{[destination.port]}' + ``` + + +`fields` +: Looks up Pod metadata using as key the value of some specific fields. When multiple fields are defined, the first one included in the event is used. + + This matcher has an option `lookup_fields` to define the files whose value will be used for lookup. + + For example, the following configuration uses the `ip_port` indexer to identify Pods, and defines a matcher that uses the destination IP or the server IP for the lookup, the first it finds in the event: + + ```yaml + - add_kubernetes_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - ip_port: + matchers: + - fields: + lookup_fields: ['destination.ip', 'server.ip'] + ``` + + +`logs_path` +: Looks up Pod metadata using identifiers extracted from the log path stored in the `log.file.path` field. + + This matcher has the following configuration settings: + + `logs_path` + : (Optional) Base path of container logs. If not specified, it uses the default logs path of the platform where Agent is running: for Linux - `/var/lib/docker/containers/`, Windows - `C:\\ProgramData\\Docker\\containers`. To change the default value: container ID must follow right after the `logs_path` - `/`, where `container_id` is a 64-character-long hexadecimal string. + + `resource_type` + : (Optional) Type of the resource to obtain the ID of. Valid `resource_type`: + + * `pod`: to make the lookup based on the Pod UID. When `resource_type` is set to `pod`, `logs_path` must be set as well, supported path in this case: + + * `/var/lib/kubelet/pods/` used to read logs from mounted into the Pod volumes, those logs end up under `/var/lib/kubelet/pods//volumes//...` To use `/var/lib/kubelet/pods/` as a `log_path`, `/var/lib/kubelet/pods` must be mounted into the filebeat Pods. 
+ * `/var/log/pods/` Note: when using `resource_type: 'pod'` logs will be enriched only with Pod metadata: Pod id, Pod name, etc., not container metadata. + + * `container`: to make the lookup based on the container ID, `logs_path` must be set to `/var/log/containers/`. It defaults to `container`. + + + To be able to use `logs_path` matcher agent’s input path must be a subdirectory of directory defined in `logs_path` configuration setting. + + The default configuration is able to lookup the metadata using the container ID when the logs are collected from the default docker logs path (`/var/lib/docker/containers//...` on Linux). + + For example the following configuration would use the Pod UID when the logs are collected from `/var/lib/kubelet/pods//...`. + + ```yaml + - add_kubernetes_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - pod_uid: + matchers: + - logs_path: + logs_path: '/var/lib/kubelet/pods' + resource_type: 'pod' + ``` + + diff --git a/reference/ingestion-tools/fleet/add_labels-processor.md b/reference/ingestion-tools/fleet/add_labels-processor.md new file mode 100644 index 0000000000..63196bf03c --- /dev/null +++ b/reference/ingestion-tools/fleet/add_labels-processor.md @@ -0,0 +1,56 @@ +--- +navigation_title: "add_labels" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_labels-processor.html +--- + +# Add labels [add_labels-processor] + + +The `add_labels` processors adds a set of key-value pairs to an event. The processor flattens nested configuration objects like arrays or dictionaries into a fully qualified name by merging nested names with a dot (`.`). Array entries create numeric names starting with 0. Labels are always stored under the Elastic Common Schema compliant `labels` sub-dictionary. + + +## Example [_example_7] + +This configuration: + +```yaml + - add_labels: + labels: + number: 1 + with.dots: test + nested: + with.dots: nested + array: + - do + - re + - with.field: mi +``` + +Adds these fields to every event: + +```json +{ + "labels": { + "number": 1, + "with.dots": "test", + "nested.with.dots": "nested", + "array.0": "do", + "array.1": "re", + "array.2.with.field": "mi" + } +} +``` + + +## Configuration settings [_configuration_settings_8] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `labels` | Yes | | Dictionaries of labels to be added. | + diff --git a/reference/ingestion-tools/fleet/add_locale-processor.md b/reference/ingestion-tools/fleet/add_locale-processor.md new file mode 100644 index 0000000000..33fed145b6 --- /dev/null +++ b/reference/ingestion-tools/fleet/add_locale-processor.md @@ -0,0 +1,44 @@ +--- +navigation_title: "add_locale" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_locale-processor.html +--- + +# Add the local time zone [add_locale-processor] + + +The `add_locale` processor enriches each event with either the machine’s time zone offset from UTC or the name of the time zone. The processor adds the a `event.timezone` value to each event. 
+ + +## Examples [_examples_4] + +The configuration adds the processor with the default settings: + +```yaml + - add_locale: ~ +``` + +This configuration adds the processor and configures it to add the time zone abbreviation to events: + +```yaml + - add_locale: + format: abbreviation +``` + +::::{note} +The `add_locale` processor differentiates between daylight savings time (DST) and regular time. For example `CEST` indicates DST and and `CET` is regular time. +:::: + + + +## Configuration settings [_configuration_settings_9] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `format` | No | `offset` | Whether an `offset` or time zone `abbreviation` is added to the event. | + diff --git a/reference/ingestion-tools/fleet/add_network_direction-processor.md b/reference/ingestion-tools/fleet/add_network_direction-processor.md new file mode 100644 index 0000000000..5c6f5de0ba --- /dev/null +++ b/reference/ingestion-tools/fleet/add_network_direction-processor.md @@ -0,0 +1,37 @@ +--- +navigation_title: "add_network_direction" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_network_direction-processor.html +--- + +# Add network direction [add_network_direction-processor] + + +The `add_network_direction` processor attempts to compute the perimeter-based network direction when given a source and destination IP address and a list of internal networks. + + +## Example [_example_8] + +```yaml + - add_network_direction: + source: source.ip + destination: destination.ip + target: network.direction + internal_networks: [ private ] +``` + + +## Configuration settings [_configuration_settings_10] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `source` | Yes | | Source IP. | +| `destination` | Yes | | Destination IP. | +| `target` | Yes | | Target field where the network direction will be written. | +| `internal_networks` | Yes | | List of internal networks. The value can contain either CIDR blocks or a list of special values enumerated in the network section of [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions). | + diff --git a/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md b/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md new file mode 100644 index 0000000000..305de98330 --- /dev/null +++ b/reference/ingestion-tools/fleet/add_nomad_metadata-processor.md @@ -0,0 +1,137 @@ +--- +navigation_title: "add_nomad_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_nomad_metadata-processor.html +--- + +# Add Nomad metadata [add_nomad_metadata-processor] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. 
+:::: + + +The `add_nomad_metadata` processor adds fields with relevant metadata for applications deployed in Nomad. + +Each event is annotated with the following information: + +* Allocation name, identifier, and status +* Job name and type +* Namespace where the job is deployed +* Datacenter and region where the agent running the allocation is located. + + +## Example [_example_9] + +```yaml + - add_nomad_metadata: ~ +``` + + +## Configuration settings [_configuration_settings_11] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `address` | No | `http://127.0.0.1:4646` | URL of the agent API used to request the metadata. | +| `namespace` | No | | Namespace to watch. If set, only events for allocations in this namespace are annotated. | +| `region` | No | | Region to watch. If set, only events for allocations in this region are annotated. | +| `secret_id` | No | | SecretID to use when connecting with the agent API. This is an example ACL policy to apply to the token.

```json
namespace "*" {
policy = "read"
}
node {
policy = "read"
}
agent {
policy = "read"
}
```
| +| `refresh_interval` | No | `30s` | Interval used to update the cached metadata. | +| `cleanup_timeout` | No | `60s` | Time to wait before cleaning up an allocation’s associated resources after it has been removed. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. | +| `scope` | No | `node` | Scope of the resources to watch. Specify `node` to get metadata for the allocations in a single agent, or `global` to get metadata for allocations running on any agent. | +| `node` | No | | When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.

For example, you can use the following configuration when {{agent}} is collecting events from all the allocations in the cluster:

```yaml
- add_nomad_metadata:
    scope: global
```
| + + +## Indexers and matchers [_indexers_and_matchers] + +Indexers and matchers are used to correlate fields in events with actual metadata. {{agent}} uses this information to know what metadata to include in each event. + + +### Indexers [_indexers_2] + +Indexers use allocation metadata to create unique identifiers for each one of the Pods. + +Available indexers are: + +`allocation_name` +: Identifies allocations by their name and namespace (as `/`) + +`allocation_uuid` +: Identifies allocations by their unique identifier. + + +### Matchers [_matchers_2] + +Matchers are used to construct the lookup keys that match with the identifiers created by indexes. + + +#### `field_format` [_field_format] + +Looks up allocation metadata using a key created with a string format that can include event fields. + +This matcher has an option `format` to define the string format. This string format can contain placeholders for any field in the event. + +For example, the following configuration uses the `allocation_name` indexer to identify the allocation metadata by its name and namespace, and uses custom fields existing in the event as match keys: + +```yaml +- add_nomad_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - allocation_name: + matchers: + - field_format: + format: '%{[labels.nomad_namespace]}/%{[fields.nomad_alloc_name]}' +``` + + +#### `fields` [_fields] + +Looks up allocation metadata using as key the value of some specific fields. When multiple fields are defined, the first one included in the event is used. + +This matcher has an option `lookup_fields` to define the fields whose value will be used for lookup. + +For example, the following configuration uses the `allocation_uuid` indexer to identify allocations, and defines a matcher that uses some fields where the allocation UUID can be found for lookup, the first it finds in the event: + +```yaml +- add_nomad_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - allocation_uuid: + matchers: + - fields: + lookup_fields: ['host.name', 'fields.nomad_alloc_uuid'] +``` + + +#### `logs_path` [_logs_path] + +Looks up allocation metadata using identifiers extracted from the log path stored in the `log.file.path` field. + +This matcher has an optional `logs_path` option with the base path of the directory containing the logs for the local agent. + +The default configuration is able to lookup the metadata using the allocation UUID when the logs are collected under `/var/lib/nomad`. + +For example the following configuration would use the allocation UUID when the logs are collected from `/var/lib/NomadClient001/alloc//alloc/logs/...`. + +```yaml +- add_nomad_metadata: + ... + default_indexers.enabled: false + default_matchers.enabled: false + indexers: + - allocation_uuid: + matchers: + - logs_path: + logs_path: '/var/lib/NomadClient001' +``` + diff --git a/reference/ingestion-tools/fleet/add_observer_metadata-processor.md b/reference/ingestion-tools/fleet/add_observer_metadata-processor.md new file mode 100644 index 0000000000..a382642d87 --- /dev/null +++ b/reference/ingestion-tools/fleet/add_observer_metadata-processor.md @@ -0,0 +1,81 @@ +--- +navigation_title: "add_observer_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_observer_metadata-processor.html +--- + +# Add Observer metadata [add_observer_metadata-processor] + + +::::{warning} +This functionality is in beta and is subject to change. 
The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: + + +The `add_observer_metadata` processor annotates each event with relevant metadata from the observer machine. + + +## Example [_example_10] + +```yaml + - add_observer_metadata: + cache.ttl: 5m + geo: + name: nyc-dc1-rack1 + location: 40.7128, -74.0060 + continent_name: North America + country_iso_code: US + region_name: New York + region_iso_code: NY + city_name: New York +``` + +The fields added to the event look like this: + +```json +{ + "observer" : { + "hostname" : "avce", + "type" : "heartbeat", + "vendor" : "elastic", + "ip" : [ + "192.168.1.251", + "fe80::64b2:c3ff:fe5b:b974", + ], + "mac" : [ + "dc:c1:02:6f:1b:ed", + ], + "geo": { + "continent_name": "North America", + "country_iso_code": "US", + "region_name": "New York", + "region_iso_code": "NY", + "city_name": "New York", + "name": "nyc-dc1-rack1", + "location": "40.7128, -74.0060" + } + } +} +``` + + +## Configuration settings [_configuration_settings_12] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `netinfo.enabled` | No | `true` | Whether to include IP addresses and MAC addresses as fields `observer.ip` and `observer.mac`. | +| `cache.ttl` | No | `5m` | Sets the cache expiration time for the internal cache used by the processor. Negative values disable caching altogether. | +| `geo.name` | No | | User-definable token to be used for identifying a discrete location. Frequently a data center, rack, or similar. | +| `geo.location` | No | | Longitude and latitude in comma-separated format. | +| `geo.continent_name` | No | | Name of the continent. | +| `geo.country_name` | No | | Name of the country. | +| `geo.region_name` | No | | Name of the region. | +| `geo.city_name` | No | | Name of the city. | +| `geo.country_iso_code` | No | | ISO country code. | +| `geo.region_iso_code` | No | | ISO region code. | + diff --git a/reference/ingestion-tools/fleet/add_process_metadata-processor.md b/reference/ingestion-tools/fleet/add_process_metadata-processor.md new file mode 100644 index 0000000000..49a790566e --- /dev/null +++ b/reference/ingestion-tools/fleet/add_process_metadata-processor.md @@ -0,0 +1,77 @@ +--- +navigation_title: "add_process_metadata" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_process_metadata-processor.html +--- + +# Add process metadata [add_process_metadata-processor] + + +The `add_process_metadata` processor enriches events with information from running processes, identified by their process ID (PID). 
+ + +## Example [_example_11] + +```yaml + - add_process_metadata: + match_pids: [system.process.ppid] + target: system.process.parent +``` + +The fields added to the event look as follows: + +```json +"process": { + "name": "systemd", + "title": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22", + "exe": "/usr/lib/systemd/systemd", + "args": ["/usr/lib/systemd/systemd", "--switched-root", "--system", "--deserialize", "22"], + "pid": 1, + "parent": { + "pid": 0 + }, + "start_time": "2018-08-22T08:44:50.684Z", + "owner": { + "name": "root", + "id": "0" + } +}, +"container": { + "id": "b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1" +}, +``` + +Optionally, the process environment can be included, too: + +```json + ... + "env": { + "HOME": "/", + "TERM": "linux", + "BOOT_IMAGE": "/boot/vmlinuz-4.11.8-300.fc26.x86_64", + "LANG": "en_US.UTF-8", + } + ... +``` + + +## Configuration settings [_configuration_settings_13] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `match_pids` | Yes | | List of fields to lookup for a PID. The processor searches the list sequentially until the field is found in the current event, and the PID lookup is then applied to the value of this field. | +| `target` | No | event root | Destination prefix where the `process` object will be created. | +| `include_fields` | No | | List of fields to add. By default, adds all available fields except `process.env`. | +| `ignore_missing` | No | `true` | Whether to ignore missing fields. If `false`, discards events that don’t contain any of the fields specified in `match_pids` and then generates an error. If `true`, missing fields are ignored. | +| `overwrite_keys` | No | `false` | Whether to overwrite existing keys. If `false` and a target field already exists, it is not, overwritten, and an error is logged. If `true`, the target field is overwritten. | +| `restricted_fields` | No | `false` | Whether to output restricted fields. If `false`, to avoid leaking sensitive data, the `process.env` field is not output. If `true`, the field will be present in the output. | +| `host_path` | No | root directory (`/`) of host | Host path where `/proc` is mounted. For different runtime configurations of Kubernetes or Docker, set the `host_path` to overwrite the default. | +| `cgroup_prefixes` | No | `/kubepods` and `/docker` | Prefix where the container ID is inside cgroup. For different runtime configurations of Kubernetes or Docker, set `cgroup_prefixes` to overwrite the defaults. | +| `cgroup_regex` | No | | Regular expression with capture group for capturing the container ID from the cgroup path. For example:

1. `^\/.+\/.+\/.+\/([0-9a-f]{64}).*` matches the container ID of a cgroup like `/kubepods/besteffort/pod665fb997-575b-11ea-bfce-080027421ddf/b5285682fba7449c86452b89a800609440ecc88a7ba5f2d38bedfb85409b30b1`
2. `^\/.+\/.+\/.+\/docker-([0-9a-f]{64}).scope` matches the container ID of a cgroup like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/docker-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope`
3. `^\/.+\/.+\/.+\/crio-([0-9a-f]{64}).scope` matches the container ID of a cgroup like `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349abe_d645_11ea_9c4c_08002709c05c.slice/crio-80d85a3a585f1575028ebe468d83093c301eda20d37d1671ff2a0be50fc0e460.scope`

If `cgroup_regex` is not set, the container ID is extracted from the cgroup file based on the `cgroup_prefixes` setting.
| +| `cgroup_cache_expire_time` | No | `30s` | Time in seconds before cgroup cache elements expire. To disable the cgroup cache, set this to `0`. In some container runtime technologies, like runc, the container’s process is also a process in the host kernel and will be affected by PID rollover/reuse. Set the expire time to a value that is smaller than the PIDs wrap around time to avoid the wrong container ID. | + diff --git a/reference/ingestion-tools/fleet/add_tags-processor.md b/reference/ingestion-tools/fleet/add_tags-processor.md new file mode 100644 index 0000000000..ee10babe4b --- /dev/null +++ b/reference/ingestion-tools/fleet/add_tags-processor.md @@ -0,0 +1,43 @@ +--- +navigation_title: "add_tags" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/add_tags-processor.html +--- + +# Add tags [add_tags-processor] + + +The `add_tags` processor adds tags to a list of tags. If the target field already exists, the tags are appended to the existing list of tags. + + +## Example [_example_12] + +This configuration: + +```yaml + - add_tags: + tags: [web, production] + target: "environment" +``` + +Adds the `environment` field to every event: + +```json +{ + "environment": ["web", "production"] +} +``` + + +## Configuration settings [_configuration_settings_14] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `tags` | Yes | | List of tags to add. | +| `target` | No | `tags` | Field the tags will be added to. Setting tags in `@metadata` is not supported. | + diff --git a/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md b/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md new file mode 100644 index 0000000000..09789303d2 --- /dev/null +++ b/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md @@ -0,0 +1,106 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/advanced-kubernetes-managed-by-fleet.html +--- + +# Advanced Elastic Agent configuration managed by Fleet [advanced-kubernetes-managed-by-fleet] + +For basic {{agent}} managed by {{fleet}} scenarios follow the steps in [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md). + +On managed {{agent}} installations it can be useful to provide the ability to configure more advanced options, such as the configuration of providers during the startup. Refer to [Providers](/reference/ingestion-tools/fleet/providers.md) for more details. 
+ +Following steps demonstrate above scenario: + + +## Step 1: Download the {{agent}} manifest [_step_1_download_the_agent_manifest_2] + +It is advisable to follow the steps of [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md) with Kubernetes Integration installed in your policy and download the {{agent}} manifest from Kibana UI + +:::{image} images/k8skibanaUI.png +:alt: {{agent}} with K8s Package manifest +::: + +Notes +: Sample manifests can also be found [here](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml) + + +## Step 2: Create a new configmap [_step_2_create_a_new_configmap] + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: agent-node-datastreams + namespace: kube-system + labels: + k8s-app: elastic-agent +data: + agent.yml: |- + providers.kubernetes_leaderelection.enabled: false + fleet.enabled: true + fleet.access_token: "" +--- +``` + +Notes +: 1. In the above example the disablement of `kubernetes_leaderelection` provider is demonstrated. Same procedure can be followed for alternative scenarios. + + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: agent-node-datastreams + namespace: kube-system + labels: + k8s-app: elastic-agent +data: + agent.yml: |- + providers.kubernetes: + add_resource_metadata: + deployment: true + cronjob: true + fleet.enabled: true + fleet.access_token: "" +--- +``` + +1. Find more information about [Enrollment Tokens](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md). + + +## Step 3: Configure Daemonset [_step_3_configure_daemonset] + +Inside the downloaded manifest, update the Daemonset resource: + +```yaml +containers: + - name: elastic-agent + image: docker.elastic.co/elastic-agent/elastic-agent: + args: ["-c", "/etc/elastic-agent/agent.yml", "-e"] +``` + +Notes +: The is just a placeholder for the elastic-agent image version that you will download in your manifest: eg. `image: docker.elastic.co/elastic-agent/elastic-agent: 8.11.0` Important thing is to update your manifest with args details + +```yaml +volumeMounts: + - name: datastreams + mountPath: /etc/elastic-agent/agent.yml + readOnly: true + subPath: agent.yml +``` + +```yaml +volumes: + - name: datastreams + configMap: + defaultMode: 0640 + name: agent-node-datastreams +``` + + +## Important Notes [_important_notes] + +1. By default the manifests for {{agent}} managed by {{fleet}} have `hostNetwork:true`. In order to support multiple installations of {{agent}}s in the same node you should set `hostNetwork:false`. See this relevant [example](https://github.com/elastic/elastic-agent/tree/main/docs/manifests/hostnetwork) as described in [{{agent}} Manifests in order to support Kube-State-Metrics Sharding](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md). +2. The volume `/usr/share/elastic-agent/state` must remain mounted in [elastic-agent-managed-kubernetes.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml), otherwise custom config map provided above will be overwritten. 
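As a quick sanity check, a hypothetical verification step (not part of the official procedure), you can confirm that the ConfigMap from step 2 exists and that the agent container sees the mounted `agent.yml`. This sketch assumes the DaemonSet keeps the default `elastic-agent` name from the reference manifest:

```shell
# Confirm the ConfigMap created in step 2 exists in the kube-system namespace
kubectl get configmap agent-node-datastreams -n kube-system

# Verify the custom agent.yml is mounted where the container args expect it
kubectl exec -n kube-system daemonset/elastic-agent -- cat /etc/elastic-agent/agent.yml
```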
+ diff --git a/reference/ingestion-tools/fleet/agent-command-reference.md b/reference/ingestion-tools/fleet/agent-command-reference.md new file mode 100644 index 0000000000..d6cd2ed315 --- /dev/null +++ b/reference/ingestion-tools/fleet/agent-command-reference.md @@ -0,0 +1,1196 @@ +--- +navigation_title: "Command reference" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-cmd-options.html +--- + +# {{agent}} command reference [elastic-agent-cmd-options] + + +{{agent}} provides commands for running {{agent}}, managing {{fleet-server}}, and doing common tasks. The commands listed here apply to both [{{fleet}}-managed](/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md) and [standalone](/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md) {{agent}}. + +::::{admonition} Restrictions +:class: important + +Note the following restrictions for running {{agent}} commands: + +* You might need to log in as a root user (or Administrator on Windows) to run the commands described here. After the {{agent}} service is installed and running, make sure you run these commands without prepending them with `./` to avoid invoking the wrong binary. +* Running {{agent}} commands using the Windows PowerShell ISE is not supported. + +:::: + + +* [diagnostics](#elastic-agent-diagnostics-command) +* [enroll](#elastic-agent-enroll-command) +* [help](#elastic-agent-help-command) +* [inspect](#elastic-agent-inspect-command) +* [install](#elastic-agent-install-command) +* [otel](#elastic-agent-otel-command) [preview] +* [privileged](#elastic-agent-privileged-command) +* [restart](#elastic-agent-restart-command) +* [run](#elastic-agent-run-command) +* [status](#elastic-agent-status-command) +* [uninstall](#elastic-agent-uninstall-command) +* [upgrade](#elastic-agent-upgrade-command) +* [logs](#elastic-agent-logs-command) +* [unprivileged](#elastic-agent-unprivileged-command) +* [version](#elastic-agent-version-command) + +
+ +## elastic-agent diagnostics [elastic-agent-diagnostics-command] + +Gather diagnostics information from the {{agent}} and component/unit it’s running. This command produces an archive that contains: + +* version.txt - version information +* pre-config.yaml - pre-configuration before variable substitution +* variables.yaml - current variable contexts from providers +* computed-config.yaml - configuration after variable substitution +* components-expected.yaml - expected computed components model from the computed-config.yaml +* components-actual.yaml - actual running components model as reported by the runtime manager +* state.yaml - current state information of all running components +* Components Directory - diagnostic information from each running component: + + * goroutine.txt - goroutine dump + * heap.txt - memory allocation of live objects + * allocs.txt - sampling past memory allocations + * threadcreate.txt - traces led to creation of new OS threads + * block.txt - stack traces that led to blocking on synchronization primitives + * mutex.txt - stack traces of holders of contended mutexes + * Unit Directory - If a given unit provides specific diagnostics, it will be placed here. + + +Note that **credentials may not be redacted** in the archive; they may appear in plain text in the configuration or policy files inside the archive. + +This command is intended for debugging purposes only. The output format and structure of the archive may change between releases. + + +### Synopsis [_synopsis] + +```shell +elastic-agent diagnostics [--file ] + [--cpu-profile] + [--exclude-events] + [--help] + [global-flags] +``` + + +### Options [_options] + +`--file` +: Specifies the output archive name. Defaults to `elastic-agent-diagnostics-.zip`, where the timestamp is the current time in UTC. + +`--help` +: Show help for the `diagnostics` command. + +`--cpu-profile` +: Additionally runs a 30-second CPU profile on each running component. This will generate an additional `cpu.pprof` file for each component. + +`--p` +: Alias for `--cpu-profile`. + +`--exclude-events` +: Exclude the events log files from the diagnostics archive. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Example [_example_38] + +```shell +elastic-agent diagnostics +``` + +
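A fuller invocation can combine the flags described above; the archive name `agent-diag.zip` here is an arbitrary example:

```shell
# Write the archive to a custom name, include a CPU profile for each
# running component, and leave the events log files out of the archive.
elastic-agent diagnostics --file agent-diag.zip --cpu-profile --exclude-events
```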
+ +## elastic-agent enroll [elastic-agent-enroll-command] + +Enroll the {{agent}} in {{fleet}}. + +Use this command to enroll the {{agent}} in {{fleet}} without installing the agent as a service. You will need to do this if you installed the {{agent}} from a DEB or RPM package and plan to use systemd commands to start and manage the service. This command is also useful for testing {{agent}} prior to installing it. + +If you’ve already installed {{agent}}, use this command to modify the settings that {{agent}} runs with. + +::::{tip} +To enroll an {{agent}} *and* install it as a service, use the [`install` command](#elastic-agent-install-command) instead. Installing as a service is the most common scenario. +:::: + + +We recommend that you run the `enroll` (or `install`) command as the root user because some integrations require root privileges to collect sensitive data. This command overwrites the `elastic-agent.yml` file in the agent directory. + +This command includes optional flags to set up [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md). + +::::{important} +This command enrolls the {{agent}} in {{fleet}}; it does not start the agent. To start the agent, either [start the service](/reference/ingestion-tools/fleet/start-stop-elastic-agent.md#start-elastic-agent-service), if one exists, or use the [`run` command](#elastic-agent-run-command) to start the agent from a terminal. +:::: + + + +### Synopsis [_synopsis_2] + +To enroll the {{agent}} in {{fleet}}: + +```shell +elastic-agent enroll --url + --enrollment-token + [--ca-sha256 ] + [--certificate-authorities ] + [--daemon-timeout ] + [--delay-enroll] + [--elastic-agent-cert ] + [--elastic-agent-cert-key ] + [--elastic-agent-cert-key-passphrase ] + [--force] + [--header ] + [--help] + [--insecure ] + [--proxy-disabled] + [--proxy-header ] + [--proxy-url ] + [--staging ] + [--tag ] + [global-flags] +``` + +To enroll the {{agent}} in {{fleet}} and set up {{fleet-server}}: + +```shell +elastic-agent enroll --fleet-server-es + --fleet-server-service-token + [--fleet-server-service-token-path ] + [--ca-sha256 ] + [--certificate-authorities ] + [--daemon-timeout ] + [--delay-enroll] + [--elastic-agent-cert ] + [--elastic-agent-cert-key ] + [--elastic-agent-cert-key-passphrase ] + [--fleet-server-cert ] <1> + [--fleet-server-cert-key ] + [--fleet-server-cert-key-passphrase ] + [--fleet-server-client-auth ] + [--fleet-server-es-ca ] + [--fleet-server-es-ca-trusted-fingerprint ] <2> + [--fleet-server-es-cert ] + [--fleet-server-es-cert-key ] + [--fleet-server-es-insecure] + [--fleet-server-host ] + [--fleet-server-policy ] + [--fleet-server-port ] + [--fleet-server-timeout ] + [--force] + [--header ] + [--help] + [--non-interactive] + [--proxy-disabled] + [--proxy-header ] + [--proxy-url ] + [--staging ] + [--tag ] + [--url ] <3> + [global-flags] +``` + +1. If no `fleet-server-cert*` flags are specified, {{agent}} auto-generates a self-signed certificate with the hostname of the machine. Remote {{agent}}s enrolling into a {{fleet-server}} with self-signed certificates must specify the `--insecure` flag. +2. Required when using self-signed certificates with {{es}}. +3. Required when enrolling in a {{fleet-server}} with custom certificates. The URL must match the DNS name used to generate the certificate specified by `--fleet-server-cert`. + + +For more information about custom certificates, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md). 
+ + +### Options [_options_2] + +`--ca-sha256 ` +: Comma-separated list of certificate authority hash pins used for certificate verification. + +`--certificate-authorities ` +: Comma-separated list of root certificates used for server verification. + +`--daemon-timeout ` +: Timeout waiting for {{agent}} daemon. + +`--delay-enroll` +: Delays enrollment to occur on first start of the {{agent}} service. This setting is useful when you don’t want the {{agent}} to enroll until the next reboot or manual start of the service, for example, when you’re preparing an image that includes {{agent}}. + +`--elastic-agent-cert` +: Certificate to use as the client certificate for the {{agent}}'s connections to {{fleet-server}}. + +`--elastic-agent-cert-key` +: Private key to use as for the {{agent}}'s connections to {{fleet-server}}. + +`--elastic-agent-cert-key-passphrase` +: The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must only contain the characters of the passphrase, no newline or extra non-printing characters. + + This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase to use. + + +`--enrollment-token ` +: Enrollment token to use to enroll {{agent}} into {{fleet}}. You can use the same enrollment token for multiple agents. + +`--fleet-server-cert ` +: Certificate to use for exposed {{fleet-server}} HTTPS endpoint. + +`--fleet-server-cert-key ` +: Private key to use for exposed {{fleet-server}} HTTPS endpoint. + +`--fleet-server-cert-key-passphrase ` +: Path to passphrase file for decrypting {{fleet-server}}'s private key if an encrypted private key is used. + +`--fleet-server-client-auth ` +: One of `none`, `optional`, or `required`. Defaults to `none`. {{fleet-server}}'s `client_authentication` option for client mTLS connections. If `optional`, or `required` is specified, client certificates are verified using CAs specified in the `--certificate-authorities` flag. + +`--fleet-server-es ` +: Start a {{fleet-server}} process when {{agent}} is started, and connect to the specified {{es}} URL. + +`--fleet-server-es-ca ` +: Path to certificate authority to use to communicate with {{es}}. + +`--fleet-server-es-ca-trusted-fingerprint ` +: The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint will be used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}. + +`--fleet-server-es-cert` +: The path to the client certificate that {{fleet-server}} will use when connecting to {{es}}. + +`--fleet-server-es-cert-key` +: The path to the private key that {{fleet-server}} will use when connecting to {{es}}. + +`--fleet-server-es-insecure` +: Allows fleet server to connect to {{es}} in the following situations: + + * When connecting to an HTTP server. + * When connecting to an HTTPs server and the certificate chain cannot be verified. The content is encrypted, but the certificate is not verified. + + When this flag is used the certificate verification is disabled. + + +`--fleet-server-host ` +: {{fleet-server}} HTTP binding host (overrides the policy). + +`--fleet-server-policy ` +: Used when starting a self-managed {{fleet-server}} to allow a specific policy to be used. + +`--fleet-server-port ` +: {{fleet-server}} HTTP binding port (overrides the policy). 
+ +`--fleet-server-service-token ` +: Service token to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token-path`. + +`--fleet-server-service-token-path ` +: Service token file to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token`. + +`--fleet-server-timeout ` +: Timeout waiting for {{fleet-server}} to be ready to start enrollment. + +`--force` +: Force overwrite of current configuration without prompting for confirmation. This flag is helpful when using automation software or scripted deployments. + + ::::{note} + If the {{agent}} is already installed on the host, using `--force` may result in unpredictable behavior with duplicate {{agent}}s appearing in {{fleet}}. + :::: + + +`--header ` +: Headers used in communication with elasticsearch. + +`--help` +: Show help for the `enroll` command. + +`--insecure` +: Allow the {{agent}} to connect to {{fleet-server}} over insecure connections. This setting is required in the following situations: + + * When connecting to an HTTP server. The API keys are sent in clear text. + * When connecting to an HTTPs server and the certificate chain cannot be verified. The content is encrypted, but the certificate is not verified. + * When using self-signed certificates generated by {{agent}}. + + We strongly recommend that you use a secure connection. + + +`--non-interactive` +: Install {{agent}} in a non-interactive mode. This flag is helpful when using automation software or scripted deployments. If {{agent}} is already installed on the host, the installation will terminate. + +`--proxy-disabled` +: Disable proxy support including environment variables. + +`--proxy-header ` +: Proxy headers used with CONNECT request. + +`--proxy-url ` +: Configures the proxy URL. + +`--staging ` +: Configures agent to download artifacts from a staging build. + +`--tag ` +: A comma-separated list of tags to apply to {{fleet}}-managed {{agent}}s. You can use these tags to filter the list of agents in {{fleet}}. + + ::::{note} + Currently, there is no way to remove or edit existing tags. To change the tags, you must unenroll the {{agent}}, then re-enroll it using new tags. + :::: + + +`--url ` +: {{fleet-server}} URL to use to enroll the {{agent}} into {{fleet}}. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Examples [_examples_11] + +Enroll the {{agent}} in {{fleet}}: + +```shell +elastic-agent enroll \ + --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \ + --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== +``` + +Enroll the {{agent}} in {{fleet}} and set up {{fleet-server}}: + +```shell +elastic-agent enroll --fleet-server-es=http://elasticsearch:9200 \ + --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \ + --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 +``` + +Start {{agent}} with {{fleet-server}} (running on a custom CA). 
This example assumes you’ve generated the certificates with the following names: + +* `ca.crt`: Root CA certificate +* `fleet-server.crt`: {{fleet-server}} certificate +* `fleet-server.key`: {{fleet-server}} private key +* `elasticsearch-ca.crt`: CA certificate to use to connect to {es} + +```shell +elastic-agent enroll \ + --url=https://fleet-server:8220 \ + --fleet-server-es=https://elasticsearch:9200 \ + --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \ + --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \ + --certificate-authorities=/path/to/ca.crt \ + --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \ + --fleet-server-cert=/path/to/fleet-server.crt \ + --fleet-server-cert-key=/path/to/fleet-server.key \ + --fleet-server-port=8220 +``` + +Then enroll another {{agent}} into the {{fleet-server}} started in the previous example: + +```shell +elastic-agent enroll --url=https://fleet-server:8220 \ + --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \ + --certificate-authorities=/path/to/ca.crt +``` + +
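As a further sketch, the `--delay-enroll` flag described above can be added to defer enrollment until the first start of the {{agent}} service (the URL and enrollment token are the placeholder values reused from the first example):

```shell
elastic-agent enroll \
  --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \
  --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \
  --delay-enroll
```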
+ +## elastic-agent help [elastic-agent-help-command] + +Show help for a specific command. + + +### Synopsis [_synopsis_3] + +```shell +elastic-agent help [--help] [global-flags] +``` + + +### Options [_options_3] + +`command` +: The name of the command. + +`--help` +: Show help for the `help` command. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Example [_example_39] + +```shell +elastic-agent help enroll +``` + +

## elastic-agent inspect [elastic-agent-inspect-command]

Show the current {{agent}} configuration.

If no parameters are specified, shows the full {{agent}} configuration.


### Synopsis [_synopsis_4]

```shell
elastic-agent inspect [--help]
elastic-agent inspect components [--show-config]
                                 [--show-spec]
                                 [--help]
                                 [id]
```


### Options [_options_4]

`components`
: Display the current configuration for the component. This command accepts additional flags:

    `--show-config`
    : Use to display the configuration in all units.

    `--show-spec`
    : Use to get the input/output runtime specification for a component.


`--help`
: Show help for the `inspect` command.

For more flags, see [Global flags](#elastic-agent-global-flags).


### Examples [_examples_12]

```shell
elastic-agent inspect
elastic-agent inspect components --show-config
elastic-agent inspect components log-default
```
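Per the synopsis, `--show-spec` also accepts a component `id`; for example, to print the runtime specification of just the `log-default` component used in the examples above:

```shell
# Print the input/output runtime specification for a single component
elastic-agent inspect components --show-spec log-default
```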
+ +## elastic-agent privileged [elastic-agent-privileged-command] + +Run {{agent}} with full superuser privileges. This is the usual, default running mode for {{agent}}. The `privileged` command allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode. + +Refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md) for more detail. + + +### Examples [_examples_13] + +```shell +elastic-agent privileged +``` + +
+ +## elastic-agent install [elastic-agent-install-command] + +Install {{agent}} permanently on the system and manage it by using the system’s service manager. The agent will start automatically after installation is complete. On Linux (tar package), this command requires a system and service manager like systemd. + +::::{important} +If you installed {{agent}} from a DEB or RPM package, the `install` command will skip the installation itself and function as an alias of the [`enroll` command](#elastic-agent-enroll-command) instead. Note that after an upgrade of the {{agent}} using DEB or RPM the {{agent}} service needs to be restarted. +:::: + + +You must run this command as the root user (or Administrator on Windows) to write files to the correct locations. This command overwrites the `elastic-agent.yml` file in the agent directory. + +The syntax for running this command varies by platform. For platform-specific examples, refer to [*Install {{agent}}s*](/reference/ingestion-tools/fleet/install-elastic-agents.md). + + +### Synopsis [_synopsis_5] + +To install the {{agent}} as a service, enroll it in {{fleet}}, and start the `elastic-agent` service: + +```shell +elastic-agent install --url + --enrollment-token + [--base-path ] + [--ca-sha256 ] + [--certificate-authorities ] + [--daemon-timeout ] + [--delay-enroll] + [--elastic-agent-cert ] + [--elastic-agent-cert-key ] + [--elastic-agent-cert-key-passphrase ] + [--force] + [--header ] + [--help] + [--insecure ] + [--non-interactive] + [--privileged] + [--proxy-disabled] + [--proxy-header ] + [--proxy-url ] + [--staging ] + [--tag ] + [--unprivileged] + [global-flags] +``` + +To install the {{agent}} as a service, enroll it in {{fleet}}, and start a `fleet-server` process alongside the `elastic-agent` service: + +```shell +elastic-agent install --fleet-server-es + --fleet-server-service-token + [--fleet-server-service-token-path ] + [--base-path ] + [--ca-sha256 ] + [--certificate-authorities ] + [--daemon-timeout ] + [--delay-enroll] + [--elastic-agent-cert ] + [--elastic-agent-cert-key ] + [--elastic-agent-cert-key-passphrase ] + [--fleet-server-cert ] <1> + [--fleet-server-cert-key ] + [--fleet-server-cert-key-passphrase ] + [--fleet-server-client-auth ] + [--fleet-server-es-ca ] + [--fleet-server-es-ca-trusted-fingerprint ] <2> + [--fleet-server-es-cert ] + [--fleet-server-es-cert-key ] + [--fleet-server-es-insecure] + [--fleet-server-host ] + [--fleet-server-policy ] + [--fleet-server-port ] + [--fleet-server-timeout ] + [--force] + [--header ] + [--help] + [--non-interactive] + [--privileged] + [--proxy-disabled] + [--proxy-header ] + [--proxy-url ] + [--staging ] + [--tag ] + [--unprivileged] + [--url ] <3> + [global-flags] +``` + +1. If no `fleet-server-cert*` flags are specified, {{agent}} auto-generates a self-signed certificate with the hostname of the machine. Remote {{agent}}s enrolling into a {{fleet-server}} with self-signed certificates must specify the `--insecure` flag. +2. Required when using self-signed certificate on {{es}} side. +3. Required when enrolling in a {{fleet-server}} with custom certificates. The URL must match the DNS name used to generate the certificate specified by `--fleet-server-cert`. + + +For more information about custom certificates, refer to [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md). 
+ + +### Options [_options_5] + +`--base-path ` +: Install {{agent}} in a location other than the [default](/reference/ingestion-tools/fleet/installation-layout.md). Specify the custom base path for the install. + + The `--base-path` option is not currently supported with [{{elastic-defend}}](/reference/security/elastic-defend/install-endpoint.md). + + +`--ca-sha256 ` +: Comma-separated list of certificate authority hash pins used for certificate verification. + +`--certificate-authorities ` +: Comma-separated list of root certificates used for server verification. + +`--daemon-timeout ` +: Timeout waiting for {{agent}} daemon. + +`--delay-enroll` +: Delays enrollment to occur on first start of the {{agent}} service. This setting is useful when you don’t want the {{agent}} to enroll until the next reboot or manual start of the service, for example, when you’re preparing an image that includes {{agent}}. + +`--elastic-agent-cert` +: Certificate to use as the client certificate for the {{agent}}'s connections to {{fleet-server}}. + +`--elastic-agent-cert-key` +: Private key to use as for the {{agent}}'s connections to {{fleet-server}}. + +`--elastic-agent-cert-key-passphrase` +: The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must only contain the characters of the passphrase, no newline or extra non-printing characters. + + This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase to use. + + +`--enrollment-token ` +: Enrollment token to use to enroll {{agent}} into {{fleet}}. You can use the same enrollment token for multiple agents. + +`--fleet-server-cert ` +: Certificate to use for exposed {{fleet-server}} HTTPS endpoint. + +`--fleet-server-cert-key ` +: Private key to use for exposed {{fleet-server}} HTTPS endpoint. + +`--fleet-server-cert-key-passphrase ` +: Path to passphrase file for decrypting {{fleet-server}}'s private key if an encrypted private key is used. + +`--fleet-server-client-auth ` +: One of `none`, `optional`, or `required`. Defaults to `none`. {{fleet-server}}'s `client_authentication` option for client mTLS connections. If `optional`, or `required` is specified, client certificates are verified using CAs specified in the `--certificate-authorities` flag. + +`--fleet-server-es ` +: Start a {{fleet-server}} process when {{agent}} is started, and connect to the specified {{es}} URL. + +`--fleet-server-es-ca ` +: Path to certificate authority to use to communicate with {{es}}. + +`--fleet-server-es-ca-trusted-fingerprint ` +: The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint will be used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}. + +`--fleet-server-es-cert` +: The path to the client certificate that {{fleet-server}} will use when connecting to {{es}}. + +`--fleet-server-es-cert-key` +: The path to the private key that {{fleet-server}} will use when connecting to {{es}}. + +`--fleet-server-es-insecure` +: Allows fleet server to connect to {{es}} in the following situations: + + * When connecting to an HTTP server. + * When connecting to an HTTPs server and the certificate chain cannot be verified. The content is encrypted, but the certificate is not verified. + + When this flag is used the certificate verification is disabled. 
`--fleet-server-host`
: {{fleet-server}} HTTP binding host (overrides the policy).

`--fleet-server-policy`
: Used when starting a self-managed {{fleet-server}} to allow a specific policy to be used.

`--fleet-server-port`
: {{fleet-server}} HTTP binding port (overrides the policy).

`--fleet-server-service-token`
: Service token to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token-path`.

`--fleet-server-service-token-path`
: Service token file to use for communication with {{es}}. Mutually exclusive with `--fleet-server-service-token`.

`--fleet-server-timeout`
: Timeout waiting for {{fleet-server}} to be ready to start enrollment.

`--force`
: Force overwrite of the current configuration without prompting for confirmation. This flag is helpful when using automation software or scripted deployments.

    ::::{note}
    If the {{agent}} is already installed on the host, using `--force` may result in unpredictable behavior with duplicate {{agent}}s appearing in {{fleet}}.
    ::::


`--header`
: Headers used in communication with {{es}}.

`--help`
: Show help for the `enroll` command.

`--insecure`
: Allow the {{agent}} to connect to {{fleet-server}} over insecure connections. This setting is required in the following situations:

    * When connecting to an HTTP server. The API keys are sent in clear text.
    * When connecting to an HTTPS server whose certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
    * When using self-signed certificates generated by {{agent}}.

    We strongly recommend that you use a secure connection.


`--non-interactive`
: Install {{agent}} in a non-interactive mode. This flag is helpful when using automation software or scripted deployments. If {{agent}} is already installed on the host, the installation will terminate.

`--privileged`
: Run {{agent}} with full superuser privileges. This is the usual, default running mode for {{agent}}. The `--privileged` option allows you to switch back to running an agent with full administrative privileges when you have been running it in `unprivileged` mode.

See the `--unprivileged` option and [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md) for more detail.

`--proxy-disabled`
: Disable proxy support, including environment variables.

`--proxy-header`
: Proxy headers used with the CONNECT request.

`--proxy-url`
: Configures the proxy URL.

`--staging`
: Configures the agent to download artifacts from a staging build.

`--tag`
: A comma-separated list of tags to apply to {{fleet}}-managed {{agent}}s. You can use these tags to filter the list of agents in {{fleet}}.

    ::::{note}
    Currently, there is no way to remove or edit existing tags. To change the tags, you must unenroll the {{agent}}, then re-enroll it using new tags.
    ::::


`--unprivileged`
: Run {{agent}} without full superuser privileges. This option is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems. For details and limitations for running {{agent}} in this mode, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md).
    Note that changing to `unprivileged` mode is prevented if the agent is currently enrolled in a policy that includes an integration that requires administrative access, such as the {{elastic-defend}} integration.

    [preview] To run {{agent}} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, you can specify the user or group, and the password to use.

    For example:

    ```shell
    elastic-agent install --unprivileged --user="my.path\username" --password="mypassword"
    ```

    ```shell
    elastic-agent install --unprivileged --group="my.path\groupname" --password="mypassword"
    ```


`--url`
: {{fleet-server}} URL to use to enroll the {{agent}} into {{fleet}}.

For more flags, see [Global flags](#elastic-agent-global-flags).


### Examples [_examples_14]

Install the {{agent}} as a service, enroll it in {{fleet}}, and start the `elastic-agent` service:

```shell
elastic-agent install \
  --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \
  --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ==
```

Install the {{agent}} as a service, enroll it in {{fleet}}, and start a `fleet-server` process alongside the `elastic-agent` service:

```shell
elastic-agent install --fleet-server-es=http://elasticsearch:9200 \
  --fleet-server-service-token=AbEAAdesYXN1abMvZmxlZXQtc2VldmVyL3Rva2VuLTE2MTkxMzg3MzIzMTg7dzEta0JDTmZUcGlDTjlwRmNVTjNVQQ \
  --fleet-server-policy=a35fd620-26f6-11ec-8bd9-3374690f57b6
```

Start {{agent}} with {{fleet-server}} (using a custom CA). This example assumes you’ve generated the certificates with the following names:

* `ca.crt`: Root CA certificate
* `fleet-server.crt`: {{fleet-server}} certificate
* `fleet-server.key`: {{fleet-server}} private key
* `elasticsearch-ca.crt`: CA certificate to use to connect to {{es}}

```shell
elastic-agent install \
  --url=https://fleet-server:8220 \
  --fleet-server-es=https://elasticsearch:9200 \
  --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \
  --fleet-server-policy=a35fd520-26f5-11ec-8bd9-3374690g57b6 \
  --certificate-authorities=/path/to/ca.crt \
  --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \
  --fleet-server-cert=/path/to/fleet-server.crt \
  --fleet-server-cert-key=/path/to/fleet-server.key \
  --fleet-server-port=8220
```

Then install another {{agent}} and enroll it into the {{fleet-server}} started in the previous example:

```shell
elastic-agent install --url=https://fleet-server:8220 \
  --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \
  --certificate-authorities=/path/to/ca.crt
```
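The installation options can also be combined. For example, the following sketch installs a {{fleet}}-managed {{agent}} without administrative privileges and applies tags. The URL is illustrative, and `$ENROLLMENT_TOKEN` is a placeholder for a token generated in {{fleet}}:

```shell
# Install without superuser privileges and tag the agent for filtering in Fleet.
# The URL and the ENROLLMENT_TOKEN variable are placeholders.
elastic-agent install --unprivileged \
  --url=https://fleet-server:8220 \
  --enrollment-token=$ENROLLMENT_TOKEN \
  --tag="staging,web"
```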
## elastic-agent otel [elastic-agent-otel-command]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


Run {{agent}} as an [OpenTelemetry Collector](/reference/ingestion-tools/fleet/otel-agent.md).


### Synopsis [_synopsis_6]

```shell
elastic-agent otel [flags]
elastic-agent otel [command]
```

::::{note}
You can also run the `./otelcol` command, which calls `./elastic-agent otel` and passes any arguments to it.
::::


### Available commands [_available_commands]

`validate`
: Validates the OpenTelemetry Collector configuration without running the collector.


### Flags [_flags]

`--config`
: Locations of the config file(s). Note that only a single location can be set per flag entry, for example `--config=file:/path/to/first --config=file:path/to/second`.

`--feature-gates flag`
: Comma-delimited list of feature gate identifiers. Prefix with `-` to disable the feature. Prefixing with `+` or no prefix enables the feature.

`-h, --help`
: Get help for the `otel` sub-command. Use `elastic-agent otel [command] --help` for more information about a command.

`--set string`
: Set an arbitrary component config property. The component has to be defined in the configuration file, and the flag takes higher precedence. Array configuration properties are overridden and maps are joined. For example, `--set=processors::batch::timeout=2s`.


### Examples [_examples_15]

Run {{agent}} as an OTel Collector using the supplied `otel.yml` configuration file:

```shell
./elastic-agent otel --config otel.yml
```

Change the default verbosity setting in the {{agent}} OTel configuration from `detailed` to `normal`:

```shell
./elastic-agent otel --config otel.yml --set "exporters::debug::verbosity=normal"
```
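For reference, a minimal `otel.yml` that the examples above could run against might look like the following. This is an illustrative sketch only; the `otlp` receiver and `debug` exporter are standard OpenTelemetry Collector components, and your receivers, exporters, and pipelines will differ:

```yaml
# Illustrative OpenTelemetry Collector configuration, not a complete setup.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:
    verbosity: detailed # override at run time with --set "exporters::debug::verbosity=normal"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```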
+ +## elastic-agent restart [elastic-agent-restart-command] + +Restart the currently running {{agent}} daemon. + + +### Synopsis [_synopsis_7] + +```shell +elastic-agent restart [--help] [global-flags] +``` + + +### Options [_options_6] + +`--help` +: Show help for the `restart` command. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Examples [_examples_16] + +```shell +elastic-agent restart +``` + +
## elastic-agent run [elastic-agent-run-command]

Start the `elastic-agent` process.


### Synopsis [_synopsis_8]

```shell
elastic-agent run [global-flags]
```


### Global flags [elastic-agent-global-flags]

These flags are valid whenever you run `elastic-agent` on the command line.

`-c`
: The configuration file to use. If not specified, {{agent}} uses `{path.config}/elastic-agent.yml`.

`--e`
: Log to stderr and disable syslog/file output.

`--environment`
: The environment in which the agent will run.

`--path.config`
: The directory where {{agent}} looks for its configuration file. The default varies by platform.

`--path.home`
: The root directory of {{agent}}. `path.home` determines the location of the configuration files and data directory.

    If not specified, {{agent}} uses the current working directory.


`--path.logs`
: Path to the log output for {{agent}}. The default varies by platform.

`--v`
: Set log level to INFO.


### Example [_example_40]

```shell
elastic-agent run -c myagentconfig.yml
```
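Global flags can be combined. For example, the following sketch runs {{agent}} with a custom configuration file from a specific home directory, logging to stderr; the paths shown are illustrative:

```shell
# Run with a custom config and home directory, logging to stderr.
elastic-agent run -c myagentconfig.yml --path.home /opt/Elastic/Agent --e
```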
## elastic-agent status [elastic-agent-status-command]

Returns the current status of the running {{agent}} daemon and of each process in the {{agent}}. The last known status of the {{fleet-server}} is also returned. The `output` option controls the level of detail and formatting of the information.


### Synopsis [_synopsis_9]

```shell
elastic-agent status [--output]
                     [--help]
                     [global-flags]
```


### Options [_options_7]

`--output`
: Output the status information in either `human` (the default), `full`, `json`, or `yaml`. `human` returns limited information when {{agent}} is in the `HEALTHY` state. If any components or units are not in the `HEALTHY` state, then full details are displayed for that component or unit. `full`, `json`, and `yaml` always return the full status information. Components map to individual processes running underneath {{agent}}, for example {{filebeat}} or {{endpoint-sec}}. Units map to discrete configuration units within that process, for example {{filebeat}} inputs or {{metricbeat}} modules.

When the output is `json` or `yaml`, status codes are returned as numerical values. The status codes can be mapped using the following table:

| Code | Status |
| --- | --- |
| 0 | `STARTING` |
| 1 | `CONFIGURING` |
| 2 | `HEALTHY` |
| 3 | `DEGRADED` |
| 4 | `FAILED` |
| 5 | `STOPPING` |
| 6 | `UPGRADING` |
| 7 | `ROLLBACK` |

`--help`
: Show help for the `status` command.

For more flags, see [Global flags](#elastic-agent-global-flags).


### Examples [_examples_17]

```shell
elastic-agent status
```
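For example, to return the full status in machine-readable form, with states reported as the numeric codes shown in the table above:

```shell
elastic-agent status --output json
```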
+ +## elastic-agent uninstall [elastic-agent-uninstall-command] + +Permanently uninstall {{agent}} from the system. + +You must run this command as the root user (or Administrator on Windows) to remove files. + +::::{important} +Be sure to run the `uninstall` command from a directory outside of where {{agent}} is installed. + +For example, on a Windows system the install location is `C:\Program Files\Elastic\Agent`. Run the uninstall command from `C:\Program Files\Elastic` or `\tmp`, or even your default home directory: + +```shell +C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall +``` + +:::: + + +:::::::{tab-set} + +::::::{tab-item} macOS +::::{tip} +You must run this command as the root user. +:::: + + +```shell +sudo /Library/Elastic/Agent/elastic-agent uninstall +``` +:::::: + +::::::{tab-item} Linux +::::{tip} +You must run this command as the root user. +:::: + + +```shell +sudo /opt/Elastic/Agent/elastic-agent uninstall +``` +:::::: + +::::::{tab-item} Windows +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + +From the PowerShell prompt, run: + +```shell +C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall +``` +:::::: + +::::::: + +### Synopsis [_synopsis_10] + +```shell +elastic-agent uninstall [--force] [--help] [global-flags] +``` + + +### Options [_options_8] + +`--force` +: Uninstall {{agent}} and do not prompt for confirmation. This flag is helpful when using automation software or scripted deployments. + +`--skip-fleet-audit` +: Skip auditing with the {{fleet-server}}. + +`--help` +: Show help for the `uninstall` command. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Examples [_examples_18] + +```shell +elastic-agent uninstall +``` + +
## elastic-agent unprivileged [elastic-agent-unprivileged-command]

Run {{agent}} without full superuser privileges. This is useful in organizations that limit `root` access on Linux or macOS systems, or `admin` access on Windows systems. For details and limitations for running {{agent}} in this mode, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md).

Note that changing a running {{agent}} to `unprivileged` mode is prevented if the agent is currently enrolled with a policy that contains the {{elastic-defend}} integration.

[preview] To run {{agent}} without superuser privileges as a pre-existing user or group, for instance under an Active Directory account, add either a `--user` or `--group` parameter together with a `--password` parameter.


### Examples [_examples_19]

Run {{agent}} without administrative privileges:

```shell
elastic-agent unprivileged
```

[preview] Run {{agent}} without administrative privileges, as a pre-existing user:

```shell
elastic-agent unprivileged --user="my.path\username" --password="mypassword"
```

[preview] Run {{agent}} without administrative privileges, as a pre-existing group:

```shell
elastic-agent unprivileged --group="my.path\groupname" --password="mypassword"
```
## elastic-agent upgrade [elastic-agent-upgrade-command]

Upgrade the currently running {{agent}} to the specified version. This should only be used with agents running in standalone mode. Agents enrolled in {{fleet}} should be upgraded through {{fleet}}.


### Synopsis [_synopsis_11]

```shell
elastic-agent upgrade <version> [--source-uri] [--help] [flags]
```


### Options [_options_9]

`version`
: The version of {{agent}} to upgrade to.

`--source-uri`
: The source URI to download the new version from. By default, {{agent}} uses the Elastic Artifacts URL.

`--skip-verify`
: Skip the package verification process. This option is not recommended as it is insecure.

`--pgp-path`
: Use a locally stored copy of the PGP key to verify the upgrade package.

`--pgp-uri`
: Use the specified online PGP key to verify the upgrade package.

`--help`
: Show help for the `upgrade` command.

For details about using the `--skip-verify`, `--pgp-path`, and `--pgp-uri` package verification options, refer to [Verifying {{agent}} package signatures](/reference/ingestion-tools/fleet/upgrade-standalone.md#upgrade-standalone-verify-package).

For more flags, see [Global flags](#elastic-agent-global-flags).


### Examples [_examples_20]

```shell
elastic-agent upgrade 7.10.1
```
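For example, in an air-gapped environment you might upgrade from an internal mirror of the Elastic artifacts repository. The version and URL below are placeholders:

```shell
# The version number and mirror URL are illustrative placeholders.
elastic-agent upgrade 8.17.0 --source-uri "https://artifacts.example.internal/downloads/"
```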
## elastic-agent logs [elastic-agent-logs-command]

Show the logs of the running {{agent}}.


### Synopsis [_synopsis_12]

```shell
elastic-agent logs [--follow] [--number] [--component] [--no-color] [--help] [global-flags]
```


### Options [_options_10]

`--follow` or `-f`
: Follow log updates until the command is interrupted (for example with `Ctrl-C`).

`--number` or `-n`
: How many lines of logs to print. If log following is enabled, this affects the initial output.

`--component` or `-C`
: Filter logs based on the component name.

`--no-color`
: Disable coloring of entries based on their log level.

`--help`
: Show help for the `logs` command.

For more flags, see [Global flags](#elastic-agent-global-flags).


### Example [_example_41]

```shell
elastic-agent logs -n 100 -f -C "system/metrics-default"
```
+ +## elastic-agent version [elastic-agent-version-command] + +Show the version of {{agent}}. + + +### Synopsis [_synopsis_13] + +```shell +elastic-agent version [--help] [global-flags] +``` + + +### Options [_options_11] + +`--help` +: Show help for the `version` command. + +For more flags, see [Global flags](#elastic-agent-global-flags). + + +### Example [_example_42] + +```shell +elastic-agent version +``` + +
diff --git a/reference/ingestion-tools/fleet/agent-environment-variables.md b/reference/ingestion-tools/fleet/agent-environment-variables.md new file mode 100644 index 0000000000..3597d0b456 --- /dev/null +++ b/reference/ingestion-tools/fleet/agent-environment-variables.md
---
navigation_title: "Environment variables"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/agent-environment-variables.html
---

# {{agent}} environment variables [agent-environment-variables]


Use environment variables to configure {{agent}} when running in a containerized environment. Variables on this page are grouped by action type:

* [Common variables](#env-common-vars)
* [Configure {{kib}}:](#env-prepare-kibana-for-fleet) prepare the {{fleet}} plugin in {{kib}}
* [Configure {{fleet-server}}:](#env-bootstrap-fleet-server) bootstrap {{fleet-server}} on an {{agent}}
* [Configure {{agent}} and {{fleet}}:](#env-enroll-agent) enroll an {{agent}}


## Common variables [env-common-vars]

To limit the number of environment variables that need to be set, the following common variables are available. These variables can be used across all {{agent}} actions, but have a lower precedence than action-specific environment variables.

These common variables are useful, for example, when using the same {{es}} and {{kib}} credentials to prepare the {{fleet}} plugin in {{kib}}, configure {{fleet-server}}, and enroll an {{agent}}.

| Settings | Description |
| --- | --- |
| $$$env-common-elasticsearch-host$$$
`ELASTICSEARCH_HOST`
| (string) The {{es}} host to communicate with.

**Default:** `http://elasticsearch:9200`
| +| $$$env-common-elasticsearch-username$$$
`ELASTICSEARCH_USERNAME`
| (string) The basic authentication username used to connect to {{kib}} and retrieve a `service_token` for {{fleet}}.

**Default:** none
| +| $$$env-common-elasticsearch-password$$$
`ELASTICSEARCH_PASSWORD`
| (string) The basic authentication password used to connect to {{kib}} and retrieve a `service_token` for {{fleet}}.

**Default:** none
| +| $$$env-common-elasticsearch-api-key$$$
`ELASTICSEARCH_API_KEY`
| (string) The API key used for authenticating to {{es}}.

**Default:** none
| +| $$$env-common-elasticsearch-ca$$$
`ELASTICSEARCH_CA`
| (string) The path to a certificate authority.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `""`
| +| $$$env-common-kibana-host$$$
`KIBANA_HOST`
| (string) The {{kib}} host.

**Default:** `http://kibana:5601`
|
| $$$env-common-kibana-username$$$
`KIBANA_USERNAME`
| (string) The basic authentication username used to connect to {{kib}} to retrieve a `service_token`.

**Default:** `elastic`
|
| $$$env-common-kibana-password$$$
`KIBANA_PASSWORD`
| (string) The basic authentication password used to connect to {{kib}} to retrieve a `service_token`.

**Default:** `changeme`
| +| $$$env-common-kibana-ca$$$
`KIBANA_CA`
| (string) The path to a certificate authority.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `""`
| +| $$$env-common-elastic-netinfo$$$
`ELASTIC_NETINFO`
| (bool) When `false`, disables the `netinfo.enabled` parameter of the `add_host_metadata` processor. Setting this to `false` is recommended for large-scale setups where the index size of the `host.ip` and `host.mac` fields increases.

By default, {{agent}} initializes the `add_host_metadata` processor. The `netinfo.enabled` parameter defines ingestion of IP addresses and MAC addresses as the fields `host.ip` and `host.mac`. For more information, see [add_host_metadata](/reference/ingestion-tools/fleet/add_host_metadata-processor.md).

**Default:** `"false"`
| + + +## Prepare {{kib}} for {{fleet}} [env-prepare-kibana-for-fleet] + +Settings used to prepare the {{fleet}} plugin in {{kib}}. + +| Settings | Description | +| --- | --- | +| $$$env-fleet-kib-kibana-fleet-host$$$
`KIBANA_FLEET_HOST`
| (string) The {{kib}} host to enable {{fleet}} on. Overrides `FLEET_HOST` when set.

**Default:** `http://kibana:5601`
| +| $$$env-fleet-kib-kibana-fleet-username$$$
`KIBANA_FLEET_USERNAME`
| (string) The basic authentication username used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}. Overrides `ELASTICSEARCH_USERNAME` when set.

**Default:** `elastic`
| +| $$$env-fleet-kib-kibana-fleet-password$$$
`KIBANA_FLEET_PASSWORD`
| (string) The basic authentication password used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}. Overrides `ELASTICSEARCH_PASSWORD` when set.

**Default:** `changeme`
| +| $$$env-fleet-kib-kibana-fleet-ca$$$
`KIBANA_FLEET_CA`
| (string) The path to a certificate authority. Overrides `KIBANA_CA` when set.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `""`
| + + +## Bootstrap {{fleet-server}} [env-bootstrap-fleet-server] + +Settings used to bootstrap {{fleet-server}} on this {{agent}}. At least one {{fleet-server}} is required in a deployment. + +| Settings | Description | +| --- | --- | +| $$$env-bootstrap-fleet-fleet-server-enable$$$
`FLEET_SERVER_ENABLE`
| (int) Set to `1` to bootstrap {{fleet-server}} on this {{agent}}. When set to `1`, this automatically forces {{fleet}} enrollment as well.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-elasticsearch-host$$$
`FLEET_SERVER_ELASTICSEARCH_HOST`
| (string) The {{es}} host for {{fleet-server}} to communicate with. Overrides `ELASTICSEARCH_HOST` when set.

**Default:** `http://elasticsearch:9200`
| +| $$$env-bootstrap-fleet-fleet-server-elasticsearch-ca$$$
`FLEET_SERVER_ELASTICSEARCH_CA`
| (string) The path to a certificate authority. Overrides `ELASTICSEARCH_CA` when set.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `""`
| +| $$$env-bootstrap-fleet-fleet-server-es-cert$$$
`FLEET_SERVER_ES_CERT`
| (string) The path to the mutual TLS client certificate that {{fleet-server}} will use to connect to {{es}}.

**Default:** `""`
| +| $$$env-bootstrap-fleet-fleet-server-es-cert-key$$$
`FLEET_SERVER_ES_CERT_KEY`
| (string) The path to the mutual TLS private key that {{fleet-server}} will use to connect to {{es}}.

**Default:** `""`
| +| $$$env-bootstrap-fleet-fleet-server-insecure-http$$$
`FLEET_SERVER_INSECURE_HTTP`
| (bool) When `true`, {{fleet-server}} is exposed over insecure or unverified HTTP. Setting this to `true` is not recommended.

**Default:** `false`
| +| $$$env-bootstrap-fleet-fleet-server-service-token$$$
`FLEET_SERVER_SERVICE_TOKEN`
| (string) Service token to use for communication with {{es}} and {{kib}} if [`KIBANA_FLEET_SETUP`](#env-prepare-kibana-for-fleet) is enabled. If both the service token value and the service token path are specified, the value may be used for setup and the path is passed to the agent in the container.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-service-token-path$$$
`FLEET_SERVER_SERVICE_TOKEN_PATH`
| (string) The path to the service token file to use for communication with {{es}} and {{kib}} if [`KIBANA_FLEET_SETUP`](#env-prepare-kibana-for-fleet) is enabled. If both the service token value and the service token path are specified, the value may be used for setup and the path is passed to the agent in the container.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-policy-name$$$
`FLEET_SERVER_POLICY_NAME`
| (string) The name of the policy for {{fleet-server}} to use on itself. Overrides `FLEET_TOKEN_POLICY_NAME` when set.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-policy-id$$$
`FLEET_SERVER_POLICY_ID`
| (string) The policy ID for {{fleet-server}} to use on itself.
| +| $$$env-bootstrap-fleet-fleet-server-host$$$
`FLEET_SERVER_HOST`
| (string) The binding host for {{fleet-server}} HTTP. Overrides the host defined in the policy.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-port$$$
`FLEET_SERVER_PORT`
| (string) The binding port for {{fleet-server}} HTTP. Overrides the port defined in the policy.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-cert$$$
`FLEET_SERVER_CERT`
| (string) The path to the certificate to use for HTTPS.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-cert-key$$$
`FLEET_SERVER_CERT_KEY`
| (string) The path to the private key for the certificate used for HTTPS.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-cert-key-passphrase$$$
`FLEET_SERVER_CERT_KEY_PASSPHRASE`
| (string) The path to the private key passphrase for an encrypted private key file.

**Default:** none
| +| $$$env-bootstrap-fleet-fleet-server-client-auth$$$
`FLEET_SERVER_CLIENT_AUTH`
| (string) One of `none`, `optional`, or `required`. {{fleet-server}}'s client authentication option for client mTLS connections. If `optional` or `required` is specified, client certificates are verified using CAs.

**Default:** `none`
| +| $$$env-bootstrap-fleet-fleet-server-es-ca-trusted-fingerprint$$$
`FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT`
| (string) The SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint is used to verify self-signed certificates presented by {{fleet-server}} and any inputs started by {{agent}} for communication. This flag is required when using self-signed certificates with {{es}}.

**Default:** `""`
| +| $$$env-bootstrap-fleet-fleet-daemon-timeout$$$
`FLEET_DAEMON_TIMEOUT`
| (duration) Set to indicate how long {{fleet-server}} will wait during the bootstrap process for {{elastic-agent}}.
| +| $$$env-bootstrap-fleet-fleet-server-timeout$$$
`FLEET_SERVER_TIMEOUT`
| (duration) Set to indicate how long {{agent}} will wait for {{fleet-server}} to check in as healthy.
| + + +## Enroll {{agent}} [env-enroll-agent] + +Settings used to enroll an {{agent}} into a {{fleet-server}}. + +| Settings | Description | +| --- | --- | +| $$$env-enroll-elastic-agent-cert$$$
`ELASTIC_AGENT_CERT`
| (string) The path to the mutual TLS client certificate that {{agent}} will use to connect to {{fleet-server}}.
| +| $$$env-enroll-elastic-agent-cert-key$$$
`ELASTIC_AGENT_CERT_KEY`
| (string) The path to the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}.
| +| $$$env-enroll-elastic-agent-cert-key-passphrase$$$
`ELASTIC_AGENT_CERT_KEY_PASSPHRASE`
| (string) The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must only contain the characters of the passphrase, no newline or extra non-printing characters.

This option is only used if the `--elastic-agent-cert-key` is encrypted and requires a passphrase to use.
| +| $$$env-enroll-elastic-agent-tag$$$
`ELASTIC_AGENT_TAGS`
| (string) A comma-separated list of tags to apply to {{fleet}}-managed {{agent}}s. You can use these tags to filter the list of agents in {{fleet}}.
| +| $$$env-enroll-fleet-enroll$$$
`FLEET_ENROLL`
| (bool) Set to `1` to enroll the {{agent}} into {{fleet-server}}.

**Default:** `false`
| +| $$$env-enroll-fleet-force$$$
`FLEET_FORCE`
| (bool) Set to `true` to force overwrite of the current {{agent}} configuration without prompting for confirmation. This flag is helpful when using automation software or scripted deployments.

**Default:** `false`
| +| $$$env-enroll-fleet-url$$$
`FLEET_URL`
| (string) URL to enroll the {{fleet-server}} into.

**Default:** `""`
| +| $$$env-enroll-fleet-enrollment-token$$$
`FLEET_ENROLLMENT_TOKEN`
| (string) The token to use for enrollment.

**Default:** `""`
| +| $$$env-enroll-fleet-token-name$$$
`FLEET_TOKEN_NAME`
| (string) The token name to use to fetch the token from {{kib}}.

**Default:** `""`
| +| $$$env-enroll-fleet-token-policy-name$$$
`FLEET_TOKEN_POLICY_NAME`
| (string) The token policy name to use to fetch the token from {{kib}}.

**Default:** `false`
| +| $$$env-enroll-fleet-ca$$$
`FLEET_CA`
| (string) The path to a certificate authority. Overrides `ELASTICSEARCH_CA` when set.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `false`
| +| $$$env-enroll-fleet-insecure$$$
`FLEET_INSECURE`
| (bool) When `true`, {{agent}} communicates with {{fleet-server}} over insecure or unverified HTTP. Setting this to `true` is not recommended.

**Default:** `false`
| +| $$$env-enroll-kibana-fleet-host$$$
`KIBANA_FLEET_HOST`
| (string) The {{kib}} host to enable {{fleet}} on. Overrides `FLEET_HOST` when set.

**Default:** `http://kibana:5601`
| +| $$$env-enroll-kibana-fleet-username$$$
`KIBANA_FLEET_USERNAME`
| (string) The basic authentication username used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}. Overrides `ELASTICSEARCH_USERNAME` when set.

**Default:** `elastic`
| +| $$$env-enroll-kibana-fleet-password$$$
`KIBANA_FLEET_PASSWORD`
| (string) The basic authentication password used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}. Overrides `ELASTICSEARCH_PASSWORD` when set.

**Default:** `changeme`
| +| $$$env-enroll-kibana-fleet-ca$$$
`KIBANA_FLEET_CA`
| (string) The path to a certificate authority. Overrides `KIBANA_CA` when set.

By default, {{agent}} uses the list of trusted certificate authorities (CA) from the operating system where it is running. If the certificate authority that signed your node certificates is not in the host system’s trusted certificate authorities list, use this config to add the path to the `.pem` file that contains your CA’s certificate.

**Default:** `""`
| + diff --git a/reference/ingestion-tools/fleet/agent-health-status.md b/reference/ingestion-tools/fleet/agent-health-status.md new file mode 100644 index 0000000000..ae3654316e --- /dev/null +++ b/reference/ingestion-tools/fleet/agent-health-status.md @@ -0,0 +1,53 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/agent-health-status.html +--- + +# Elastic Agent health status [agent-health-status] + +The {{agent}} [monitoring documentation](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) describes the features available through the {{fleet}} UI for you to view {{agent}} status and activity, access metrics and diagnostics, enable alerts, and more. + +For details about how the {{agent}} status is monitored by {{fleet}}, including connectivity, check-in frequency, and similar, see the following: + +* [How does {{agent}} connect to the {{fleet}} to report its availability and health, and receive policy updates?](#agent-health-status-connect-to-fleet) +* [We use stack monitoring to monitor the status of our cluster. Is monitoring of {{agent}} and the status shown in {{fleet}} using stack monitoring as well?](#agent-health-status-stack-monitoring) +* [There are many components that make up {{agent}}. How does {{agent}} ensure that these components/processes are up and running, and healthy?](#agent-health-status-other-components) +* [If {{agent}} goes down, is an alert generated by {{fleet}}?](#agent-health-status-outage) +* [How long does it take for {{agent}} to report a status change?](#agent-health-status-report-timing) + + +## How does {{agent}} connect to the {{fleet}} to report its availability and health, and receive policy updates? [agent-health-status-connect-to-fleet] + +After enrollment, {{agent}} regularly initiates a check-in to {{fleet-server}} using HTTP long-polling ({{fleet-server}} is either deployed on-premises or deployed as part of {{es}} in {{ecloud}}). + +The HTTP long-polling request is kept open until there’s a configuration change that {{agent}} needs to consume, an action that is sent to the agent, or a 5 minute timeout has elapsed. After 5 minutes, the agent will again send another check-in to start the process over again. + +The frequency of check-ins can be configured to a new value with the condition that it may affect the maximum number of agents that can connect to {{fleet}}. Our regular scale testing of the solution doesn’t modify this parameter. + +:::{image} images/agent-health-status.png +:alt: Diagram of connectivity between agents +:class: screenshot +::: + + +## We use stack monitoring to monitor the status of our cluster. Is monitoring of {{agent}} and the status shown in {{fleet}} using stack monitoring as well? [agent-health-status-stack-monitoring] + +No. The health monitoring of {{agent}} and its inputs, as reported in {{fleet}}, is done completely outside of what stack monitoring provides. + + +## There are many components that make up {{agent}}. How does {{agent}} ensure that these components/processes are up and running, and healthy? [agent-health-status-other-components] + +{{agent}} is essentially a supervisor that (at a minimum) will deploy a {{filebeat}} instance for log collection and a {{metricbeat}} instance for metrics collection from the system and applications running on that system. As a supervisor, it also ensures that these spawned processes are running and healthy. Using gRPC, {{agent}} communicates with the underlying processes once every 30 seconds, ensuring their health. 
If there’s no response, the agent will transition to `Unhealthy`, with the result and details reported to {{fleet}}.


## If {{agent}} goes down, is an alert generated by {{fleet}}? [agent-health-status-outage]

No. Alerts would have to be created in {{kib}} on the indices that show the total count of agents at each specific state. Refer to [Enable alerts and ML jobs based on {{fleet}} and {{agent}} status](/reference/ingestion-tools/fleet/monitor-elastic-agent.md#fleet-alerting) in the {{agent}} monitoring documentation for the steps to configure alerting. Generating alerts on status changes of individual agents is currently planned for a future release.


## How long does it take for {{agent}} to report a status change? [agent-health-status-report-timing]

Some {{agent}} states are reported immediately, such as when the agent has become `Unhealthy`. Some other states are derived after certain criteria are met. Refer to [View agent status overview](/reference/ingestion-tools/fleet/monitor-elastic-agent.md#view-agent-status) in the {{agent}} monitoring documentation for details about monitoring agent status.

The transition from an `Offline` state to an `Inactive` state is configurable by the user, and it can be fine-tuned by [setting the inactivity timeout parameter](/reference/ingestion-tools/fleet/set-inactivity-timeout.md).

diff --git a/reference/ingestion-tools/fleet/agent-policy.md b/reference/ingestion-tools/fleet/agent-policy.md new file mode 100644 index 0000000000..83693f7965 --- /dev/null +++ b/reference/ingestion-tools/fleet/agent-policy.md
---
navigation_title: "Policies"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/agent-policy.html
---

# {{agent}} policies [agent-policy]


A policy is a collection of inputs and settings that defines the data to be collected by an {{agent}}. Each {{agent}} can only be enrolled in a single policy.

Within an {{agent}} policy is a set of individual integration policies. These integration policies define the settings for each input type. The available settings in an integration depend on the version of the integration in use.

{{fleet}} uses {{agent}} policies in two ways:

* Policies are stored in a plain-text YAML file and sent to each {{agent}} to configure its inputs.
* Policies provide a visual representation of an {{agent}}’s configuration in the {{fleet}} UI.


## Policy benefits [policy-benefits]

{{agent}} policies have many benefits that allow you to:

* Apply a logical grouping of inputs aimed at a particular set of hosts.
* Maintain flexibility in large-scale deployments by quickly testing changes before rolling them out.
* Provide a way to group and manage larger swaths of your infrastructure landscape.

For example, it might make sense to create a policy per operating system type: Windows, macOS, and Linux hosts. Or, organize policies by functional groupings of how the hosts are used: IT email servers, Linux servers, user workstations, etc. Or perhaps by user categories: engineering department, marketing department, etc.


## Policy types [agent-policy-types]

In most use cases, {{fleet}} provides complete central management of {{agent}}s. However, some use cases, like running in Kubernetes or using our hosted {{ess}} on {{ecloud}}, require {{agent}} infrastructure management outside of {{fleet}}.
With this in mind, there are two types of {{agent}} policies:

* **regular policy**: The default use case, where {{fleet}} provides full central management for {{agent}}s. Users can manage {{agent}} infrastructure by adding, removing, or upgrading {{agent}}s. Users can also manage {{agent}} configuration by updating the {{agent}} policy.
* **hosted policy**: A policy where *something else* provides central management for {{agent}}s. For example, in Kubernetes, adding, removing, and upgrading {{agent}}s should be configured directly in Kubernetes. Allowing {{fleet}} users to manage {{agent}}s would conflict with any Kubernetes configuration.

    ::::{tip}
    Hosted policies also apply when using our hosted {{ess}} on {{ecloud}}. {{ecloud}} is responsible for hosting {{agent}}s and assigning them to a policy. Platform operators, who create and manage Elastic deployments, can add, upgrade, and remove {{agent}}s through the {{ecloud}} console.
    ::::


Hosted policies display a lock icon in the {{fleet}} UI, and actions are restricted. The following table illustrates the {{fleet}} user actions available to different policy types:

| {{fleet}} user action | Regular policy | Hosted policy |
| --- | --- | --- |
| [Create a policy](#create-a-policy) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Add an integration](#add-integration) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Apply a policy](#apply-a-policy) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Edit or delete an integration](#policy-edit-or-delete) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Copy a policy](#copy-policy) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Edit or delete a policy](#policy-main-settings) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Add custom fields](#add-custom-fields) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Configure agent monitoring](#change-policy-enable-agent-monitoring) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Change the output of a policy](#change-policy-output) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Add a {{fleet-server}} to a policy](#add-fleet-server-to-policy) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Configure secret values in a policy](#agent-policy-secret-values) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Set the maximum CPU usage](#agent-policy-limit-cpu) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Set the {{agent}} log level](#agent-policy-log-level) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Change the {{agent}} binary download location](#agent-binary-download-settings) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Set the {{agent}} host name format](#fleet-agent-hostname-format-settings) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |
| [Set an unenrollment timeout for inactive agents](#fleet-agent-unenrollment-timeout) | ![yes](images/green-check.svg "") | ![no](images/red-x.svg "") |

See also the [recommended scaling options](#agent-policy-scale) for an {{agent}} policy.


## Create a policy [create-a-policy]

To manage your {{agent}}s and the data they collect, create a new policy:

1. In {{fleet}}, open the **Agent policies** tab and click **Create agent policy**.
2. Name your policy. All other fields are optional and can be modified later. By default, each policy enables the *system* integration, which collects system information and metrics.
3. Create the agent policy:

    * To use the UI, click **Create agent policy**.
    * To use the {{fleet}} API, click **Preview API request** and run the request.


Also see [Create an agent policy without using the UI](/reference/ingestion-tools/fleet/create-policy-no-ui.md).


## Add an integration to a policy [add-integration]

An {{agent}} policy consists of one or more integrations that are applied to the agents enrolled in that policy. When you add an integration, the policy created for that integration can be shared with multiple {{agent}} policies. This reduces the number of integration policies that you need to actively manage.

To add a new integration to one or more {{agent}} policies:

1. In {{fleet}}, click **Agent policies**. Click the name of a policy you want to add an integration to.
2. Click **Add <integration>**.
3. The Integrations page shows {{agent}} integrations along with other types, such as {{beats}}. Scroll down and select **Elastic Agent only** to view only integrations that work with {{agent}}.
4. You can opt to install an {{agent}} if you haven’t already, or choose **Add integration only** to proceed.
5. In Step 1 on the **Add <integration>** page, you can select the configuration settings specific to the integration.
6. In Step 2 on the page, you have two options:

    1. If you’d like to create a new policy for your {{agent}}s, on the **New hosts** tab specify a name for the new agent policy and choose whether or not to collect system logs and metrics. Collecting logs and metrics will add the System integration to the new agent policy.
    2. If you already have an {{agent}} policy created, on the **Existing hosts** tab use the drop-down menu to specify one or more agent policies that you’d like to add the integration to. Note that this feature, known as "reusable integrations", requires an [Enterprise subscription](https://www.elastic.co/subscriptions).

7. Click **Save and continue** to confirm your settings.

This action installs the integration and adds it to the {{agent}} policies that you specified. {{fleet}} distributes the new integration policy to all {{agent}}s that are enrolled in the agent policies.

You can update the settings for an installed integration at any time:

1. In {{kib}}, go to the **Integrations** page.
2. On the **Integration policies** tab, for the integration that you’d like to update, open the **Actions** menu and select **Edit integration**.
3. On the **Edit <integration>** page you can update any configuration settings and also update the list of {{agent}} policies to which the integration is added.

    If you clear the **Agent policies** field, the integration will be removed from any {{agent}} policies to which it had been added.

    To identify any integrations that have been "orphaned", that is, not associated with any {{agent}} policies, check the **Agent policies** column on the **Integration policies** tab. Any integrations that are installed but not associated with an {{agent}} policy are labeled as `No agent policies`.



## Apply a policy [apply-a-policy]

You can apply policies to one or more {{agent}}s. To apply a policy:

1. In {{fleet}}, click **Agents**.
2. Select the {{agent}}s you want to assign to the new policy.

    After selecting one or more {{agent}}s, click **Assign to new policy** under the Actions menu.
    :::{image} images/apply-agent-policy.png
    :alt: Assign to new policy dropdown
    :class: screenshot
    :::

    Unable to select multiple agents? Confirm that your subscription level supports selective agent policy reassignment in {{fleet}}. For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions).

3. Select the {{agent}} policy from the dropdown list, and click **Assign policy**.

The {{agent}} status indicator and {{agent}} logs indicate that the policy is being applied. It may take a few minutes for the policy change to complete before the {{agent}} status updates to "Healthy".


## Edit or delete an integration policy [policy-edit-or-delete]

Integrations can easily be reconfigured or deleted. To edit or delete an integration policy:

1. In {{fleet}}, click **Agent policies**. Click the name of the policy you want to edit or delete.
2. Search or scroll to a specific integration. Open the **Actions** menu and select **Edit integration** or **Delete integration**.

    Editing or deleting an integration is permanent and cannot be undone. If you make a mistake, you can always re-configure or re-add an integration.


Any saved changes are immediately distributed and applied to all {{agent}}s enrolled in the given policy.

To update any secret values in an integration policy, refer to [Configure secret values in a policy](#agent-policy-secret-values).


## Copy a policy [copy-policy]

Policy definitions are stored in a plain-text YAML file that can be downloaded or copied to another policy:

1. In {{fleet}}, click **Agent policies**. Click the name of the policy you want to copy or download.
2. To copy a policy, click **Actions → Copy policy**. Name the new policy, and provide a description. The exact policy definition is copied to the new policy.

    Alternatively, view and download the policy definition by clicking **Actions → View policy**.



## Edit or delete a policy [policy-main-settings]

You can change high-level configurations like a policy’s name, description, default namespace, and agent monitoring status as necessary:

1. In {{fleet}}, click **Agent policies**. Click the name of the policy you want to edit or delete.
2. Click the **Settings** tab, make changes, and click **Save changes**.

    Alternatively, click **Delete policy** to delete the policy. Existing data is not deleted. Any agents assigned to a policy must be unenrolled or assigned to a different policy before a policy can be deleted.



## Add custom fields [add-custom-fields]

Use this setting to add a custom field and value set to all data collected from the {{agents}} enrolled in an {{agent}} policy. Custom fields are useful when you want to identify or visualize all of the data from a group of agents, and possibly manipulate the data downstream.

To add a custom field:

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Click the **Settings** tab and scroll to **Custom fields**.
3. Click **Add field**.
4. Specify a field name and value.

    :::{image} images/agent-policy-custom-field.png
    :alt: Screen capture showing the UI to add a custom field and value
    :class: screenshot
    :::

5. Click **Add another field** for additional fields. Click **Save changes** when you’re done.

To edit a custom field:

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Click the **Settings** tab and scroll to **Custom fields**.
Any custom fields that have been configured are shown.
3. Click the edit icon to update a field or click the delete icon to remove it.

Note that adding custom fields is not supported for a small set of inputs:

* `apm`
* `cloudbeat` and all `cloudbeat/*` inputs
* `cloud-defend`
* `fleet-server`
* `pf-elastic-collector`, `pf-elastic-symbolizer`, and `pf-host-agent`
* `endpoint` inputs. Instead, use the advanced settings (`*.advanced.document_enrichment.fields`) of the {{elastic-defend}} Integration.


## Configure agent monitoring [change-policy-enable-agent-monitoring]

Use these settings to collect monitoring logs and metrics from {{agent}}. All monitoring data will be written to the specified **Default namespace**.

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Click the **Settings** tab and scroll to **Agent monitoring**.
3. Select whether to collect agent logs, agent metrics, or both, from the {{agents}} that use the policy.

    When this setting is enabled, an {{agent}} integration is created automatically.

4. Expand the **Advanced monitoring options** section to access [advanced settings](#advanced-agent-monitoring-settings).
5. Save your changes for the updated monitoring settings to take effect.


### Advanced agent monitoring settings [advanced-agent-monitoring-settings]

**HTTP monitoring endpoint**

Enabling this setting exposes a `/liveness` API endpoint that you can use to monitor {{agent}} health according to the following HTTP codes:

* `200`: {{agent}} is healthy. The endpoint returns a `200` OK status as long as {{agent}} is responsive and can process configuration changes.
* `500`: A component or unit is in a failed state.
* `503`: The agent coordinator is unresponsive.

You can pass a `failon` parameter to the `/liveness` endpoint to determine what component state will result in a `500` status. For example, `curl 'localhost:6792/liveness?failon=degraded'` will return `500` if a component is in a degraded state.

The possible values for `failon` are:

* `degraded`: Return an error if a component is in a degraded state or failed state, or if the agent coordinator is unresponsive.
* `failed`: Return an error if a unit is in a failed state, or if the agent coordinator is unresponsive.
* `heartbeat`: Return an error only if the agent coordinator is unresponsive.

If no `failon` parameter is provided, the default `failon` behavior is `heartbeat`.

The HTTP monitoring endpoint can also be [used with Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request), for example to restart the container.

When you enable this setting, you need to provide the host URL and port where the endpoint can be accessed. Using the default `localhost` is recommended.

When the HTTP monitoring endpoint is enabled, you can also select **Enable profiling at `/debug/pprof`**. This controls whether the {{agent}} exposes the `/debug/pprof/` endpoints together with the monitoring endpoints.

The heap profiles available from `/debug/pprof/` are included in [{{agent}} diagnostics](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-diagnostics-command) by default. CPU profiles are also included when the `--cpu-profile` option is included. For full details about the profiles exposed by `/debug/pprof/` refer to the [pprof package documentation](https://pkg.go.dev/net/http/pprof).
Profiling at `/debug/pprof` is disabled by default. Data produced by these endpoints can be useful for debugging but presents a security risk. It’s recommended to leave this option disabled if the monitoring endpoint is accessible over a network.

**Diagnostics rate limiting**

You can set a rate limit for the action handler for diagnostics requests coming from {{fleet}}. The setting affects only {{fleet}}-managed {{agents}}. By default, requests are limited to an interval of `1m` and a burst value of `1`. This setting does not affect diagnostics collected through the CLI.

**Diagnostics file upload**

This setting configures retries for the file upload client handling diagnostics requests coming from {{fleet}}. The setting affects only {{fleet}}-managed {{agents}}. By default, a maximum of `10` retries are allowed with an initial duration of `1s` and a backoff duration of `1m`. The client may retry failed requests with exponential backoff.


## Change the output of a policy [change-policy-output]

Assuming your [{{stack}} subscription level](https://www.elastic.co/subscriptions) supports per-policy outputs, you can change the output of a policy to send data to a different output.

1. In {{fleet}}, click **Settings** and view the list of available outputs. If necessary, click **Add output** to add a new output with the settings you require. For more information, refer to [Output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings).
2. Click **Agent policies**. Click the name of the policy you want to change, then click **Settings**.
3. Set **Output for integrations** and (optionally) **Output for agent monitoring** to use a different output, for example, {{ls}}. You might need to scroll down to see these options.

    Unable to select a different output? Confirm that your subscription level supports per-policy outputs in {{fleet}}.

    :::{image} images/agent-output-settings.png
    :alt: Screen capture showing the {{ls}} output policy selected in an agent policy
    :class: screenshot
    :::

4. Save your changes.

Any {{agent}}s enrolled in the agent policy will begin sending data to the specified outputs.


## Add a {{fleet-server}} to a policy [add-fleet-server-to-policy]

If you want to connect multiple agents to a specific on-premises {{fleet-server}}, you can add that {{fleet-server}} to a policy.

:::{image} images/add-fleet-server-to-policy.png
:alt: Screen capture showing how to add a {{fleet-server}} to a policy when creating or updating the policy.
:class: screenshot
:::

When the policy is saved, all agents assigned to the policy are configured to use the new {{fleet-server}} as the controller.

Make sure that the {{agent}}s assigned to this policy all have connectivity to the {{fleet-server}} that you added. Lack of connectivity will prevent the {{agent}} from checking in with the {{fleet-server}} and receiving policy updates, but the agents will still forward data to the cluster.


## Configure secret values in a policy [agent-policy-secret-values]

When you create an integration policy, you often need to provide sensitive information such as an API key or a password. To help ensure that data can’t be accessed inappropriately, any secret values used in an integration policy are stored separately from other policy details.

In addition, after you’ve saved a secret value in {{fleet}}, the value is hidden in both the {{fleet}} UI and in the agent policy definition.
When you view the agent policy (**Actions → View policy**), an environment variable is displayed in place of any secret values, for example `${SECRET_0}`.

::::{warning}
In order for sensitive values to be stored as secrets in {{fleet}}, all configured {{fleet-server}}s must be on version 8.10.0 or higher.
::::


Though secret values stored in {{fleet}} are hidden, they can be updated. To update a secret value in an integration policy:

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Search or scroll to a specific integration. Open the **Actions** menu and select **Edit integration**. Any secret information is marked as being hidden.
3. Click the link to replace the secret value with a new one.

    :::{image} images/fleet-policy-hidden-secret.png
    :alt: Screen capture showing a hidden secret value as part of an integration policy
    :class: screenshot
    :::

4. Click **Save integration**. The original secret value is overwritten in the policy.


## Set the maximum CPU usage [agent-policy-limit-cpu]

You can limit the amount of CPU consumed by {{agent}}. This parameter limits the number of operating system threads that can be executing Go code simultaneously in each Go process. You can specify an integer value not less than `0`. The default value is `0`, which means "use all available CPUs".

This limit applies independently to the agent and each underlying Go process that it supervises. For example, if {{agent}} is configured to supervise two {{beats}} with a CPU usage limit of `2` set in the policy, then the total CPU limit is six, where each of the three processes (one {{agent}} and two {{beats}}) may execute independently on two CPUs.

This setting is similar to the {{beats}} [`max_procs`](beats://docs/reference/filebeat/configuration-general-options.md#_max_procs) setting. For more detail, refer to the [GOMAXPROCS](https://pkg.go.dev/runtime#GOMAXPROCS) function in the Go runtime documentation.

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Click the **Settings** tab and scroll to **Advanced settings**.
3. Set **Limit CPU usage** as needed. For example, to limit Go processes supervised by {{agent}} to two operating system threads each, set this value to `2`.


## Set the {{agent}} log level [agent-policy-log-level]

You can set the minimum log level that {{agents}} using the selected policy will send to the configured output. The default setting is `info`.

1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit.
2. Click the **Settings** tab and scroll to **Advanced settings**.
3. Set the **Agent logging level**.
4. Save your changes.

You can also set the log level for an individual agent:

1. In {{fleet}}, click **Agents**. Under the **Host** header, select the {{agent}} you want to edit.
2. On the **Logs** tab, set the **Agent logging level** and apply your changes. Or, you can choose to reset the agent to use the logging level specified in the agent policy.


## Change the {{agent}} binary download location [agent-binary-download-settings]

{{agent}}s must be able to access the {{artifact-registry}} to download binaries during upgrades. By default, {{agent}}s download artifacts from the artifact registry at `https://artifacts.elastic.co/downloads/`.
+ +For {{agent}}s that cannot access the internet, you can specify agent binary download settings, and then configure agents to download their artifacts from the alternate location. For more information about running {{agent}}s in a restricted environment, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md). + +To change the binary download location: + +1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit. +2. Click the **Settings** tab and scroll to **Agent binary download**. +3. Specify the address where you are hosting the artifacts repository or select the default to use the location specified in the {{fleet}} [agent binary download settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-agent-binary-download-settings). + + +## Set the {{agent}} host name format [fleet-agent-hostname-format-settings] + +The **Host name format** setting controls the format of information provided about the current host through the [host.name](/reference/ingestion-tools/fleet/host-provider.md) key, in events produced by {{agent}}. + +1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit. +2. Click the **Settings** tab and scroll to **Host name format**. +3. Select one of the following: + + * **Hostname**: Information about the current host is in a non-fully-qualified format (`somehost`, rather than `somehost.example.com`). This is the default reporting format. + * **Fully Qualified Domain Name (FQDN)**: Information about the current host is in FQDN format (`somehost.example.com` rather than `somehost`). This helps you to distinguish between hosts on different domains that have similar names. The fully qualified hostname allows each host to be more easily identified when viewed in {{kib}}, for example. + +4. Save your changes. + +::::{note} +FQDN reporting is not currently supported in APM. +:::: + + +For FQDN reporting to work as expected, the hostname of the current host must either: + +* Have a CNAME entry defined in DNS. +* Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup. + +If neither pre-requisite is satisfied, `host.name` continues to report the hostname of the current host in a non-fully-qualified format. + + +## Set an unenrollment timeout for inactive agents [fleet-agent-unenrollment-timeout] + +You can configure a length of time after which any inactive {{agent}}s are automatically unenrolled and their API keys invalidated. This setting is useful when you have agents running in an ephemeral environment, such as Docker or {{k8s}}, and you want to prevent inactive agents from consuming unused API keys. + +To configure an unenrollment timeout for inactive agents: + +1. In {{fleet}}, click **Agent policies**. Select the name of the policy you want to edit. +2. Click the **Settings** tab and scroll to **Inactive agent unenrollment timeout**. +3. Specify an unenrollment timeout period in seconds. +4. Save your changes. + +After you set an unenrollment timeout, any inactive agents are unenrolled automatically after the specified period of time. The unenroll task runs every ten minutes, and it unenrolls a maximum of one thousand agents at a time. + + +## Policy scaling recommendations [agent-policy-scale] + +A single instance of {{fleet}} supports a maximum of 1000 {{agent}} policies. If more policies are configured, UI performance might be impacted. 
The maximum number of policies is not affected by the number of spaces in which the policies are used. + +If you are using {{agent}} with [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), the maximum supported number of {{agent}} policies is 500. diff --git a/reference/ingestion-tools/fleet/agent-processors.md b/reference/ingestion-tools/fleet/agent-processors.md new file mode 100644 index 0000000000..8a1976824b --- /dev/null +++ b/reference/ingestion-tools/fleet/agent-processors.md @@ -0,0 +1,103 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-processor-configuration.html +--- + +# Agent processors [elastic-agent-processor-configuration] + +{{agent}} processors are lightweight processing components that you can use to parse, filter, transform, and enrich data at the source. For example, you can use processors to: + +* reduce the number of exported fields +* enhance events with additional metadata +* perform additional processing and decoding +* sanitize data + +Each processor receives an event, applies a defined action to the event, and returns the event. If you define a list of processors, they are executed in the order they are defined. + +```yaml +event -> processor 1 -> event1 -> processor 2 -> event2 ... +``` + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](#limitations) +:::: + + + +## Where are processors valid? [where-valid] + +The processors described in this section are valid: + +* **Under integration settings in the Integrations UI in {{kib}}**. For example, when configuring an Nginx integration, you can define processors for a specific dataset under **Advanced options**. The processor in this example adds geo metadata to the Nginx access logs collected by {{agent}}: + + :::{image} images/add-processor.png + :alt: Screen showing how to add a processor to an integration policy + :class: screenshot + ::: + + ::::{note} + Some integrations do not currently support processors. + :::: + +* **Under input configuration settings for standalone {{agent}}s**. For example: + + ```yaml + inputs: + - type: logfile + use_output: default + data_stream: + namespace: default + streams: + - data_stream: + dataset: nginx.access + type: logs + ignore_older: 72h + paths: + - /var/log/nginx/access.log* + tags: + - nginx-access + exclude_files: + - .gz$ + processors: + - add_host_metadata: + cache.ttl: 5m + geo: + name: nyc-dc1-rack1 + location: '40.7128, -74.0060' + continent_name: North America + country_iso_code: US + region_name: New York + region_iso_code: NY + city_name: New York + - add_locale: null + ``` + + +You can define processors that apply to a specific input defined in the configuration. Applying a processor to all the inputs on a global basis is currently not supported. + + +## What are some limitations of using processors? [limitations] + +Processors have the following limitations. + +* Cannot enrich events with data from {{es}} or other custom data sources. +* Cannot process data after it’s been converted to the Elastic Common Schema (ECS) because the conversion is performed by {{es}} ingest pipelines. This means that your processor configuration cannot refer to fields that are created by ingest pipelines or {{ls}} because those fields are created *after* the processor runs, not before. 
* May break integration ingest pipelines in {{es}} if the user-defined processing removes or alters fields expected by ingest pipelines.
* If you create new fields via processors, you are responsible for setting up field mappings in the `*-@custom` component template and making sure the new mappings are aligned with ECS.


## What other options are available for processing data? [processing-options]

The {{stack}} provides several options for processing data collected by {{agent}}. The option you choose depends on what you need to do:

| If you need to… | Do this… |
| --- | --- |
| Sanitize or enrich raw data at the source | Use an {{agent}} processor |
| Convert data to ECS, normalize field data, or enrich incoming data | Use [ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md#pipelines-for-fleet-elastic-agent) |
| Define or alter the schema at query time | Use [runtime fields](/manage-data/data-store/mapping/runtime-fields.md) |
| Do something else with your data | Use [Logstash plugins](logstash://docs/reference/filter-plugins.md) |


## How are {{agent}} processors different from {{ls}} plugins or ingest pipelines? [how-different]

Logstash plugins and ingest pipelines both require you to send data to another system for processing. Processors, on the other hand, allow you to apply processing logic at the source. This means that you can filter out data you don’t want to send across the connection, and you can spread some of the processing load across host systems running on edge nodes.

diff --git a/reference/ingestion-tools/fleet/agent-provider.md b/reference/ingestion-tools/fleet/agent-provider.md
new file mode 100644
index 0000000000..9ffe0171df
--- /dev/null
+++ b/reference/ingestion-tools/fleet/agent-provider.md
@@ -0,0 +1,18 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/agent-provider.html
---

# Agent provider [agent-provider]

Provides information about the {{agent}}. The available keys are:

| Key | Type | Description |
| --- | --- | --- |
| `agent.id` | `string` | Current agent ID |
| `agent.version` | `object` | Current agent version information object |
| `agent.version.version` | `string` | Current agent version |
| `agent.version.commit` | `string` | Version commit |
| `agent.version.build_time` | `date` | Version build time |
| `agent.version.snapshot` | `boolean` | Version is snapshot build |

diff --git a/reference/ingestion-tools/fleet/air-gapped.md b/reference/ingestion-tools/fleet/air-gapped.md
new file mode 100644
index 0000000000..31528b678b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/air-gapped.md
@@ -0,0 +1,300 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/air-gapped.html
---

# Air-gapped environments [air-gapped]

When running {{agent}}s in a restricted or closed network, you need to take extra steps to make sure:

* {{kib}} is able to reach the {{package-registry}} to download package metadata and content.
* {{agent}}s are able to download binaries during upgrades from the {{artifact-registry}}.

To achieve this, the {{package-registry}} must be accessible from {{kib}}, either through an HTTP proxy or by self-hosting the registry. Likewise, the {{artifact-registry}} must be accessible from the hosts where {{agent}}s run, either through an HTTP proxy or by self-hosting the registry.

::::{tip}
See the {{elastic-sec}} Solution documentation for air-gapped [offline endpoints](/reference/security/elastic-defend/offline-endpoint.md).
::::


When upgrading all the components in an air-gapped environment, it is recommended that you upgrade in the following order:

1. Upgrade the {{package-registry}}.
2. Upgrade the {{stack}}, including {{kib}}.
3. Upgrade the {{artifact-registry}} and ensure the latest {{agent}} binaries are available.
4. Upgrade the on-premises {{fleet-server}}.
5. In {{fleet}}, issue an upgrade for all the {{agent}}s.


## Enable air-gapped mode for {{fleet}} [air-gapped-mode-flag]

Set the following property in {{kib}} to enable air-gapped mode in {{fleet}}. This allows {{fleet}} to intelligently skip certain requests or operations that shouldn’t be attempted in air-gapped environments.

```yaml
xpack.fleet.isAirGapped: true
```


## Configure {{agents}} to download a PGP/GPG key from {{fleet-server}} [air-gapped-pgp-fleet]

Starting from version 8.9.0, when {{agent}} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent. This process has a backup mechanism that uses the key from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` instead of the one it already has.

In an air-gapped environment, an {{agent}} that doesn’t have access to a PGP/GPG key from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` would fail to be upgraded. For versions 8.9.0 to 8.10.3, you can resolve this problem by following the steps described in the associated [known issue](https://www.elastic.co/guide/en/fleet/8.9/release-notes-8.9.0.html#known-issues-8.9.0) documentation.

Starting in version 8.10.4, you can resolve this problem by configuring {{agents}} to download the PGP/GPG key from {{fleet-server}}. In these versions, {{agent}} will:

1. Verify the binary signature with the key bundled in the agent.
2. If the verification doesn’t pass, download the PGP/GPG key from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` and verify with it.
3. If that verification doesn’t pass, download the PGP/GPG key from {{fleet-server}} and verify with it.
4. If that verification doesn’t pass, block the upgrade.

By default, {{fleet-server}} serves {{agents}} with the key located at `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`. The key is served through the {{fleet-server}} endpoint `GET /api/agents/upgrades/{major}.{minor}.{patch}/pgp-public-key`.

If there isn’t a `default.pgp` key at `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`, {{fleet-server}} will instead attempt to retrieve a PGP/GPG key from the URL that you can specify with the `server.pgp.upstream_url` setting.

You can prevent {{fleet}} from downloading the PGP/GPG key from `server.pgp.upstream_url` by manually downloading it from `https://artifacts.elastic.co/GPG-KEY-elastic-agent` and storing it at `FLEETSERVER_BINARY_DIR/elastic-agent-upgrade-keys/default.pgp`.

To set a custom URL for {{fleet-server}} to access a PGP/GPG key and make it available to {{agents}}:

1. In {{kib}}, go to **Management > {{fleet}} > Agent policies**.
2. Select a policy for the agents that you want to upgrade.
3. On the policy page, in the **Actions** menu for the {{fleet-server}} integration, select **Edit integration**.
4. In the {{fleet-server}} settings section, expand **Change defaults** and **Advanced options**.
5. In the **Custom fleet-server configurations** field, add the setting `server.pgp.upstream_url` with the full URL where the PGP/GPG key can be accessed.
For example:

```yaml
server.pgp.upstream_url: <PGP_KEY_URL>
```

The setting `server.pgp.upstream_url` must point to a web server hosting the PGP/GPG key, and that server must be reachable from the host where {{fleet-server}} is installed.

Note that:

* `server.pgp.upstream_url` may be specified as an `http` endpoint (instead of `https`).
* For an `https` endpoint, the CA that {{fleet-server}} uses to connect to `server.pgp.upstream_url` must be trusted by {{fleet-server}} using the `--certificate-authorities` setting that is used globally for {{agent}}.


## Use a proxy server to access the {{package-registry}} [air-gapped-proxy-server]

By default, {{kib}} downloads package metadata and content from the public {{package-registry}} at [epr.elastic.co](https://epr.elastic.co/).

If you can route traffic to the public endpoint of the {{package-registry}} through a network gateway, set the following property in {{kib}} to use a proxy server:

```yaml
xpack.fleet.registryProxyUrl: your-nat-gateway.corp.net
```

For more information, refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md).


## Host your own {{package-registry}} [air-gapped-diy-epr]

::::{note}
The {{package-registry}} packages include signatures used in [package verification](/reference/ingestion-tools/fleet/package-signatures.md). By default, {{fleet}} uses the Elastic public GPG key to verify package signatures. If you ever need to change this GPG key, use the `xpack.fleet.packageVerification.gpgKeyPath` setting in `kibana.yml`. For more information, refer to [{{fleet}} settings](kibana://docs/reference/configuration-reference/fleet-settings.md).
::::


If routing traffic through a proxy server is not an option, you can host your own {{package-registry}}.

The {{package-registry}} can be deployed and hosted onsite using one of the available Docker images. These Docker images include the {{package-registry}} and a selection of packages.

There are different distributions available:

* 9.0.0-beta1 (recommended): `docker.elastic.co/package-registry/distribution:9.0.0-beta1` - Selection of packages from the production repository released with {{stack}} 9.0.0-beta1.
* lite-9.0.0-beta1: `docker.elastic.co/package-registry/distribution:lite-9.0.0-beta1` - Subset of the most commonly used packages from the production repository released with {{stack}} 9.0.0-beta1. This image is a good candidate to start using {{fleet}} in air-gapped environments.
* production: `docker.elastic.co/package-registry/distribution:production` - Packages available in the production registry ([https://epr.elastic.co](https://epr.elastic.co)). Note that this image is updated every time a new version of a package gets published.
* lite: `docker.elastic.co/package-registry/distribution:lite` - Subset of the most commonly used packages available in the production registry ([https://epr.elastic.co](https://epr.elastic.co)). Note that this image is updated every time a new version of a package gets published.

::::{warning}
Version 9.0.0-beta1 of the {{package-registry}} distribution has not yet been released.
::::


To update the distribution image, re-pull the image and then restart the Docker container.

Every distribution contains packages that can be used by different versions of the {{stack}}. The {{package-registry}} API exposes a {{kib}} version constraint that allows filtering for packages that are compatible with a particular version.
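Once your registry is running (see the steps that follow), you can verify this constraint directly. The sketch below is a quick check, assuming the registry is listening locally on port 8080; `package` and `kibana.version` are query parameters of the registry’s `/search` API:

```sh
# List versions of the nginx package that the registry
# considers compatible with Kibana 9.0.0
curl 'http://localhost:8080/search?package=nginx&kibana.version=9.0.0'
```

The response is a JSON array describing the matching package versions; omitting `kibana.version` removes the compatibility filter.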
::::{note}
These steps use the standard Docker CLI, but you can create a Kubernetes manifest based on this information. These images can also be used with other container runtimes compatible with Docker images.
::::


1. Pull the Docker image from the public Docker registry:

    ```sh
    docker pull docker.elastic.co/package-registry/distribution:9.0.0-beta1
    ```

2. Save the Docker image locally:

    ```sh
    docker save -o package-registry-9.0.0-beta1.tar docker.elastic.co/package-registry/distribution:9.0.0-beta1
    ```

    ::::{tip}
    Check the image size to ensure that you have enough disk space.
    ::::

3. Transfer the image to the air-gapped environment and load it:

    ```sh
    docker load -i package-registry-9.0.0-beta1.tar
    ```

4. Run the {{package-registry}}:

    ```sh
    docker run -it -p 8080:8080 docker.elastic.co/package-registry/distribution:9.0.0-beta1
    ```

5. (Optional) You can monitor the health of your {{package-registry}} by adding a Docker health check that queries the `/health` endpoint:

    ```sh
    docker run -it -p 8080:8080 \
      --health-cmd "curl -f -L http://127.0.0.1:8080/health" \
      docker.elastic.co/package-registry/distribution:9.0.0-beta1
    ```


### Connect {{kib}} to your hosted {{package-registry}} [air-gapped-diy-epr-kibana]

Use the `xpack.fleet.registryUrl` property in the {{kib}} config to set the URL of your hosted package registry. For example:

```yaml
xpack.fleet.registryUrl: "http://package-registry.corp.net:8080"
```


### TLS configuration of the {{package-registry}} [air-gapped-tls]

You can configure the {{package-registry}} to listen on a secure HTTPS port using TLS.

For example, given a key and a certificate pair available in `/etc/ssl`, you can start the {{package-registry}} listening on port 443 using the following command:

```sh
docker run -it -p 443:443 \
  -v /etc/ssl/package-registry.key:/etc/ssl/package-registry.key:ro \
  -v /etc/ssl/package-registry.crt:/etc/ssl/package-registry.crt:ro \
  -e EPR_ADDRESS=0.0.0.0:443 \
  -e EPR_TLS_KEY=/etc/ssl/package-registry.key \
  -e EPR_TLS_CERT=/etc/ssl/package-registry.crt \
  docker.elastic.co/package-registry/distribution:9.0.0-beta1
```

The {{package-registry}} supports TLS versions from 1.0 to 1.3. The minimum accepted version can be configured with `EPR_TLS_MIN_VERSION`; it defaults to 1.0. If you want to restrict the supported versions to 1.2 through 1.3, set `EPR_TLS_MIN_VERSION=1.2`.


### Using custom CA certificates [_using_custom_ca_certificates]

If you are using self-signed certificates or certificates issued by a custom Certificate Authority (CA), you need to set the file path to your CA in the `NODE_EXTRA_CA_CERTS` environment variable in the {{kib}} startup files.

```text
NODE_EXTRA_CA_CERTS="/etc/kibana/certs/ca-cert.pem"
```


## Host your own artifact registry for binary downloads [host-artifact-registry]

{{agent}}s must be able to access the {{artifact-registry}} to download binaries during upgrades. By default, {{agent}}s download artifacts from `https://artifacts.elastic.co/downloads/`.

To make binaries available in an air-gapped environment, you can host your own custom artifact registry, and then configure {{agent}}s to download binaries from it.

1. Create a custom artifact registry in a location accessible to your {{agent}}s:

    1. Download the latest release artifacts from the public {{artifact-registry}} at `https://artifacts.elastic.co/downloads/`.
For example, the following cURL commands download all the artifacts that may be needed to upgrade {{agent}}s running on Linux x86_64. You may replace `x86_64` with `arm64` for the ARM64 version. The exact list depends on which integrations you’re using. Make sure to also download the corresponding sha512 and PGP signature (`.asc`) files for each binary. These are used for file integrity validations during installations and upgrades.

    ```shell
    curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/apm-server/apm-server-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/osquerybeat/osquerybeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/cloudbeat/cloudbeat-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/endpoint-dev/endpoint-security-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/fleet-server/fleet-server-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-host-agent-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-x86_64.tar.gz.asc
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-x86_64.tar.gz
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-x86_64.tar.gz.sha512
    curl -O https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-x86_64.tar.gz.asc
    ```

    2. On your HTTP file server, group the artifacts into directories and sub-directories that follow the same convention used by the {{artifact-registry}}:

        ```shell
        <source_uri>/<project>/<artifact_name>-<version>-<arch-package-type>
        ```

        Where:

        * `<project>` is in the format `beats/elastic-agent`, `fleet-server`, `endpoint-dev`, and so on.
        * `<artifact_name>` is in the format `elastic-agent`, `endpoint-security`, or `fleet-server`, and so on.
        * `<arch-package-type>` is in the format `linux-x86_64`, `linux-arm64`, `windows_x86_64`, `darwin_x86_64`, or `darwin_aarch64`.
        * If you’re using the DEB package manager:

            * The 64-bit variant has the format `<artifact_name>-<version>-amd64.deb`.
            * The aarch64 variant has the format `<artifact_name>-<version>-arm64.deb`.

        * If you’re using the RPM package manager:

            * The 64-bit variant has the format `<artifact_name>-<version>-x86_64.rpm`.
            * The aarch64 variant has the format `<artifact_name>-<version>-aarch64.rpm`.

        ::::{tip}
        * If you’re ever in doubt, visit the [{{agent}} download page](https://www.elastic.co/downloads/elastic-agent) to see what URL the various binaries are downloaded from.
        * Make sure you have a plan or automation in place to update your artifact registry when new versions of {{agent}} are available.
        ::::

2. Add the agent binary download location to {{fleet}} settings:

    1. Open **{{fleet}} → Settings**.
    2. Under **Agent Binary Download**, click **Add agent binary source** to add the location of your artifact registry. For more detail about these settings, refer to [Agent binary download settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-agent-binary-download-settings). If you want all {{agent}}s to download binaries from this location, set it as the default.

3. If your artifact registry is not the default, edit your agent policies to override the default:

    1. Go to **{{fleet}} → Agent policies** and click the policy name to edit it.
    2. Click **Settings**.
    3. Under **Agent Binary Download**, select your artifact registry.
    When you trigger an upgrade for any {{agent}}s enrolled in the policy, the binaries are downloaded from your artifact registry instead of the public repository.


**Not using {{fleet}}?** For standalone {{agent}}s, you can set the binary download location under `agent.download.sourceURI` in the [`elastic-agent.yml`](/reference/ingestion-tools/fleet/elastic-agent-reference-yaml.md) file, or run the [`elastic-agent upgrade`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-upgrade-command) command with the `--source-uri` flag specified.

diff --git a/reference/ingestion-tools/fleet/certificates-rotation.md b/reference/ingestion-tools/fleet/certificates-rotation.md
new file mode 100644
index 0000000000..313e5f58fb
--- /dev/null
+++ b/reference/ingestion-tools/fleet/certificates-rotation.md
@@ -0,0 +1,192 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/certificates-rotation.html
---

# Rotate SSL/TLS CA certificates [certificates-rotation]

In some scenarios you may want to rotate your configured certificate authorities (CAs), for instance if your chosen CAs are due to expire. Refer to the following steps to rotate certificates between connected components:

* [Rotating a {{fleet-server}} CA](#certificates-rotation-agent-fs)
* [Rotating an {{es}} CA for connections from {{fleet-server}}](#certificates-rotation-fs-es)
* [Rotating an {{es}} CA for connections from {{agent}}](#certificates-rotation-agent-es)


## Rotating a {{fleet-server}} CA [certificates-rotation-agent-fs]

{{agent}} communicates with {{fleet-server}} to receive policies and to check for updates. There are two methods to rotate a CA certificate on {{fleet-server}} for connections from {{agent}}. The first method requires {{agent}} to re-enroll with {{fleet-server}} one or more times. The second method avoids re-enrollment and instead overwrites the existing CA file with a new certificate.

**Option 1: To renew an expiring CA certificate on {{fleet-server}} with {{agent}} re-enrollments**

Using this method, the {{agent}} with an old or expiring CA configured is re-enrolled with {{fleet-server}} using a new CA.

1. Update the {{agent}} with the new {{fleet-server}} CA:

    The {{agent}} should already have a CA configured. Use the [`elastic-agent enroll`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-enroll-command) command to re-enroll the agent with an updated, comma-separated set of CAs to use.

    ```shell
    elastic-agent enroll \
      --url=<fleet_server_url> \
      --enrollment-token=<enrollment_token> \
      ... \
      --certificate-authorities=<new_CA>,<original_CA>
    ```

    A new agent enrollment will cause a new agent to appear in {{fleet}}. This may be considered disruptive; however, the old agent entry will transition to an offline state. A new agent enrollment is required in order for the {{fleet-server}} configuration to be modified to accept multiple certificate authorities.

    At this point, all TLS connections still rely on the original CA that was provided (`original_CA`) to authenticate {{fleet-server}} certificates.

2. Rotate the certificates on {{fleet-server}}:

    This procedure reissues new certificates based on the new CA. Re-enroll {{fleet-server}} with all the new certificates:

    ```shell
    elastic-agent enroll \
      --url=<fleet_server_url> \
      --enrollment-token=<enrollment_token> \
      ... \
      --fleet-server-cert=<new_fleet_server_cert> --certificate-authorities=<new_CA>,<original_CA>
    ```

    This causes the TLS connections to the {{agents}} to reset, and loads the new CA and certificates into the {{fleet-server}} configuration.

3. The {{agents}} will automatically establish new TLS connections as part of their normal operation:

    The new CA (`new_CA`) configured on the agent in Step 1 will be used to authenticate the certificates used by {{fleet-server}}.

    Note that if the original CA (`original_CA`) was compromised, it may need to be removed from the agent’s CA list. To achieve this you need to enroll the agent again:

    ```shell
    elastic-agent enroll \
      --url=<fleet_server_url> \
      --enrollment-token=<enrollment_token> \
      ... \
      --certificate-authorities=<new_CA>
    ```

**Option 2: To renew an expiring CA certificate on {{fleet-server}} without {{agent}} re-enrollments**

Option 1 results in multiple {{agent}} enrollments. Another option that avoids multiple enrollments is to overwrite the CA files with the new CA or certificate. This method uses a single file holding multiple CAs that can be replaced.

To use this option it is assumed that:

* {{agent}}s have already been enrolled using a file that contains the certificate authority:

    ```shell
    elastic-agent enroll \
      --url=<fleet_server_url> \
      --enrollment-token=<enrollment_token> \
      ... \
      --certificate-authorities=<path/to/CA.pem>
    ```

* The {{agent}} running {{fleet-server}} has already been enrolled with the following secure connection options, where each option points to files that contain the certificates and keys:

    ```shell
    elastic-agent enroll \
      --url=<fleet_server_url> \
      --enrollment-token=<enrollment_token> \
      ... \
      --certificate-authorities=<path/to/CA.pem> \
      --fleet-server-cert=<path/to/cert.pem> \
      --fleet-server-cert-key=<path/to/key.pem>
    ```

To update the {{agent}} and {{fleet-server}} configurations:

1. Update the configuration with the new CA by appending it to the content of `CA.pem`:

    ```shell
    cat new_ca.pem >> CA.pem
    ```

2. Restart the {{agents}}. Note that this is not a re-enrollment. Restarting forces the {{agents}} to reload the CAs.

    ```shell
    elastic-agent restart
    ```

3. For the {{agent}} that is running {{fleet-server}}, overwrite the original `certificate`, `certificate-key`, and `certificate-authority` with the new ones to use:

    ```shell
    cat new-cert.pem > cert.pem
    cat new-key.pem > key.pem
    cat new_ca.pem > CA.pem
    ```

4. Restart the {{agent}} that is running {{fleet-server}}.

    ```shell
    elastic-agent restart
    ```

5. If the original certificate needs to be removed from the {{agents}}, overwrite `CA.pem` with the new CA only:

    ```shell
    cat new_ca.pem > CA.pem
    ```

6. Finally, restart the {{agents}} again.

    ```shell
    elastic-agent restart
    ```


## Rotating an {{es}} CA for connections from {{fleet-server}} [certificates-rotation-fs-es]

{{fleet-server}} communicates with {{es}} to send status information to {{fleet}} about {{agent}}s and to retrieve updated policies to ship out to all {{agent}}s enrolled in a given policy. If you have {{fleet-server}} [deployed on-premises](/reference/ingestion-tools/fleet/deployment-models.md), you may wish to rotate your configured CA certificate, for instance if the certificate is due to expire.

To rotate a CA certificate on {{es}} for connections from {{fleet-server}}:

1. Update the {{fleet-server}} with the new {{es}} CA:

    The {{agent}} running {{fleet-server}} should already have a CA configured.
    Use the [`elastic-agent enroll`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-enroll-command) command to re-enroll the agent running {{fleet-server}} with an updated, comma-separated set of CAs to use.

    ```shell
    elastic-agent enroll \
      --fleet-server-es=<es_url> \
      --fleet-server-service-token=<service_token> \
      ... \
      --fleet-server-es-ca=<new_ES_CA>,<original_ES_CA>
    ```

    A new agent enrollment will cause two {{fleet-server}} agents to appear in {{fleet}}. This may be considered disruptive; however, the old agent entry will transition to an offline state. A new agent enrollment is required in order for the {{fleet-server}} configuration to be modified to accept multiple certificate authorities.

    At this point, all TLS connections still rely on the original CA that was provided (`original_ES_CA`) to authenticate {{es}} certificates. Re-enrolling the {{fleet-server}} causes the agents going through that {{fleet-server}} to also reset their TLS, but the connections are re-established as required.

2. Rotate the certificates on {{es}}.

    {{es}} will use new certificates based on the new {{es}} CA. Since the {{fleet-server}} has the original and the new {{es}} CAs in a chain, it will accept original and new certificates from {{es}}.

    Note that if the original {{es}} CA (`original_ES_CA`) was compromised, it may need to be removed from the {{fleet-server}}'s CA list. To achieve this you need to enroll the {{fleet-server}} agent again (if re-enrollment is a concern, use a file to hold the certificates and certificate authority):

    ```shell
    elastic-agent enroll \
      --fleet-server-es=<es_url> \
      --fleet-server-service-token=<service_token> \
      ... \
      --fleet-server-es-ca=<new_ES_CA>
    ```


## Rotating an {{es}} CA for connections from {{agent}} [certificates-rotation-agent-es]

Using configuration information from a policy delivered by {{fleet-server}}, {{agent}} collects data and sends it to {{es}}.

To rotate a CA certificate on {{es}} for connections from {{agent}}:

1. In {{fleet}}, open the **Settings** tab.
2. In the **Outputs** section, click the edit button for the {{es}} output that requires a certificate rotation.
3. In the **Elasticsearch CA trusted fingerprint** field, add the new trusted fingerprint to use. This is the SHA-256 fingerprint (hash) of the certificate authority used to self-sign {{es}} certificates. This fingerprint will be used to verify self-signed certificates presented by {{es}}.

    If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally.

    :::{image} images/certificate-rotation-agent-es.png
    :alt: Screen capture of the Edit Output UI: Elasticsearch CA trusted fingerprint
    :class: screenshot
    :::

diff --git a/reference/ingestion-tools/fleet/community_id-processor.md b/reference/ingestion-tools/fleet/community_id-processor.md
new file mode 100644
index 0000000000..2ae10d6228
--- /dev/null
+++ b/reference/ingestion-tools/fleet/community_id-processor.md
@@ -0,0 +1,54 @@
---
navigation_title: "community_id"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/community_id-processor.html
---

# Community ID Network Flow Hash [community_id-processor]


The `community_id` processor computes a network flow hash according to the [Community ID Flow Hash specification](https://github.com/corelight/community-id-spec).

The flow hash is useful for correlating all network events related to a single flow.
For example, you can filter on a community ID value and you might get back the Netflow records from multiple collectors and layer 7 protocol records from the Network Packet Capture integration.

By default the processor is configured to read the flow parameters from the appropriate Elastic Common Schema (ECS) fields. If you are processing ECS data, no parameters are required.


## Examples [_examples_5]

```yaml
  - community_id:
```

If the data does not conform to ECS, you can customize the field names that the processor reads from. You can also change the target field that the computed hash is written to. For example:

```yaml
  - community_id:
      fields:
        source_ip: my_source_ip
        source_port: my_source_port
        destination_ip: my_dest_ip
        destination_port: my_dest_port
        iana_number: my_iana_number
        transport: my_transport
        icmp_type: my_icmp_type
        icmp_code: my_icmp_code
      target: network.community_id
```

If the necessary fields are not present in the event, the processor silently continues without adding the target field.


## Configuration settings [_configuration_settings_15]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | No | | Field names that the processor reads from:<br><br>`source_ip`: Field containing the source IP address.<br>`source_port`: Field containing the source port.<br>`destination_ip`: Field containing the destination IP address.<br>`destination_port`: Field containing the destination port.<br>`iana_number`: Field containing the IANA number. The following protocol numbers are currently supported: 1 ICMP, 2 IGMP, 6 TCP, 17 UDP, 47 GRE, 58 ICMP IPv6, 88 EIGRP, 89 OSPF, 103 PIM, and 132 SCTP.<br>`transport`: Field containing the transport protocol. Used only when the `iana_number` field is not present.<br>`icmp_type`: Field containing the ICMP type.<br>`icmp_code`: Field containing the ICMP code. |
| `target` | No | | Field that the computed hash is written to. |
| `seed` | No | | Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme. This setting results in a 16-bit unsigned integer that gets incorporated into all generated hashes. |
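As a usage sketch, the `target` and `seed` settings from the table can be combined in one processor entry; the seed value below is an arbitrary example, not a default:

```yaml
  - community_id:
      target: network.community_id
      seed: 4242  # 16-bit unsigned integer mixed into every generated hash
```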
diff --git a/reference/ingestion-tools/fleet/conditions-based-autodiscover.md b/reference/ingestion-tools/fleet/conditions-based-autodiscover.md
new file mode 100644
index 0000000000..8eb429132c
--- /dev/null
+++ b/reference/ingestion-tools/fleet/conditions-based-autodiscover.md
@@ -0,0 +1,314 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/conditions-based-autodiscover.html
---

# Conditions based autodiscover [conditions-based-autodiscover]

You can define autodiscover conditions in each input to allow {{agent}} to automatically identify Pods and start monitoring them using predefined integrations. Refer to [Inputs](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md) for an overview.

::::{important}
Condition definition is supported both in **{{agent}} managed by {{fleet}}** and in **standalone** scenarios.
::::


For more information about variables and conditions in input configurations, refer to [Variables and conditions in input configurations](/reference/ingestion-tools/fleet/dynamic-input-configuration.md). You can find the available autodiscovery variables in [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md).

## Example: Target Pods by label [_example_target_pods_by_label]

To automatically identify a Redis Pod and monitor it with the Redis integration, uncomment the following input configuration inside the [{{agent}} Standalone manifest](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml):

```yaml
- name: redis
  type: redis/metrics
  use_output: default
  meta:
    package:
      name: redis
      version: 0.3.6
  data_stream:
    namespace: default
  streams:
    - data_stream:
        dataset: redis.info
        type: metrics
      metricsets:
        - info
      hosts:
        - '${kubernetes.pod.ip}:6379'
      idle_timeout: 20s
      maxconn: 10
      network: tcp
      period: 10s
      condition: ${kubernetes.labels.app} == 'redis'
```

The condition `${kubernetes.labels.app} == 'redis'` makes the {{agent}} look for a Pod with the label `app:redis` within the scope defined in its manifest.

For a list of provider fields that you can use in conditions, refer to [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md). Some examples of condition usage are:

1. For a Pod with the label `app.kubernetes.io/name=ingress-nginx`, the condition should be `condition: ${kubernetes.labels.app.kubernetes.io/name} == "ingress-nginx"`.
2. For a Pod with the annotation `prometheus.io/scrape: "true"`, the condition should be `${kubernetes.annotations.prometheus.io/scrape} == "true"`.
3. For a Pod with the name `kube-scheduler-kind-control-plane`, the condition should be `${kubernetes.pod.name} == "kube-scheduler-kind-control-plane"`.

The `redis` input defined in the {{agent}} manifest only specifies the `info` metricset. To learn about other available metricsets and their configuration settings, refer to the [Redis module page](beats://docs/reference/metricbeat/metricbeat-module-redis.md).
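Conditions are also not limited to a single comparison. Following the expression syntax described in [Variables and conditions in input configurations](/reference/ingestion-tools/fleet/dynamic-input-configuration.md), checks can be combined with boolean operators. A brief sketch (the label and namespace values are illustrative):

```yaml
# Match Redis Pods, but only in the default namespace
condition: ${kubernetes.labels.app} == 'redis' and ${kubernetes.namespace} == 'default'
```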
To deploy Redis, you can apply the following example manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    k8s-app: redis
    app: redis
spec:
  containers:
  - image: redis
    imagePullPolicy: IfNotPresent
    name: redis
    ports:
    - name: redis
      containerPort: 6379
      protocol: TCP
```

You should now be able to see Redis data flowing in on index `metrics-redis.info-default`. Make sure the port in your Redis manifest file matches the port used in the Redis input.

::::{note}
Assets (dashboards, ingest pipelines, and so on) related to the Redis integration are not installed automatically. You need to explicitly [install them through {{kib}}](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md).
::::


Conditions can also be used in the inputs configuration to set the target host dynamically for a targeted Pod based on its labels. This is useful for datasets that target specific Pods like `kube-scheduler` or `kube-controller-manager`. The following configuration enables the `kubernetes.scheduler` dataset only for Pods that have the label `component=kube-scheduler` defined.

```yaml
- data_stream:
    dataset: kubernetes.scheduler
    type: metrics
  metricsets:
    - scheduler
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  hosts:
    - 'https://${kubernetes.pod.ip}:10259'
  period: 10s
  ssl.verification_mode: none
  condition: ${kubernetes.labels.component} == 'kube-scheduler'
```

::::{note}
Pods' labels and annotations can be used in autodiscover conditions. In the case of labels or annotations that include dots (`.`), they can be used in conditions exactly as they are defined in the Pods. For example `condition: ${kubernetes.labels.app.kubernetes.io/name} == 'ingress-nginx'`. This should not be confused with the dedoted (by default) labels and annotations stored in Elasticsearch ([Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md)).
::::


::::{warning}
Before the 8.6 release, labels used in autodiscover conditions were dedoted when the `labels.dedot` parameter was set to `true` in the Kubernetes Provider configuration (`true` by default). The same did not apply to annotations. This was fixed in the 8.6 release. Refer to the Release Notes section of the version 8.6.0 documentation.
::::


::::{warning}
In some "As a Service" Kubernetes implementations, like GKE, the control plane nodes, or even the Pods running on them, won’t be visible. In these cases, it won’t be possible to use the scheduler metricsets that are necessary for this example. Refer to [scheduler and controller manager](beats://docs/reference/metricbeat/metricbeat-module-kubernetes.md#_scheduler_and_controllermanager) for more information.
::::


Following the Redis example, if you deploy another Redis Pod with a different port, it should be detected. To check this, look at the `service.address` field under `metrics-redis.info-default`: it should display two different services.

To obtain the policy generated by this configuration, connect to the {{agent}} container:

```sh
kubectl exec -n kube-system --stdin --tty elastic-agent-standalone-id -- /bin/bash
```

Do not forget to change `elastic-agent-standalone-id` to your {{agent}} Pod’s name. Moreover, make sure that your Pod is inside `kube-system`. If not, change `-n kube-system` to the correct namespace.
+ +Inside the container [inspect the output](/reference/ingestion-tools/fleet/agent-command-reference.md) of the configuration file you used for the {{agent}}: + +```sh +elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml +``` + +::::{dropdown} You should now be able to see the generated policy. If you look for the `scheduler`, it will look similar to this. +```yaml +- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + hosts: + - https://172.19.0.2:10259 + index: metrics-kubernetes.scheduler-default + meta: + package: + name: kubernetes + version: 1.9.0 + metricsets: + - scheduler + module: kubernetes + name: kubernetes-node-metrics + period: 10s + processors: + - add_fields: + fields: + labels: + component: kube-scheduler + tier: control-plane + namespace: kube-system + namespace_labels: + kubernetes_io/metadata_name: kube-system + namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62 + node: + hostname: kind-control-plane + labels: + beta_kubernetes_io/arch: amd64 + beta_kubernetes_io/os: linux + kubernetes_io/arch: amd64 + kubernetes_io/hostname: kind-control-plane + kubernetes_io/os: linux + node-role_kubernetes_io/control-plane: "" + node_kubernetes_io/exclude-from-external-load-balancers: "" + name: kind-control-plane + uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a + pod: + ip: 172.19.0.2 + name: kube-scheduler-kind-control-plane + uid: f028ad77-c82a-4f29-ba7e-2504d9b0beef + target: kubernetes + - add_fields: + fields: + cluster: + name: kind + url: kind-control-plane:6443 + target: orchestrator + - add_fields: + fields: + dataset: kubernetes.scheduler + namespace: default + type: metrics + target: data_stream + - add_fields: + fields: + dataset: kubernetes.scheduler + target: event + - add_fields: + fields: + id: "" + snapshot: false + version: 8.3.0 + target: elastic_agent + - add_fields: + fields: + id: "" + target: agent + ssl.verification_mode: none +``` + +:::: + + + +## Example: Dynamic logs path [_example_dynamic_logs_path] + +To set the log path of Pods dynamically in the configuration, use a variable in the {{agent}} policy to return path information from the provider: + +```yaml +- name: container-log + id: container-log-${kubernetes.pod.name}-${kubernetes.container.id} + type: filestream + use_output: default + meta: + package: + name: kubernetes + version: 1.9.0 + data_stream: + namespace: default + streams: + - data_stream: + dataset: kubernetes.container_logs + type: logs + prospector.scanner.symlinks: true + parsers: + - container: ~ + paths: + - /var/log/containers/*${kubernetes.container.id}.log +``` + +::::{dropdown} The policy generated by this configuration will look similar to this for every Pod inside the scope defined in the manifest. 
+```yaml +- id: container-log-etcd-kind-control-plane-af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819 + index: logs-kubernetes.container_logs-default + meta: + package: + name: kubernetes + version: 1.9.0 + name: container-log + parsers: + - container: null + paths: + - /var/log/containers/*af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819.log + processors: + - add_fields: + fields: + id: af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819 + image: + name: registry.k8s.io/etcd:3.5.4-0 + runtime: containerd + target: container + - add_fields: + fields: + container: + name: etcd + labels: + component: etcd + tier: control-plane + namespace: kube-system + namespace_labels: + kubernetes_io/metadata_name: kube-system + namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62 + node: + hostname: kind-control-plane + labels: + beta_kubernetes_io/arch: amd64 + beta_kubernetes_io/os: linux + kubernetes_io/arch: amd64 + kubernetes_io/hostname: kind-control-plane + kubernetes_io/os: linux + node-role_kubernetes_io/control-plane: "" + node_kubernetes_io/exclude-from-external-load-balancers: "" + name: kind-control-plane + uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a + pod: + ip: 172.19.0.2 + name: etcd-kind-control-plane + uid: 08970fcf-bb93-487e-b856-02399d81fb29 + target: kubernetes + - add_fields: + fields: + cluster: + name: kind + url: kind-control-plane:6443 + target: orchestrator + - add_fields: + fields: + dataset: kubernetes.container_logs + namespace: default + type: logs + target: data_stream + - add_fields: + fields: + dataset: kubernetes.container_logs + target: event + - add_fields: + fields: + id: "" + snapshot: false + version: 8.3.0 + target: elastic_agent + - add_fields: + fields: + id: "" + target: agent + prospector.scanner.symlinks: true + type: filestream +``` + +:::: + + + diff --git a/reference/ingestion-tools/fleet/config-file-example-apache.md b/reference/ingestion-tools/fleet/config-file-example-apache.md new file mode 100644 index 0000000000..9a68969432 --- /dev/null +++ b/reference/ingestion-tools/fleet/config-file-example-apache.md @@ -0,0 +1,134 @@ +--- +navigation_title: "Apache HTTP Server" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/config-file-example-apache.html +--- + +# Config file example: Apache HTTP Server [config-file-example-apache] + + +Include these sample settings in your standalone {{agent}} `elastic-agent.yml` configuration file to ingest data from Apache HTTP server. 
+ +* [Apache HTTP Server logs](#config-file-example-apache-logs) +* [Apache HTTP Server metrics](#config-file-example-apache-metrics) + +## Apache HTTP Server logs [config-file-example-apache-logs] + +```yaml +outputs: <1> + default: + type: elasticsearch <2> + hosts: + - '{elasticsearch-host-url}' <3> + api_key: "my_api_key" <4> +agent: + download: <5> + sourceURI: 'https://artifacts.elastic.co/downloads/' + monitoring: <6> + enabled: true + use_output: default + namespace: default + logs: true + metrics: true +inputs: <7> + - id: "insert a unique identifier here" <8> + name: apache-1 + type: logfile <9> + use_output: default + data_stream: <10> + namespace: default + streams: + - id: "insert a unique identifier here" <11> + data_stream: + dataset: apache.access <12> + type: logs + paths: <13> + - /var/log/apache2/access.log* + - /var/log/apache2/other_vhosts_access.log* + - /var/log/httpd/access_log* + tags: + - apache-access + exclude_files: + - .gz$ + - id: "insert a unique identifier here" <11> + data_stream: + dataset: apache.error <12> + type: logs + paths: <13> + - /var/log/apache2/error.log* + - /var/log/httpd/error_log* + exclude_files: + - .gz$ + tags: + - apache-error + processors: + - add_locale: null +``` + +1. For available output settings, refer to [Configure outputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md). +2. For settings specific to the {{es}} output, refer to [Configure the {{es}} output](/reference/ingestion-tools/fleet/elasticsearch-output.md). +3. The URL of the Elasticsearch cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`. +4. An [API key](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) used to authenticate with the {{es}} cluster. +5. For available download settings, refer to [Configure download settings for standalone Elastic Agent upgrades](/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md). +6. For available monitoring settings, refer to [Configure monitoring for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md). +7. For available input settings, refer to [Configure inputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md). +8. Specify a unique ID for the input. +9. For available input types, refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md). +10. Learn about [Data streams](/reference/ingestion-tools/fleet/data-streams.md) for time series data. +11. Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{{user-defined-unique-id}}-apache.access` or `{{user-defined-unique-id}}-apache.error`) is a recommended practice, but any unique ID will work. +12. Refer to [Logs](integration-docs://docs/reference/apache.md#apache-logs) in the Apache HTTP Server integration documentation for the logs available to ingest and exported fields. +13. Path to the log files to be monitored. 
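Before shipping data, it can be worth confirming that the agent accepts the assembled file. A minimal check, assuming {{agent}} is already installed and using the `elastic-agent inspect` command shown elsewhere in these docs (adjust the path to wherever your `elastic-agent.yml` actually lives):

```sh
# Print the configuration the agent would actually run
elastic-agent inspect -c /etc/elastic-agent/elastic-agent.yml
```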
## Apache HTTP Server metrics [config-file-example-apache-metrics]

```yaml
outputs: <1>
  default:
    type: elasticsearch <2>
    hosts:
      - '{elasticsearch-host-url}' <3>
    api_key: "my_api_key" <4>
agent:
  download: <5>
    sourceURI: 'https://artifacts.elastic.co/downloads/'
  monitoring: <6>
    enabled: true
    use_output: default
    namespace: default
    logs: true
    metrics: true
inputs: <7>
  - type: apache/metrics <8>
    use_output: default
    data_stream: <9>
      namespace: default
    streams:
      - id: "insert a unique identifier here" <10>
        data_stream: <9>
          dataset: apache.status <11>
          type: metrics
        metricsets: <12>
          - status
        hosts:
          - 'http://127.0.0.1'
        period: 30s
        server_status_path: /server-status
```

1. For available output settings, refer to [Configure outputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md).
2. For settings specific to the {{es}} output, refer to [Configure the {{es}} output](/reference/ingestion-tools/fleet/elasticsearch-output.md).
3. The URL of the Elasticsearch cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`.
4. An [API key](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) used to authenticate with the {{es}} cluster.
5. For available download settings, refer to [Configure download settings for standalone Elastic Agent upgrades](/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md).
6. For available monitoring settings, refer to [Configure monitoring for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md).
7. For available input settings, refer to [Configure inputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md).
8. For available input types, refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md).
9. Learn about [Data streams](/reference/ingestion-tools/fleet/data-streams.md) for time series data.
10. Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{{user-defined-unique-id}}-apache.status`) is a recommended practice, but any unique ID will work.
11. A user-defined dataset. You can specify anything that makes sense to signify the source of the data.
12. Refer to [Metrics](integration-docs://docs/reference/apache.md#apache-metrics) in the Apache HTTP Server integration documentation for the type of metrics collected and exported fields.

diff --git a/reference/ingestion-tools/fleet/config-file-example-nginx.md b/reference/ingestion-tools/fleet/config-file-example-nginx.md
new file mode 100644
index 0000000000..3454d4b371
--- /dev/null
+++ b/reference/ingestion-tools/fleet/config-file-example-nginx.md
@@ -0,0 +1,141 @@
---
navigation_title: "Nginx HTTP Server"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/config-file-example-nginx.html
---

# Config file example: Nginx HTTP Server [config-file-example-nginx]


Include these sample settings in your standalone {{agent}} `elastic-agent.yml` configuration file to ingest data from Nginx HTTP Server.

* [Nginx HTTP Server logs](#config-file-example-nginx-logs)
* [Nginx HTTP Server metrics](#config-file-example-nginx-metrics)

## Nginx HTTP Server logs [config-file-example-nginx-logs]

```yaml
outputs: <1>
  default:
    type: elasticsearch <2>
    hosts:
      - '{elasticsearch-host-url}' <3>
    api_key: "my_api_key" <4>
agent:
  download: <5>
    sourceURI: 'https://artifacts.elastic.co/downloads/'
  monitoring: <6>
    enabled: true
    use_output: default
    namespace: default
    logs: true
    metrics: true
inputs: <7>
  - id: "insert a unique identifier here" <8>
    name: nginx-1
    type: logfile <9>
    use_output: default
    data_stream: <10>
      namespace: default
    streams:
      - id: "insert a unique identifier here" <11>
        data_stream:
          dataset: nginx.access <12>
          type: logs
        ignore_older: 72h
        paths: <13>
          - /var/log/nginx/access.log*
        tags:
          - nginx-access
        exclude_files:
          - .gz$
        processors:
          - add_locale: null
      - id: "insert a unique identifier here" <11>
        data_stream:
          dataset: nginx.error <12>
          type: logs
        ignore_older: 72h
        paths: <13>
          - /var/log/nginx/error.log*
        tags:
          - nginx-error
        exclude_files:
          - .gz$
        multiline:
          pattern: '^\d{4}\/\d{2}\/\d{2} '
          negate: true
          match: after
        processors:
          - add_locale: null
```

1. For available output settings, refer to [Configure outputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md).
2. For settings specific to the {{es}} output, refer to [Configure the {{es}} output](/reference/ingestion-tools/fleet/elasticsearch-output.md).
3. The URL of the {{es}} cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`.
4. An [API key](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) used to authenticate with the {{es}} cluster.
5. For available download settings, refer to [Configure download settings for standalone Elastic Agent upgrades](/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md).
6. For available monitoring settings, refer to [Configure monitoring for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md).
7. For available input settings, refer to [Configure inputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md).
8. A user-defined ID to uniquely identify the input.
9. For available input types, refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md).
10. Learn about [Data streams](/reference/ingestion-tools/fleet/data-streams.md) for time series data.
11. Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{{user-defined-unique-id}}-nginx.access` or `{{user-defined-unique-id}}-nginx.error`) is a recommended practice, but any unique ID will work.
12. Refer to [Logs reference](integration-docs://docs/reference/nginx.md#nginx-logs-reference) in the Nginx HTTP integration documentation for the logs available to ingest and exported fields.
13. Path to the log files to be monitored.
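As a point of reference for the `multiline` settings above: Nginx error log entries begin with a `YYYY/MM/DD` timestamp, which is what the pattern `^\d{4}\/\d{2}\/\d{2} ` matches. With `negate: true` and `match: after`, any line that does not start with that timestamp (for example, a continuation of a stack trace) is appended to the preceding event. An illustrative entry (a sample for orientation, not output taken from the integration docs):

```text
2024/05/21 13:08:09 [error] 1234#1234: *5 open() "/usr/share/nginx/html/missing.html" failed (2: No such file or directory), client: 127.0.0.1, server: localhost
```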


## Nginx HTTP Server metrics [config-file-example-nginx-metrics]

```yaml
outputs: <1>
  default:
    type: elasticsearch <2>
    hosts:
      - '{elasticsearch-host-url}' <3>
    api_key: "my_api_key" <4>
agent:
  download: <5>
    sourceURI: 'https://artifacts.elastic.co/downloads/'
  monitoring: <6>
    enabled: true
    use_output: default
    namespace: default
    logs: true
    metrics: true
inputs: <7>
  - id: "insert a unique identifier here" <8>
    type: nginx/metrics <9>
    use_output: default
    data_stream: <10>
      namespace: default
    streams:
      - id: "insert a unique identifier here" <11>
        data_stream: <10>
          dataset: nginx.stubstatus <12>
          type: metrics
        metricsets: <13>
          - stubstatus
        hosts:
          - 'http://127.0.0.1:80'
        period: 10s
        server_status_path: /nginx_status
```

1. For available output settings, refer to [Configure outputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md).
2. For settings specific to the {{es}} output, refer to [Configure the {{es}} output](/reference/ingestion-tools/fleet/elasticsearch-output.md).
3. The URL of the {{es}} cluster where output should be sent, including the port number. For example `https://12345ab6789cd12345ab6789cd.us-central1.gcp.cloud.es.io:443`.
4. An [API key](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) used to authenticate with the {{es}} cluster.
5. For available download settings, refer to [Configure download settings for standalone Elastic Agent upgrades](/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md).
6. For available monitoring settings, refer to [Configure monitoring for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md).
7. For available input settings, refer to [Configure inputs for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md).
8. A user-defined ID to uniquely identify the input.
9. For available input types, refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md).
10. Learn about [Data streams](/reference/ingestion-tools/fleet/data-streams.md) for time series data.
11. Specify a unique ID for each individual input stream. Naming the ID by appending the associated `data_stream` dataset (for example `{{user-defined-unique-id}}-nginx.stubstatus`) is a recommended practice, but any unique ID will work.
12. A user-defined dataset. You can specify anything that makes sense to signify the source of the data.
13. Refer to [Metrics reference](integration-docs://docs/reference/nginx.md#nginx-metrics-reference) in the Nginx integration documentation for the type of metrics collected and exported fields.



diff --git a/reference/ingestion-tools/fleet/config-file-examples.md b/reference/ingestion-tools/fleet/config-file-examples.md new file mode 100644 index 0000000000..586cdc5933 --- /dev/null +++ b/reference/ingestion-tools/fleet/config-file-examples.md @@ -0,0 +1,14 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/config-file-examples.html
---

# Config file examples [config-file-examples]

These examples show a basic sample configuration to include in a standalone {{agent}} `elastic-agent.yml` [configuration file](/reference/ingestion-tools/fleet/structure-config-file.md) to gather data from various source types.

* [Apache HTTP Server](/reference/ingestion-tools/fleet/config-file-example-apache.md)
* [Nginx HTTP Server](/reference/ingestion-tools/fleet/config-file-example-nginx.md)



diff --git a/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md b/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md new file mode 100644 index 0000000000..36aa27da5a --- /dev/null +++ b/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md @@ -0,0 +1,63 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-configuration.html
---

# Configure standalone Elastic Agents [elastic-agent-configuration]

::::{tip}
To get started quickly, use {{kib}} to create and download a standalone policy file. You’ll still need to deploy and manage the file, though. For more information, refer to [Create a standalone {{agent}} policy](/reference/ingestion-tools/fleet/create-standalone-agent-policy.md) or try out our example: [Use standalone {{agent}} to monitor nginx](/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md).
::::


Standalone {{agent}}s are manually configured and managed locally on the systems where they are installed. They are useful when you are not interested in centrally managing agents in {{fleet}}, either due to your company’s security requirements or because you prefer to use another configuration management system.

To configure standalone {{agent}}s, specify settings in the `elastic-agent.yml` policy file deployed with the agent. Prior to installation, the file is located in the extracted {{agent}} package. After installation, the file is copied to the directory described in [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md). To apply changes after installation, you must modify the installed file.

For installation details, refer to [Install standalone {{agent}}s](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md).

Alternatively, you can put input configurations in YAML files in the `{path.config}/inputs.d` folder to separate your configuration into multiple smaller files (see the example fragment at the end of this page). The YAML files in the `inputs.d` folder should contain input configurations only. Any other configurations are ignored. The files are reloaded at the same time as the standalone configuration.

::::{tip}
The first line of the configuration must be `inputs`. Then you can list the inputs you would like to run. Each input in the policy must have a unique value for the `id` key. If the `id` key is missing, its value defaults to the empty string `""`.
::::


```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    data_stream.namespace: default
    paths: [/path/to/file]
    use_output: default

  - id: unique-system-metrics-id
    type: system/metrics
    data_stream.namespace: default
    use_output: default
    streams:
      - metricset: cpu
        data_stream.dataset: system.cpu
```

The following sections describe some settings you might need to configure to run an {{agent}} standalone. For a full reference example, refer to the [elastic-agent.reference.yml](/reference/ingestion-tools/fleet/elastic-agent-reference-yaml.md) file.

The settings described here are available for standalone {{agent}}s. Settings for {{fleet}}-managed agents are specified through the UI. You do not set them explicitly in a policy file.
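As a minimal sketch of the `inputs.d` mechanism described above (the file name `nginx.yml` and the ID value are arbitrary examples, not names mandated by {{agent}}):

```yaml
# {path.config}/inputs.d/nginx.yml (hypothetical file name)
# The first key must be `inputs`, and each input needs a unique `id`.
inputs:
  - id: unique-nginx-access-id
    type: logfile
    data_stream.namespace: default
    use_output: default
    paths:
      - /var/log/nginx/access.log
```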


diff --git a/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md b/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md new file mode 100644 index 0000000000..afbd0caf2f --- /dev/null +++ b/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md @@ -0,0 +1,113 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/configuring-kubernetes-metadata.html
---

# Configuring Kubernetes metadata enrichment on Elastic Agent [configuring-kubernetes-metadata]

Kubernetes [metadata](/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md#beats-metadata) refers to contextual information extracted from Kubernetes resources. Metadata enriches the metrics and logs collected from a Kubernetes cluster, enabling deeper insights into Kubernetes environments.

When the {{agent}}'s policy includes the [{{k8s}} Integration](integration-docs://docs/reference/kubernetes.md), which configures the collection of Kubernetes-related metrics and container logs, the mechanisms used for metadata enrichment are:

* [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) for log collection
* Kubernetes metadata enrichers for metrics

If the {{agent}}'s policy does not include the Kubernetes integration but {{agent}} runs inside a Kubernetes environment, the Kubernetes metadata is collected by the [add_kubernetes_metadata](/reference/ingestion-tools/fleet/add_kubernetes_metadata-processor.md) processor. The processor is configurable when {{agent}} is managed by {{fleet}}.


## Kubernetes Logs [_kubernetes_logs]

For container log collection, the [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) is used. It monitors pod resources in the cluster and associates each container log file with a corresponding pod’s container object. That way, when a log file is parsed and an event is ready to be published to {{es}}, the internal mechanism knows which container the log file belongs to. The linkage is established by the container’s ID, which is part of the log file name. The Kubernetes autodiscover provider has already collected all the metadata for that container, leveraging pod, namespace, and node watchers, so events are enriched with the relevant metadata.

To configure the metadata collection, configure the Kubernetes provider. All of the available configuration options of the **Kubernetes provider** can be found in the [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) documentation.

* For **Standalone {{agent}} configuration:**

Refer to the `add_resource_metadata` parameter in the [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) documentation.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
data:
  agent.yml: |-
    providers.kubernetes:
      add_resource_metadata:
        namespace:
          #use_regex_include: false
          include_labels: ["namespacelabel1"]
          #use_regex_exclude: false
          #exclude_labels: ["namespacelabel2"]
        node:
          #use_regex_include: false
          include_labels: ["nodelabel2"]
          include_annotations: ["nodeannotation1"]
          #use_regex_exclude: false
          #exclude_labels: ["nodelabel3"]
        #deployment: false
        #cronjob: false
```

* **Managed {{agent}} configuration**:

The Kubernetes provider can be configured following the steps in [Advanced {{agent}} configuration managed by {{fleet}}](/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md).


## Kubernetes metrics [_kubernetes_metrics]

The {{agent}} metrics collection implements metadata enrichment based on watchers, a mechanism used to continuously monitor Kubernetes resources for changes and updates. Specifically, the different datasets share a set of resource watchers. Those watchers (pod, node, namespace, deployment, daemonset, and so on) are responsible for watching for all resource events (creation, update, and deletion) by subscribing to the Kubernetes watch API. This enables real-time synchronization of application state with the state of the Kubernetes cluster, so the watchers maintain an up-to-date shared cache of all the resources' information and metadata. Whenever metrics are collected from the different sources (kubelet, kube-state-metrics), they are enriched with the needed metadata before they are published to {{es}} as events.

The metadata enrichment can be configured by editing the Kubernetes integration. **Only in metrics collection** can metadata enrichment be disabled, by switching off the `Add Metadata` toggle in every dataset. Extra resource metadata, such as node and namespace labels and annotations, as well as deployment and cronjob information, can be configured per dataset.

* **Managed {{agent}} configuration**:

:::{image} images/kubernetes_metadata.png
:alt: metadata configuration
:::

::::{note}
The `add_resource_metadata` block needs to be configured for all datasets that are enabled.
::::


* For **Standalone {{agent}} configuration**:

```yaml
[output truncated ...]
- data_stream:
    dataset: kubernetes.state_pod
    type: metrics
  metricsets:
    - state_pod
  add_metadata: true
  hosts:
    - 'kube-state-metrics:8080'
  period: 10s
  add_resource_metadata:
    namespace:
      enabled: true
      #use_regex_include: false
      include_labels: ["namespacelabel1"]
      #use_regex_exclude: false
      #exclude_labels: ["namespacelabel2"]
    node:
      enabled: true
      #use_regex_include: false
      include_labels: ["nodelabel2"]
      include_annotations: ["nodeannotation1"]
      #use_regex_exclude: false
      #exclude_labels: ["nodelabel3"]
    #deployment: false
    #cronjob: false
```

The `add_resource_metadata` block configures the watcher’s enrichment functionality. See [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) for a full description of `add_resource_metadata`. The same configuration parameters apply.


## Note [_note]

Although the `add_kubernetes_metadata` processor is enabled by default when using {{agent}}, it is skipped whenever the Kubernetes integration is detected.

diff --git a/reference/ingestion-tools/fleet/convert-processor.md b/reference/ingestion-tools/fleet/convert-processor.md new file mode 100644 index 0000000000..874e68845f --- /dev/null +++ b/reference/ingestion-tools/fleet/convert-processor.md @@ -0,0 +1,43 @@

---
navigation_title: "convert"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/convert-processor.html
---

# Convert field type [convert-processor]


The `convert` processor converts a field in the event to a different type, such as converting a string to an integer.

The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, and `ip`.

The `ip` type is effectively an alias for `string`, but with an added validation that the value is an IPv4 or IPv6 address.


## Example [_example_13]

```yaml
  - convert:
      fields:
        - {from: "src_ip", to: "source.ip", type: "ip"}
        - {from: "src_port", to: "source.port", type: "integer"}
      ignore_missing: true
      fail_on_error: false
```


## Configuration settings [_configuration_settings_16]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of fields to convert. The list must contain at least one item. Each item must have a `from` key that specifies the source field. The `to` key is optional and specifies where to assign the converted value. If `to` is omitted, the `from` field is updated in-place. The `type` key specifies the data type to convert the value to. If `type` is omitted, the processor copies or renames the field without any type conversion. |
| `ignore_missing` | No | `false` | Whether to ignore missing `from` keys. If `true` and the `from` key is not found in the event, the processor continues to the next field. If `false`, the processor returns an error and does not process the remaining fields. |
| `fail_on_error` | No | `true` | Whether to fail when a type conversion error occurs. If `false`, type conversion failures are ignored, and the processor continues to the next field. |
| `tag` | No | | Identifier for this processor. Useful for debugging. |
| `mode` | No | `copy` | When both `from` and `to` are defined for a field, `mode` controls whether to `copy` or `rename` the field when the type conversion is successful. |

diff --git a/reference/ingestion-tools/fleet/copy_fields-processor.md b/reference/ingestion-tools/fleet/copy_fields-processor.md new file mode 100644 index 0000000000..bc53d43eda --- /dev/null +++ b/reference/ingestion-tools/fleet/copy_fields-processor.md @@ -0,0 +1,52 @@

---
navigation_title: "copy_fields"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/copy_fields-processor.html
---

# Copy fields [copy_fields-processor]


The `copy_fields` processor takes the value of a field and copies it to a new field.

You cannot use this processor to replace an existing field.
If the target field already exists, you must [drop](/reference/ingestion-tools/fleet/drop_fields-processor.md) or [rename](/reference/ingestion-tools/fleet/rename-processor.md) the field before using `copy_fields`.


## Example [_example_14]

This configuration:

```yaml
  - copy_fields:
      fields:
        - from: message
          to: event.original
      fail_on_error: false
      ignore_missing: true
```

Copies the original `message` field to `event.original`:

```json
{
  "message": "my-interesting-message",
  "event": {
    "original": "my-interesting-message"
  }
}
```


## Configuration settings [_configuration_settings_17]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of `from` and `to` pairs to copy from and to. You can use the `@metadata.` prefix to copy values from or to event metadata. |
| `fail_on_error` | No | `true` | Whether to fail if an error occurs. If `true` and an error occurs, any changes are reverted, and the original is returned. If `false`, processing continues even if an error occurs. |
| `ignore_missing` | No | `false` | Whether to ignore events that lack the source field. If `false`, the processing of an event will fail if a field is missing. |

diff --git a/reference/ingestion-tools/fleet/create-policy-no-ui.md b/reference/ingestion-tools/fleet/create-policy-no-ui.md new file mode 100644 index 0000000000..3c2e5e0c00 --- /dev/null +++ b/reference/ingestion-tools/fleet/create-policy-no-ui.md @@ -0,0 +1,83 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/create-a-policy-no-ui.html
---

# Create an agent policy without using the UI [create-a-policy-no-ui]

For use cases where you want to provide a default agent policy or support automation, you can set up an agent policy without using the {{fleet}} UI. To do this, either use the {{fleet}} API or add a preconfigured policy to {{kib}}:


## Option 1. Create an agent policy with the API [use-api-to-create-policy]

```sh
curl -u <username>:<password> --request POST \
  --url <kibana-url>/api/fleet/agent_policies?sys_monitoring=true \
  --header 'content-type: application/json' \
  --header 'kbn-xsrf: true' \
  --data '{"name":"Agent policy 1","namespace":"default","monitoring_enabled":["logs","metrics"]}'
```

In this API call:

* `sys_monitoring=true` adds the system integration to the agent policy
* `monitoring_enabled` turns on {{agent}} monitoring

For more information, refer to [{{kib}} {{fleet}} APIs](/reference/ingestion-tools/fleet/fleet-api-docs.md).


## Option 2. Create agent policies with preconfiguration [use-preconfiguration-to-create-policy]

Add preconfigured policies to the `kibana.yml` config.

For example, the following adds a {{fleet-server}} policy for a self-managed setup:

```yaml
xpack.fleet.packages:
  - name: fleet_server
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server policy
    id: fleet-server-policy
    namespace: default
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server
```

The following example creates an agent policy for general use and customizes the `period` setting for the `system.core` data stream.
You can find all available inputs and variables in the **Integrations** app in {{kib}}.

```yaml
xpack.fleet.packages:
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
xpack.fleet.agentPolicies:
  - name: Agent policy 1
    id: agent-policy-1
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - package:
          name: system
        name: System Integration 1
        id: preconfigured-system-1
        inputs:
          system-system/metrics:
            enabled: true
            vars:
              '[system.hostfs]': home/test
            streams:
              '[system.core]':
                enabled: true
                vars:
                  period: 20s
          system-winlog:
            enabled: false
```

For more information about preconfiguration settings, refer to the [{{kib}} documentation](kibana://docs/reference/configuration-reference/fleet-settings.md).

diff --git a/reference/ingestion-tools/fleet/create-standalone-agent-policy.md b/reference/ingestion-tools/fleet/create-standalone-agent-policy.md new file mode 100644 index 0000000000..59fc486889 --- /dev/null +++ b/reference/ingestion-tools/fleet/create-standalone-agent-policy.md @@ -0,0 +1,74 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/create-standalone-agent-policy.html
---

# Create a standalone Elastic Agent policy

To get started quickly, use {{kib}} to add integrations to an agent policy, then download the policy to use as a starting point for your standalone {{agent}} policy. This approach saves time, is less error prone, and populates the policy with a lot of details that are tedious to add manually. Also, adding integrations in {{kib}} loads required assets, such as index templates and ingest pipelines, before you start your {{agent}}s.

::::{tip}
If you’re a {{fleet}} user and already have an agent policy you want to use in standalone mode, go to **{{fleet}} > Agents** and click **Add agent**. Follow the steps under **Run standalone** to download the policy file.
::::


You don’t need {{fleet}} to perform the following steps, but on self-managed clusters, API keys must be enabled in the {{es}} configuration (set `xpack.security.authc.api_key.enabled: true`).

## Create a standalone policy [create-standalone-policy]

1. From the main menu in {{kib}}, click **Add integrations**, and search for the {{agent}} integration you want to use. Read the description to make sure the integration works with {{agent}}.
2. Click the integration to see more details about it, then click **Add <integration name>**.

    :::{image} images/add-integration-standalone.png
    :alt: Add Nginx integration screen with agent policy selected
    :class: screenshot
    :::

    ::::{note}
    If you’re adding your first integration and no {{agent}}s are installed, {{kib}} may display a page that walks you through configuring the integration and installing {{agent}}. If you see this page, click **Install {{agent}}**, then click the **standalone mode** link. Follow the in-product instructions instead of the steps described here.
    ::::

3. Under **Configure integration**, enter a name and description for the integration.
4. Click the down arrow next to enabled streams and make sure the settings are correct for your host.
5. Under **Apply to agent policy**, select an existing policy, or click **Create agent policy** and create a new one.
6. When you’re done, save and continue.

    A popup window gives you the option to add {{agent}} to your hosts.

    :::{image} images/add-agent-to-hosts.png
    :alt: Popup window showing the option to add {{agent}} to your hosts
    :class: screenshot
    :::

7. (Optional) To add more integrations to the agent policy, click **Add {{agent}} later** and go back to the **Integrations** page. Repeat the previous steps for each integration.
8. When you’re done adding integrations, in the popup window, click **Add {{agent}} to your hosts** to open the **Add agent** flyout.
9. Click **Run standalone** and follow the in-product instructions to download {{agent}} (if you haven’t already).
10. Click **Download Policy** to download the policy file.

    :::{image} images/download-agent-policy.png
    :alt: Add data screen with option to download the default agent policy
    :class: screenshot
    :::


The downloaded policy already contains a default {{es}} address and port for your setup. You may need to change them if you use a proxy or load balancer. Modify the policy as required, making sure that you provide credentials for connecting to {{es}}.

If you need to add integrations to the policy *after* deploying it, you’ll need to run through these steps again and re-deploy the updated policy to the host where {{agent}} is running.

For detailed information about starting the agent, including the permissions needed for the {{es}} user, refer to [Install standalone {{agent}}s](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md).


## Upgrade standalone agent policies after upgrading an integration [update-standalone-policies]

Because standalone agents are not managed by {{fleet}}, they are unable to upgrade to new integration package versions automatically. When you upgrade an integration in {{kib}} (or it gets upgraded automatically), you’ll need to update the standalone policy to use new features and capabilities.

You’ll also need to update the standalone policy if you want to add new integrations.

To update your standalone policy, use the same steps you used to create and download the original policy file:

1. Follow the steps under [Create a standalone {{agent}} policy](#create-standalone-policy) to create and download a new policy, then compare the new policy file to the old one.
2. Either use the new policy and apply your customizations to it, or update your old policy to include changes, such as field changes, added by the upgrade.

::::{important}
Make sure you update the standalone agent policy in the directory where {{agent}} is running, not the directory where you downloaded the installation package. Refer to [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md) for the location of installed {{agent}} files.
+:::: diff --git a/reference/ingestion-tools/fleet/data-streams-advanced-features.md b/reference/ingestion-tools/fleet/data-streams-advanced-features.md new file mode 100644 index 0000000000..a4dd429e00 --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-advanced-features.md @@ -0,0 +1,203 @@ +--- +navigation_title: "Advanced data stream features" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/data-streams-advanced-features.html +--- + +# Enabling and disabling advanced indexing features for {{fleet}}-managed data streams [data-streams-advanced-features] + + +{{fleet}} provides support for several advanced features around its data streams, including: + +* [Time series data streams (TSDS)](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md) +* [Synthetic `_source`](elasticsearch://docs/reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source) + +These features can be enabled and disabled for {{fleet}}-managed data streams by using the index template API and a few key settings. Note that in versions 8.17.0 and later, Synthetic `_source` requires an Enterprise license. + +::::{note} +If you are already making use of `@custom` component templates for ingest or retention customization (as shown for example in [Tutorial: Customize data retention policies](/reference/ingestion-tools/fleet/data-streams-ilm-tutorial.md)), exercise care to ensure you don’t overwrite your customizations when making these requests. +:::: + + +We recommended using [{{kib}} Dev Tools](/explore-analyze/query-filter/tools.md) to run the following requests. Replace `` with the name of a given integration data stream. For example specifying `metrics-nginx.stubstatus` results in making a PUT request to `_component_template/metrics-nginx.stubstatus@custom`. Use the index management interface to explore what integration data streams are available to you. + +Once you’ve executed a given request below, you also need to execute a data stream rollover to ensure any incoming data is ingested with your new settings immediately. For example: + +```sh +POST metrics-nginx.stubstatus-default/_rollover +``` + +Refer to the following steps to enable or disable advanced data stream features: + +* [Disable synthetic `_source`](#data-streams-advanced-synthetic-disable) + + +## Enable TSDS [data-streams-advanced-tsds-enable] + +::::{note} +TSDS uses synthetic `_source`, so if you want to trial both features you need to enable only TSDS. +:::: + + +Due to restrictions in the {{es}} API, TSDS must be enabled at the **index template** level. So, you’ll need to make some sequential requests to enable or disable TSDS. + +1. Send a GET request to retrieve the index template: + + ```json + GET _index_template/ + ``` + +2. 
2. Use the JSON payload returned from the GET request to populate a PUT request, for example:

    ```json
    PUT _index_template/<name>
    {
      # You can copy & paste this directly from the GET request above
      "index_patterns": [
        "<name>-*"
      ],

      # Make sure this is added
      "template": {
        "settings": {
          "index": {
            "mode": "time_series"
          }
        }
      },

      # You can copy & paste this directly from the GET request above
      "composed_of": [
        "<name>@package",
        "<name>@custom",
        ".fleet_globals-1",
        ".fleet_agent_id_verification-1"
      ],

      # You can copy & paste this directly from the GET request above
      "priority": 200,

      # Make sure this is added
      "data_stream": {
        "allow_custom_routing": false
      }
    }
    ```



## Disable TSDS [data-streams-advanced-tsds-disable]

To disable TSDS, follow the same procedure as to [enable TSDS](#data-streams-advanced-tsds-enable), but specify `null` for `index.mode` instead of `time_series`. Follow the steps below or you can copy the [NGINX example](#data-streams-advanced-tsds-disable-nginx-example).

1. Send a GET request to retrieve the index template:

    ```json
    GET _index_template/<name>
    ```

2. Use the JSON payload returned from the GET request to populate a PUT request, for example:

    ```json
    PUT _index_template/<name>
    {
      # You can copy/paste this directly from the GET request above
      "index_patterns": [
        "<name>-*"
      ],

      # Make sure this is added
      "template": {
        "settings": {
          "index": {
            "mode": null
          }
        }
      },

      # You can copy/paste this directly from the GET request above
      "composed_of": [
        "<name>@package",
        "<name>@custom",
        ".fleet_globals-1",
        ".fleet_agent_id_verification-1"
      ],

      # You can copy/paste this directly from the GET request above
      "priority": 200,

      # Make sure this is added
      "data_stream": {
        "allow_custom_routing": false
      }
    }
    ```

    For example, the following payload disables TSDS on `nginx.stubstatus`:

    $$$data-streams-advanced-tsds-disable-nginx-example$$$

    ```json
    {
      "index_patterns": [
        "metrics-nginx.stubstatus-*"
      ],

      "template": {
        "settings": {
          "index": {
            "mode": null
          }
        }
      },

      "composed_of": [
        "metrics-nginx.stubstatus@package",
        "metrics-nginx.stubstatus@custom",
        ".fleet_globals-1",
        ".fleet_agent_id_verification-1"
      ],

      "priority": 200,

      "data_stream": {
        "allow_custom_routing": false
      }
    }
    ```



## Enable synthetic `_source` [data-streams-advanced-synthetic-enable]

```json
PUT _component_template/<name>@custom
{
  "settings": {
    "index": {
      "mapping": {
        "source": {
          "mode": "synthetic"
        }
      }
    }
  }
}
```


## Disable synthetic `_source` [data-streams-advanced-synthetic-disable]

```json
PUT _component_template/<name>@custom
{
  "settings": {
    "index": {
      "mapping": {
        "source": {"mode": "stored"}
      }
    }
  }
}
```

diff --git a/reference/ingestion-tools/fleet/data-streams-ilm-tutorial.md b/reference/ingestion-tools/fleet/data-streams-ilm-tutorial.md new file mode 100644 index 0000000000..adeb76b45d --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-ilm-tutorial.md @@ -0,0 +1,26 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/data-streams-ilm-tutorial.html
---

# Tutorials: Customize data retention policies [data-streams-ilm-tutorial]

These tutorials explain how to apply a custom {{ilm-init}} policy to an integration’s data stream.


## Before you begin [data-streams-general-info]

For certain features you’ll need to use a slightly different procedure to manage the index lifecycle:

* APM: For versions 8.15 and later, refer to [Index lifecycle management](/solutions/observability/apps/index-lifecycle-management.md).
* Synthetic monitoring: Refer to [Manage data retention](/solutions/observability/apps/manage-data-retention.md).
* Universal Profiling: Refer to [Universal Profiling index life cycle management](/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md).


## Identify your scenario [data-streams-scenarios]

How you apply an ILM policy depends on your use case. Choose a scenario for the detailed steps.

* **[Scenario 1](/reference/ingestion-tools/fleet/data-streams-scenario1.md)**: You want to apply an ILM policy to all logs or metrics data streams across all namespaces.
* **[Scenario 2](/reference/ingestion-tools/fleet/data-streams-scenario2.md)**: You want to apply an ILM policy to selected data streams in an integration.
* **[Scenario 3](/reference/ingestion-tools/fleet/data-streams-scenario3.md)**: You want to apply an ILM policy for data streams in a selected namespace in an integration.

diff --git a/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md b/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md new file mode 100644 index 0000000000..85e47e4744 --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md @@ -0,0 +1,209 @@

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/data-streams-pipeline-tutorial.html
---

# Tutorial: Transform data with custom ingest pipelines [data-streams-pipeline-tutorial]

This tutorial explains how to add a custom ingest pipeline to an Elastic integration. Custom pipelines can be used to add custom data processing, such as adding fields, obfuscating sensitive information, and more.

**Scenario:** You have {{agent}}s collecting system metrics with the System integration.

**Goal:** Add a custom ingest pipeline that adds a new field to each {{es}} document before it is indexed.


## Step 1: Create a custom ingest pipeline [data-streams-pipeline-one]

Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents.

1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines** → **Create pipeline** → **New pipeline**.
2. Name your pipeline. We’ll call this one `add_field`.
3. Select **Add a processor**. Fill out the following information:

    * Processor: "Set"
    * Field: `test`
    * Value: `true`

    The [Set processor](elasticsearch://docs/reference/ingestion-tools/enrich-processor/set-processor.md) sets a document field and associates it with the specified value.

4. Click **Add**.
5. Click **Create pipeline**.


## Step 2: Apply your ingest pipeline [data-streams-pipeline-two]

Add a custom pipeline to an integration by calling it from the default ingest pipeline. The custom pipeline will run after the default pipeline but before the final pipeline.


### Edit integration [_edit_integration]

Add a custom pipeline to an integration from the **Edit integration** workflow. The integration must already be configured and installed before a custom pipeline can be added. To enter this workflow, do the following:

1. Navigate to **{{fleet}}**
2. Select the relevant {{agent}} policy
3. Search for the integration you want to edit
4. Select **Actions** → **Edit integration**


### Select a data stream [_select_a_data_stream]

Most integrations write to multiple data streams. You’ll need to add the custom pipeline to each data stream individually.

1. Find the first data stream you wish to edit and select **Change defaults**. For this tutorial, find the data stream configuration titled **Collect metrics from System instances**.
2. Scroll to **System CPU metrics** and under **Advanced options** select **Add custom pipeline**.

    This will take you to the **Create pipeline** workflow in **Stack Management**.


### Add the pipeline [_add_the_pipeline]

Add the pipeline you created in step one.

1. Select **Add a processor**. Fill out the following information:

    * Processor: "Pipeline"
    * Pipeline name: "add_field"

2. Click **Create pipeline** to return to the **Edit integration** page.


### Roll over the data stream (optional) [_roll_over_the_data_stream_optional]

For pipeline changes to take effect immediately, you must roll over the data stream. If you do not, the changes will not take effect until the next scheduled roll over. Select **Apply now and rollover**.

After the data stream rolls over, note the name of the custom ingest pipeline. In this tutorial, it’s `metrics-system.cpu@custom`. The name follows the pattern `<type>-<dataset>@custom`:

* type: `metrics`
* dataset: `system.cpu`
* Custom ingest pipeline designation: `@custom`


### Repeat [_repeat]

Add the custom ingest pipeline to any other data streams you wish to update.


## Step 3: Test the ingest pipeline (optional) [data-streams-pipeline-three]

Allow time for new data to be ingested before testing your pipeline. In a new window, open {{kib}} and navigate to **{{kib}} Dev tools**.

Use an [exists query](elasticsearch://docs/reference/query-languages/query-dsl-exists-query.md) to ensure that the new field, "test", is being applied to documents.

```console
GET metrics-system.cpu-default/_search <1>
{
  "query": {
    "exists": {
      "field": "test" <2>
    }
  }
}
```

1. The data stream to search. In this tutorial, we’ve edited the `metrics-system.cpu` type and dataset. `default` is the default namespace. Combining all three of these gives us a data stream name of `metrics-system.cpu-default`.
2. The name of the field set in step one.


If your custom pipeline is working correctly, this query will return at least one document.


## Step 4: Add custom mappings [data-streams-pipeline-four]

Now that a new field is being set in your {{es}} documents, you’ll want to assign a new mapping for that field. Use the `@custom` component template to apply custom mappings to an integration data stream.

In the **Edit integration** workflow, do the following:

1. Under **Advanced options** select the pencil icon to edit the `@custom` component template.
2. Define the new field for your indexed documents. Select **Add field** and add the following information:

    * Field name: `test`
    * Field type: `Boolean`

3. Click **Add field**.
4. Click **Review** to fast-forward to the review step and click **Save component template** to return to the **Edit integration** workflow.
5. For changes to take effect immediately, select **Apply now and rollover**.


## Step 5: Test the custom mappings (optional) [data-streams-pipeline-five]

Allow time for new data to be ingested before testing your mappings.
In a new window, open {{kib}} and navigate to **{{kib}} Dev tools**.

Use the [Get field mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-mapping) to ensure that the custom mapping has been applied.

```console
GET metrics-system.cpu-default/_mapping/field/test <1>
```

1. The data stream to search. In this tutorial, we’ve edited the `metrics-system.cpu` type and dataset. `default` is the default namespace. Combining all three of these gives us a data stream name of `metrics-system.cpu-default`.


The result should include `type: "boolean"` for the specified field.

```json
".ds-metrics-system.cpu-default-2022.08.10-000002": {
  "mappings": {
    "test": {
      "full_name": "test",
      "mapping": {
        "test": {
          "type": "boolean"
        }
      }
    }
  }
}
```


## Step 6: Add an ingest pipeline for a data type [data-streams-pipeline-six]

The previous steps demonstrated how to create a custom ingest pipeline that adds a new field to each {{es}} document generated for the Systems integration CPU metrics (`system.cpu`) dataset.

You can create an ingest pipeline to process data at various levels of customization. An ingest pipeline processor can be applied:

* Globally to all events
* To all events of a certain type (for example `logs` or `metrics`)
* To all events of a certain type in an integration
* To all events in a specific dataset

Let’s create a new custom ingest pipeline `logs@custom` that processes all log events.

1. Open {{kib}} and navigate to **{{kib}} Dev tools**.
2. Run a [pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) request to add a new field `my-logs-field`:

    ```console
    PUT _ingest/pipeline/logs@custom
    {
      "processors": [
        {
          "set": {
            "description": "Custom field for all log events",
            "field": "my-logs-field",
            "value": "true"
          }
        }
      ]
    }
    ```

3. Allow some time for new data to be ingested, and then use a new [exists query](elasticsearch://docs/reference/query-languages/query-dsl-exists-query.md) to confirm that the new field "my-logs-field" is being applied to log event documents.

    For this example, we’ll check the System integration `system.syslog` dataset:

    ```console
    GET /logs-system.syslog-default/_search?pretty
    {
      "query": {
        "exists": {
          "field": "my-logs-field"
        }
      }
    }
    ```


With the new pipeline applied, this query should return at least one document.

You can modify your pipeline API request as needed to apply custom processing at various levels. Refer to [Ingest pipelines](/reference/ingestion-tools/fleet/data-streams.md#data-streams-pipelines) to learn more.

diff --git a/reference/ingestion-tools/fleet/data-streams-scenario1.md b/reference/ingestion-tools/fleet/data-streams-scenario1.md new file mode 100644 index 0000000000..38bfa66a8f --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-scenario1.md @@ -0,0 +1,87 @@

---
navigation_title: "Scenario 1"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/data-streams-scenario1.html
---

# Scenario 1: Apply an ILM policy to all data streams generated from Fleet integrations across all namespaces [data-streams-scenario1]


::::{note}
This tutorial uses a `logs@custom` and a `metrics@custom` component template, which are available in versions 8.13 and later. For versions later than 8.4 and earlier than 8.13, you instead need to use the `@custom` component template and add the ILM policy to that template.
This needs to be done for every newly added integration. +:::: + + +Mappings and settings for data streams can be customized through the creation of `*@custom` component templates, which are referenced by the index templates created by each integration. The easiest way to configure a custom index lifecycle policy per data stream is to edit this template. + +This tutorial explains how to apply a custom index lifecycle policy to all of the data streams associated with the `System` integration, as an example. Similar steps can be used for any other integration. Setting a custom index lifecycle policy must be done separately for all logs and for all metrics, as described in the following steps. + + +## Step 1: Create an index lifecycle policy [data-streams-scenario1-step1] + +1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +2. Click **Create policy**. + +Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**. + + +## Step 2: Create a component template for the `logs` index templates [data-streams-scenario1-step2] + +The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices: + +1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +2. Select **Index Templates**. +3. Search for `system` to see all index templates associated with the System integration. +4. Select any `logs-*` index template to view the associated component templates. For example, you can select the `logs-system.application` index template. + + :::{image} images/component-templates-list.png + :alt: List of component templates available for the index template + :class: screenshot + ::: + +5. Select `logs@custom` in the list to view the component template properties. +6. For a newly added integration, the component template won’t exist yet. Select **Create component template** to create it. If the component template already exists, click **Manage** to update it. +7. On the **Logistics** page, keep all defaults and click **Next**. +8. On the **Index settings** page, in the **Index settings** field, specify the ILM policy that you created. For example: + + ```json + { + "index": { + "lifecycle": { + "name": "my-ilm-policy" + } + } + } + ``` + +9. Click **Next**. +10. For both the **Mappings** and **Aliases** pages, keep all defaults and click **Next**. +11. Finally, on the **Review** page, review the summary and request. If everything looks good, select **Create component template**. + + :::{image} images/review-component-template01.png + :alt: Review details for the new component template + :class: screenshot + ::: + + + +## Step 3: Roll over the data streams (optional) [data-streams-scenario1-step3] + +To confirm that the index template is using the `logs@custom` component template with your custom ILM policy: + +1. Reopen the **Index Management** page and open the **Component Templates** tab. +2. Search for `logs@` and select the `logs@custom` component template. +3. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy. 


New ILM policies only take effect when new indices are created, so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of each data stream using the [{{es}} rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover).

For example:

```bash
POST /logs-system.auth/_rollover/
```


## Step 4: Repeat these steps for the metrics data streams [data-streams-scenario1-step4]

You’ve now applied a custom index lifecycle policy to all of the `logs-*` data streams in the `System` integration. For the metrics data streams, you can repeat steps 2 and 3, using a `metrics-*` index template and the `metrics@custom` component template.

diff --git a/reference/ingestion-tools/fleet/data-streams-scenario2.md b/reference/ingestion-tools/fleet/data-streams-scenario2.md new file mode 100644 index 0000000000..91250430ec --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-scenario2.md @@ -0,0 +1,81 @@

---
navigation_title: "Scenario 2"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/data-streams-scenario2.html
---

# Scenario 2: Apply an ILM policy to specific data streams generated from Fleet integrations across all namespaces [data-streams-scenario2]


Mappings and settings for data streams can be customized through the creation of `*@custom` component templates, which are referenced by the index templates created by each integration. The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.

This tutorial explains how to apply a custom index lifecycle policy to the `logs-system.auth` data stream.


## Step 1: Create an index lifecycle policy [data-streams-scenario2-step1]

1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
2. Click **Create policy**.

Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**.


## Step 2: View index templates [data-streams-scenario2-step2]

The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:

1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
2. Select **Index Templates**.
3. Search for `system` to see all index templates associated with the System integration.
4. Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the `logs-system.auth` index template.

    :::{image} images/index-template-system-auth.png
    :alt: List of component templates available for the logs-system.auth index template
    :class: screenshot
    :::

5. In the **Summary**, select `logs-system.auth@custom` from the list to view the component template properties.
6. For a newly added integration, the component template won’t exist yet. Select **Create component template** to create it. If the component template already exists, click **Manage** to update it.

    1. On the **Logistics** page, keep all defaults and click **Next**.
    2. On the **Index settings** page, in the **Index settings** field, specify the ILM policy that you created. For example:
        ```json
        {
          "index": {
            "lifecycle": {
              "name": "my-ilm-policy"
            }
          }
        }
        ```

    3. Click **Next**.
    4. For both the **Mappings** and **Aliases** pages, keep all defaults and click **Next**.
    5. Finally, on the **Review** page, review the summary and request. If everything looks good, select **Create component template**.

        :::{image} images/review-component-template02.png
        :alt: Review details for the new component template
        :class: screenshot
        :::



## Step 3: Roll over the data streams (optional) [data-streams-scenario2-step3]

To confirm that the index template is using the `logs-system.auth@custom` component template with your custom ILM policy:

1. Reopen the **Index Management** page and open the **Component Templates** tab.
2. Search for `system` and select the `logs-system.auth@custom` component template.
3. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy.

New ILM policies only take effect when new indices are created, so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of the data stream using the [{{es}} rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover):

```bash
POST /logs-system.auth/_rollover/
```


## Step 4: Repeat these steps for other data streams [data-streams-scenario2-step4]

You’ve now applied a custom index lifecycle policy to the `logs-system.auth` data stream in the `System` integration. Repeat these steps for any other data streams for which you’d like to configure a custom ILM policy.

diff --git a/reference/ingestion-tools/fleet/data-streams-scenario3.md b/reference/ingestion-tools/fleet/data-streams-scenario3.md new file mode 100644 index 0000000000..f0d5ed9ea2 --- /dev/null +++ b/reference/ingestion-tools/fleet/data-streams-scenario3.md @@ -0,0 +1,152 @@

---
navigation_title: "Scenario 3"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/data-streams-scenario3.html
---

# Scenario 3: Apply an ILM policy with integrations using multiple namespaces [data-streams-scenario3]


In this scenario, you have {{agent}}s collecting system metrics with the System integration in two environments: one with the namespace `development`, and one with `production`.

**Goal:** Customize the {{ilm-init}} policy for the `system.network` data stream in the `production` namespace. Specifically, apply the built-in `90-days-default` {{ilm-init}} policy so that data is deleted after 90 days.

::::{note}
* This scenario involves cloning an index template. We strongly recommend repeating this procedure on every minor {{stack}} upgrade in order to avoid missing any possible changes to the structure of the managed index template(s) that are shipped with integrations.
* If you cloned an index template to customize the data retention policy on an {{es}} version prior to 8.13, you must update the index template in the clone to use the `ecs@mappings` component template on {{es}} version 8.13 or later. See [Update index template cloned before {{es}} 8.13](#data-streams-pipeline-update-cloned-template-before-8.13) for the step-by-step instructions.

::::



## Step 1: View data streams [data-streams-ilm-one]

The **Data Streams** view in {{kib}} shows you the data streams, index templates, and {{ilm-init}} policies associated with a given integration.

1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Data Streams**.
2. Search for `system` to see all data streams associated with the System integration.
3. Select the `metrics-system.network-{{namespace}}` data stream to view its associated index template and {{ilm-init}} policy. As you can see, the data stream follows the [Data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme) and starts with its type, `metrics-`.

    :::{image} images/data-stream-info.png
    :alt: Data streams info
    :class: screenshot
    :::



## Step 2: Create a component template [data-streams-ilm-two]

For your changes to continue to be applied in future versions, you must put all custom index settings into a component template. The component template must follow the data stream naming scheme, and end with `@custom`:

```text
<type>-<dataset>-<namespace>@custom
```

For example, to create custom index settings for the `system.network` data stream with a namespace of `production`, the component template name would be:

```text
metrics-system.network-production@custom
```

1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates**
2. Click **Create component template**.
3. Use the template above to set the name, in this case `metrics-system.network-production@custom`. Click **Next**.
4. Under **Index settings**, set the {{ilm-init}} policy name under the `lifecycle.name` key:

    ```json
    {
      "lifecycle": {
        "name": "90-days-default"
      }
    }
    ```

5. Continue to **Review** and ensure your request looks similar to the image below. If it does, click **Create component template**.

    :::{image} images/create-component-template.png
    :alt: Create component template
    :class: screenshot
    :::



## Step 3: Clone and modify the existing index template [data-streams-ilm-three]

Now that you’ve created a component template, you need to create an index template to apply the changes to the correct data stream. The easiest way to do this is to duplicate and modify the integration’s existing index template.

::::{warning}
Please note the following:

* When duplicating the index template, do not change or remove any managed properties. This may result in problems when upgrading.
* These steps assume that you want to have a namespace-specific ILM policy, which requires index template cloning. Cloning the index template of an integration package involves some risk because any changes made to the original index template as part of package upgrades are not propagated to the cloned version. See [Cloning the index template of an integration package](/reference/ingestion-tools/fleet/integrations-assets-best-practices.md#assets-restrictions-cloning-index-template) for details.

If you want to change the ILM policy, the number of shards, or other settings for the data streams of one or more integrations, but **the changes do not need to be specific to a given namespace**, it’s strongly recommended to use a `@custom` component template, as described in [Scenario 1](/reference/ingestion-tools/fleet/data-streams-scenario1.md) and [Scenario 2](/reference/ingestion-tools/fleet/data-streams-scenario2.md), so as to avoid the problems mentioned above. See the [ILM](/reference/ingestion-tools/fleet/data-streams.md#data-streams-ilm) section for details.
::::
+1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**.
+2. Find the index template you want to clone. The index template will have the `<type>` and `<dataset>` in its name, but not the `<namespace>`. In this case, it’s `metrics-system.network`.
+3. Select **Actions** > **Clone**.
+4. Set the name of the new index template to `metrics-system.network-production`.
+5. Change the index pattern to include a namespace, in this case `metrics-system.network-production*`. This ensures the previously created component template is only applied to the `production` namespace.
+6. Set the priority to `250`. This ensures that the new index template takes precedence over other index templates that match the index pattern.
+7. Under **Component templates**, search for and add the component template created in the previous step. To ensure your namespace-specific settings are applied over other custom settings, the new template should be added below the existing `@custom` template.
+8. Create the index template.
+
+:::{image} images/create-index-template.png
+:alt: Create index template
+:class: screenshot
+:::
+
+
+## Step 4: Roll over the data stream (optional) [data-streams-ilm-four]
+
+To confirm that the data stream is now using the new index template and {{ilm-init}} policy, you can either repeat Step 1, or navigate to **{{dev-tools-app}}** and run the following:
+
+```bash
+GET /_data_stream/metrics-system.network-production <1>
+```
+
+1. The name of the data stream we’ve been working with
+
+
+The result should include the following:
+
+```json
+{
+  "data_streams" : [
+    {
+      ...
+      "template" : "metrics-system.network-production", <1>
+      "ilm_policy" : "90-days-default", <2>
+      ...
+    }
+  ]
+}
+```
+
+1. The name of the custom index template created in step three
+2. The name of the {{ilm-init}} policy applied to the new component template in step two
+
+
+New {{ilm-init}} policies only take effect when new indices are created, so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover using the [{{es}} rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover):
+
+```bash
+POST /metrics-system.network-production/_rollover/
+```
+
+
+## Update index template cloned before {{es}} 8.13 [data-streams-pipeline-update-cloned-template-before-8.13]
+
+If you cloned an index template to customize the data retention policy on an {{es}} version prior to 8.13, you must update the cloned index template to add the `ecs@mappings` component template on {{es}} version 8.13 or later.
+
+To update the cloned index template:
+
+1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**.
+2. Find the index template you cloned. The index template will have the `<type>` and `<dataset>` in its name.
+3. Select **Manage** > **Edit**.
+4. Select **(2) Component templates**.
+5. In the **Search component templates** field, search for `ecs@mappings`.
+6. Click on the **+ (plus)** icon to add the `ecs@mappings` component template.
+7. Move the `ecs@mappings` component template right below the `@package` component template.
+8. Save the index template.
+
+Roll over the data stream to apply the changes.
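+
+A minimal sketch, assuming the same `metrics-system.network-production` data stream used throughout this scenario; run the following from **{{dev-tools-app}}** to force the rollover:
+
+```bash
+POST /metrics-system.network-production/_rollover/
+```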
diff --git a/reference/ingestion-tools/fleet/data-streams.md b/reference/ingestion-tools/fleet/data-streams.md
new file mode 100644
index 0000000000..d3f73b6d84
--- /dev/null
+++ b/reference/ingestion-tools/fleet/data-streams.md
@@ -0,0 +1,258 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/data-streams.html
+---
+
+# Data streams [data-streams]
+
+{{agent}} uses data streams to store time series data across multiple indices while giving you a single named resource for requests. Data streams are well-suited for logs, metrics, traces, and other continuously generated data. They offer a host of benefits over other indexing strategies:
+
+* **Reduced number of fields per index**: Indices only need to store a specific subset of your data, meaning no more indices with hundreds of thousands of fields. This leads to better space efficiency and faster queries. As an added bonus, only relevant fields are shown in Discover.
+* **More granular data control**: For example, file system, load, CPU, network, and process metrics are sent to different indices, each potentially with its own rollover, retention, and security permissions.
+* **Flexible**: Use the custom namespace component to divide and organize data in a way that makes sense to your use case or company.
+* **Fewer ingest permissions required**: Data ingestion only requires permissions to append data.
+
+
+## Data stream naming scheme [data-streams-naming-scheme]
+
+{{agent}} uses the Elastic data stream naming scheme to name data streams. The naming scheme splits data into different streams based on the following components:
+
+`type`
+:   A generic `type` describing the data, such as `logs`, `metrics`, `traces`, or `synthetics`.
+
+`dataset`
+:   The `dataset` is defined by the integration and describes the ingested data and its structure for each index. For example, you might have a dataset for process metrics with a field describing whether the process is running or not, and another dataset for disk I/O metrics with a field describing the number of bytes read.
+
+`namespace`
+:   A user-configurable arbitrary grouping, such as an environment (`dev`, `prod`, or `qa`), a team, or a strategic business unit. A `namespace` can be up to 100 bytes in length (multibyte characters will count toward this limit faster). Using a namespace makes it easier to search data from a given source by using a matching pattern, as shown in the example after this list. You can also use matching patterns to give users access to data when creating user roles.
+
+    By default the namespace defined for an {{agent}} policy is propagated to all integrations in that policy. If you’d like to define a more granular namespace for a policy:
+
+    1. In {{kib}}, go to **Integrations**.
+    2. On the **Installed integrations** tab, select the integration that you’d like to update.
+    3. Open the **Integration policies** tab.
+    4. From the **Actions** menu next to the integration, select **Edit integration**.
+    5. Open the advanced options and update the **Namespace** field. Data streams from the integration will now use the specified namespace rather than the default namespace inherited from the {{agent}} policy.
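+
+As a quick sketch of such a matching pattern (assuming a `production` namespace; the data stream names in the comment are illustrative), a single wildcard search targets every log data stream in that namespace:
+
+```console
+# Searches all log data streams in the `production` namespace,
+# for example logs-nginx.access-production and logs-system.auth-production
+GET logs-*-production/_search
+```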
+
+
+The naming scheme separates each component with a `-` character:
+
+```text
+<type>-<dataset>-<namespace>
+```
+
+For example, if you’ve set up the Nginx integration with a namespace of `prod`, {{agent}} uses the `logs` type, `nginx.access` dataset, and `prod` namespace to store data in the following data stream:
+
+```text
+logs-nginx.access-prod
+```
+
+Alternatively, if you use the APM integration with a namespace of `dev`, {{agent}} stores data in the following data stream:
+
+```text
+traces-apm-dev
+```
+
+All data streams, and the pre-built dashboards that they ship with, are viewable on the {{fleet}} Data Streams page:
+
+:::{image} images/kibana-fleet-datastreams.png
+:alt: Data streams page
+:class: screenshot
+:::
+
+::::{tip}
+If you’re familiar with the concept of indices, you can think of each data stream as a separate index in {{es}}. Under the hood though, things are a bit more complex. All of the juicy details are available in [{{es}} Data streams](/manage-data/data-store/data-streams.md).
+::::
+
+
+
+## {{data-sources-cap}} [data-streams-data-view]
+
+When searching your data in {{kib}}, you can use a [{{data-source}}](/explore-analyze/find-and-organize/data-views.md) to search across all or some of your data streams.
+
+
+## Index templates [data-streams-index-templates]
+
+An index template is a way to tell {{es}} how to configure an index when it is created. For data streams, the index template configures the stream’s backing indices as they are created.
+
+{{es}} provides the following built-in, ECS-based templates: `logs-*-*`, `metrics-*-*`, and `synthetics-*-*`. {{agent}} integrations can also provide dataset-specific index templates, like `logs-nginx.access-*`. These templates are loaded when the integration is installed, and are used to configure the integration’s data streams.
+
+
+### Edit the {{es}} index template [data-streams-index-templates-edit]
+
+::::{warning}
+Custom index mappings may conflict with the mappings defined by the integration and may break the integration in {{kib}}. Do not change or customize any default mappings.
+::::
+
+
+When you install an integration, {{fleet}} creates two default `@custom` component templates:
+
+* A `@custom` component template allowing customization across all documents of a given data stream type, named following the pattern: `<type>@custom`.
+* A `@custom` component template for each data stream, named following the pattern: `<type>-<dataset>@custom`.
+
+The `@custom` component template specific to a data stream takes precedence over the data stream type `@custom` component template.
+
+You can edit a `@custom` component template to customize your {{es}} indices:
+
+1. Open {{kib}} and navigate to **{{stack-manage-app}}** > **Index Management** > **Data Streams**.
+2. Find and click the name of the integration data stream, such as `logs-cisco_ise.log-default`.
+3. Click the index template link for the data stream to see the list of associated component templates.
+4. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates**.
+5. Search for the name of the data stream’s custom component template and click the edit icon.
+6. Add any custom index settings, metadata, or mappings. For example, you may want to:
+
+    * Customize the index lifecycle policy applied to a data stream. See [Configure a custom index lifecycle policy](/solutions/observability/apps/index-lifecycle-management.md#apm-data-streams-custom-policy) in the APM Guide for a walk-through.
+
+      Specify lifecycle name in the **index settings**:
+
+      ```json
+      {
+        "index": {
+          "lifecycle": {
+            "name": "my_policy"
+          }
+        }
+      }
+      ```
+
+    * Change the number of [replicas](/deploy-manage/distributed-architecture/reading-and-writing-documents.md) per index. Specify the number of replica shards in the **index settings**:
+
+      ```json
+      {
+        "index": {
+          "number_of_replicas": "2"
+        }
+      }
+      ```
+
+
+Changes to component templates are not applied retroactively to existing indices. For changes to take effect, you must create a new write index for the data stream. You can do this with the {{es}} [Rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover).
+
+
+## Index lifecycle management ({{ilm-init}}) [data-streams-ilm]
+
+Use the [index lifecycle management](/manage-data/lifecycle/index-lifecycle-management.md) ({{ilm-init}}) feature in {{es}} to manage your {{agent}} data stream indices as they age. For example, create a new index after a certain period of time, or delete stale indices to enforce data retention standards.
+
+Installed integrations may have one or many associated data streams, each with an associated {{ilm-init}} policy. By default, these data streams use an {{ilm-init}} policy that matches their data type. For example, the data stream `metrics-system.logs-*` uses the metrics {{ilm-init}} policy as defined in the `metrics-system.logs` index template.
+
+Want to customize your index lifecycle management? See [Tutorials: Customize data retention policies](/reference/ingestion-tools/fleet/data-streams-ilm-tutorial.md).
+
+
+## Ingest pipelines [data-streams-pipelines]
+
+{{agent}} integration data streams ship with a default [ingest pipeline](/manage-data/ingest/transform-enrich/ingest-pipelines.md) that preprocesses and enriches data before indexing. The default pipeline should not be edited directly, because changes can easily break the functionality of the integration.
+
+Starting in version 8.4, all default ingest pipelines call a non-existent and non-versioned "`@custom`" ingest pipeline. If left uncreated, this pipeline has no effect on your data. However, if added to a data stream and customized, this pipeline can be used for custom data processing, adding fields, sanitizing data, and more.
+
+Starting in version 8.12, ingest pipelines can be configured to process events at various levels of customization.
+
+::::{note}
+If you create a custom ingest pipeline, Elastic is not responsible for ensuring that it indexes and behaves as expected. Creating a custom pipeline involves custom processing of the incoming data, which should be done with caution and tested carefully.
+::::
+
+
+`global@custom`
+:   Apply processing to all events.
+
+    For example, the following [pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) request adds a new field `my-global-field` for all events:
+
+    ```console
+    PUT _ingest/pipeline/global@custom
+    {
+      "processors": [
+        {
+          "set": {
+            "description": "Process all events",
+            "field": "my-global-field",
+            "value": "foo"
+          }
+        }
+      ]
+    }
+    ```
+
+
+`${type}`
+:   Apply processing to all events of a given data type.
+ + For example, the following request adds a new field `my-logs-field` for all log events: + + ```console + PUT _ingest/pipeline/logs@custom + { + "processors": [ + { + "set": { + "description": "Process all log events", + "field": "my-logs-field", + "value": "foo" + } + } + ] + } + ``` + + +`${type}-${package}.integration` +: Apply processing to all events of a given type in an integration + + For example, the following request creates a `logs-nginx.integration@custom` pipeline that adds a new field `my-nginx-field` for all log events in the Nginx integration: + + ```console + PUT _ingest/pipeline/logs-nginx.integration@custom + { + "processors": [ + { + "set": { + "description": "Process all nginx events", + "field": "my-nginx-field", + "value": "foo" + } + } + ] + } + ``` + + Note that `.integration` is included in the pipeline pattern to avoid possible collision with existing dataset pipelines. + + +`${type}-${dataset}` +: Apply processing to a specific dataset. + + For example, the following request creates a `metrics-system.cpu@custom` pipeline that adds a new field `my-system.cpu-field` for all CPU metrics events in the System integration: + + ```console + PUT _ingest/pipeline/metrics-system.cpu@custom + { + "processors": [ + { + "set": { + "description": "Process all events in the system.cpu dataset", + "field": "my-system.cpu-field", + "value": "foo" + } + } + ] + } + ``` + + +Custom pipelines can directly contain processors or you can use the pipeline processor to call other pipelines that can be shared across multiple data streams or integrations. These pipelines will persist across all version upgrades. + +::::{warning} +:name: data-streams-pipelines-warning + +If you have a custom pipeline defined that matches the naming scheme used for any {{fleet}} custom ingest pipelines, this can produce unintended results. For example, if you have a pipeline named like one of the following: + +* `global@custom` +* `traces@custom` +* `traces-apm@custom` + +The pipeline may be unexpectedly called for other data streams in other integrations. To avoid this problem, avoid the naming schemes defined above when naming your custom pipelines. + +Refer to the breaking change in the 8.12.0 Release Notes for more detail and workaround options. + +:::: + + +See [Tutorial: Transform data with custom ingest pipelines](/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md) to get started. diff --git a/reference/ingestion-tools/fleet/debug-standalone-agents.md b/reference/ingestion-tools/fleet/debug-standalone-agents.md new file mode 100644 index 0000000000..2a27f38475 --- /dev/null +++ b/reference/ingestion-tools/fleet/debug-standalone-agents.md @@ -0,0 +1,191 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/debug-standalone-agents.html +--- + +# Debug standalone Elastic Agents [debug-standalone-agents] + +When you run standalone {{agent}}s, you are responsible for monitoring the status of your deployed {{agent}}s. You cannot view the status or logs in {{fleet}}. + +Use the following tips to help identify potential issues. + +Also refer to [Troubleshoot common problems](/troubleshoot/ingest/fleet/common-problems.md) for guidance on specific problems. + +::::{note} +You might need to log in as a root user (or Administrator on Windows) to run these commands. 
+::::
+
+
+
+## Check the status of the running {{agent}} [_check_the_status_of_the_running_agent]
+
+To check the status of the running {{agent}} daemon and other processes managed by {{agent}}, run the `status` command. For example:
+
+```shell
+elastic-agent status
+```
+
+Returns something like:
+
+```yaml
+State: HEALTHY
+Message: Running
+Fleet State: STOPPED
+Fleet Message: (no message)
+Components:
+  * log (HEALTHY)
+      Healthy: communicating with pid '25423'
+  * filestream (HEALTHY)
+      Healthy: communicating with pid '25424'
+```
+
+By default, this command returns the status in human-readable format. Use the `--output` flag to change it to `json` or `yaml`.
+
+For more information about this command, refer to [elastic-agent status](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-status-command).
+
+
+## Inspect {{agent}} and related logs [inspect-standalone-agent-logs]
+
+If the {{agent}} is unhealthy or behaving unexpectedly, inspect the logs of the running {{agent}}.
+
+The log location varies by platform. {{agent}} logs are in the folders described in [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md). {{beats}} and {{fleet-server}} logs are in folders named for the output (for example, `default`).
+
+Start by investigating any errors you see in the {{agent}} and related logs. Also look for repeated lines that might indicate problems like connection issues. If the {{agent}} and related logs look clean, check the host operating system logs for out of memory (OOM) errors related to the {{agent}} or any of its processes.
+
+
+## Increase the log level of the running {{agent}} [increase-log-level]
+
+The log level of the running agent is set to `info` by default. At this level, {{agent}} will log informational messages, including the number of events that are published. It also logs any warnings, errors, or critical errors.
+
+To increase the log level, set it to `debug` in the `elastic-agent.yml` file.
+
+The `debug` setting configures {{agent}} to log debug messages, including a detailed printout of all flushed events, plus all the information collected at other log levels.
+
+Set other options if you want to write logs to a file. For example:
+
+```yaml
+agent.logging.level: debug
+agent.logging.to_files: true
+agent.logging.files:
+  path: /var/log/elastic-agent
+  name: elastic-agent
+  keepfiles: 7
+  permissions: 0600
+```
+
+For other log settings, refer to [Logging](/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md).
+
+
+## Expose /debug/pprof/ endpoints with the monitoring endpoint [expose-debug-endpoint]
+
+Profiling data produced by the `/debug/pprof/` endpoints can be useful for debugging, but presents a security risk. Do not expose these endpoints if the monitoring endpoint is accessible over a network. (By default, the monitoring endpoint is bound to a local Unix socket or Windows npipe and not accessible over a network.)
+
+To expose the `/debug/pprof/` endpoints, set `agent.monitoring.pprof: true` in the `elastic-agent.yml` file. For more information about monitoring settings, refer to [Monitoring](/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md).
+
+After exposing the endpoints, you can access the HTTP handler bound to a socket for {{beats}} or the {{agent}}.
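+
+As a recap, the corresponding `elastic-agent.yml` snippet is just this one setting (a minimal sketch; all other monitoring options keep their defaults):
+
+```yaml
+# Expose the /debug/pprof/ endpoints on the local monitoring socket
+agent.monitoring.pprof: true
+```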
For example: + +```shell +sudo curl --unix-socket /Library/Elastic/Agent/data/tmp/default/filebeat/filebeat.sock http://socket/ | json_pp +``` + +Returns something like: + +```json +{ + "beat" : "filebeat", + "binary_arch" : "amd64", + "build_commit" : "93708bd74e909e57ed5d9bea3cf2065f4cc43af3", + "build_time" : "2022-01-28T09:53:29.000Z", + "elastic_licensed" : true, + "ephemeral_id" : "421e2525-9360-41db-9395-b9e627fbbe6e", + "gid" : "0", + "hostname" : "My-MacBook-Pro.local", + "name" : "My-MacBook-Pro.local", + "uid" : "0", + "username" : "root", + "uuid" : "fc0cc98b-b6d8-4eef-abf5-2d5f26adc7e8", + "version" : "7.17.0" +} +``` + +Likewise, the following request: + +```shell +sudo curl --unix-socket /Library/Elastic/Agent/data/tmp/elastic-agent.sock http://socket/stats | json_pp +``` + +Returns something like: + +```shell +{ + "beat" : { + "cpu" : { + "system" : { + "ticks" : 16272, + "time" : { + "ms" : 16273 + } + }, + "total" : { + "ticks" : 42981, + "time" : { + "ms" : 42982 + }, + "value" : 42981 + }, + "user" : { + "ticks" : 26709, + "time" : { + "ms" : 26709 + } + } + }, + "info" : { + "ephemeral_id" : "ea8fec0d-f7dd-4577-85d7-a2c38583c9c6", + "uptime" : { + "ms" : 5885653 + }, + "version" : "7.17.0" + }, + "memstats" : { + "gc_next" : 13027776, + "memory_alloc" : 7771632, + "memory_sys" : 39666696, + "memory_total" : 757970208, + "rss" : 58990592 + }, + "runtime" : { + "goroutines" : 101 + } + }, + "system" : { + "cpu" : { + "cores" : 12 + }, + "load" : { + "1" : 4.8892, + "15" : 2.6748, + "5" : 3.0537, + "norm" : { + "1" : 0.4074, + "15" : 0.2229, + "5" : 0.2545 + } + } + } +} +``` + + +## Inspect the {{agent}} configuration [inspect-configuration] + +To inspect the running {{agent}} configuration use the [elastic-agent inspect](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-inspect-command) command. + +To analyze the current state of the agent, inspect log files, and see the configuration of {{agent}} and the sub-processes it starts, run the `diagnostics` command. For example: + +```shell +elastic-agent diagnostics +``` + +For more information about this command, refer to [elastic-agent diagnostics](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-diagnostics-command). + diff --git a/reference/ingestion-tools/fleet/decode-json-fields.md b/reference/ingestion-tools/fleet/decode-json-fields.md new file mode 100644 index 0000000000..2bc78badae --- /dev/null +++ b/reference/ingestion-tools/fleet/decode-json-fields.md @@ -0,0 +1,43 @@ +--- +navigation_title: "decode_json_fields" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decode-json-fields.html +--- + +# Decode JSON fields [decode-json-fields] + + +The `decode_json_fields` processor decodes fields containing JSON strings and replaces the strings with valid JSON objects. + + +## Example [_example_19] + +```yaml + - decode_json_fields: + fields: ["field1", "field2", ...] + process_array: false + max_depth: 1 + target: "" + overwrite_keys: false + add_error_key: true +``` + + +## Configuration settings [_configuration_settings_22] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. 
For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `fields` | Yes | | Fields containing JSON strings to decode. |
+| `process_array` | No | `false` | Whether to process arrays. |
+| `max_depth` | No | `1` | Maximum parsing depth. A value of `1` decodes the JSON objects in fields indicated in `fields`. A value of `2` also decodes the objects embedded in the fields of these parsed documents. |
+| `target` | No | | Field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify `target` with an empty string (`target: ""`). Note that the `null` value (`target:`) is treated as if the field was not set. |
+| `overwrite_keys` | No | `false` | Whether existing keys in the event are overwritten by keys from the decoded JSON object. |
+| `expand_keys` | No | | Whether keys in the decoded JSON should be recursively de-dotted and expanded into a hierarchical object structure. For example, `{"a.b.c": 123}` would be expanded into `{"a":{"b":{"c":123}}}`. |
+| `add_error_key` | No | `false` | If `true` and an error occurs while decoding JSON keys, the `error` field becomes part of the event with the error message. If `false`, no error information is added to the event. |
+| `document_id` | No | | JSON key that’s used as the document ID. If configured, the field will be removed from the original JSON document and stored in `@metadata._id`. |
+
diff --git a/reference/ingestion-tools/fleet/decode_base64_field-processor.md b/reference/ingestion-tools/fleet/decode_base64_field-processor.md
new file mode 100644
index 0000000000..e00d9b5144
--- /dev/null
+++ b/reference/ingestion-tools/fleet/decode_base64_field-processor.md
@@ -0,0 +1,43 @@
+---
+navigation_title: "decode_base64_field"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/decode_base64_field-processor.html
+---
+
+# Decode Base64 fields [decode_base64_field-processor]
+
+
+The `decode_base64_field` processor specifies a field to base64 decode.
+
+To overwrite fields, either rename the target field or use the `drop_fields` processor to drop the field, and then rename the field.
+
+
+## Example [_example_15]
+
+In this example, `field1` is decoded into `field2`.
+
+```yaml
+  - decode_base64_field:
+      field:
+        from: "field1"
+        to: "field2"
+      ignore_missing: false
+      fail_on_error: true
+```
+
+
+## Configuration settings [_configuration_settings_18]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | Yes | | Contains:

* `from: "old-key"`, where `from` is the origin
* `to: "new-key"`, where `to` is the target field name
| +| `ignore_missing` | No | `false` | Whether to ignore missing keys. If `true`, missing keys that should be base64 decoded are ignored and no error is logged. If `false`, an error is logged and the behavior of `fail_on_error` is applied. | +| `fail_on_error` | No | `true` | Whether to fail if an error occurs. If `true` and an error occurs, an error is logged and the event is dropped. If `false`, an error is logged, but the event is not modified. | + +See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions. + diff --git a/reference/ingestion-tools/fleet/decode_cef-processor.md b/reference/ingestion-tools/fleet/decode_cef-processor.md new file mode 100644 index 0000000000..090bc00bca --- /dev/null +++ b/reference/ingestion-tools/fleet/decode_cef-processor.md @@ -0,0 +1,47 @@ +--- +navigation_title: "decode_cef" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decode_cef-processor.html +--- + +# Decode CEF [decode_cef-processor] + + +The `decode_cef` processor decodes Common Event Format (CEF) messages. + +::::{note} +This processor only works with log inputs. +:::: + + + +## Example [_example_16] + +In this example, the `message` field is decoded as CEF after it is renamed to `event.original`. It is best to rename `message` to `event.original` because the decoded CEF data contains its own `message` field. + +```yaml + - rename: + fields: + - {from: "message", to: "event.original"} + - decode_cef: + field: event.original +``` + + +## Configuration settings [_configuration_settings_19] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `field` | No | `message` | Source field containing the CEF message to be parsed. | +| `target_field` | No | `cef` | Target field where the parsed CEF object will be written. | +| `ecs` | No | `true` | Whether to generate Elastic Common Schema (ECS) fields from the CEF data. Certain CEF header and extension values will be used to populate ECS fields. | +| `timezone` | No | `UTC` | IANA time zone name (for example, `America/New_York`) or fixed time offset (for example, `+0200`) to use when parsing times that do not contain a time zone. Specify `Local` to use the machine’s local time zone. | +| `ignore_missing` | No | `false` | Whether to ignore errors when the source field is missing. | +| `ignore_failure` | No | false | Whether to ignore failures when the source field does not contain a CEF message. | +| `id` | No | | Identifier for this processor instance. Useful for debugging. | + diff --git a/reference/ingestion-tools/fleet/decode_csv_fields-processor.md b/reference/ingestion-tools/fleet/decode_csv_fields-processor.md new file mode 100644 index 0000000000..59cd5dad5e --- /dev/null +++ b/reference/ingestion-tools/fleet/decode_csv_fields-processor.md @@ -0,0 +1,52 @@ +--- +navigation_title: "decode_csv_fields" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decode_csv_fields-processor.html +--- + +# Decode CSV fields [decode_csv_fields-processor] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. 
Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The `decode_csv_fields` processor decodes fields containing records in comma-separated format (CSV). It will output the values as an array of strings. + +::::{note} +This processor only works with log inputs. +:::: + + + +## Example [_example_17] + +```yaml + - decode_csv_fields: + fields: + message: decoded.csv + separator: "," + ignore_missing: false + overwrite_keys: true + trim_leading_space: false + fail_on_error: true +``` + + +## Configuration settings [_configuration_settings_20] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `fields` | Yes | | A mapping from the source field containing the CSV data to the destination field to which the decoded array will be written. | +| `separator` | No | comma character (`,`) | Character to use as a column separator. To use a TAB character, set this value to "\t". | +| `ignore_missing` | No | `false` | Whether to ignore events that lack the source field. If `false`, events missing the source field will fail processing. | +| `overwrite_keys` | No | `false` | Whether the target field is overwritten if it already exists. If `false`, processing of an event fails if the target field already exists. | +| `trim_leading_space` | No | `false` | Whether extra space after the separator is trimmed from values. This works even if the separator is also a space. | +| `fail_on_error` | No | `true` | Whether to fail if an error occurs. If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues even if an error occurs. | + diff --git a/reference/ingestion-tools/fleet/decode_duration-processor.md b/reference/ingestion-tools/fleet/decode_duration-processor.md new file mode 100644 index 0000000000..63d19e7b4a --- /dev/null +++ b/reference/ingestion-tools/fleet/decode_duration-processor.md @@ -0,0 +1,31 @@ +--- +navigation_title: "decode_duration" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decode_duration-processor.html +--- + +# Decode duration [decode_duration-processor] + + +The `decode_duration` processor decodes a Go-style duration string into a specific `format`. + +For more information about the Go `time.Duration` string style, refer to the [Go documentation](https://pkg.go.dev/time#Duration). 
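+
+As an illustrative sketch (the values are hypothetical, using the `app.rpc.cost` field from the example below), the processor parses the Go-style duration string and replaces it with a number in the configured `format`:
+
+```text
+"app.rpc.cost": "1h30m" -> 5400000   (format: milliseconds)
+"app.rpc.cost": "2s"    -> 2000      (format: milliseconds)
+```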
+
+
+## Example [_example_18]
+
+```yaml
+processors:
+  - decode_duration:
+      field: "app.rpc.cost"
+      format: "milliseconds"
+```
+
+
+## Configuration settings [_configuration_settings_21]
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | Yes | | The event field to decode as a Go `time.Duration`. |
+| `format` | Yes | `milliseconds` | Supported formats: `milliseconds`, `seconds`, `minutes`, `hours`. |
+
diff --git a/reference/ingestion-tools/fleet/decode_xml-processor.md b/reference/ingestion-tools/fleet/decode_xml-processor.md
new file mode 100644
index 0000000000..dc7c0f9192
--- /dev/null
+++ b/reference/ingestion-tools/fleet/decode_xml-processor.md
@@ -0,0 +1,91 @@
+---
+navigation_title: "decode_xml"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/decode_xml-processor.html
+---
+
+# Decode XML [decode_xml-processor]
+
+
+The `decode_xml` processor decodes XML data that is stored under the `field` key. It outputs the result into the `target_field`.
+
+
+## Examples [_examples_6]
+
+This example demonstrates how to decode an XML string contained in the `message` field and write the resulting fields into the root of the document. Any fields that already exist are overwritten.
+
+```yaml
+  - decode_xml:
+      field: message
+      target_field: ""
+      overwrite_keys: true
+```
+
+By default any decoding errors that occur will stop the processing chain, and the error will be added to the `error.message` field. To ignore all errors and continue to the next processor, set `ignore_failure: true`. To specifically ignore failures caused by `field` not existing, set `ignore_missing: true`.
+
+```yaml
+  - decode_xml:
+      field: example
+      target_field: xml
+      ignore_missing: true
+      ignore_failure: true
+```
+
+By default the names of all keys converted from XML are converted to lowercase. To disable this behavior, set `to_lower: false`, for example:
+
+```yaml
+  - decode_xml:
+      field: message
+      target_field: xml
+      to_lower: false
+```
+
+Example XML input:
+
+```xml
+<catalog>
+  <book seq="1">
+    <author>William H. Gaddis</author>
+    <title>The Recognitions</title>
+    <review>One of the great seminal American novels of the 20th century.</review>
+  </book>
+</catalog>
+```
+
+Will produce the following output:
+
+```json
+{
+  "xml": {
+    "catalog": {
+      "book": {
+        "author": "William H. Gaddis",
+        "review": "One of the great seminal American novels of the 20th century.",
+        "seq": "1",
+        "title": "The Recognitions"
+      }
+    }
+  }
+}
+```
+
+
+## Configuration settings [_configuration_settings_23]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | Yes | `message` | Source field containing the XML. |
+| `target_field` | No | | The field under which the decoded XML will be written. By default the decoded XML object replaces the field from which it was read. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). Note that the `null` value (`target_field:`) is treated as if the field was not set at all. |
+| `overwrite_keys` | No | `true` | Whether keys that already exist in the event are overwritten by keys from the decoded XML object.
| +| `to_lower` | No | `true` | Whether to convert all keys to lowercase. | +| `document_id` | No | | XML key to use as the document ID. If configured, the field will be removed from the original XML document and stored in `@metadata._id`. | +| `ignore_missing` | No | `false` | Whether to return an error if a specified field does not exist. | +| `ignore_failure` | No | `false` | Whether to ignore all errors produced by the processor. | + +See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions. + diff --git a/reference/ingestion-tools/fleet/decode_xml_wineventlog-processor.md b/reference/ingestion-tools/fleet/decode_xml_wineventlog-processor.md new file mode 100644 index 0000000000..3580d531ac --- /dev/null +++ b/reference/ingestion-tools/fleet/decode_xml_wineventlog-processor.md @@ -0,0 +1,157 @@ +--- +navigation_title: "decode_xml_wineventlog" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decode_xml_wineventlog-processor.html +--- + +# Decode XML Wineventlog [decode_xml_wineventlog-processor] + + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The `decode_xml_wineventlog` processor decodes Windows Event Log data in XML format that is stored under the `field` key. It outputs the result into the `target_field`. + + +## Examples [_examples_7] + +```yaml + - decode_xml_wineventlog: + field: event.original + target_field: winlog +``` + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success" + } +} +``` + +Will produce the following output: + +```json +{ + "event": { + "original": "4672001254800x802000000000000011303SecurityvagrantS-1-5-18SYSTEMNT AUTHORITY0x3e7SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeSpecial privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon 
ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilegeInformationSpecial LogonInfoSecurityMicrosoft Windows security auditing.Audit Success", + "action": "Special Logon", + "code": "4672", + "kind": "event", + "outcome": "success", + "provider": "Microsoft-Windows-Security-Auditing", + }, + "host": { + "name": "vagrant", + }, + "log": { + "level": "information", + }, + "winlog": { + "channel": "Security", + "outcome": "success", + "activity_id": "{ffb23523-1f32-0000-c335-b2ff321fd701}", + "level": "information", + "event_id": 4672, + "provider_name": "Microsoft-Windows-Security-Auditing", + "record_id": 11303, + "computer_name": "vagrant", + "keywords_raw": 9232379236109516800, + "opcode": "Info", + "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", + "event_data": { + "SubjectUserSid": "S-1-5-18", + "SubjectUserName": "SYSTEM", + "SubjectDomainName": "NT AUTHORITY", + "SubjectLogonId": "0x3e7", + "PrivilegeList": "SeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege" + }, + "task": "Special Logon", + "keywords": [ + "Audit Success" + ], + "message": "Special privileges assigned to new logon.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tSYSTEM\n\tAccount Domain:\t\tNT AUTHORITY\n\tLogon ID:\t\t0x3E7\n\nPrivileges:\t\tSeAssignPrimaryTokenPrivilege\n\t\t\tSeTcbPrivilege\n\t\t\tSeSecurityPrivilege\n\t\t\tSeTakeOwnershipPrivilege\n\t\t\tSeLoadDriverPrivilege\n\t\t\tSeBackupPrivilege\n\t\t\tSeRestorePrivilege\n\t\t\tSeDebugPrivilege\n\t\t\tSeAuditPrivilege\n\t\t\tSeSystemEnvironmentPrivilege\n\t\t\tSeImpersonatePrivilege\n\t\t\tSeDelegateSessionUserImpersonatePrivilege", + "process": { + "pid": 652, + "thread": { + "id": 4660 + } + } + } +} +``` + +See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions. + + +## Configuration settings [_configuration_settings_24] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `field` | Yes | `message` | Source field containing the XML. | +| `target_field` | Yes | `winlog` | The field under which the decoded XML will be written. To merge the decoded XML fields into the root of the event, specify `target_field` with an empty string (`target_field: ""`). | +| `overwrite_keys` | No | `true` | Whether keys that already exist in the event are overwritten by keys from the decoded XML object. | +| `map_ecs_fields` | No | `true` | Whether to map additional ECS fields when possible. Note that ECS field keys are placed outside of `target_field`. 
| +| `ignore_missing` | No | `false` | Whether to return an error if a specified field does not exist. | +| `ignore_failure` | No | `false` | Whether to ignore all errors produced by the processor. | + + +## Field mappings [wineventlog-field-mappings] + +The field mappings are as follows: + +| Event Field | Source XML Element | Notes | +| --- | --- | --- | +| `winlog.channel` | `` | | +| `winlog.event_id` | `` | | +| `winlog.provider_name` | `` | `Name` attribute | +| `winlog.record_id` | `` | | +| `winlog.task` | `` | | +| `winlog.computer_name` | `` | | +| `winlog.keywords` | `` | list of each `Keyword` | +| `winlog.opcodes` | `` | | +| `winlog.provider_guid` | `` | `Guid` attribute | +| `winlog.version` | `` | | +| `winlog.time_created` | `` | `SystemTime` attribute | +| `winlog.outcome` | `` | "success" if bit 0x20000000000000 is set, "failure" if 0x10000000000000 is set | +| `winlog.level` | `` | converted to lowercase | +| `winlog.message` | `` | line endings removed | +| `winlog.user.identifier` | `` | | +| `winlog.user.domain` | `` | | +| `winlog.user.name` | `` | | +| `winlog.user.type` | `` | converted from integer to String | +| `winlog.event_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.user_data` | `` | map where `Name` attribute in Data element is key, and value is the value of the Data element | +| `winlog.activity_id` | `` | | +| `winlog.related_activity_id` | `` | | +| `winlog.kernel_time` | `` | | +| `winlog.process.pid` | `` | | +| `winlog.process.thread.id` | `` | | +| `winlog.processor_id` | `` | | +| `winlog.processor_time` | `` | | +| `winlog.session_id` | `` | | +| `winlog.user_time` | `` | | +| `winlog.error.code` | `` | | + +If `map_ecs_fields` is enabled then the following field mappings are also performed: + +| Event Field | Source XML or other field | Notes | +| --- | --- | --- | +| `event.code` | `winlog.event_id` | | +| `event.kind` | `"event"` | | +| `event.provider` | `` | `Name` attribute | +| `event.action` | `` | | +| `event.host.name` | `` | | +| `event.outcome` | `winlog.outcome` | | +| `log.level` | `winlog.level` | | +| `message` | `winlog.message` | | +| `error.code` | `winlog.error.code` | | +| `error.message` | `winlog.error.message` | | + diff --git a/reference/ingestion-tools/fleet/decompress_gzip_field-processor.md b/reference/ingestion-tools/fleet/decompress_gzip_field-processor.md new file mode 100644 index 0000000000..f8510ca159 --- /dev/null +++ b/reference/ingestion-tools/fleet/decompress_gzip_field-processor.md @@ -0,0 +1,43 @@ +--- +navigation_title: "decompress_gzip_field" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/decompress_gzip_field-processor.html +--- + +# Decompress gzip fields [decompress_gzip_field-processor] + + +The `decompress_gzip_field` processor specifies a field to gzip decompress. + +To overwrite fields, either first rename the target field, or use the `drop_fields` processor to drop the field, and then decompress the field. + + +## Example [_example_20] + +In this example, `field1` is decompressed in `field2`. + +```yaml + - decompress_gzip_field: + field: + from: "field1" + to: "field2" + ignore_missing: false + fail_on_error: true +``` + + +## Configuration settings [_configuration_settings_25] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. 
For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `field` | Yes | | Contains:

* `from: "old-key"`, where `from` is the origin
* `to: "new-key"`, where `to` is the target field name
|
+| `ignore_missing` | No | `false` | Whether to ignore missing keys. If `true`, no error is logged if a key that should be decompressed is missing. |
+| `fail_on_error` | No | `true` | If `true` and an error occurs, decompression of fields is stopped, and the original event is returned. If `false`, decompression continues even if an error occurs during decoding. |
+
+See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.
+
diff --git a/reference/ingestion-tools/fleet/deployment-models.md b/reference/ingestion-tools/fleet/deployment-models.md
new file mode 100644
index 0000000000..ec2b52228c
--- /dev/null
+++ b/reference/ingestion-tools/fleet/deployment-models.md
@@ -0,0 +1,39 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/fleet-deployment-models.html
+---
+
+# Deployment models [fleet-deployment-models]
+
+There are various models for setting up {{agents}} to work with {{es}}. The recommended approach is to use {{fleet}}, a web-based UI in {{kib}}, to centrally manage all of your {{agents}} and their policies. Using {{fleet}} requires an instance of {{fleet-server}} that acts as the interface between the {{fleet}} UI and your {{agents}}.
+
+For an overview of {{fleet-server}}, including details about how it communicates with {{es}}, how to ensure high availability, and more, refer to [What is {{fleet-server}}?](/reference/ingestion-tools/fleet/fleet-server.md).
+
+The requirements for setting up {{fleet-server}} differ, depending on your particular deployment model:
+
+{{serverless-full}}
+:   In a [{{serverless-short}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment, {{fleet-server}} is offered as a service; it is configured and scaled automatically without the need for any user intervention.
+
+{{ess}}
+:   If you’re running {{es}} and {{kib}} hosted on [{{ess}}](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md), no extra setup is required unless you want to scale your deployment. {{ess}} runs a hosted version of {{integrations-server}} that includes {{fleet-server}}. For details about this deployment model, refer to [Deploy on {{ecloud}}](/reference/ingestion-tools/fleet/add-fleet-server-cloud.md).
+
+{{ess}} with {{fleet-server}} on-premises
+:   When you use a hosted {{ess}} deployment you may still choose to run {{fleet-server}} on-premises. For details about this deployment model and set up instructions, refer to [Deploy {{fleet-server}} on-premises and {{es}} on Cloud](/reference/ingestion-tools/fleet/add-fleet-server-mixed.md).
+
+Docker and Kubernetes
+:   You can deploy {{fleet}}-managed {{agent}} in Docker or on Kubernetes. Refer to [Run {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md) or [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) for all of the configuration instructions. For a Kubernetes install we also have a [Helm chart](/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md) available to simplify the installation. Details for configuring {{fleet-server}} are included with the {{agent}} install steps.
+
+{{eck}}
+:   You can deploy {{fleet}}-managed {{agent}} in an {{ecloud}} Kubernetes environment that provides configuration and management capabilities for the full {{stack}}. For details, refer to [Run {{fleet}}-managed {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md).
+
Self-managed
+:   For self-managed deployments, you must install and host {{fleet-server}} yourself. For details about this deployment model and set up instructions, refer to [Deploy on-premises and self-managed](/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md).
+
diff --git a/reference/ingestion-tools/fleet/detect_mime_type-processor.md b/reference/ingestion-tools/fleet/detect_mime_type-processor.md
new file mode 100644
index 0000000000..70059fefb0
--- /dev/null
+++ b/reference/ingestion-tools/fleet/detect_mime_type-processor.md
@@ -0,0 +1,37 @@
+---
+navigation_title: "detect_mime_type"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/detect_mime_type-processor.html
+---
+
+# Detect mime type [detect_mime_type-processor]
+
+
+The `detect_mime_type` processor attempts to detect a mime type for a field that contains a given stream of bytes.
+
+
+## Example [_example_21]
+
+In this example, `http.request.body.content` is used as the source, and `http.request.mime_type` is set to the detected mime type.
+
+```yaml
+  - detect_mime_type:
+      field: http.request.body.content
+      target: http.request.mime_type
+```
+
+
+## Configuration settings [_configuration_settings_26]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | Yes | | Field used as the data source. |
+| `target` | Yes | | Field to populate with the detected type. You can use the `@metadata.` prefix to set the value in the event metadata instead of fields. |
+
+See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.
+
diff --git a/reference/ingestion-tools/fleet/dissect-processor.md b/reference/ingestion-tools/fleet/dissect-processor.md
new file mode 100644
index 0000000000..75d48e5296
--- /dev/null
+++ b/reference/ingestion-tools/fleet/dissect-processor.md
@@ -0,0 +1,87 @@
+---
+navigation_title: "dissect"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/dissect-processor.html
+---
+
+# Dissect strings [dissect-processor]
+
+
+The `dissect` processor tokenizes incoming strings using defined patterns.
+
+
+## Example [_example_22]
+
+```yaml
+  - dissect:
+      tokenizer: "%{key1} %{key2} %{key3|convert_datatype}"
+      field: "message"
+      target_prefix: "dissect"
+```
+
+For a full example, see [Dissect example](#dissect-example).
+
+
+## Configuration settings [_configuration_settings_27]
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `tokenizer` | Yes | | Field used to define the **dissection** pattern. You can provide an optional convert datatype after the key by using a pipe character (`\|`) as a separator to convert the value from `string` to `integer`, `long`, `float`, `double`, `boolean`, or `IP`. |
+| `field` | No | `message` | Event field to tokenize. |
+| `target_prefix` | No | `dissect` | Name of the field where the extracted values will be written. When an empty string is defined, the processor creates the keys at the root of the event. When the target key already exists in the event, the processor won’t replace it and logs an error instead; you need to either drop or rename the key before using dissect, or enable the `overwrite_keys` flag. |
+| `ignore_failure` | No | `false` | Whether to return an error if the tokenizer fails to match the message field. If `true`, the processor silently restores the original event, allowing execution of subsequent processors (if any). If `false`, the processor logs an error, preventing execution of other processors. |
+| `overwrite_keys` | No | `false` | Whether to overwrite existing keys. If `true`, the processor overwrites existing keys in the event. If `false`, the processor fails if the key already exists. |
+| `trim_values` | No | `none` | Enables the trimming of the extracted values. Useful to remove leading and trailing spaces. Possible values are:

* `none`: no trimming is performed.
* `left`: values are trimmed on the left (leading).
* `right`: values are trimmed on the right (trailing).
* `all`: values are trimmed for leading and trailing.
|
+| `trim_chars` | No | (`" "`) to trim space characters | Set of characters to trim from values when `trim_values` is enabled. To trim multiple characters, set this value to a string containing all characters to trim. For example, `trim_chars: " \t"` trims spaces and tabs. |
+
+For tokenization to be successful, all keys must be found and extracted. If a key cannot be found, an error is logged, and no modification is done on the original event.
+
+::::{note}
+A key can contain any characters except reserved suffix or prefix modifiers: `/`, `&`, `+`, `#` and `?`.
+::::
+
+
+See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.
+
+
+## Dissect example [dissect-example]
+
+For this example, imagine that an application generates the following messages:
+
+```sh
+"321 - App01 - WebServer is starting"
+"321 - App01 - WebServer is up and running"
+"321 - App01 - WebServer is scaling 2 pods"
+"789 - App02 - Database will be restarted in 5 minutes"
+"789 - App02 - Database is up and running"
+"789 - App02 - Database is refreshing tables"
+```
+
+Use the `dissect` processor to split each message into three fields, for example, `service.pid`, `service.name`, and `service.status`:
+
+```yaml
+  - dissect:
+      tokenizer: '"%{service.pid|integer} - %{service.name} - %{service.status}"'
+      field: "message"
+      target_prefix: ""
+```
+
+This configuration produces fields like:
+
+```json
+"service": {
+  "pid": 321,
+  "name": "App01",
+  "status": "WebServer is up and running"
+},
+```
+
+`service.name` is an ECS [keyword field](elasticsearch://docs/reference/elasticsearch/mapping-reference/keyword.md), which means that you can use it in {{es}} for filtering, sorting, and aggregations.
+
+When possible, use ECS-compatible field names. For more information, see the [Elastic Common Schema](ecs://docs/reference/index.md) documentation.
+
diff --git a/reference/ingestion-tools/fleet/dns-processor.md b/reference/ingestion-tools/fleet/dns-processor.md
new file mode 100644
index 0000000000..b09bf01fdc
--- /dev/null
+++ b/reference/ingestion-tools/fleet/dns-processor.md
@@ -0,0 +1,77 @@
+---
+navigation_title: "dns"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/dns-processor.html
+---
+
+# DNS Reverse Lookup [dns-processor]
+
+
+The `dns` processor performs reverse DNS lookups of IP addresses. It caches the responses that it receives in accordance with the time-to-live (TTL) value contained in the response. It also caches failures that occur during lookups. Each instance of this processor maintains its own independent cache.
+
+The processor uses its own DNS resolver to send requests to nameservers and does not use the operating system’s resolver. It does not read any values contained in `/etc/hosts`.
+
+This processor can significantly slow down your pipeline’s throughput if you have a high latency network or slow upstream nameserver. The cache will help with performance, but if the addresses being resolved have a high cardinality, cache benefits are diminished due to the high miss ratio.
+
+For example, if each DNS lookup takes 2 milliseconds, the maximum throughput you can achieve is 500 events per second (1000 milliseconds / 2 milliseconds). If you have a high cache hit ratio, your throughput can be higher.
+
+
+## Examples [_examples_8]
+
+This is a minimal configuration example that resolves the IP addresses contained in two fields.

```yaml
  - dns:
      type: reverse
      fields:
        source.ip: source.hostname
        destination.ip: destination.hostname
```

This example shows all configuration options:

```yaml
  - dns:
      type: reverse
      action: append
      transport: tls
      fields:
        server.ip: server.hostname
        client.ip: client.hostname
      success_cache:
        capacity.initial: 1000
        capacity.max: 10000
        min_ttl: 1m
      failure_cache:
        capacity.initial: 1000
        capacity.max: 10000
        ttl: 1m
      nameservers: ['192.0.2.1', '203.0.113.1']
      timeout: 500ms
      tag_on_failure: [_dns_reverse_lookup_failed]
```


## Configuration settings [_configuration_settings_28]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `type` | Yes | | Type of DNS lookup to perform. The only supported type is `reverse`, which queries for a PTR record. |
| `action` | No | `append` | Defines the behavior of the processor when the target field already exists in the event. The options are `append` and `replace`. |
| `fields` | Yes | | Mapping of source field names to target field names. The value of the source field is used in the DNS query, and the result is written to the target field. |
| `success_cache.capacity.initial` | No | `1000` | Initial number of items that the success cache is allocated to hold. When initialized, the processor will allocate memory for this number of items. |
| `success_cache.capacity.max` | No | `10000` | Maximum number of items that the success cache can hold. When the maximum capacity is reached, a random item is evicted. |
| `success_cache.min_ttl` | Yes | `1m` | Duration of the minimum alternative cache TTL for successful DNS responses. Ensures that `TTL=0` successful reverse DNS responses can be cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `failure_cache.capacity.initial` | No | `1000` | Initial number of items that the failure cache is allocated to hold. When initialized, the processor will allocate memory for this number of items. |
| `failure_cache.capacity.max` | No | `10000` | Maximum number of items that the failure cache can hold. When the maximum capacity is reached, a random item is evicted. |
| `failure_cache.ttl` | No | `1m` | Duration for which failures are cached. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `nameservers` | Yes (on Windows) | | List of nameservers to query. If there are multiple servers, the resolver queries them in the order listed. If none are specified, it reads the nameservers listed in `/etc/resolv.conf` once at initialization. On Windows you must always supply at least one nameserver. |
| `timeout` | No | `500ms` | Duration after which a DNS query times out. This is the timeout for each DNS request, so if you have two nameservers, the total timeout will be 2 times this value. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| `tag_on_failure` | No | `false` | List of tags to add to the event when any lookup fails. The tags are only added once even if multiple lookups fail. By default, no tags are added upon failure. |
| `transport` | No | `udp` | Type of transport connection that should be used: `tls` (DNS over TLS) or `udp`. |

diff --git a/reference/ingestion-tools/fleet/docker-provider.md b/reference/ingestion-tools/fleet/docker-provider.md
new file mode 100644
index 0000000000..93caca8a3f
--- /dev/null
+++ b/reference/ingestion-tools/fleet/docker-provider.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/docker-provider.html
---

# Docker Provider [docker-provider]

The Docker provider supplies inventory information from Docker. The available dynamic variables are:

| Key | Type | Description |
| --- | --- | --- |
| `docker.container.id` | `string` | ID of the container |
| `docker.container.name` | `string` | Name of the container |
| `docker.container.image.name` | `string` | Image of the container |
| `docker.container.labels` | `object` | Labels of the container |

To set the container ID dynamically in the configuration, use a variable in the {{agent}} policy to return container ID information from the provider:

```yaml
inputs:
  - id: 'docker-container-logs-${docker.container.id}'
    type: filestream
    paths:
      - '/var/lib/docker/containers/${docker.container.id}/*-json.log'
```

The policy generated by this configuration looks like this:

```yaml
inputs:
  - id: docker-container-logs-b9b898d9c2a1126384d38e9f857b3985480cd05c8e74ffc8b628d92245f5a103
    type: filestream
    streams:
      paths:
        - /var/lib/docker/containers/b9b898d9c2a1126384d38e9f857b3985480cd05c8e74ffc8b628d92245f5a103/*-json.log
    processors:
      - add_fields:
          fields:
            id: b9b898d9c2a1126384d38e9f857b3985480cd05c8e74ffc8b628d92245f5a103
            image: image-name:latest
            labels:
              key: value
            name: container-name
          target: container
  - id: docker-container-logs-596bbd114498253985e6a5c4f0f7bf2d9eb8fcd4fe3e6cb53bdfba0cdc7036c8
    type: filestream
    streams:
      paths:
        - /var/lib/docker/containers/596bbd114498253985e6a5c4f0f7bf2d9eb8fcd4fe3e6cb53bdfba0cdc7036c8/*-json.log
    processors:
      - add_fields:
          fields:
            id: 596bbd114498253985e6a5c4f0f7bf2d9eb8fcd4fe3e6cb53bdfba0cdc7036c8
            image: other-image-name:latest
            labels:
              key: value
            name: other-container-name
          target: container
```
:::{note}
The Docker provider enriches each Docker container event with the container’s metadata: the generated inputs are populated with an `add_fields` processor that adds the appropriate container metadata to each event.
:::
diff --git a/reference/ingestion-tools/fleet/drop_event-processor.md b/reference/ingestion-tools/fleet/drop_event-processor.md
new file mode 100644
index 0000000000..b5c3206b17
--- /dev/null
+++ b/reference/ingestion-tools/fleet/drop_event-processor.md
---
navigation_title: "drop_event"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/drop_event-processor.html
---

# Drop events [drop_event-processor]


The `drop_event` processor drops the entire event if the associated condition is fulfilled. The condition is mandatory, because without one, all events would be dropped.


## Example [_example_23]

```yaml
  - drop_event:
      when:
        condition
```

See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}.
For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


diff --git a/reference/ingestion-tools/fleet/drop_fields-processor.md b/reference/ingestion-tools/fleet/drop_fields-processor.md
new file mode 100644
index 0000000000..227f0121ff
--- /dev/null
+++ b/reference/ingestion-tools/fleet/drop_fields-processor.md
---
navigation_title: "drop_fields"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/drop_fields-processor.html
---

# Drop fields from events [drop_fields-processor]


The `drop_fields` processor specifies which fields to drop if a certain condition is fulfilled. The condition is optional. If it’s missing, the specified fields are always dropped. The `@timestamp` and `type` fields cannot be dropped, even if they show up in the `drop_fields` list.


## Example [_example_24]

```yaml
  - drop_fields:
      when:
        condition
      fields: ["field1", "field2", ...]
      ignore_missing: false
```

::::{note}
If you define an empty list of fields under `drop_fields`, no fields are dropped.
::::



## Configuration settings [_configuration_settings_29]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of field names to drop. Any element in the list can be a regular expression delimited by two slashes (`/reg_exp/`) to match and remove more than one field by name. |
| `ignore_missing` | No | `false` | If `true`, the processor ignores missing fields and does not return an error. |

See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.

diff --git a/reference/ingestion-tools/fleet/dynamic-input-configuration.md b/reference/ingestion-tools/fleet/dynamic-input-configuration.md
new file mode 100644
index 0000000000..832c095624
--- /dev/null
+++ b/reference/ingestion-tools/fleet/dynamic-input-configuration.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/dynamic-input-configuration.html
---

# Variables and conditions in input configurations [dynamic-input-configuration]

When running {{agent}} in some environments, you might not know all the input configuration details up front. To solve this problem, the input configuration accepts variables and conditions that are evaluated at runtime using information from the running environment. Similar to autodiscovery, these capabilities allow you to apply configurations dynamically.

Consider a single agent policy that is deployed on two machines: a Linux machine named "linux-app" and a Windows machine named "winapp".
Notice that the configuration has some variable references: `${host.name}` and `${host.platform}`:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - paths: /var/log/${host.name}/another.log
        condition: ${host.platform} == "linux"
      - path: c:/service/app.log
        condition: ${host.platform} == "windows"
```

At runtime, {{agent}} resolves variables and evaluates the conditions based on values provided by the environment, generating two possible input configurations.

On the Windows machine:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - path: c:/service/app.log
```

On the Linux machine:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - paths: /var/log/linux-app/another.log
```

Using variable substitution along with conditions allows you to create concise but flexible input configurations that adapt to their deployed environment.


## Variable substitution [variable-substitution]

The syntax for variable substitution is `${var}`, where `var` is the name of a variable defined by a provider. A *provider* defines key/value pairs that are used for variable substitution and conditions.

{{agent}} supports a variety of providers, such as `host` and `local`, that supply variables to {{agent}}. For example, earlier you saw `${host.name}` used to resolve the path to the host’s log file based on the `${host.platform}` value. Both of these values were provided by the `host` provider.

All providers are enabled by default when {{agent}} starts. If a provider cannot be configured, its variables are ignored.

Refer to [Providers](/reference/ingestion-tools/fleet/providers.md) for more details.

The following agent policy uses a custom key named `foo` to resolve a value defined by a local provider:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - paths: /var/log/${foo}/another.log

providers:
  local:
    vars:
      foo: bar
```

The policy generated by this configuration looks like this:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - paths: /var/log/bar/another.log
```

When an input uses a variable substitution that is not present in the current key/value mappings being evaluated, the input is removed from the result.

For example, this agent policy uses an unknown key:

```yaml
inputs:
  - id: logfile-foo
    type: logfile
    path: "/var/log/foo"
  - id: logfile-unknown
    type: logfile
    path: "${ unknown.key }"
```

The policy generated by this configuration looks like this:

```yaml
inputs:
  - id: logfile-foo
    type: logfile
    path: "/var/log/foo"
```


## Alternative variables and constants [_alternative_variables_and_constants]

Variable substitution can also define alternative variables or a constant.

To define a constant, use either `'` or `"`. When a constant is reached during variable evaluation, any remaining variables are ignored, so a constant should be the last entry in the substitution.

To define alternatives, use `|` followed by the next variable or constant. This allows the input to define the preference order of the substitution by chaining multiple variables together.

For example, the following agent policy chains together multiple variables to set the log path based on information provided by the running container environment.
The constant `/var/log/other` ends the chain; it is used as the fallback path when neither provider variable is available:

```yaml
inputs:
  - id: logfile-foo
    type: logfile
    path: "/var/log/foo"
  - id: logfile-container
    type: logfile
    path: "${docker.paths.log|kubernetes.container.paths.log|'/var/log/other'}"
```


## Escaping variables [_escaping_variables]

In some cases you need the literal string `${var}` in a configuration value, rather than the substituted value of the variable. In this case, escape the variable with a double `$$`. The double `$$` causes the substitution to be skipped, and the extra `$` is removed from the beginning.

For example, the following agent policy uses an escaped variable so the literal value is used instead.

```yaml
inputs:
  - id: logfile-foo
    type: logfile
    path: "/var/log/foo"
    processors:
      - add_tags:
          tags: [$${development}]
          target: "environment"
```

The policy generated by this configuration looks like this:

```yaml
inputs:
  - id: logfile-foo
    type: logfile
    path: "/var/log/foo"
    processors:
      - add_tags:
          tags: [${development}]
          target: "environment"
```


## Conditions [conditions]

A condition is a boolean expression that you can specify in your agent policy to control whether a configuration is applied to the running {{agent}}. You can set a condition on inputs, streams, or even processors.

In this example, the input is applied if the host platform is Linux:

```yaml
inputs:
  - id: unique-logfile-id
    type: logfile
    streams:
      - paths:
          - /var/log/syslog
        condition: ${host.platform} == 'linux'
```

In this example, the stream is applied if the host platform is not Windows:

```yaml
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
    streams:
      - metricset: load
        data_stream.dataset: system.cpu
        condition: ${host.platform} != 'windows'
```

In this example, the processor is applied if the host platform is not Windows:

```yaml
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
    streams:
      - metricset: load
        data_stream.dataset: system.cpu
        processors:
          - add_fields:
              fields:
                platform: ${host.platform}
              to: host
            condition: ${host.platform} != 'windows'
```


### Condition syntax [condition-syntax]

The conditions supported by {{agent}} are based on [EQL](elasticsearch://docs/reference/query-languages/eql-syntax.md)'s boolean syntax, but add support for variables from providers and for functions that manipulate the values.

**Supported operators:**

* Full PEMDAS math support for `+ - * / %`
* Relational operators `< <= >= > == !=`
* Logical operators `and` and `or`

**Functions:**

* Array functions [`arrayContains`](#arrayContains-function)
* Dict functions [`hasKey`](#hasKey-function) (not in EQL)
* Length functions [`length`](#length-function)
* Math functions [`add`](#add-function), [`subtract`](#subtract-function), [`multiply`](#multiply-function), [`divide`](#divide-function), [`modulo`](#modulo-function)
* String functions [`concat`](#concat-function), [`endsWith`](#endsWith-function), [`indexOf`](#indexOf-function), [`match`](#match-function), [`number`](#number-function), [`startsWith`](#startsWith-function), [`string`](#string-function), [`stringContains`](#stringContains-function)

**Types:**

* Booleans `true` and `false`


### Condition examples [condition-examples]

Run only when a specific label is included:

```eql
arrayContains(${docker.labels}, 'monitor')
```

Skip on the Linux and macOS platforms:

```eql
${host.platform} != "linux" and ${host.platform} != "darwin"
```

Run only for specific labels:

```eql
arrayContains(${docker.labels}, 'monitor') or arrayContains(${docker.labels}, 'production')
```


### Function reference [condition-function-reference]

The condition syntax supports the following functions.


#### `add` [add-function]

`add(Number, Number) Number`

Usage:

```eql
add(1, 2) == 3
add(5, ${foo}) >= 5
```


#### `arrayContains` [arrayContains-function]

`arrayContains(Array, String) Boolean`

Usage:

```eql
arrayContains(${docker.labels}, 'monitor')
```


#### `concat` [concat-function]

`concat(String, String) String`

::::{note}
Parameters are coerced into strings before the concatenation.
::::


Usage:

```eql
concat("foo", "bar") == "foobar"
concat(${var1}, ${var2}) != "foobar"
```


#### `divide` [divide-function]

`divide(Number, Number) Number`

Usage:

```eql
divide(25, 5) > 0
divide(${var1}, ${var2}) > 7
```


#### `endsWith` [endsWith-function]

`endsWith(String, String) Boolean`

Usage:

```eql
endsWith("hello world", "world") == true
endsWith(${var1}, "hello") != true
```


#### `hasKey` [hasKey-function]

`hasKey(Dictionary, String) Boolean`

Usage:

```eql
hasKey(${host}, "platform")
```


#### `indexOf` [indexOf-function]

`indexOf(String, String, Number?) Number`

::::{note}
Returns -1 if the string is not found.
::::


Usage:

```eql
indexOf("hello", "llo") == 2
indexOf(${var1}, "hello") >= 0
```


#### `length` [length-function]

`length(Array|Dictionary|String) Number`

Usage:

```eql
length("foobar") > 2
length(${docker.labels}) > 0
length(${host}) > 2
```


#### `match` [match-function]

`match(String, Regexp) Boolean`

::::{note}
`Regexp` supports Go’s regular expression syntax. Conditions that use regular expressions are more expensive to run. If speed is critical, consider using `endsWith` or `startsWith`.
::::


Usage:

```eql
match("hello world", "^hello") == true
match(${var1}, "world$") == true
```


#### `modulo` [modulo-function]

`modulo(Number, Number) Number`

Usage:

```eql
modulo(25, 5) == 0
modulo(${var1}, ${var2}) == 0
```


#### `multiply` [multiply-function]

`multiply(Number, Number) Number`

Usage:

```eql
multiply(5, 5) == 25
multiply(${var1}, ${var2}) > 10
```


#### `number` [number-function]

`number(String) Integer`

Usage:

```eql
number("42") == 42
number(${var1}) == 42
```


#### `startsWith` [startsWith-function]

`startsWith(String, String) Boolean`

Usage:

```eql
startsWith("hello world", "hello") == true
startsWith(${var1}, "hello") != true
```


#### `string` [string-function]

`string(Number) String`

Usage:

```eql
string(42) == "42"
string(${var1}) == "42"
```


#### `stringContains` [stringContains-function]

`stringContains(String, String) Boolean`

Usage:

```eql
stringContains("hello world", "hello") == true
stringContains(${var1}, "hello") != true
```


#### `subtract` [subtract-function]

`subtract(Number, Number) Number`

Usage:

```eql
subtract(5, 1) == 4
subtract(${foo}, 2) != 2
```


### Debugging [debug-configs]

To debug configurations that include variable substitution and conditions, use the `inspect` command. This command shows the configuration that’s generated after variables are replaced and conditions are applied.

First, run the {{agent}}.
For this example, we’ll use the following agent policy: + +```yaml +outputs: + default: + type: elasticsearch + hosts: [127.0.0.1:9200] + apikey: + +providers: + local_dynamic: + items: + - vars: + key: value1 + processors: + - add_fields: + fields: + custom: match1 + target: dynamic + - vars: + key: value2 + processors: + - add_fields: + fields: + custom: match2 + target: dynamic + - vars: + key: value3 + processors: + - add_fields: + fields: + custom: match3 + target: dynamic + +inputs: + - id: unique-logfile-id + type: logfile + enabled: true + streams: + - paths: + - /var/log/${local_dynamic.key} +``` + +Then run `elastic-agent inspect --variables` to see the generated configuration. For example: + +```shell +$ ./elastic-agent inspect --variables +inputs: +- enabled: true + id: unique-logfile-id-local_dynamic-0 + original_id: unique-logfile-id + processors: + - add_fields: + fields: + custom: match1 + target: dynamic + streams: + - paths: + - /var/log/value1 + type: logfile +- enabled: true + id: unique-logfile-id-local_dynamic-1 + original_id: unique-logfile-id + processors: + - add_fields: + fields: + custom: match2 + target: dynamic + streams: + - paths: + - /var/log/value2 + type: logfile +- enabled: true + id: unique-logfile-id-local_dynamic-2 + original_id: unique-logfile-id + processors: + - add_fields: + fields: + custom: match3 + target: dynamic + streams: + - paths: + - /var/log/value3 + type: logfile +outputs: + default: + apikey: + hosts: + - 127.0.0.1:9200 + type: elasticsearch +providers: + local_dynamic: + items: + - processors: + - add_fields: + fields: + custom: match1 + target: dynamic + vars: + key: value1 + - processors: + - add_fields: + fields: + custom: match2 + target: dynamic + vars: + key: value2 + - processors: + - add_fields: + fields: + custom: match3 + target: dynamic + vars: + key: value3 + +--- +``` diff --git a/reference/ingestion-tools/fleet/edit-delete-integration-policy.md b/reference/ingestion-tools/fleet/edit-delete-integration-policy.md new file mode 100644 index 0000000000..8be89c2888 --- /dev/null +++ b/reference/ingestion-tools/fleet/edit-delete-integration-policy.md @@ -0,0 +1,19 @@ +--- +navigation_title: "Edit or delete an integration policy" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/edit-or-delete-integration-policy.html +--- + +# Edit or delete an {{agent}} integration policy [edit-or-delete-integration-policy] + + +To edit or delete an integration policy: + +1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select the integration. +2. Click the **Policies** tab to see the list of integration policies. +3. Scroll to a specific integration policy. Open the **Actions** menu and select **Edit integration** or **Delete integration**. + + Editing or deleting an integration is permanent and cannot be undone. If you make a mistake, you can always re-configure or re-add an integration. + + +Any saved changes are immediately distributed and applied to all {{agent}}s enrolled in the given policy. 
diff --git a/reference/ingestion-tools/fleet/elastic-agent-container.md b/reference/ingestion-tools/fleet/elastic-agent-container.md new file mode 100644 index 0000000000..c9c22183e0 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-container.md @@ -0,0 +1,464 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-container.html +--- + +# Run Elastic Agent in a container [elastic-agent-container] + +You can run {{agent}} inside a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the [Elastic Docker registry](https://www.docker.elastic.co/r/elastic-agent/elastic-agent). If you are running in Kubernetes, refer to [run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md). + +Note that running {{elastic-agent}} in a container is supported only in Linux environments. For this reason we don’t currently provide {{agent}} container images for Windows. + +Considerations: + +* When {{agent}} runs inside a container, it cannot be upgraded through {{fleet}} as it expects that the container itself is upgraded. +* Enrolling and running an {{agent}} is usually a two-step process. However, this doesn’t work in a container, so a special subcommand, `container`, is called. This command allows environment variables to configure all properties, and runs the `enroll` and `run` commands as a single command. + + +## What you need [_what_you_need] + +* [Docker installed](https://docs.docker.com/get-docker/). +* {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it. + + ::::{tab-set} + + :::{tab-item} Elasticsearch Service + To get started quickly, spin up a deployment of our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service). The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body). + ::: + + :::{tab-item} Self-managed + To install and run {{es}} and {{kib}}, see [Installing the {{stack}}](/deploy-manage/deploy/self-managed/deploy-cluster.md). + ::: + + :::: + +## Step 1: Pull the image [_step_1_pull_the_image] + +There are two images for Elastic Agent, elastic-agent and elastic-agent-complete. The elastic-agent image contains all the binaries for running Beats, while the elastic-agent-complete image contains these binaries plus additional dependencies to run browser monitors through Elastic Synthetics. Refer to [Synthetic monitoring via Elastic Agent and Fleet](/solutions/observability/apps/get-started.md) for more information. + +Run the `docker pull` command against the Elastic Docker registry: + +```terminal +docker pull docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 +``` + +Alternately, you can use the hardened [Wolfi](https://github.com/wolfi-dev/) image. Using Wolfi images requires Docker version 20.10.10 or later. For details about why the Wolfi images have been introduced, refer to our article [Reducing CVEs in Elastic container images](https://www.elastic.co/blog/reducing-cves-in-elastic-container-images). 
+ +```terminal +docker pull docker.elastic.co/elastic-agent/elastic-agent-wolfi:9.0.0-beta1 +``` + +If you want to run Synthetics tests, run the docker pull command to fetch the elastic-agent-complete image: + +```terminal +docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:9.0.0-beta1 +``` +To run Synthetics tests using the hardened [Wolfi](https://github.com/wolfi-dev/) image, run: + +```terminal +docker pull docker.elastic.co/elastic-agent/elastic-agent-complete-wolfi:9.0.0-beta1 +``` + +## Step 2: Optional: Verify the image [_step_2_optional_verify_the_image] + +Although it’s optional, we highly recommend verifying the signatures included with your downloaded Docker images to ensure that the images are valid. + +Elastic images are signed with Cosign which is part of the [Sigstore](https://www.sigstore.dev) project. Cosign supports container signing, verification, and storage in an OCI registry. Install the appropriate Cosign application for your operating system. + +Run the following commands to verify the **elastic-agent** container image signature for Elastic Agent v9.0.0-beta1: + +```terminal +wget https://artifacts.elastic.co/cosign.pub <1> +cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 <2> +``` +1. Download the Elastic public key to verify container signature +2. Verify the container against the Elastic public key + +If you’re using the elastic-agent-complete image, run the commands as follows: + +```terminal +wget https://artifacts.elastic.co/cosign.pub +cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent-complete:9.0.0-beta1 +``` +The command prints the check results and the signature payload in JSON format, for example: + +```terminal +Verification for docker.elastic.co/elastic-agent/elastic-agent-complete:9.0.0-beta1 -- +The following checks were performed on each of these signatures: + - The cosign claims were validated + - Existence of the claims in the transparency log was verified offline + - The signatures were verified against the specified public key +``` + +## Step 3: Get aware of the Elastic Agent container command [_step_3_get_aware_of_the_elastic_agent_container_command] + +The Elastic Agent container command offers a wide variety of options. To see the full list, run: + +```terminal +docker run --rm docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 elastic-agent container -h +``` + +## Step 4: Run the Elastic Agent image [_step_4_run_the_elastic_agent_image] + + +::::{tab-set} + +:::{tab-item} Elastic Cloud + +```terminal +docker run \ + --env FLEET_ENROLL=1 \ <1> + --env FLEET_URL= \ <2> + --env FLEET_ENROLLMENT_TOKEN= \ <3> + --rm docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 <4> +``` + +1. Set to 1 to enroll the {{agent}} into {{fleet-server}}. +2. URL to enroll the {{fleet-server}} into. You can find it in {{kib}}. Select **Management → {{fleet}} → Fleet Settings**, and copy the {{fleet-server}} host URL. +3. The token to use for enrollment. Close the flyout panel and select **Enrollment tokens**. Find the Agent policy you want to enroll {{agent}} into, and display and copy the secret token. To learn how to create a policy, refer to [Create an agent policy without using the UI](/reference/ingestion-tools/fleet/create-policy-no-ui.md). +4. If you want to run **elastic-agent-complete** image, replace `elastic-agent` to `elastic-agent-complete`. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. 
Synthetic tests cannot run under the root user. Refer to [Synthetics {{fleet}} Quickstart](/solutions/observability/apps/get-started.md) for more information. + +Refer to [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md) for all available options. +::: + +:::{tab-item} Self-managed + +If you’re running a self-managed cluster and want to run your own {{fleet-server}}, run the following command, which will spin up both {{agent}} and {{fleet-server}} in a container: + +```terminal +docker run \ + --env FLEET_SERVER_ENABLE=true \ <1> + --env FLEET_SERVER_ELASTICSEARCH_HOST= \ <2> + --env FLEET_SERVER_SERVICE_TOKEN= \ <3> + --env FLEET_SERVER_POLICY_ID= \ <4> + -p 8220:8220 \ <5> + --rm docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 <6> +``` + +1. Set to 1 to bootstrap Fleet Server on this Elastic Agent. +2. Your cluster’s {{es}} host URL. +3. The {{fleet}} service token. [Generate one in the {{fleet}} UI](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md#create-fleet-enrollment-tokens) if you don’t have one already. +4. ID of the {{fleet-server}} policy. We recommend only having one fleet-server policy. To learn how to create a policy, refer to [Create an agent policy without using the UI](/reference/ingestion-tools/fleet/create-policy-no-ui.md). +5. publish container port 8220 to host. +6. If you want to run the **elastic-agent-complete** image, replace `elastic-agent` with `elastic-agent-complete`. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to [Synthetics {{fleet}} Quickstart](/solutions/observability/apps/get-started.md) for more information. + +Refer to [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md) for all available options. +::: + +:::: + +If you need to run {{fleet-server}} as well, adjust the `docker run` command above by adding these environment variables: + +```terminal + --env FLEET_SERVER_ENABLE=true \ <1> + --env FLEET_SERVER_ELASTICSEARCH_HOST= \ <2> + --env FLEET_SERVER_SERVICE_TOKEN= <3> +``` + +1. Set to `true` to bootstrap {{fleet-server}} on this {{agent}}. This automatically forces {{fleet}} enrollment as well. +2. The Elasticsearch host for Fleet Server to communicate with, for example `http://elasticsearch:9200`. +3. Service token to use for communication with {{es}} and {{kib}}. + + +:::{tip} +**Running {{agent}} on a read-only file system** + +If you’d like to run {{agent}} in a Docker container on a read-only file system, you can do so by specifying the `--read-only` option. {{agent}} requires a stateful directory to store application data, so with the `--read-only` option you also need to use the `--mount` option to specify a path to where that data can be stored. + +For example: + +```bash +docker run --rm --mount source=$(pwd)/state,destination=/state -e {STATE_PATH}=/state --read-only docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 <1> +``` + +1. Where `{STATE_PATH}` is the path to a stateful directory to mount where {{agent}} application data can be stored. + +You can also add `type=tmpfs` to the mount parameter (`--mount type=tmpfs,destination=/state...`) to specify a temporary file storage location. This should be done with caution as it can cause data duplication, particularly for logs, when the container is restarted, as no state data is persisted. + +::: + + +## Step 5: View your data in {{kib}} [_step_5_view_your_data_in_kib] + +1. 
Launch {{kib}}: + + ::::{tab-set} + + :::{tab-item} Elasticsearch Service + 1. [Log in](https://cloud.elastic.co/) to your {{ecloud}} account. + 2. Navigate to the {{kib}} endpoint in your deployment. + ::: + + :::{tab-item} Self-managed + Point your browser to [http://localhost:5601](http://localhost:5601), replacing `localhost` with the name of the {{kib}} host. + ::: + + :::: + +2. To check if your {{agent}} is enrolled in {{fleet}}, go to **Management → {{fleet}} → Agents**. + + :::{image} images/kibana-fleet-agents.png + :alt: {{agent}}s {{fleet}} page + :class: screenshot + ::: + +3. To view data flowing in, go to **Analytics → Discover** and select the index `metrics-*`, or even more specific, `metrics-kubernetes.*`. If you can’t see these indexes, [create a data view](/explore-analyze/find-and-organize/data-views.md) for them. +4. To view predefined dashboards, either select **Analytics→Dashboard** or [install assets through an integration](/reference/ingestion-tools/fleet/view-integration-assets.md). + + +## Docker compose [_docker_compose] + +You can run {{agent}} in docker-compose. The example below shows how to enroll an {{agent}}: + +```yaml +version: "3" +services: + elastic-agent: + image: docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1 <1> + container_name: elastic-agent + restart: always + user: root <2> + environment: + - FLEET_ENROLLMENT_TOKEN= + - FLEET_ENROLL=1 + - FLEET_URL= +``` + +1. Switch `elastic-agent` to `elastic-agent-complete` if you intend to use the complete version. Use the `elastic-agent` user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to [Synthetics {{fleet}} Quickstart](/solutions/observability/apps/get-started.md) for more information. +2. Synthetic browser monitors require this set to `elastic-agent`. + + +If you need to run {{fleet-server}} as well, adjust the docker-compose file above by adding these environment variables: + +```yaml + - FLEET_SERVER_ENABLE=true + - FLEET_SERVER_ELASTICSEARCH_HOST= + - FLEET_SERVER_SERVICE_TOKEN= +``` + +Refer to [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md) for all available options. + + +## Logs [_logs] + +Since a container supports only a single version of {{agent}}, logs and state are stored a bit differently than when running an {{agent}} outside of a container. The logs can be found under: `/usr/share/elastic-agent/state/data/logs/*`. + +It’s important to note that only the logs from the {{agent}} process itself are logged to `stdout`. Subprocess logs are not. Each subprocess writes its own logs to the `default` directory inside the logs directory: + +```bash +/usr/share/elastic-agent/state/data/logs/default/* +``` + +::::{tip} +Running into errors with {{fleet-server}}? Check the fleet-server subprocess logs for more information. +:::: + +## Debugging [_debugging] + +A monitoring endpoint can be enabled to expose resource usage and event processing data. The endpoint is compatible with {{agent}}s running in both {{fleet}} mode and Standalone mode. + +Enable the monitoring endpoint in `elastic-agent.yml` on the host where the {{agent}} is installed. A sample configuration looks like this: + +```yaml +agent.monitoring: + enabled: true <1> + logs: true <2> + metrics: true <3> + http: + enabled: true <4> + host: localhost <5> + port: 6791 <6> +``` + +1. Enable monitoring of running processes. +2. Enable log monitoring. +3. Enable metrics monitoring. +4. Expose {{agent}} metrics over HTTP. 
By default, sockets and named pipes are used. +5. The hostname, IP address, Unix socket, or named pipe that the HTTP endpoint will bind to. When using IP addresses, we recommend only using `localhost`. +6. The port that the HTTP endpoint will bind to. + + +The above configuration exposes a monitoring endpoint at `http://localhost:6791/processes`. + +::::{dropdown} `http://localhost:6791/processes` output +```json +{ + "processes":[ + { + "id":"metricbeat-default", + "pid":"36923", + "binary":"metricbeat", + "source":{ + "kind":"configured", + "outputs":[ + "default" + ] + } + }, + { + "id":"filebeat-default-monitoring", + "pid":"36924", + "binary":"filebeat", + "source":{ + "kind":"internal", + "outputs":[ + "default" + ] + } + }, + { + "id":"metricbeat-default-monitoring", + "pid":"36925", + "binary":"metricbeat", + "source":{ + "kind":"internal", + "outputs":[ + "default" + ] + } + } + ] +} +``` + +:::: + + +Each process ID in the `/processes` output can be accessed for more details. + +::::{dropdown} `http://localhost:6791/processes/{{process-name}}` output +```json +{ + "beat":{ + "cpu":{ + "system":{ + "ticks":537, + "time":{ + "ms":537 + } + }, + "total":{ + "ticks":795, + "time":{ + "ms":796 + }, + "value":795 + }, + "user":{ + "ticks":258, + "time":{ + "ms":259 + } + } + }, + "info":{ + "ephemeral_id":"eb7e8025-7496-403f-9f9a-42b20439c737", + "uptime":{ + "ms":75332 + }, + "version":"7.14.0" + }, + "memstats":{ + "gc_next":23920624, + "memory_alloc":20046048, + "memory_sys":76104712, + "memory_total":60823368, + "rss":83165184 + }, + "runtime":{ + "goroutines":58 + } + }, + "libbeat":{ + "config":{ + "module":{ + "running":4, + "starts":4, + "stops":0 + }, + "reloads":1, + "scans":1 + }, + "output":{ + "events":{ + "acked":0, + "active":0, + "batches":0, + "dropped":0, + "duplicates":0, + "failed":0, + "toomany":0, + "total":0 + }, + "read":{ + "bytes":0, + "errors":0 + }, + "type":"elasticsearch", + "write":{ + "bytes":0, + "errors":0 + } + }, + "pipeline":{ + "clients":4, + "events":{ + "active":231, + "dropped":0, + "failed":0, + "filtered":0, + "published":231, + "retry":112, + "total":231 + }, + "queue":{ + "acked":0, + "max_events":4096 + } + } + }, + "metricbeat":{ + "system":{ + "cpu":{ + "events":8, + "failures":0, + "success":8 + }, + "filesystem":{ + "events":80, + "failures":0, + "success":80 + }, + "memory":{ + "events":8, + "failures":0, + "success":8 + }, + "network":{ + "events":135, + "failures":0, + "success":135 + } + } + }, + "system":{ + "cpu":{ + "cores":8 + }, + "load":{ + "1":2.5957, + "15":5.415, + "5":3.5815, + "norm":{ + "1":0.3245, + "15":0.6769, + "5":0.4477 + } + } + } +} +``` + +:::: + + diff --git a/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md b/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md new file mode 100644 index 0000000000..8aab3bcfbe --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md @@ -0,0 +1,77 @@ +--- +navigation_title: "Inputs" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-input-configuration.html +--- + +# Configure inputs for standalone {{agent}}s [elastic-agent-input-configuration] + + +The `inputs` section of the `elastic-agent.yml` file specifies how {{agent}} locates and processes input data. 

* [Sample metrics input configuration](#elastic-agent-input-configuration-sample-metrics)
* [Sample log files input configuration](#elastic-agent-input-configuration-sample-logs)


## Sample metrics input configuration [elastic-agent-input-configuration-sample-metrics]

By default, {{agent}} collects system metrics, such as CPU, memory, network, and file system metrics, and sends them to the default output. For example, to define data streams for the `cpu`, `memory`, `network`, and `filesystem` metrics, use a configuration like this:

```yaml
- type: system/metrics <1>
  id: unique-system-metrics-id <2>
  data_stream.namespace: default <3>
  use_output: default <4>
  streams:
    - metricsets: <5>
        - cpu
      data_stream.dataset: system.cpu <6>
    - metricsets:
        - memory
      data_stream.dataset: system.memory
    - metricsets:
        - network
      data_stream.dataset: system.network
    - metricsets:
        - filesystem
      data_stream.dataset: system.filesystem
```

1. The name of the input. Refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md) for the list of what’s available.
2. A unique ID for the input.
3. A user-defined namespace.
4. The name of the `output` to use. If not specified, `default` will be used.
5. The set of enabled module metricsets. Refer to the {{metricbeat}} [System module](beats://docs/reference/metricbeat/metricbeat-module-system.md) for a list of available options. The metricset fields can be configured.
6. A user-defined dataset. It can contain anything that makes sense to signify the source of the data.


## Sample log files input configuration [elastic-agent-input-configuration-sample-logs]

To enable {{agent}} to collect log files, you can use a configuration like the following:

```yaml
- type: filestream <1>
  id: your-input-id <2>
  streams:
    - id: your-filestream-stream-id <3>
      data_stream: <4>
        dataset: generic
      paths:
        - /var/log/*.log
```

1. The name of the input. Refer to [{{agent}} inputs](/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md) for the list of what’s available.
2. A unique ID for the input.
3. A unique ID for the data stream to track the state of the ingested files.
4. The streams block is required only if multiple streams are used on the same input. Refer to the {{filebeat}} [filestream](beats://docs/reference/filebeat/filebeat-input-filestream.md) documentation for a list of available options. Also, specifically for the `filestream` input type, refer to the [simplified log ingestion](/reference/ingestion-tools/fleet/elastic-agent-simplified-input-configuration.md) for an example of ingesting a set of logs specified as an array.


The input in this example harvests all files in the path `/var/log/*.log`, that is, all logs in the directory `/var/log/` that end with `.log`. All patterns supported by [Go Glob](https://golang.org/pkg/path/filepath/#Glob) are also supported here.

To fetch all files from a predefined level of subdirectories, use this pattern: `/var/log/*/*.log`. This fetches all `.log` files from the subfolders of `/var/log`. It does not fetch log files from the `/var/log` folder itself. Currently it is not possible to recursively fetch all files in all subdirectories of a directory.
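
For example, here is a minimal sketch of a `filestream` input that collects `.log` files from one level of subdirectories under `/var/log` (the input ID, stream ID, and dataset shown here are illustrative placeholders):

```yaml
- type: filestream
  id: subdir-logs              # illustrative input ID
  streams:
    - id: subdir-logs-stream   # illustrative stream ID
      data_stream:
        dataset: generic
      paths:
        - /var/log/*/*.log     # .log files one directory below /var/log, not in /var/log itself
```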
+ + + + diff --git a/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md b/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md new file mode 100644 index 0000000000..8609e55b97 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md @@ -0,0 +1,139 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-inputs-list.html +--- + +# Elastic Agent inputs [elastic-agent-inputs-list] + +When you [configure inputs](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md) for standalone {{agents}}, the following values are supported for the input `type` parameter. + +**Expand any section to view the available inputs:** + +::::{dropdown} Audit the activities of users and processes on your systems +:name: elastic-agent-inputs-list-auditbeat + +| Input | Description | Learn more | +| --- | --- | --- | +| `audit/auditd` | Receives audit events from the Linux Audit Framework that is a part of the Linux kernel. | [Auditd Module](beats://docs/reference/auditbeat/auditbeat-module-auditd.md) ({{auditbeat}} docs) | +| `audit/file_integrity` | Sends events when a file is changed (created, updated, or deleted) on disk. The events contain file metadata and hashes. | [File Integrity Module](beats://docs/reference/auditbeat/auditbeat-module-file_integrity.md) ({{auditbeat}} docs) | +| `audit/system` | [beta] Collects various security related information about a system. All datasets send both periodic state information (e.g. all currently running processes) and real-time changes (e.g. when a new process starts or stops). | [System Module](beats://docs/reference/auditbeat/auditbeat-module-system.md) ({{auditbeat}} docs) | + +:::: + + +::::{dropdown} Collect metrics from operating systems and services running on your servers +:name: elastic-agent-inputs-list-metricbeat + +| Input | Description | Learn more | +| --- | --- | --- | +| `activemq/metrics` | Periodically fetches JMX metrics from Apache ActiveMQ. | [ActiveMQ module](beats://docs/reference/metricbeat/metricbeat-module-activemq.md) ({{metricbeat}} docs) | +| `apache/metrics` | Periodically fetches metrics from [Apache HTTPD](https://httpd.apache.org/) servers. | [Apache module](beats://docs/reference/metricbeat/metricbeat-module-apache.md) ({{metricbeat}} docs) | +| `aws/metrics` | Periodically fetches monitoring metrics from AWS CloudWatch using [GetMetricData API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.md) for AWS services. | [AWS module](beats://docs/reference/metricbeat/metricbeat-module-aws.md) ({{metricbeat}} docs) | +| `awsfargate/metrics` | [beta] Retrieves various metadata, network metrics, and Docker stats about tasks and containers. | [AWS Fargate module](beats://docs/reference/metricbeat/metricbeat-module-awsfargate.md) ({{metricbeat}} docs) | +| `azure/metrics` | Collects and aggregates Azure logs and metrics from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting. | [Azure module](beats://docs/reference/metricbeat/metricbeat-module-azure.md) ({{metricbeat}} docs) | +| `beat/metrics` | Collects metrics about any Beat or other software based on libbeat. | [Beat module](beats://docs/reference/metricbeat/metricbeat-module-beat.md) ({{metricbeat}} docs) | +| `cloudfoundry/metrics` | Connects to Cloud Foundry loggregator to gather container, counter, and value metrics into a common data platform where it can be used for analysis, visualization, and alerting. 
| [Cloudfoundry module](beats://docs/reference/metricbeat/metricbeat-module-cloudfoundry.md) ({{metricbeat}} docs) |
| `containerd/metrics` | [beta] Collects CPU, memory, and blkio statistics about running containers controlled by the containerd runtime. | [Containerd module](beats://docs/reference/metricbeat/metricbeat-module-containerd.md) ({{metricbeat}} docs) |
| `docker/metrics` | Fetches metrics from [Docker](https://www.docker.com/) containers. | [Docker module](beats://docs/reference/metricbeat/metricbeat-module-docker.md) ({{metricbeat}} docs) |
| `elasticsearch/metrics` | Collects metrics about {{es}}. | [Elasticsearch module](beats://docs/reference/metricbeat/metricbeat-module-elasticsearch.md) ({{metricbeat}} docs) |
| `etcd/metrics` | This module targets Etcd V2 and V3. When using V2, metrics are collected using [Etcd v2 API](https://coreos.com/etcd/docs/latest/v2/api.md). When using V3, metrics are retrieved from the `/metrics` endpoint as intended for [Etcd v3](https://coreos.com/etcd/docs/latest/metrics.md). | [Etcd module](beats://docs/reference/metricbeat/metricbeat-module-etcd.md) ({{metricbeat}} docs) |
| `gcp/metrics` | Periodically fetches monitoring metrics from Google Cloud Platform using [Stackdriver Monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp) for Google Cloud Platform services. | [Google Cloud Platform module](beats://docs/reference/metricbeat/metricbeat-module-gcp.md) ({{metricbeat}} docs) |
| `haproxy/metrics` | Collects stats from [HAProxy](http://www.haproxy.org/). It supports collection from TCP sockets, UNIX sockets, or HTTP with or without basic authentication. | [HAProxy module](beats://docs/reference/metricbeat/metricbeat-overview.md) ({{metricbeat}} docs) |
| `http/metrics` | Used to call arbitrary HTTP endpoints for which a dedicated Metricbeat module is not available. | [HTTP module](beats://docs/reference/metricbeat/metricbeat-module-http.md) ({{metricbeat}} docs) |
| `iis/metrics` | Periodically retrieves IIS web server related metrics. | [IIS module](beats://docs/reference/metricbeat/metricbeat-module-iis.md) ({{metricbeat}} docs) |
| `jolokia/metrics` | Collects metrics from [Jolokia agents](https://jolokia.org/reference/html/agents.md) running on a target JMX server or dedicated proxy server. | [Jolokia module](beats://docs/reference/metricbeat/metricbeat-module-jolokia.md) ({{metricbeat}} docs) |
| `kafka/metrics` | Collects metrics from the [Apache Kafka](https://kafka.apache.org/intro) event streaming platform. | [Kafka module](beats://docs/reference/metricbeat/metricbeat-module-kafka.md) ({{metricbeat}} docs) |
| `kibana/metrics` | Collects metrics about {{kib}}. | [{{kib}} module](beats://docs/reference/metricbeat/metricbeat-module-kibana.md) ({{metricbeat}} docs) |
| `kubernetes/metrics` | As one of the main pieces provided for Kubernetes monitoring, this module is capable of fetching metrics from several components. | [Kubernetes module](beats://docs/reference/metricbeat/metricbeat-module-kubernetes.md) ({{metricbeat}} docs) |
| `linux/metrics` | [beta] Reports on metrics exclusive to the Linux kernel and GNU/Linux OS. | [Linux module](beats://docs/reference/metricbeat/metricbeat-module-linux.md) ({{metricbeat}} docs) |
| `logstash/metrics` | Collects metrics about {{ls}}. | [{{ls}} module](beats://docs/reference/metricbeat/metricbeat-module-logstash.md) ({{metricbeat}} docs) |
| `memcached/metrics` | Collects metrics about the [memcached](https://memcached.org/) memory object caching system. | [Memcached module](beats://docs/reference/metricbeat/metricbeat-module-memcached.md) ({{metricbeat}} docs) |
| `mongodb/metrics` | Periodically fetches metrics from [MongoDB](https://www.mongodb.com/) servers. | [MongoDB module](beats://docs/reference/metricbeat/metricbeat-module-mongodb.md) ({{metricbeat}} docs) |
| `mssql/metrics` | The [Microsoft SQL 2017](https://www.microsoft.com/en-us/sql-server/sql-server-2017) Metricbeat module. It is still under active development to add new Metricsets and introduce enhancements. | [MSSQL module](beats://docs/reference/metricbeat/metricbeat-module-mssql.md) ({{metricbeat}} docs) |
| `mysql/metrics` | Periodically fetches metrics from [MySQL](https://www.mysql.com/) servers. | [MySQL module](beats://docs/reference/metricbeat/metricbeat-module-mysql.md) ({{metricbeat}} docs) |
| `nats/metrics` | Uses the [Nats monitoring server APIs](https://nats.io/documentation/managing_the_server/monitoring/) to collect metrics. | [NATS module](beats://docs/reference/metricbeat/metricbeat-module-nats.md) ({{metricbeat}} docs) |
| `nginx/metrics` | Periodically fetches metrics from [Nginx](https://nginx.org/) servers. | [Nginx module](beats://docs/reference/metricbeat/metricbeat-module-nginx.md) ({{metricbeat}} docs) |
| `oracle/metrics` | The [Oracle](https://www.oracle.com/) module for Metricbeat. It is under active development with feedback from the community. A single Metricset for Tablespace monitoring is added so the community can start gathering metrics from their nodes and contributing to the module. | [Oracle module](beats://docs/reference/metricbeat/metricbeat-module-oracle.md) ({{metricbeat}} docs) |
| `postgresql/metrics` | Periodically fetches metrics from [PostgreSQL](https://www.postgresql.org/) servers. | [PostgreSQL module](beats://docs/reference/metricbeat/metricbeat-module-postgresql.md) ({{metricbeat}} docs) |
| `prometheus/metrics` | Periodically scrapes metrics from [Prometheus exporters](https://prometheus.io/docs/instrumenting/exporters/). | [Prometheus module](beats://docs/reference/metricbeat/metricbeat-module-prometheus.md) ({{metricbeat}} docs) |
| `rabbitmq/metrics` | Uses the [HTTP API](http://www.rabbitmq.com/management.md) created by the management plugin to collect RabbitMQ metrics. | [RabbitMQ module](beats://docs/reference/metricbeat/metricbeat-module-rabbitmq.md) ({{metricbeat}} docs) |
| `redis/metrics` | Periodically fetches metrics from [Redis](http://redis.io/) servers. | [Redis module](beats://docs/reference/metricbeat/metricbeat-module-redis.md) ({{metricbeat}} docs) |
| `sql/metrics` | Allows you to execute custom queries against an SQL database and store the results in {{es}}. | [SQL module](beats://docs/reference/metricbeat/metricbeat-module-sql.md) ({{metricbeat}} docs) |
| `stan/metrics` | Uses [STAN monitoring server APIs](https://github.com/nats-io/nats-streaming-server/blob/master/server/monitor.go) to collect metrics. | [Stan module](beats://docs/reference/metricbeat/metricbeat-module-stan.md) ({{metricbeat}} docs) |
| `statsd/metrics` | Spawns a UDP server and listens for metrics in StatsD compatible format. | [Statsd module](beats://docs/reference/metricbeat/metricbeat-module-statsd.md) ({{metricbeat}} docs) |
| `syncgateway/metrics` | [beta] Monitors a Sync Gateway instance by using its REST API. | [SyncGateway module](beats://docs/reference/metricbeat/metricbeat-module-syncgateway.md) ({{metricbeat}} docs) |
| `system/metrics` | Allows you to monitor your server metrics, including CPU, load, memory, network, processes, sockets, filesystem, fsstat, uptime, and more. | [System module](beats://docs/reference/metricbeat/metricbeat-module-system.md) ({{metricbeat}} docs) |
| `traefik/metrics` | Periodically fetches metrics from a [Traefik](https://traefik.io/) instance. | [Traefik module](beats://docs/reference/metricbeat/metricbeat-module-traefik.md) ({{metricbeat}} docs) |
| `uwsgi/metrics` | By default, collects the uWSGI stats metricset, using [StatsServer](https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.md). | [uWSGI module](beats://docs/reference/metricbeat/metricbeat-module-uwsgi.md) ({{metricbeat}} docs) |
| `vsphere/metrics` | Uses the [Govmomi](https://github.com/vmware/govmomi) library to collect metrics from any VMware SDK URL (ESXi/vCenter). | [vSphere module](beats://docs/reference/metricbeat/metricbeat-module-vsphere.md) ({{metricbeat}} docs) |
| `windows/metrics` | Collects metrics from Windows systems. | [Windows module](beats://docs/reference/metricbeat/metricbeat-module-windows.md) ({{metricbeat}} docs) |
| `zookeeper/metrics` | Fetches statistics from the ZooKeeper service. | [ZooKeeper module](beats://docs/reference/metricbeat/metricbeat-module-zookeeper.md) ({{metricbeat}} docs) |

::::


::::{dropdown} Forward and centralize log data
:name: elastic-agent-inputs-list-filebeat

| Input | Description | Learn more |
| --- | --- | --- |
| `aws-cloudwatch` | Stores log files from Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route53, and other sources. | [AWS CloudWatch input](beats://docs/reference/filebeat/filebeat-input-aws-cloudwatch.md) ({{filebeat}} docs) |
| `aws-s3` | Retrieves logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling a list of S3 objects in an S3 bucket. | [AWS S3 input](beats://docs/reference/filebeat/filebeat-input-aws-s3.md) ({{filebeat}} docs) |
| `azure-blob-storage` | Reads content from files stored in containers which reside on your Azure Cloud. | [Azure Blob Storage](beats://docs/reference/filebeat/filebeat-input-azure-blob-storage.md) ({{filebeat}} docs) |
| `azure-eventhub` | Reads messages from an Azure event hub. | [Azure eventhub input](beats://docs/reference/filebeat/filebeat-input-azure-eventhub.md) ({{filebeat}} docs) |
| `cel` | Reads messages from a file path or HTTP API with a variety of payloads using the [Common Expression Language (CEL)](https://opensource.google.com/projects/cel) and the [mito](https://pkg.go.dev/github.com/elastic/mito/lib) CEL extension libraries. | [Common Expression Language input](beats://docs/reference/filebeat/filebeat-input-cel.md) ({{filebeat}} docs) |
| `cloudfoundry` | Gets HTTP access logs, container logs, and error logs from Cloud Foundry. | [Cloud Foundry input](beats://docs/reference/filebeat/filebeat-input-cloudfoundry.md) ({{filebeat}} docs) |
| `cometd` | Streams the real-time events from a Salesforce generic subscription Push Topic. | [CometD input](beats://docs/reference/filebeat/filebeat-input-cometd.md) ({{filebeat}} docs) |
| `container` | Reads container log files. | [Container input](beats://docs/reference/filebeat/filebeat-input-container.md) ({{filebeat}} docs) |
| `docker` | Alias for `container`. | n/a |
| `log/docker` | Alias for `container`. | n/a |
| n/a |
+| `entity-analytics` | Collects identity assets, such as users, from external identity providers. | [Entity Analytics input](beats://docs/reference/filebeat/filebeat-input-entity-analytics.md) ({{filebeat}} docs) |
+| `event/file` | Alias for `log`. | n/a |
+| `event/tcp` | Alias for `tcp`. | n/a |
+| `filestream` | Reads lines from active log files. Replaces and improves on the `log` input. | [filestream input](beats://docs/reference/filebeat/filebeat-input-filestream.md) ({{filebeat}} docs) |
+| `gcp-pubsub` | Reads messages from a Google Cloud Pub/Sub topic subscription. | [GCP Pub/Sub input](beats://docs/reference/filebeat/filebeat-input-gcp-pubsub.md) ({{filebeat}} docs) |
+| `gcs` | [beta] Reads content from files stored in buckets that reside on your Google Cloud. | [Google Cloud Storage input](beats://docs/reference/filebeat/filebeat-input-gcs.md) ({{filebeat}} docs) |
+| `http_endpoint` | [beta] Initializes a listening HTTP server that collects incoming HTTP POST requests containing a JSON body. | [HTTP Endpoint input](beats://docs/reference/filebeat/filebeat-input-http_endpoint.md) ({{filebeat}} docs) |
+| `httpjson` | Reads messages from an HTTP API with JSON payloads. | [HTTP JSON input](beats://docs/reference/filebeat/filebeat-input-httpjson.md) ({{filebeat}} docs) |
+| `journald` | [beta] A system service that collects and stores logging data. | [Journald input](beats://docs/reference/filebeat/filebeat-input-journald.md) ({{filebeat}} docs) |
+| `kafka` | Reads from topics in a Kafka cluster. | [Kafka input](beats://docs/reference/filebeat/filebeat-input-kafka.md) ({{filebeat}} docs) |
+| `log` | DEPRECATED: Please use the `filestream` input instead. | n/a |
+| `logfile` | Alias for `log`. | n/a |
+| `log/redis_slowlog` | Alias for `redis`. | n/a |
+| `log/syslog` | Alias for `syslog`. | n/a |
+| `mqtt` | Reads data transmitted using a lightweight messaging protocol for small and mobile devices, optimized for high-latency or unreliable networks. | [MQTT input](beats://docs/reference/filebeat/filebeat-input-mqtt.md) ({{filebeat}} docs) |
+| `netflow` | Reads NetFlow and IPFIX exported flows and options records over UDP. | [NetFlow input](beats://docs/reference/filebeat/filebeat-input-netflow.md) ({{filebeat}} docs) |
+| `o365audit` | [beta] Retrieves audit messages from Office 365 and Azure AD activity logs. | [Office 365 Management Activity API input](beats://docs/reference/filebeat/filebeat-input-o365audit.md) ({{filebeat}} docs) |
+| `osquery` | Collects and decodes the result logs written by [osqueryd](https://osquery.readthedocs.io/en/latest/introduction/using-osqueryd/) in the JSON format. | n/a |
+| `redis` | [beta] Reads entries from Redis slowlogs. | [Redis input](beats://docs/reference/filebeat/filebeat-overview.md) ({{filebeat}} docs) |
+| `syslog` | Reads Syslog events as specified by RFC 3164 and RFC 5424, over TCP, UDP, or a Unix stream socket. | [Syslog input](beats://docs/reference/filebeat/filebeat-input-syslog.md) ({{filebeat}} docs) |
+| `tcp` | Reads events over TCP. | [TCP input](beats://docs/reference/filebeat/filebeat-input-tcp.md) ({{filebeat}} docs) |
+| `udp` | Reads events over UDP. | [UDP input](beats://docs/reference/filebeat/filebeat-input-udp.md) ({{filebeat}} docs) |
+| `unix` | [beta] Reads events over a stream-oriented Unix domain socket. 
| [Unix input](beats://docs/reference/filebeat/filebeat-overview.md) ({{filebeat}} docs) |
+| `winlog` | Reads from one or more event logs using Windows APIs, filters the events based on user-configured criteria, then sends the event data to the configured outputs ({{es}} or {{ls}}). | [Winlogbeat Overview](beats://docs/reference/winlogbeat/_winlogbeat_overview.md) ({{winlogbeat}} docs) |
+
+::::
+
+
+::::{dropdown} Monitor the status of your services
+:name: elastic-agent-inputs-list-heartbeat
+
+| Input | Description | Learn more |
+| --- | --- | --- |
+| `synthetics/http` | Connect via HTTP and optionally verify that the host returns the expected response. | [HTTP options](beats://docs/reference/heartbeat/monitor-http-options.md) ({{heartbeat}} docs) |
+| `synthetics/icmp` | Use ICMP (v4 and v6) Echo Requests to check the configured hosts. | [ICMP options](beats://docs/reference/heartbeat/monitor-icmp-options.md) ({{heartbeat}} docs) |
+| `synthetics/tcp` | Connect via TCP and optionally verify the endpoint by sending and/or receiving a custom payload. | [TCP options](beats://docs/reference/heartbeat/monitor-tcp-options.md) ({{heartbeat}} docs) |
+
+::::
+
+
+::::{dropdown} View network traffic between the servers of your network
+:name: elastic-agent-inputs-list-packetbeat
+
+| Input | Description | Learn more |
+| --- | --- | --- |
+| `packet` | Sniffs the traffic between your servers, parses the application-level protocols on the fly, and correlates the messages into transactions. | [Packetbeat overview](beats://docs/reference/packetbeat/packetbeat-overview.md) ({{packetbeat}} docs) |
+
+::::
+
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md b/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md
new file mode 100644
index 0000000000..437d690284
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md
@@ -0,0 +1,37 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-kubernetes-autodiscovery.html
+---
+
+# Kubernetes autodiscovery with Elastic Agent [elastic-agent-kubernetes-autodiscovery]
+
+When you run applications on containers, they become moving targets for the monitoring system. Autodiscover allows you to track them and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running.
+
+To use autodiscover, you will need to modify the manifest file of the {{agent}}. Refer to [Run {{agent}} Standalone on Kubernetes](/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md) to learn how to retrieve and configure it.
+
+There are two different ways to use autodiscover:
+
+* [Conditions based autodiscover](/reference/ingestion-tools/fleet/conditions-based-autodiscover.md)
+* [Hints annotations based autodiscover](/reference/ingestion-tools/fleet/hints-annotations-autodiscovery.md)
+
+
+## How to configure autodiscover [_how_to_configure_autodiscover]
+
+`Conditions Based Autodiscover` is more suitable for scenarios where users know in advance the different groups of containers they want to monitor. It is advisable to choose conditions-based configuration when administrators can configure specific conditions that match their needs. Conditions are supported in both Managed and Standalone {{agent}}. 
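+
+For example, a condition can gate an input on a Pod label so that the input only starts for matching containers. The following is a minimal sketch, not part of any shipped policy: the `redis` label, metricset, and port are illustrative assumptions, and the `${kubernetes.*}` variables are supplied by the Kubernetes provider at runtime.
+
+```yaml
+inputs:
+  - type: redis/metrics
+    # Hypothetical input ID, shown for illustration; IDs must be unique.
+    id: redis-metrics-autodiscover
+    use_output: default
+    data_stream.namespace: default
+    streams:
+      - data_stream:
+          dataset: redis.info
+        metricsets:
+          - info
+        # The Kubernetes provider resolves the Pod IP at runtime.
+        hosts:
+          - '${kubernetes.pod.ip}:6379'
+        # The input only applies to Pods whose `app` label is `redis`.
+        condition: ${kubernetes.labels.app} == 'redis'
+```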
+ 
+`Hints Based Autodiscover` is suitable for more generic scenarios, especially when users don't know the exact configuration of the system to monitor and cannot create conditions in advance. Additionally, a big advantage of hints-based autodiscover is its ability to offer dynamic configuration of inputs based on annotations from Pods and Containers. If dynamic configuration is needed, enable hints. Hints are supported only in Standalone {{agent}} mode.
+
+**Best practices when you configure autodiscover:**
+
+* Always define alternatives and default values for variables that are used in conditions or hint templates. For example, see `auth.basic` set as `auth.basic.user: ${kubernetes.hints.nginx.access.username|kubernetes.hints.nginx.username|''}` in [nginx.yml](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-standalone/templates.d/nginx.yml#L8).
+
+::::{important}
+When an input uses a variable substitution that is not present in the current key/value mappings being evaluated, the input is removed in the result. (See more information in [Variables and conditions in input configurations](/reference/ingestion-tools/fleet/dynamic-input-configuration.md).)
+::::
+
+
+* To debug configurations that include variable substitution and conditions, use the {{agent}} `inspect` command. (See the **Debugging** section of [Variables and conditions in input configurations](/reference/ingestion-tools/fleet/dynamic-input-configuration.md) for more information.)
+* In conditions-based autodiscover, it is advisable to define a generic last condition that acts as your default and is validated when all other conditions fail or don't apply. Where applicable, such a fallback condition can help you identify processing issues and troubleshoot problems.
+
+
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md b/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md
new file mode 100644
index 0000000000..75172019fc
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-monitoring-configuration.md
@@ -0,0 +1,41 @@
+---
+navigation_title: "Monitoring"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-monitoring-configuration.html
+---
+
+# Configure monitoring for standalone {{agent}}s [elastic-agent-monitoring-configuration]
+
+
+{{agent}} monitors {{beats}} by default. To turn off or change monitoring settings, set options under `agent.monitoring` in the `elastic-agent.yml` file.
+
+This example configures {{agent}} monitoring:
+
+```yaml
+agent.monitoring:
+  # enabled turns on monitoring of running processes
+  enabled: true
+  # enables log monitoring
+  logs: true
+  # enables metrics monitoring
+  metrics: true
+  # exposes /debug/pprof/ endpoints for Elastic Agent and Beats
+  # enable these endpoints only if the monitoring endpoint is set to localhost
+  pprof.enabled: false
+  # specifies output to be used
+  use_output: monitoring
+  http:
+    # exposes a /buffer endpoint that holds a history of recent metrics
+    buffer.enabled: false
+```
+
+To turn off monitoring, set `agent.monitoring.enabled` to `false`. When set to `false`, {{beats}} monitoring is turned off, and all other options in this section are ignored.
+
+To enable monitoring, set `agent.monitoring.enabled` to `true`. Also set the `logs` and `metrics` settings to control whether logs, metrics, or both are collected. If neither setting is specified, monitoring is turned off. 
Set `use_output` to specify the output to which monitoring events are sent. + +You can also add the setting `agent.monitoring.http.enabled: true` to expose a `/liveness` endpoint. By default, the endpoint returns a `200` OK status as long as {{agent}}'s internal main loop is responsive and can process configuration changes. It can be configured to also monitor the component states and return an error if anything is degraded or has failed. + +The `agent.monitoring.pprof.enabled` option controls whether the {{agent}} and {{beats}} expose the `/debug/pprof/` endpoints with the monitoring endpoints. It is set to `false` by default. Data produced by these endpoints can be useful for debugging but present a security risk. It is recommended that this option remains `false` if the monitoring endpoint is accessible over a network. + +The `agent.monitoring.http.buffer.enabled` option controls whether the {{agent}} and {{beats}} collect metrics into an in-memory buffer and expose these through a `/buffer` endpoint. It is set to `false` by default. This data can be useful for debugging or if the {{agent}} has issues communicating with {{es}}. Enabling this option may slightly increase process memory usage. + diff --git a/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md b/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md new file mode 100644 index 0000000000..e5128874a6 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md @@ -0,0 +1,42 @@ +--- +navigation_title: "Outputs" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-output-configuration.html +--- + +# Configure outputs for standalone {{agent}}s [elastic-agent-output-configuration] + + +The `outputs` section of the `elastic-agent.yml` file specifies where to send data. You can specify multiple outputs to pair specific inputs with specific outputs. + +This example configures two outputs: `default` and `monitoring`: + +```yaml +outputs: + default: + type: elasticsearch + hosts: [127.0.0.1:9200] + api_key: "my_api_key" + + monitoring: + type: elasticsearch + api_key: VuaCfGcBCdbkQm-e5aOx:ui2lp2axTNmsyakw9tvNnw + hosts: ["localhost:9200"] + ca_sha256: "7lHLiyp4J8m9kw38SJ7SURJP4bXRZv/BNxyyXkCcE/M=" +``` + +::::{note} +A default output configuration is required. 
+ 
+::::
+
+
+{{agent}} currently supports these outputs:
+
+* [{{es}}](/reference/ingestion-tools/fleet/elasticsearch-output.md)
+* [Kafka](/reference/ingestion-tools/fleet/kafka-output.md)
+* [{{ls}}](/reference/ingestion-tools/fleet/logstash-output.md)
+
+
+
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-proxy-config.md b/reference/ingestion-tools/fleet/elastic-agent-proxy-config.md
new file mode 100644
index 0000000000..d0996b784d
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-proxy-config.md
@@ -0,0 +1,24 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-proxy-config.html
+---
+
+# When to configure proxy settings [elastic-agent-proxy-config]
+
+Configure proxy settings for {{agent}} when it must connect through a proxy server to:
+
+* Download artifacts from `artifacts.elastic.co` for subprocesses or binary upgrades (use [Agent binary download settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-agent-binary-download-settings))
+* Send data to {{es}}
+* Retrieve agent policies from {{fleet-server}}
+* Retrieve agent policies from {{es}} (only needed for agents running {{fleet-server}})
+
+:::{image} images/agent-proxy-server.png
+:alt: Image showing connections between {{agent}} and the proxy server
+:::
+
+If {{fleet}} is unable to access the {{package-registry}} because {{kib}} is behind a proxy server, you may also need to set the registry proxy URL in the {{kib}} configuration.
+
+:::{image} images/fleet-epr-proxy.png
+:alt: Image showing connections between {{fleet}} and the {{package-registry}}
+:::
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-reference-yaml.md b/reference/ingestion-tools/fleet/elastic-agent-reference-yaml.md
new file mode 100644
index 0000000000..952e7c4460
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-reference-yaml.md
@@ -0,0 +1,393 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-reference-yaml.html
+---
+
+# Reference YAML [elastic-agent-reference-yaml]
+
+The {{agent}} installation includes an `elastic-agent.reference.yml` file that describes all the settings available in a standalone configuration.
+
+To ensure that you’re accessing the latest version, refer to the original [`elastic-agent.reference.yml` file](https://github.com/elastic/elastic-agent/blob/main/elastic-agent.reference.yml) in the `elastic/elastic-agent` repository. A copy is included here for your convenience.
+
+Each section of the file and available settings are also described in [Structure of a config file](/reference/ingestion-tools/fleet/structure-config-file.md).
+
+```yaml
+## Agent Configuration Example #########################
+
+# This file is an example configuration file highlighting only the most common
+# options. The elastic-agent.reference.yml file from the same directory contains all the
+# supported options with more comments. You can use it as a reference.
+
+######################################
+# Fleet configuration
+######################################
+outputs:
+  default:
+    type: elasticsearch
+    hosts: [127.0.0.1:9200]
+    api_key: "example-key"
+    # username: "elastic"
+    # password: "changeme"
+
+    # Performance preset for elasticsearch outputs. One of "balanced", "throughput",
+    # "scale", "latency" and "custom".
+    # The default if unspecified is "custom".
+    preset: balanced
+
+inputs:
+  - type: system/metrics
+    # Each input must have a unique ID. 
+ id: unique-system-metrics-input + # Namespace name must conform to the naming conventions for Elasticsearch indices, cannot contain dashes (-), and cannot exceed 100 bytes + # For index naming restrictions, see https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create#indices-create-api-path-params + data_stream.namespace: default + use_output: default + streams: + - metricsets: + - cpu + # Dataset name must conform to the naming conventions for Elasticsearch indices, cannot contain dashes (-), and cannot exceed 100 bytes + data_stream.dataset: system.cpu + - metricsets: + - memory + data_stream.dataset: system.memory + - metricsets: + - network + data_stream.dataset: system.network + - metricsets: + - filesystem + data_stream.dataset: system.filesystem + +# # Collecting log files +# - type: filestream +# # Input ID allowing Elastic Agent to track the state of this input. Must be unique. +# id: your-input-id +# streams: +# # Stream ID for this data stream allowing Filebeat to track the state of the ingested files. Must be unique. +# # Each filestream data stream creates a separate instance of the Filebeat filestream input. +# - id: your-filestream-stream-id +# data_stream: +# dataset: generic +# paths: +# - /var/log/*.log + +# management: +# # Mode of management, the Elastic Agent support two modes of operation: +# # +# # local: The Elastic Agent will expect to find the inputs configuration in the local file. +# # +# # Default is local. +# mode: "local" + +# fleet: +# access_api_key: "" +# kibana: +# # kibana minimal configuration +# hosts: ["localhost:5601"] +# ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] + +# # optional values +# #protocol: "https" +# #service_token: "example-token" +# #path: "" +# #ssl.verification_mode: full +# #ssl.supported_protocols: [TLSv1.2, TLSv1.3] +# #ssl.cipher_suites: [] +# #ssl.curve_types: [] +# reporting: +# # Reporting threshold indicates how many events should be kept in-memory before reporting them to fleet. +# #reporting_threshold: 10000 +# # Frequency used to check the queue of events to be sent out to fleet. +# #reporting_check_frequency_sec: 30 + +# agent.download: +# # source of the artifacts, requires elastic like structure and naming of the binaries +# # e.g /windows-x86.zip +# sourceURI: "https://artifacts.elastic.co/downloads/beats/" +# # path to the directory containing downloaded packages +# target_directory: "${path.data}/downloads" +# # timeout for downloading package +# timeout: 120s +# # install_path describes the location of installed packages/programs. It is also used +# # for reading program specifications. +# install_path: "${path.data}/install" +# # retry_sleep_init_duration is the duration to sleep for before the first retry attempt. This +# # duration will increase for subsequent retry attempts in a randomized exponential backoff manner. +# retry_sleep_init_duration: 30s + +# agent.process: +# # timeout for creating new processes. when process is not successfully created by this timeout +# # start operation is considered a failure +# spawn_timeout: 30s +# # timeout for stopping processes. when process is not stopped by this timeout then the process. +# # is force killed +# stop_timeout: 30s + +# agent.grpc: +# # listen address for the GRPC server that spawned processes connect back to. +# address: localhost +# # port for the GRPC server that spawned processes connect back to. 
+# port: 6789 +# # max_message_size limits the message size in agent internal communication +# # default is 100MB +# max_message_size: 104857600 + +# agent.retry: +# # Enabled determines whether retry is possible. Default is false. +# enabled: true +# # RetriesCount specifies number of retries. Default is 3. +# # Retry count of 1 means it will be retried one time after one failure. +# retriesCount: 3 +# # Delay specifies delay in ms between retries. Default is 30s +# delay: 30s +# # MaxDelay specifies maximum delay in ms between retries. Default is 300s +# maxDelay: 5m +# # Exponential determines whether delay is treated as exponential. +# # With 30s delay and 3 retries: 30, 60, 120s +# # Default is false +# exponential: false + +# agent.limits: +# # limits the number of operating system threads that can execute user-level Go code simultaneously. +# # Translates into the GOMAXPROCS runtime parameter for each Go process started by the agent and the agent itself. +# # By default is set to `0` which means using all available CPUs. +# go_max_procs: 0 + +# agent.monitoring: +# # enabled turns on monitoring of running processes +# enabled: false +# # enables log monitoring +# logs: false +# # enables metrics monitoring +# metrics: false +# # metrics_period defines how frequent we should sample monitoring metrics. Default is 60 seconds. +# metrics_period: 60s +# # exposes /debug/pprof/ endpoints +# # recommended that these endpoints are only enabled if the monitoring endpoint is set to localhost +# pprof.enabled: false +# # The name of the output to use for monitoring data. +# use_output: monitoring +# # Exposes agent metrics using http, by default sockets and named pipes are used. +# # +# # `http` Also exposes a /liveness endpoint that will return an HTTP code depending on agent status: +# # 200: Agent is healthy +# # 500: A component or unit is in a failed state +# # 503: The agent coordinator is unresponsive +# # +# # You can pass a `failon` parameter to the /liveness endpoint to determine what component state will result in a 500. +# # For example: `curl 'localhost:6792/liveness?failon=degraded'` will return 500 if a component is in a degraded state. +# # The possible values for `failon` are: +# # `degraded`: return an error if a component is in a degraded state or failed state, or if the agent coordinator is unresponsive. +# # `failed`: return an error if a unit is in a failed state, or if the agent coordinator is unresponsive. +# # `heartbeat`: return an error only if the agent coordinator is unresponsive. +# # If no `failon` parameter is provided, the default behavior is `failon=heartbeat` +# http: +# # enables http endpoint +# enabled: false +# # The HTTP endpoint will bind to this hostname, IP address, unix socket or named pipe. +# # When using IP addresses, it is recommended to only use localhost. +# host: localhost +# # Port on which the HTTP endpoint will bind. Default is 0 meaning feature is disabled. +# port: 6791 +# # Metrics buffer endpoint +# buffer.enabled: false +# # Configuration for the diagnostics action handler +# diagnostics: +# # Rate limit for the action handler. Does not affect diagnostics collected through the CLI. +# limit: +# # Rate limit interval. +# interval: 1m +# # Rate limit burst. +# burst: 1 +# # Configuration for the file-upload client. Client may retry failed requests with an exponential backoff. +# uploader: +# # Max retries allowed when uploading a chunk. +# max_retries: 10 +# # Initial duration of the backoff. 
+# init_dur: 1s +# # Max duration of the backoff. +# max_dur: 1m + +# # Allow fleet to reload its configuration locally on disk. +# # Notes: Only specific process configuration and external input configurations will be reloaded. +# agent.reload: +# # enabled configure the Elastic Agent to reload or not the local configuration. +# # +# # Default is true +# enabled: true + +# # period define how frequent we should look for changes in the configuration. +# period: 10s + +# Feature Flags + +# This section enables or disables feature flags supported by Agent and its components. +#agent.features: +# fqdn: +# enabled: false + +# Logging + +# There are four options for the log output: file, stderr, syslog, eventlog +# The file output is the default. + +# Sets log level. The default log level is info. +# Available log levels are: error, warning, info, debug +#agent.logging.level: info + +# Enable debug output for selected components. To enable all selectors use ["*"] +# Other available selectors are "beat", "publish", "service" +# Multiple selectors can be chained. +#agent.logging.selectors: [ ] + +# Send all logging output to stderr. The default is false. +agent.logging.to_stderr: true + +# Send all logging output to syslog. The default is false. +#agent.logging.to_syslog: false + +# Send all logging output to Windows Event Logs. The default is false. +#agent.logging.to_eventlog: false + +# If enabled, Elastic-Agent periodically logs its internal metrics that have changed +# in the last period. For each metric that changed, the delta from the value at +# the beginning of the period is logged. Also, the total values for +# all non-zero internal metrics are logged on shutdown. This setting is also passed +# to beats running under the agent. The default is true. +#agent.logging.metrics.enabled: true + +# The period after which to log the internal metrics. The default is 30s. +#agent.logging.metrics.period: 30s + +# Logging to rotating files. Set logging.to_files to false to disable logging to +# files. +#agent.logging.to_files: true +#agent.logging.files: + # Configure the path where the logs are written. The default is the logs directory + # under the home path (the binary location). + #path: /var/log/elastic-agent + + # The name of the files where the logs are written to. + #name: elastic-agent + + # Configure log file size limit. If limit is reached, log file will be + # automatically rotated + #rotateeverybytes: 20971520 # = 20MB + + # Number of rotated log files to keep. Oldest files will be deleted first. + #keepfiles: 7 + + # The permissions mask to apply when rotating log files. The default value is 0600. + # Must be a valid Unix-style file permissions mask expressed in octal notation. + #permissions: 0600 + + # Enable log file rotation on time intervals in addition to size-based rotation. + # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h + # are boundary-aligned with minutes, hours, days, weeks, months, and years as + # reported by the local system clock. All other intervals are calculated from the + # Unix epoch. Defaults to disabled. + #interval: 0 + + # Rotate existing logs on startup rather than appending to the existing + # file. Defaults to true. + # rotateonstartup: true + +# Set to true to log messages in JSON format. 
+#agent.logging.json: false + +#=============================== Events Logging =============================== +# Some outputs will log raw events on errors like indexing errors in the +# Elasticsearch output, to prevent logging raw events (that may contain +# sensitive information) together with other log messages, a different +# log file, only for log entries containing raw events, is used. It will +# use the same level, selectors and all other configurations from the +# default logger, but it will have it's own file configuration. +# +# Having a different log file for raw events also prevents event data +# from drowning out the regular log files. +# +# IMPORTANT: No matter the default logger output configuration, raw events +# will **always** be logged to a file configured by `agent.logging.event_data.files`. + +# agent.logging.event_data: +# Logging to rotating files. Set agent.logging.to_files to false to disable logging to +# files. +#agent.logging.event_data.to_files: true +#agent.logging.event_data: + # Configure the path where the logs are written. The default is the logs directory + # under the home path (the binary location). + #path: /var/log/filebeat + + # The name of the files where the logs are written to. + #name: filebeat-event-data + + # Configure log file size limit. If the limit is reached, log file will be + # automatically rotated. + #rotateeverybytes: 5242880 # = 5MB + + # Number of rotated log files to keep. The oldest files will be deleted first. + #keepfiles: 2 + + # The permissions mask to apply when rotating log files. The default value is 0600. + # Must be a valid Unix-style file permissions mask expressed in octal notation. + #permissions: 0600 + + # Enable log file rotation on time intervals in addition to the size-based rotation. + # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h + # are boundary-aligned with minutes, hours, days, weeks, months, and years as + # reported by the local system clock. All other intervals are calculated from the + # Unix epoch. Defaults to disabled. + #interval: 0 + + # Rotate existing logs on startup rather than appending them to the existing + # file. Defaults to false. + # rotateonstartup: false + +# Providers + +# Providers supply the key/values pairs that are used for variable substitution +# and conditionals. Each provider's keys are automatically prefixed with the name +# of the provider. + +# All registered providers are enabled by default. + +# Disable all providers by default and only enable explicitly configured providers. +# agent.providers.initial_default: false + +#providers: + +# Agent provides information about the running agent. +# agent: +# enabled: true + +# Docker provides inventory information from Docker. +# docker: +# enabled: true +# host: "unix:///var/run/docker.sock" +# cleanup_timeout: 60 + +# Env providers information about the running environment. +# env: +# enabled: true + +# Host provides information about the current host. +# host: +# enabled: true + +# Local provides custom keys to use as variable. +# local: +# enabled: true +# vars: +# foo: bar + +# Local dynamic allows you to define multiple key/values to generate multiple configurations. 
+# local_dynamic: +# enabled: true +# items: +# - vars: +# my_var: key1 +# - vars: +# my_var: key2 +# - vars: +# my_var: key3 +``` + diff --git a/reference/ingestion-tools/fleet/elastic-agent-simplified-input-configuration.md b/reference/ingestion-tools/fleet/elastic-agent-simplified-input-configuration.md new file mode 100644 index 0000000000..a715be21d7 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-simplified-input-configuration.md @@ -0,0 +1,24 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-simplified-input-configuration.html +--- + +# Simplified log ingestion [elastic-agent-simplified-input-configuration] + +There is a simplified option for ingesting log files with {{agent}}. The simplest input configuration to ingest the file `/var/log/my-application/log-file.log` is: + +```yaml +inputs: + - type: filestream <1> + id: unique-id-per-input <2> + paths: <3> + - /var/log/my-application/log-file.log +``` + +1. The input type must be `filestream`. +2. A unique ID for the input. +3. An array containing all log file paths. + + +For other custom options to configure the input, refer to the [filestream input](beats://docs/reference/filebeat/filebeat-input-filestream.md) in the {{filebeat}} documentation. + diff --git a/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md b/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md new file mode 100644 index 0000000000..8ea7325291 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md @@ -0,0 +1,53 @@ +--- +navigation_title: "SSL/TLS" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-ssl-configuration.html +--- + +# Configure SSL/TLS for standalone {{agent}}s [elastic-agent-ssl-configuration] + + +There are a number of SSL configuration settings available depending on whether you are configuring a client, server, or both. See the following tables for available settings: + +* [Table 7, Common configuration options](#common-ssl-options). These settings are valid in both client and server configurations. +* [Table 8, Client configuration options](#client-ssl-options) +* [Table 9, Server configuration options](#server-ssl-options) + +::::{tip} +For more information about using certificates, refer to [Secure connections](/reference/ingestion-tools/fleet/secure.md). +:::: + + +$$$common-ssl-options$$$ + +| Setting | Description | +| --- | --- | +| $$$ssl.ca_sha256-common-setting$$$
`ssl.ca_sha256`
| (string) This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain.

The pin is a base64-encoded string of the SHA-256 hash of the certificate.

::::{note}
This check is not a replacement for the normal SSL validation, but it adds additional validation. If this setting is used with `verification_mode` set to `none`, the check will always fail because it will not receive any verified chains.
::::

| +| $$$ssl.cipher_suites-common-setting$$$
`ssl.cipher_suites`
| (list) The list of cipher suites to use. The first entry has the highest priority. If this option is omitted, the Go crypto library’s [default suites](https://golang.org/pkg/crypto/tls/) are used (recommended). Note that TLS 1.3 cipher suites are not individually configurable in Go, so they are not included in this list.

The following cipher suites are available:

* ECDHE-ECDSA-AES-128-CBC-SHA
* ECDHE-ECDSA-AES-128-CBC-SHA256: TLS 1.2 only. Disabled by default.
* ECDHE-ECDSA-AES-128-GCM-SHA256: TLS 1.2 only.
* ECDHE-ECDSA-AES-256-CBC-SHA
* ECDHE-ECDSA-AES-256-GCM-SHA384: TLS 1.2 only.
* ECDHE-ECDSA-CHACHA20-POLY1305: TLS 1.2 only.
* ECDHE-ECDSA-RC4-128-SHA: Disabled by default. RC4 not recommended.
* ECDHE-RSA-3DES-CBC3-SHA
* ECDHE-RSA-AES-128-CBC-SHA
* ECDHE-RSA-AES-128-CBC-SHA256: TLS 1.2 only. Disabled by default.
* ECDHE-RSA-AES-128-GCM-SHA256: TLS 1.2 only.
* ECDHE-RSA-AES-256-CBC-SHA
* ECDHE-RSA-AES-256-GCM-SHA384: TLS 1.2 only.
* ECDHE-RSA-CHACHA20-POLY1305: TLS 1.2 only.
* ECDHE-RSA-RC4-128-SHA: Disabled by default. RC4 not recommended.
* RSA-3DES-CBC3-SHA
* RSA-AES-128-CBC-SHA
* RSA-AES-128-CBC-SHA256: TLS 1.2 only. Disabled by default.
* RSA-AES-128-GCM-SHA256: TLS 1.2 only.
* RSA-AES-256-CBC-SHA
* RSA-AES-256-GCM-SHA384: TLS 1.2 only.
* RSA-RC4-128-SHA: Disabled by default. RC4 not recommended.

Here is a list of acronyms used in defining the cipher suites:

* 3DES: Cipher suites using triple DES
* AES-128/256: Cipher suites using AES with 128/256-bit keys.
* CBC: Cipher suites using Cipher Block Chaining as the block cipher mode.
* ECDHE: Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange.
* ECDSA: Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication.
* GCM: Galois/Counter mode is used for symmetric key cryptography.
* RC4: Cipher suites using RC4.
* RSA: Cipher suites using RSA.
* SHA, SHA256, SHA384: Cipher suites using SHA-1, SHA-256 or SHA-384.
| +| $$$ssl.curve_types-common-setting$$$
`ssl.curve_types`
| (list) The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange).

The following elliptic curve types are available:

* P-256
* P-384
* P-521
* X25519
| +| $$$ssl.enabled-common-setting$$$
`ssl.enabled`
| (boolean) Enables or disables the SSL configuration.

**Default:** `true`

::::{note}
SSL settings are disabled if either `enabled` is set to `false` or the `ssl` section is missing.
::::

| +| $$$ssl.supported_protocols-common-setting$$$
`ssl.supported_protocols`
| (list) List of allowed SSL/TLS versions. If the SSL/TLS server supports none of the specified versions, the connection will be dropped during or after the handshake. The allowed protocol versions are `TLSv1.1`, `TLSv1.2`, and `TLSv1.3`.

**Default:** `[TLSv1.2, TLSv1.3]`
| + +$$$client-ssl-options$$$ + +| Setting | Description | +| --- | --- | +| $$$ssl.certificate-client-setting$$$
`ssl.certificate`
| (string) The path to the certificate for SSL client authentication. This setting is only required if `client_authentication` is specified. If `certificate` is not specified, client authentication is not available, and the connection might fail if the server requests client authentication. If the SSL server does not require client authentication, the certificate will be loaded, but not requested or used by the server.

Example:

```yaml
ssl.certificate: "/path/to/cert.pem"
```

When this setting is configured, the `ssl.key` setting is also required.

Specify a path, or embed a certificate directly in the `YAML` configuration:

```yaml
ssl.certificate: |
-----BEGIN CERTIFICATE-----
CERTIFICATE CONTENT APPEARS HERE
-----END CERTIFICATE-----
```
| +| $$$ssl.certificate_authorities-client-setting$$$
`ssl.certificate_authorities`
| (list) The list of root certificates for verification (required). If `certificate_authorities` is empty or not set, the system keystore is used. If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well.

Example:

```yaml
ssl.certificate_authorities: ["/path/to/root/ca.pem"]
```

Specify a list of files that {{agent}} will read, or embed a certificate directly in the `YAML` configuration:

```yaml
ssl.certificate_authorities:
- |
-----BEGIN CERTIFICATE-----
CERTIFICATE CONTENT APPEARS HERE
-----END CERTIFICATE-----
```
| +| $$$ssl.key-client-setting$$$
`ssl.key`
| (string) The client certificate key used for client authentication. Only required if `client_authentication` is configured.

Example:

```yaml
ssl.key: "/path/to/cert.key"
```

Specify a path, or embed the private key directly in the `YAML` configuration:

```yaml
ssl.key: |
-----BEGIN PRIVATE KEY-----
KEY CONTENT APPEARS HERE
-----END PRIVATE KEY-----
```
| +| $$$ssl.key_passphrase-client-setting$$$
`ssl.key_passphrase`
| (string) The passphrase used to decrypt an encrypted key stored in the configured `key` file.
| +| $$$ssl.verification_mode-client-setting$$$
`ssl.verification_mode`
| (string) Controls the verification of server certificates. Valid values are:

`full`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.

`strict`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.

`certificate`
: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.

`none`
: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.

**Default:** `full`
| +| $$$ssl.ca_trusted_fingerprint$$$
`ssl.ca_trusted_fingerprint`
| (string) The hex-encoded SHA-256 fingerprint of a CA certificate. If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally.

Example:

```yaml
ssl.ca_trusted_fingerprint: 3b24d33844d6553...826
```
| + +$$$server-ssl-options$$$ + +| Setting | Description | +| --- | --- | +| $$$ssl.certificate-server-setting$$$
`ssl.certificate`
| (string) The path to the certificate for SSL server authentication. If the certificate is not specified, startup will fail.

Example:

```yaml
ssl.certificate: "/path/to/server/cert.pem"
```

When this setting is configured, the `key` setting is also required.

Specify a path, or embed a certificate directly in the `YAML` configuration:

```yaml
ssl.certificate: |
-----BEGIN CERTIFICATE-----
CERTIFICATE CONTENT APPEARS HERE
-----END CERTIFICATE-----
```
| +| $$$ssl.certificate_authorities-server-setting$$$
`ssl.certificate_authorities`
| (list) The list of root certificates for client verification. Only required if `client_authentication` is configured. If `certificate_authorities` is empty or not set, and `client_authentication` is configured, the system keystore is used. If `certificate_authorities` is self-signed, the host system needs to trust that CA cert too.

Example:

```yaml
ssl.certificate_authorities: ["/path/to/root/ca.pem"]
```

Specify a list of files that {{agent}} will read, or embed a certificate directly in the `YAML` configuration:

```yaml
ssl.certificate_authorities:
- |
-----BEGIN CERTIFICATE-----
CERTIFICATE CONTENT APPEARS HERE
-----END CERTIFICATE-----
```
| +| $$$ssl.client_authentication-server-setting$$$
`ssl.client_authentication`
| (string) Configures client authentication. The valid options are:

`none`
: Disables client authentication.

`optional`
: When a client certificate is supplied, the server will verify it.

`required`
: Requires clients to provide a valid certificate.

**Default:** `required` (if `certificate_authorities` is set); otherwise, `none`
| +| $$$ssl.key-server-setting$$$
`ssl.key`
| (string) The server certificate key used for authentication (required).

Example:

```yaml
ssl.key: "/path/to/server/cert.key"
```

Specify a path, or embed the private key directly in the `YAML` configuration:

```yaml
ssl.key: |
-----BEGIN PRIVATE KEY-----
KEY CONTENT APPEARS HERE
-----END PRIVATE KEY-----
```
| +| $$$ssl.key_passphrase-server-setting$$$
`ssl.key_passphrase`
| (string) The passphrase used to decrypt an encrypted key stored in the configured `key` file.
| +| $$$ssl.renegotiation-server-setting$$$
`ssl.renegotiation`
| (string) Configures the type of TLS renegotiation to support. The valid options are:

`never`
: Disables renegotiation.

`once`
: Allows a remote server to request renegotiation once per connection.

`freely`
: Allows a remote server to request renegotiation repeatedly.

**Default:** `never`
| +| $$$ssl.verification_mode-server-setting$$$
`ssl.verification_mode`
| (string) Controls the verification of client certificates. Valid values are:

`full`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.

`strict`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.

`certificate`
: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.

`none`
: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.

**Default:** `full`
| + diff --git a/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md b/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md new file mode 100644 index 0000000000..6fb426556c --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md @@ -0,0 +1,21 @@ +--- +navigation_title: "Agent download" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-standalone-download.html +--- + +# Configure download settings for standalone {{agent}} upgrades [elastic-agent-standalone-download] + + +The `agent.download` section of the elastic-agent.yml config file contains settings for where to download and store artifacts used for {{agent}} upgrades. + +$$$elastic-agent-standalone-download-settings$$$ + +| Setting | Description | +| --- | --- | +| $$$agent.download.sourceURI$$$
`sourceURI`
| (string) Path to the location of artifacts used during {{agent}} upgrade.
| +| $$$agent.download.target_directory$$$
`target_directory`
| (string) Path to the directory where download artifacts are stored.
| +| $$$agent.download.timeout$$$
`timeout`
| (string) The HTTP request timeout in seconds for the download package attempt.
| +| $$$agent.download.install_path$$$
`install_path`
| (string) The location of installed packages and programs, as well as program specifications.
| +| $$$agent.download.retry_sleep_init_duration$$$
`retry_sleep_init_duration`
| (string) The duration in seconds to sleep before the first retry attempt.
| 
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-standalone-feature-flags.md b/reference/ingestion-tools/fleet/elastic-agent-standalone-feature-flags.md
new file mode 100644
index 0000000000..c842be0720
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-standalone-feature-flags.md
@@ -0,0 +1,48 @@
+---
+navigation_title: "Feature flags"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-standalone-feature-flags.html
+---
+
+# Configure feature flags for standalone {{agent}}s [elastic-agent-standalone-feature-flags]
+
+
+The Feature Flags section of the elastic-agent.yml config file contains settings in {{agent}} that are disabled by default. These may include experimental features, changes to behaviors within {{agent}} or its components, or settings that could cause a breaking change. For example, a setting that changes information included in events might be inconsistent with the naming pattern expected in your configured {{agent}} output.
+
+To enable any of the settings listed on this page, change the associated `enabled` flag from `false` to `true`.
+
+```yaml
+agent.features:
+  mysetting:
+    enabled: true
+```
+
+
+## Feature flag configuration settings [elastic-agent-standalone-feature-flag-settings]
+
+You can specify the following settings in the Feature Flag section of the `elastic-agent.yml` config file.
+
+Fully qualified domain name (FQDN)
+: When enabled, information provided about the current host through the [host.name](/reference/ingestion-tools/fleet/host-provider.md) key, in events produced by {{agent}}, is in FQDN format (`somehost.example.com` rather than `somehost`). This helps you to distinguish between hosts on different domains that have similar names. With `fqdn` enabled, the fully qualified hostname allows each host to be more easily identified when viewed in {{kib}}.
+
+    ::::{note}
+    FQDN reporting is not currently supported in APM.
+    ::::
+
+
+    For FQDN reporting to work as expected, the hostname of the current host must either:
+
+    * Have a CNAME entry defined in DNS.
+    * Have one of its corresponding IP addresses respond successfully to a reverse DNS lookup.
+
+    If neither prerequisite is satisfied, `host.name` continues to report the hostname of the current host as if the FQDN feature flag were not enabled.
+
+    To enable fully qualified domain names, set `enabled: true` for the `fqdn` setting:
+
+    ```yaml
+    agent.features:
+      fqdn:
+        enabled: true
+    ```
+
+
diff --git a/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md b/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md
new file mode 100644
index 0000000000..958d18d1f2
--- /dev/null
+++ b/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md
@@ -0,0 +1,60 @@
+---
+navigation_title: "Logging"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-standalone-logging-config.html
+---
+
+# Configure logging for standalone {{agent}}s [elastic-agent-standalone-logging-config]
+
+
+The Logging section of the `elastic-agent.yml` config file contains settings for configuring the logging output. The logging system can write logs to `file`, `stderr`, `syslog`, or the Windows `eventlog`, and can rotate log files. If you do not explicitly configure logging, the `stderr` output is used. 
+ 
+This example configures {{agent}} logging:
+
+```yaml
+agent.logging.level: info
+agent.logging.to_files: true
+agent.logging.files:
+  path: /var/log/elastic-agent
+  name: elastic-agent
+  keepfiles: 7
+  permissions: 0600
+```
+
+
+## Logging configuration settings [elastic-agent-standalone-logging-settings]
+
+You can specify the following settings in the Logging section of the `elastic-agent.yml` config file.
+
+Some outputs log raw events when errors occur, such as indexing errors in the {{es}} output. To prevent logging raw events (which may contain sensitive information) together with other log messages, a separate log file is used only for log entries that contain raw events. It uses the same level, selectors, and all other configurations from the default logger, but it has its own file configuration.
+
+Having a different log file for raw events also prevents event data from drowning out the regular log files. Use `agent.logging.event_data` to configure the events logger.
+
+The events log file is not collected by {{agent}} monitoring. If the events log files are needed, they can be collected with diagnostics or copied directly from the host running {{agent}}.
+
+| Setting | Description |
+| --- | --- |
+| `agent.logging.level`
| The minimum log level.

Possible values:

* `error`: Logs errors and critical errors.
* `warning`: Logs warnings, errors, and critical errors.
* `info`: Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.
* `debug`: Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. When the log level is `debug`, you can specify a list of **selectors** to display debug messages for specific components. If no selectors are specified, the `*` selector is used to display debug messages for all components.

Default: `info`
| +| `agent.logging.selectors`
| Specify the selector tags that are used by different {{agent}} components for debugging. To debug the output for all components, use `*`. To display debug messages related to event publishing, set to `publish`. Multiple selectors can be chained.

Possible values: `[beat]`, `[publish]`, `[service]`
| +| `agent.logging.to_stderr`
| Set to `true` to write all logging output to the `stderr` output—this is equivalent to using the `-e` command line option.

Default: `true`
| +| `agent.logging.to_syslog`
| Set to `true` to write all logging output to the `syslog` output.

Default: `false`
| +| `agent.logging.to_eventlog`
| Set to `true` to write all logging output to the Windows `eventlog` output.

Default: `false`
| +| `agent.logging.metrics.enabled`
| Set to `true` for {{agent}} to periodically log its internal metrics that have changed in the last period. For each metric that changed, the delta from the value at the beginning of the period is logged. Also, the total values for all non-zero internal metrics get logged on shutdown. If set to `false`, no metrics for the agent or any of the {{beats}} running under it are logged.

Default: `true`
| +| `agent.logging.metrics.period`
| Specify the period after which to log the internal metrics. This setting is not passed to any {{beats}} running under the {{agent}}.

Default: `30s`
| +| `agent.logging.to_files`
| Set to `true` to log to rotating files. Set to `false` to disable logging to files.

Default: `true`
| +| `agent.logging.files.path`
| The directory that log files are written to.
Log file names end with a date and an optional number: log-date.ndjson, log-date-1.ndjson, and so on, as new files are created during rotation.

macOS: `/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
Linux: `/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
Windows: `C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson`
DEB: `/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
RPM: `/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`| +| `agent.logging.files.name`
| The name of the file that logs are written to.

Default: `elastic-agent`
| +| `agent.logging.files.rotateeverybytes`
| The maximum size limit of a log file. If the limit is reached, a new log file is generated.

Default: `10485760` (10MB)
| +| `agent.logging.files.keepfiles`
| The most recent number of rotated log files to keep on disk. Older files are deleted during log rotation. The value must be in the range of `2` to `1024` files.

Default: `7`
| +| `agent.logging.files.permissions`
| The permissions mask to apply when rotating log files. The permissions option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with 0.

Default: `0600`
| +| `agent.logging.files.interval`
| Enable log file rotation on time intervals in addition to the size-based rotation. Intervals must be at least `1s`. Values of `1m`, `1h`, `24h`, `7*24h`, `30*24h`, and `365*24h` are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals get calculated from the Unix epoch.

Default: `0` (disabled)
| +| `agent.logging.files.rotateonstartup`
| Set to `true` to rotate existing logs on startup rather than to append to the existing file.

Default: `true`
| +| `agent.logging.event_data.to_files`
| Set to `true` to log to rotating files. Set to `false` to disable logging to files.

Default: `true`
| +| `agent.logging.event_data.path`
| The directory that log files are written to.
Log file names end with a date and an optional number: log-date.ndjson, log-date-1.ndjson, and so on, as new files are created during rotation.

macOS: `/Library/Elastic/Agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson`
Linux: `/opt/Elastic/Agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson`
Windows: `C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\events\elastic-agent-event-log*.ndjson`
DEB: `/var/lib/elastic-agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson`
RPM: `/var/lib/elastic-agent/data/elastic-agent-*/logs/events/elastic-agent-event-log*.ndjson` |
+| `agent.logging.event_data.files.name`
| The name of the file that logs are written to.

Default: `elastic-agent-event-data`
| +| `agent.logging.event_data.files.rotateeverybytes`
| The maximum size limit of a log file. If the limit is reached, a new log file is generated.

Default: `5242880` (5MB)
| +| `agent.logging.event_data.files.keepfiles`
| The most recent number of rotated log files to keep on disk. Older files are deleted during log rotation. The value must be in the range of `2` to `1024` files.

Default: `2`
| +| `agent.logging.event_data.files.permissions`
| The permissions mask to apply when rotating log files. The permissions option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with 0.

Default: `0600`
| +| `agent.logging.event_data.files.interval`
| Enable log file rotation on time intervals in addition to the size-based rotation. Intervals must be at least `1s`. Values of `1m`, `1h`, `24h`, `7*24h`, `30*24h`, and `365*24h` are boundary-aligned with minutes, hours, days, weeks, months, and years as reported by the local system clock. All other intervals get calculated from the Unix epoch.

Default: `0` (disabled)
| +| `agent.logging.event_data.files.rotateonstartup`
| Set to `true` to rotate existing logs on startup rather than to append to the existing file.

Default: `false`
| diff --git a/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md b/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md new file mode 100644 index 0000000000..57e3c58194 --- /dev/null +++ b/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md @@ -0,0 +1,208 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elastic-agent-unprivileged.html +--- + +# Run Elastic Agent without administrative privileges [elastic-agent-unprivileged] + +Beginning with {{stack}} version 8.15, {{agent}} is no longer required to be run by a user with superuser privileges. You can now run agents in an `unprivileged` mode that does not require `root` access on Linux or macOS, or `admin` access on Windows. Being able to run agents without full administrative privileges is often a requirement in organizations where this kind of access is often very limited. + +In general, agents running without full administrative privileges will perform and behave exactly as those run by a superuser. There are certain integrations and datastreams that are not available, however. If an integration requires root access, this is [indicated on the integration main page](#unprivileged-integrations). + +You can also [change the privilege mode](#unprivileged-change-mode) of an {{agent}} after it has been installed. + +Refer to [Agent and dashboard behaviors in unprivileged mode](#unprivileged-command-behaviors) and [Run {{agent}} in `unprivileged` mode](#unprivileged-running) for the requirements and steps associated with running an agent without full `root` or `admin` superuser privileges. + +* [Run {{agent}} in `unprivileged` mode](#unprivileged-running) +* [Agent and dashboard behaviors in unprivileged mode](#unprivileged-command-behaviors) +* [Using Elastic integrations](#unprivileged-integrations) +* [Viewing an {{agent}} privilege mode](#unprivileged-view-mode) +* [Changing an {{agent}}'s privilege mode](#unprivileged-change-mode) +* [Using `unprivileged` mode with a pre-existing user and group](#unprivileged-preexisting-user) + + +## Run {{agent}} in `unprivileged` mode [unprivileged-running] + +To run {{agent}} without administrative privileges you use exactly the same commands that you use for {{agent}} otherwise, with one exception. When you run the [`elastic-agent install`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) command, add the `--unprivileged` flag. For example: + +```shell +elastic-agent install \ + --url=https://cedd4e0e21e240b4s2bbbebdf1d6d52f.fleet.eu-west-1.aws.cld.elstc.co:443 \ + --enrollment-token=NEFmVllaa0JLRXhKebVKVTR5TTI6N2JaVlJpSGpScmV0ZUVnZVlRUExFQQ== \ + --unprivileged +``` + +::::{important} +Note the following current restrictions for running {{agent}} in `unprivileged` mode: + +* On Linux systems, after {{agent}} has been installed with the `--unprivileged` flag, all {{agent}} commands can be run without being the root user. + + * The `sudo` option is still required for the `elastic-agent install` command. Only `root` can install new services. The installed service will not run as the root user. + +* Using `sudo` without specifying an alternate non-root user with `sudo -u` in a command may result in [an error](/troubleshoot/ingest/fleet/common-problems.md#agent-sudo-error) due to the agent not having the required privileges. +* Using `sudo -u elastic-agent-user` will run commands as the user running the {{agent}} service and will always work. 
* For files that allow users in the `elastic-agent` group access, using an alternate user that has been added to that group will also work. There are still some commands that are only accessible to the `elastic-agent-user` that runs the service.

  * For example, `elastic-agent inspect` requires you to prefix the command with `sudo -u elastic-agent-user`.

    ```shell
    sudo -u elastic-agent-user elastic-agent inspect
    ```

::::


## Agent and dashboard behaviors in unprivileged mode [unprivileged-command-behaviors]

In addition to the [integrations that are not available](#unprivileged-integrations) when {{agent}} is run in unprivileged mode, certain data streams are also not available. The following tables show, for different operating systems, the impact when the agent does not have full administrative privileges. In most cases the limitations can be mitigated by granting permissions for a user or group to the files indicated.

| Action | Behavior in unprivileged mode | Resolution |
| --- | --- | --- |
| Run {{agent}} with the System integration | Log file error: `Unexpected file opening error: Failed opening /var/log/system.log: open /var/log/system.log: permission denied`. | Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix this error. |
| Run {{agent}} with the System integration | On the `[Logs System] Syslog` dashboard, the `Syslog events by hostname`, `Syslog hostnames and processes` and `Syslog logs` visualizations are missing data. | Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix the missing visualizations. |
| Run {{agent}} with the System integration | On the `[Metrics System] Host overview` dashboard, only the processes run by the `elastic-agent-user` user are shown in the CPU and memory usage lists. | To fix the missing processes in the visualization lists you can add the `elastic-agent-user` user to the system `admin` group. Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |
| Run {{agent}} and access the {{agent}} dashboards | On the `[Elastic Agent] Agents info` dashboard, visualizations including `Most Active Agents` and `Integrations per Agent` are missing data. | To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the system `admin` group. Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |
| Run {{agent}} and access the {{agent}} dashboards | On the `[Elastic Agent] Integrations` dashboard, visualizations including `Integration Errors Table`, `Events per integration` and `Integration Errors` are missing data. | To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the system `admin` group. Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |

| Action | Behavior in unprivileged mode | Resolution |
| --- | --- | --- |
| Run {{agent}} with the System integration | Log file error: `[elastic_agent.filebeat][error] Harvester could not be started on new file: /var/log/auth.log.1, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: Failed opening /var/log/auth.log.1: open /var/log/auth.log.1: permission denied` | To avoid the error you can add the `elastic-agent-user` user to the `adm` group.
Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |
| Run {{agent}} with the System integration | Log file error: `[elastic_agent.metricbeat][error] error getting filesystem usage for /run/user/1000/gvfs: error in Statfs syscall: permission denied` | To avoid the error you can add the `elastic-agent-user` user to the `adm` group. Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |
| Run {{agent}} with the System integration | On the `[Logs System] Syslog` dashboard, the `Syslog events by hostname`, `Syslog hostnames and processes` and `Syslog logs` visualizations are missing data. | To fix the missing data in the visualizations you can add the `elastic-agent-user` user to the `adm` group. Note that while this mitigates the issue, it also grants the `elastic-agent-user` user more permissions than may be desired. |
| Run {{agent}} and access the {{agent}} dashboards | On the `[Elastic Agent] Agents info` dashboard, visualizations including `Most Active Agents` and `Integrations per Agent` are missing data. | Giving read permission to the `elastic-agent` group for the `/var/log/system.log` file will partially fix the visualizations, but errors may still occur because the `elastic-agent-user` does not have read access to files in the `/run/user/1000/` directory. |
| Run {{agent}} and access the {{agent}} dashboards | On the `[Elastic Agent] Integrations` dashboard, visualizations including `Integration Errors Table`, `Events per integration` and `Integration Errors` are missing data. | Give read permission to the `elastic-agent` group for the `/var/log/system.log` file to fix the missing visualizations. |

| Action | Behavior in unprivileged mode | Resolution |
| --- | --- | --- |
| Run {{agent}} with the System integration | Log file error: `failed to open Windows Event Log channel "Security": Access is denied` | Add the `elastic-agent-user` user to the `Event Log Users` group to fix this error. |
| Run {{agent}} with the System integration | Log file error: `cannot open new key in the registry in order to enable the performance counters: Access is denied` | Update the permissions for the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PartMgr` registry to fix this error. |
| Run {{agent}} with the System integration | Most of the System and {{agent}} dashboard visualizations are missing all data. | Add the `elastic-agent-user` user to the `Event Log Users` group and update the permissions for the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PartMgr` registry to fix the missing visualizations.
Note that the `elastic-agent-user` user may still not have access to all processes, so the lists in the `Top processes by CPU usage` and `Top processes by memory usage` visualizations may be incomplete. |
| Run {{agent}} with the System integration | On the `[Metrics System] Host overview` dashboard, the `Disk usage` visualizations are missing data. | This occurs because direct access to the disk or a volume is restricted and not available to users without administrative privileges. Refer to [Running with Special Privileges](https://learn.microsoft.com/en-us/windows/win32/secbp/running-with-special-privileges) in the Microsoft documentation for details. |


## Using Elastic integrations [unprivileged-integrations]

Most Elastic integrations support running {{agent}} in unprivileged mode. For the exceptions, any integration that requires {{agent}} to have root privileges has the requirement indicated at the top of the integration page in {{kib}}:

:::{image} images/integration-root-requirement.png
:alt: Elastic Defend integration page showing root requirement
:class: screenshot
:::

As well, a warning is displayed in {{kib}} if you try to add an integration that requires root privileges to an {{agent}} policy that has agents enrolled in unprivileged mode.

:::{image} images/unprivileged-agent-warning.png
:alt: Warning indicating that root privileged agent is required for an integration
:class: screenshot
:::

Examples of integrations that require {{agent}} to have administrative privileges are:

* [{{elastic-defend}}](integration-docs://docs/reference/endpoint.md)
* [Auditd Manager](integration-docs://docs/reference/auditd_manager.md)
* [File Integrity Monitoring](integration-docs://docs/reference/fim.md)
* [Network Packet Capture](integration-docs://docs/reference/network_traffic.md)
* [System Audit](integration-docs://docs/reference/system_audit.md)
* [Universal Profiling Agent](integration-docs://docs/reference/profiler_agent.md)


## Viewing an {{agent}}'s privilege mode [unprivileged-view-mode]

The **Agent details** page shows you the privilege mode for any running {{agent}}.

To view the status of an {{agent}}:

1. In {{fleet}}, open the **Agents** tab.
2. Select an agent and click **View agent** in the actions menu.
3. The **Agent details** tab shows whether the agent is running in `privileged` or `unprivileged` mode.

    :::{image} images/agent-privilege-mode.png
    :alt: Agent details tab showing the agent is running as non-root
    :class: screenshot
    :::

As well, for any {{agent}} policy you can view the number of agents that are currently running in privileged or unprivileged mode:

1. In {{fleet}}, open the **Agent policies** tab.
2. Click the agent policy to view the policy details.

The number of agents enrolled with the policy is shown. Hover over the link to view the number of privileged and unprivileged agents.

:::{image} images/privileged-and-unprivileged-agents.png
:alt: Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents
:class: screenshot
:::

In the event that the {{agent}} policy has integrations installed that require root privileges, but there are agents running without root privileges, this is shown in the tooltip.
:::{image} images/root-integration-and-unprivileged-agents.png
:alt: Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents
:class: screenshot
:::


## Changing an {{agent}}'s privilege mode [unprivileged-change-mode]

For any installed {{agent}} you can change the mode that it runs in by using the `privileged` or `unprivileged` subcommand.

Change mode from privileged to unprivileged:

```shell
sudo elastic-agent unprivileged
```

Note that changing to `unprivileged` mode is prevented if the agent is currently enrolled in a policy that includes an integration that requires administrative access, such as the {{elastic-defend}} integration.

Change mode from unprivileged to privileged:

```shell
sudo elastic-agent privileged
```

If an agent running in `unprivileged` mode doesn't have the right level of privilege to read a data source, you can also adjust the agent's privileges by adding `elastic-agent-user` to the user group that has privileges to read that data source.

As background, when you run {{agent}} in `unprivileged` mode, one user and one group are created on the host. The same names are used for all operating systems:

* `elastic-agent-user`: The user that is created and that the {{agent}} service runs as.
* `elastic-agent`: The group that is created. Any user in this group has access to control and communicate over the control protocol to the {{agent}} daemon.

For example:

1. When you install {{agent}} with the `--unprivileged` setting, the `elastic-agent-user` user and the `elastic-agent` group are created automatically.
2. If you then want your user `myuser` to be able to run an {{agent}} command such as `elastic-agent status`, add the `myuser` user to the `elastic-agent` group.
3. Then, once added to the group, the `elastic-agent status` command will work. Before that, running the command as `myuser` results in a permission error that indicates a problem communicating with the control socket.


## Using `unprivileged` mode with a pre-existing user and group [unprivileged-preexisting-user]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::

In certain cases you may want to install {{agent}} in `unprivileged` mode, with the agent running as a pre-existing user or as part of a pre-existing group. For example, on a Windows system you may have a service account in Active Directory and you'd like {{agent}} to run under that account.

To install {{agent}} in `unprivileged` mode as a specific user, add the `--user` and `--password` parameters to the install command:

```shell
elastic-agent install --unprivileged --user="my.path\username" --password="mypassword"
```

To install {{agent}} in `unprivileged` mode as part of a specific group, add the `--group` and `--password` parameters to the install command:

```shell
elastic-agent install --unprivileged --group="my.path\groupname" --password="mypassword"
```

Alternatively, if you have {{agent}} already installed with administrative privileges, you can change the agent to use `unprivileged` mode and to run as a specific user or in a specific group.
For example: + +```shell +elastic-agent unprivileged --user="my.path\username" --password="mypassword" +``` + +```shell +elastic-agent unprivileged --group="my.path\groupname" --password="mypassword" +``` diff --git a/reference/ingestion-tools/fleet/elasticsearch-output.md b/reference/ingestion-tools/fleet/elasticsearch-output.md new file mode 100644 index 0000000000..cd4806a015 --- /dev/null +++ b/reference/ingestion-tools/fleet/elasticsearch-output.md @@ -0,0 +1,241 @@ +--- +navigation_title: "{{es}}" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/elasticsearch-output.html +--- + +# Configure the {{es}} output [elasticsearch-output] + + +The {{es}} output sends events directly to {{es}} by using the {{es}} HTTP API. + +**Compatibility:** This output works with all compatible versions of {{es}}. See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_compatibility). + +This example configures an {{es}} output called `default` in the `elastic-agent.yml` file: + +```yaml +outputs: + default: + type: elasticsearch + hosts: [127.0.0.1:9200] + username: elastic + password: changeme +``` + +This example is similar to the previous one, except that it uses the recommended [token-based (API key) authentication](#output-elasticsearch-apikey-authentication-settings): + +```yaml +outputs: + default: + type: elasticsearch + hosts: [127.0.0.1:9200] + api_key: "my_api_key" +``` + +::::{note} +Token-based authentication is required in an [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment. +:::: + + +## {{es}} output configuration settings [_es_output_configuration_settings] + +The `elasticsearch` output type supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run {{agent}} with minimal configuration. + +* [Commonly used settings](#output-elasticsearch-commonly-used-settings) +* [Authentication settings](#output-elasticsearch-authentication-settings) +* [Compatibility setting](#output-elasticsearch-compatibility-setting) +* [Data parsing, filtering, and manipulation settings](#output-elasticsearch-data-parsing-settings) +* [HTTP settings](#output-elasticsearch-http-settings) +* [Memory queue settings](#output-elasticsearch-memory-queue-settings) +* [Performance tuning settings](#output-elasticsearch-performance-tuning-settings) + + +## Commonly used settings [output-elasticsearch-commonly-used-settings] + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-enabled-setting$$$
`enabled`
| (boolean) Enables or disables the output. If set to `false`, the output is disabled.

**Default:** `true`
| +| $$$output-elasticsearch-hosts-setting$$$
`hosts`
| (list) The list of {{es}} nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each {{es}} node can be defined as a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. If no port is specified, `9200` is used.

::::{note}
When a node is defined as an `IP:PORT`, the *scheme* and *path* are taken from the `protocol` and `path` settings.
::::


```yaml
outputs:
default:
type: elasticsearch
hosts: ["10.45.3.2:9220", "10.45.3.1:9230"] <1>
protocol: https
path: /elasticsearch
```

1. In this example, the {{es}} nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`.


Note that {{es}} nodes in the [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment are exposed on port 443.
| +| $$$output-elasticsearch-protocol-setting$$$
`protocol`
| (string) The name of the protocol {{es}} is reachable on. The options are: `http` or `https`. The default is `http`. However, if you specify a URL for `hosts`, the value of `protocol` is overridden by whatever scheme you specify in the URL.
| +| $$$output-elasticsearch-proxy_disable-setting$$$
`proxy_disable`
| (boolean) If set to `true`, all proxy settings, including `HTTP_PROXY` and `HTTPS_PROXY` variables, are ignored.

**Default:** `false`
| +| $$$output-elasticsearch-proxy_headers-setting$$$
`proxy_headers`
| (string) Additional headers to send to proxies during CONNECT requests.
| +| $$$output-elasticsearch-proxy_url-setting$$$
`proxy_url`
| (string) The URL of the proxy to use when connecting to the {{es}} servers. The value may be either a complete URL or a `host[:port]`, in which case the `http` scheme is assumed. If a value is not specified through the configuration file, then proxy environment variables are used. See the [Go documentation](https://golang.org/pkg/net/http/#ProxyFromEnvironment) for more information about the environment variables.
|
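For reference, here is a minimal sketch that combines the proxy settings above (the proxy address and header value are illustrative, not defaults):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    proxy_url: http://proxy.internal:3128            # illustrative proxy host and port
    proxy_headers:
      Proxy-Authorization: "Basic aGVsbG86d29ybGQ="  # illustrative CONNECT header
```
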
## Authentication settings [output-elasticsearch-authentication-settings]

When sending data to a secured cluster through the `elasticsearch` output, {{agent}} can use any of the following authentication methods:

* [Basic authentication credentials](#output-elasticsearch-basic-authentication-settings)
* [Token-based (API key) authentication](#output-elasticsearch-apikey-authentication-settings)
* [Public Key Infrastructure (PKI) certificates](#output-elasticsearch-pki-certs-authentication-settings)
* [Kerberos](#output-elasticsearch-kerberos-authentication-settings)

### Basic authentication credentials [output-elasticsearch-basic-authentication-settings]

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    username: "your-username"
    password: "your-password"
```

| Setting | Description |
| --- | --- |
| $$$output-elasticsearch-password-setting$$$
`password`
| (string) The basic authentication password for connecting to {{es}}.
| +| $$$output-elasticsearch-username-setting$$$
`username`
| (string) The basic authentication username for connecting to {{es}}.

This user needs the privileges required to publish events to {{es}}.

Note that in an [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment you need to use [token-based (API key) authentication](#output-elasticsearch-apikey-authentication-settings).
| + + +### Token-based (API key) authentication [output-elasticsearch-apikey-authentication-settings] + +```yaml +outputs: + default: + type: elasticsearch + hosts: ["https://myEShost:9200"] + api_key: "KnR6yE41RrSowb0kQ0HWoA" +``` + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-api_key-setting$$$
`api_key`
| (string) Instead of using a username and password, you can use [API keys](/deploy-manage/api-keys/elasticsearch-api-keys.md) to secure communication with {{es}}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. Token-based authentication is required in an [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment.
| + + +### Public Key Infrastructure (PKI) certificates [output-elasticsearch-pki-certs-authentication-settings] + +```yaml +outputs: + default: + type: elasticsearch + hosts: ["https://myEShost:9200"] + ssl.certificate: "/etc/pki/client/cert.pem" + ssl.key: "/etc/pki/client/cert.key" +``` + +For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). + + +### Kerberos [output-elasticsearch-kerberos-authentication-settings] + +The following encryption types are supported: + +* aes128-cts-hmac-sha1-96 +* aes128-cts-hmac-sha256-128 +* aes256-cts-hmac-sha1-96 +* aes256-cts-hmac-sha384-192 +* des3-cbc-sha1-kd +* rc4-hmac + +Example output config with Kerberos password-based authentication: + +```yaml +outputs: + default: + type: elasticsearch + hosts: ["http://my-elasticsearch.elastic.co:9200"] + kerberos.auth_type: password + kerberos.username: "elastic" + kerberos.password: "changeme" + kerberos.config_path: "/etc/krb5.conf" + kerberos.realm: "ELASTIC.CO" +``` + +The service principal name for the {{es}} instance is constructed from these options. Based on this configuration, the name would be: + +`HTTP/my-elasticsearch.elastic.co@ELASTIC.CO` + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-kerberos.auth_type-setting$$$
`kerberos.auth_type`
| (string) The type of authentication to use with Kerberos KDC:

`password`
: When specified, also set `kerberos.username` and `kerberos.password`.

`keytab`
: When specified, also set `kerberos.username` and `kerberos.keytab`. The keytab must contain the keys of the selected principal, or authentication fails.

**Default:** `password`
| +| $$$output-elasticsearch-kerberos.config_path$$$
`kerberos.config_path`
| (string) Path to the `krb5.conf`. {{agent}} uses this setting to find the Kerberos KDC to retrieve a ticket.
| +| $$$output-elasticsearch-kerberos.enabled-setting$$$
`kerberos.enabled`
| (boolean) Enables or disables the Kerberos configuration.

::::{note}
Kerberos settings are disabled if either `enabled` is set to `false` or the `kerberos` section is missing.
::::

| +| $$$output-elasticsearch-kerberos.enable_krb5_fast$$$
`kerberos.enable_krb5_fast`
| (boolean) If `true`, enables Kerberos FAST authentication. This may conflict with some Active Directory installations.

**Default:** `false`
| +| $$$output-elasticsearch-kerberos.keytab$$$
`kerberos.keytab`
| (string) If `kerberos.auth_type` is `keytab`, provide the path to the keytab of the selected principal.
| +| $$$output-elasticsearch-kerberos.password$$$
`kerberos.password`
| (string) If `kerberos.auth_type` is `password`, provide a password for the selected principal.
| +| $$$output-elasticsearch-kerberos.realm$$$
`kerberos.realm`
| (string) Name of the realm where the output resides.
| +| $$$output-elasticsearch-kerberos.username$$$
`kerberos.username`
| (string) Name of the principal used to connect to the output.
| + + +### Compatibility setting [output-elasticsearch-compatibility-setting] + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-allow_older_versions-setting$$$
`allow_older_versions`
| Allow {{agent}} to connect and send output to an {{es}} instance that is running an earlier version than the agent version.

Note that this setting does not affect {{agent}}'s ability to connect to {{fleet-server}}. {{fleet-server}} will not accept a connection from an agent at a later major or minor version. It will accept a connection from an agent at a later patch version. For example, an {{agent}} at version 8.14.3 can connect to a {{fleet-server}} on version 8.14.0, but an agent at version 8.15.0 or later is not able to connect.

**Default:** `true`
| + + +### Data parsing, filtering, and manipulation settings [output-elasticsearch-data-parsing-settings] + +Settings used to parse, filter, and transform data. + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-escape_html-setting$$$
`escape_html`
| (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping.

**Default:** `false`
| +| $$$output-elasticsearch-pipeline-setting$$$
`pipeline`
| (string) A format string value that specifies the [ingest pipeline](/manage-data/ingest/transform-enrich/ingest-pipelines.md) to write events to.

```yaml
outputs:
default:
type: elasticsearch
hosts: ["http://localhost:9200"]
pipeline: my_pipeline_id
```

You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_type`, to set the pipeline for each event:

```yaml
outputs:
default:
type: elasticsearch
hosts: ["http://localhost:9200"]
pipeline: "%{[fields.log_type]}_pipeline"
```

With this configuration, all events with `log_type: normal` are sent to a pipeline named `normal_pipeline`, and all events with `log_type: critical` are sent to a pipeline named `critical_pipeline`.

::::{tip}
To learn how to add custom fields to events, see the `fields` option.
::::


See the `pipelines` setting for other ways to set the ingest pipeline dynamically.
| +| $$$output-elasticsearch-pipelines-setting$$$
`pipelines`
| An array of pipeline selector rules. Each rule specifies the [ingest pipeline](/manage-data/ingest/transform-enrich/ingest-pipelines.md) to use for events that match the rule. During publishing, {{agent}} uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `pipelines` setting is missing or no rule matches, the `pipeline` setting is used.

Rule settings:

**`pipeline`**
: The pipeline format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails.

**`mappings`**
: A dictionary that takes the value returned by `pipeline` and maps it to a new name.

**`default`**
: The default string value to use if `mappings` does not find a match.

**`when`**
: A condition that must succeed in order to execute the current rule.

All the conditions supported by processors are also supported here.

The following example sends events to a specific pipeline based on whether the `message` field contains the specified string:

```yaml
outputs:
default:
type: elasticsearch
hosts: ["http://localhost:9200"]
pipelines:
- pipeline: "warning_pipeline"
when.contains:
message: "WARN"
- pipeline: "error_pipeline"
when.contains:
message: "ERR"
```

The following example sets the pipeline by taking the name returned by the `pipeline` format string and mapping it to a new name that’s used for the pipeline:

```yaml
outputs:
default:
type: elasticsearch
hosts: ["http://localhost:9200"]
pipelines:
- pipeline: "%{[fields.log_type]}"
mappings:
critical: "sev1_pipeline"
normal: "sev2_pipeline"
default: "sev3_pipeline"
```

With this configuration, all events with `log_type: critical` are sent to `sev1_pipeline`, all events with `log_type: normal` are sent to `sev2_pipeline`, and all other events are sent to `sev3_pipeline`.
| + + + +## HTTP settings [output-elasticsearch-http-settings] + +Settings that modify the HTTP requests sent to {{es}}. + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-headers-setting$$$
`headers`
| Custom HTTP headers to add to each request created by the {{es}} output.

Example:

```yaml
outputs:
default:
type: elasticsearch
headers:
X-My-Header: Header contents
```

Specify multiple header values for the same header name by separating them with a comma.
| +| $$$output-elasticsearch-parameters-setting$$$
`parameters`
| Dictionary of HTTP parameters to include in the URL query string of index operations.
| +| $$$output-elasticsearch-path-setting$$$
`path`
| (string) An HTTP path prefix that is prepended to the HTTP API calls. This is useful for cases where {{es}} listens behind an HTTP reverse proxy that exports the API under a custom prefix.
|
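As an illustration, a minimal sketch that combines the `path` and `parameters` settings (the prefix and parameter are examples only):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    path: /elasticsearch        # ES API exposed behind a reverse proxy under this prefix
    parameters:
      pipeline: my_pipeline_id  # appended to the URL as a query parameter on index operations
```
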
## Memory queue settings [output-elasticsearch-memory-queue-settings]

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`.

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity.

In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example 5s.

This sample configuration forwards events to the output when there are enough events to fill the output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size:

```yaml
  queue.mem.events: 4096
  queue.mem.flush.min_events: 512
  queue.mem.flush.timeout: 5s
```

| Setting | Description |
| --- | --- |
| $$$output-elasticsearch-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-elasticsearch-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-elasticsearch-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.

**Default:** `10s`
|

## Performance tuning settings [output-elasticsearch-performance-tuning-settings]

Settings that may affect performance when sending data through the {{es}} output.

Use the `preset` option to automatically configure the group of performance tuning settings to optimize for `throughput`, `scale`, or `latency`, or select a `balanced` set of performance specifications.

The performance tuning `preset` values take precedence over any settings that may be defined separately. If you want to change any setting, set `preset` to `custom` and specify the performance tuning settings individually.
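For example, a minimal sketch that selects a preset for this output (the host is illustrative):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    preset: throughput  # one of: balanced, throughput, scale, latency, custom
```
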
| Setting | Description |
| --- | --- |
| $$$output-elasticsearch-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to {{es}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$output-elasticsearch-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to {{es}} after a network error.

**Default:** `60s`
| +| $$$output-elasticsearch-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single {{es}} bulk API index request.

Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting `bulk_max_size` to values less than or equal to 0 turns off the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

**Default:** `1600`
| +| $$$output-elasticsearch-compression_level-setting$$$
`compression_level`
| (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces network usage but increases CPU usage.

**Default:** `1`
| +| $$$output-elasticsearch-max_retries-setting$$$
`max_retries`
| (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

**Default:** `3`
| +| $$$output-elasticsearch-preset-setting$$$
`preset`
| Configures the full group of [performance tuning settings](#output-elasticsearch-performance-tuning-settings) to optimize your {{agent}} performance when sending data to an {{es}} output.

Refer to [Performance tuning settings](/reference/ingestion-tools/fleet/es-output-settings.md#es-output-settings-performance-tuning-settings) for a table showing the group of values associated with any preset, and another table showing EPS (events per second) results from testing the different preset options.

Performance tuning preset settings:

**`balanced`**
: Configure the default tuning setting values for "out-of-the-box" performance.

**`throughput`**
: Optimize the {{es}} output for throughput.

**`scale`**
: Optimize the {{es}} output for scale.

**`latency`**
: Optimize the {{es}} output to reduce latency.

**`custom`**
: Use the `custom` option to fine-tune the performance tuning settings individually.

**Default:** `balanced`
| +| $$$output-elasticsearch-timeout-setting$$$
`timeout`
| (string) The HTTP request timeout in seconds for the {{es}} request.

**Default:** `90s`
| +| $$$output-elasticsearch-worker-setting$$$
`worker`
| (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

**Default:** `1`
|

diff --git a/reference/ingestion-tools/fleet/enable-custom-policy-settings.md b/reference/ingestion-tools/fleet/enable-custom-policy-settings.md new file mode 100644 index 0000000000..46415404fc --- /dev/null +++ b/reference/ingestion-tools/fleet/enable-custom-policy-settings.md @@ -0,0 +1,36 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/enable-custom-policy-settings.html
---

# Enable custom settings in an agent policy [enable-custom-policy-settings]

In certain cases it can be useful to enable custom settings that are not available in {{fleet}}, and that override the default behavior for {{agent}}.

::::{warning}
Use these custom settings with caution as they are intended for special cases. We do not test all possible combinations of settings that are passed down to the components of {{agent}}, so it is possible that certain custom configurations can result in breakages.
::::

* [Configure the agent download timeout](#configure-agent-download-timeout)

## Configure the agent download timeout [configure-agent-download-timeout]

You can configure the amount of time that {{agent}} waits for an upgrade package download to complete. This is useful in the case of a slow or intermittent network connection.

```shell
PUT kbn:/api/fleet/agent_policies/
{
  "name": "Test policy",
  "namespace": "default",
  "overrides": {
    "agent": {
      "download": {
        "timeout": "120s"
      }
    }
  }
}
```

diff --git a/reference/ingestion-tools/fleet/env-provider.md b/reference/ingestion-tools/fleet/env-provider.md new file mode 100644 index 0000000000..d8a46c29cc --- /dev/null +++ b/reference/ingestion-tools/fleet/env-provider.md @@ -0,0 +1,17 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/env-provider.html
---

# Env Provider [env-provider]

Provides access to the environment variables as key-value pairs.

For example, set the variable `foo`:

```shell
foo=bar elastic-agent run
```

The environment variable can be referenced as `${env.foo}`.
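For instance, a sketch of how the reference can be used for variable substitution in a standalone {{agent}} policy (the input and path are illustrative):

```yaml
inputs:
  - type: filestream
    id: env-example                # illustrative input ID
    paths:
      - /var/log/${env.foo}/*.log  # resolves to /var/log/bar/*.log when foo=bar
```
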
diff --git a/reference/ingestion-tools/fleet/epr-proxy-setting.md b/reference/ingestion-tools/fleet/epr-proxy-setting.md new file mode 100644 index 0000000000..b28a9e80ca --- /dev/null +++ b/reference/ingestion-tools/fleet/epr-proxy-setting.md @@ -0,0 +1,20 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/epr-proxy-setting.html
---

# Set the proxy URL of the Elastic Package Registry [epr-proxy-setting]

{{fleet}} might be unable to access the {{package-registry}} because {{kib}} is behind a proxy server.

Also, your organization might have network traffic restrictions that prevent {{kib}} from reaching the public {{package-registry}} (EPR) endpoints, like [epr.elastic.co](https://epr.elastic.co/), to download package metadata and content. You can route traffic to the public endpoint of EPR through a network gateway, then configure proxy settings in the [{{kib}} configuration file](kibana://docs/reference/configuration-reference/fleet-settings.md), `kibana.yml`. For example:

```yaml
xpack.fleet.registryProxyUrl: your-nat-gateway.corp.net
```

## What information is sent to the {{package-registry}}? [_what_information_is_sent_to_the_package_registry]

In production environments, {{kib}}, through the {{fleet}} plugin, is the only service interacting with the {{package-registry}}. Communication happens when interacting with the Integrations UI, and when upgrading {{kib}}. The shared information is about discovery of Elastic packages and their available versions. In general, the only deployment-specific data that is shared is the {{kib}} version.

diff --git a/reference/ingestion-tools/fleet/es-output-settings.md b/reference/ingestion-tools/fleet/es-output-settings.md new file mode 100644 index 0000000000..c14ef87b8f --- /dev/null +++ b/reference/ingestion-tools/fleet/es-output-settings.md @@ -0,0 +1,73 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/es-output-settings.html
---

# Elasticsearch output settings [es-output-settings]

Specify these settings to send data over a secure connection to {{es}}. In the {{fleet}} [Output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings), make sure that the {{es}} output type is selected.

| | |
| --- | --- |
| $$$es-output-hosts-setting$$$
**Hosts**
| The {{es}} URLs where {{agent}}s will send data. By default, {{es}} is exposed on the following ports:

`9200`
: Default {{es}} port for self-managed clusters

`443`
: Default {{es}} port for {{ecloud}}

**Examples:**

* `https://192.0.2.0:9200`
* `https://1d7a52f5eb344de18ea04411fe09e564.fleet.eu-west-1.aws.qa.cld.elstc.co:443`
* `https://[2001:db8::1]:9200`

Refer to the [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) documentation for default ports and other configuration details.
| +| $$$es-trusted-fingerprint-yaml-setting$$$
**{{es}} CA trusted fingerprint**
| HEX encoded SHA-256 of a CA certificate. If this certificate is present in the chain during the handshake, it will be added to the `certificate_authorities` list and the handshake will continue normally. To learn more about trusted fingerprints, refer to the [{{es}} security documentation](/deploy-manage/deploy/self-managed/installing-elasticsearch.md).
| +| $$$es-agent-proxy-output$$$
**Proxy**
| Select a proxy URL for {{agent}} to connect to {{es}}. To learn about proxy configuration, refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md).
| +| $$$es-output-advanced-yaml-setting$$$
**Advanced YAML configuration**
| YAML settings that will be added to the {{es}} output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation.

See [Advanced YAML configuration](#es-output-settings-yaml-config) for descriptions of the available settings.
| +| $$$es-agent-integrations-output$$$
**Make this output the default for agent integrations**
| When this setting is on, {{agent}}s use this output to send data if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).
| +| $$$es-agent-monitoring-output$$$
**Make this output the default for agent monitoring**
| When this setting is on, {{agent}}s use this output to send [agent monitoring data](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).
| +| $$$es-agent-performance-tuning$$$
**Performance tuning**
| Choose one of the menu options to tune your {{agent}} performance when sending data to an {{es}} output. You can optimize for throughput, scale, latency, or you can choose a balanced (the default) set of performance specifications. Refer to [Performance tuning settings](#es-output-settings-performance-tuning-settings) for details about the setting values and their potential impact on performance.

You can also use the [Advanced YAML configuration](#es-output-settings-yaml-config) field to set custom values. Note that if you adjust any of the performance settings described in the following **Advanced YAML configuration** section, the **Performance tuning** option automatically changes to `Custom` and cannot be changed.

Performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, you need to use the `Custom` **Performance tuning** option and specify the settings in the **Advanced YAML configuration** field.

For example, if you would like to use the balanced preset values except that you prefer a higher compression level, you can do so as follows:

1. In {{fleet}}, open the **Settings** tab.
2. In the **Outputs** section, select **Add output** to create a new output, or select the edit icon to edit an existing output.
3. In the **Add new output** or the **Edit output** flyout, set **Performance tuning** to `Custom`.
4. Refer to the list of [performance tuning preset values](#es-output-settings-performance-tuning-settings), and add the settings you prefer into the **Advanced YAML configuration** field. For the `balanced` presets, the YAML configuration would be as shown:

```yaml
bulk_max_size: 1600
worker: 1
queue.mem.events: 3200
queue.mem.flush.min_events: 1600
queue.mem.flush.timeout: 10s
compression_level: 1
idle_connection_timeout: 3s
```

5. Adjust any settings as preferred. For example, you can update the `compression_level` setting to `4`.

When you create an {{agent}} policy using this output, the output will use the balanced preset options except with the higher compression level, as specified.
| + +## Advanced YAML configuration [es-output-settings-yaml-config] + +| Setting | Description | +| --- | --- | +| $$$output-elasticsearch-fleet-settings-allow_older_versions-setting$$$
`allow_older_versions`
| Allow {{agent}} to connect and send output to an {{es}} instance that is running an earlier version than the agent version.

Note that this setting does not affect {{agent}}'s ability to connect to {{fleet-server}}. {{fleet-server}} will not accept a connection from an agent at a later major or minor version. It will accept a connection from an agent at a later patch version. For example, an {{agent}} at version 8.14.3 can connect to a {{fleet-server}} on version 8.14.0, but an agent at version 8.15.0 or later is not able to connect.

**Default:** `true`
| +| $$$output-elasticsearch-fleet-settings-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to {{es}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$output-elasticsearch-fleet-settings-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to {{es}} after a network error.

**Default:** `60s`
| +| $$$output-elasticsearch-fleet-settings-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single {{es}} bulk API index request.

Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting `bulk_max_size` to values less than or equal to 0 turns off the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

**Default:** `1600`
| +| $$$output-elasticsearch-fleet-settings-compression_level-setting$$$
`compression_level`
| (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces network usage but increases CPU usage.
| +| $$$output-elasticsearch-fleet-settings-max_retries-setting$$$
`max_retries`
| (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

**Default:** `3`
| +| $$$output-elasticsearch-fleet-settings-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-elasticsearch-fleet-settings-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-elasticsearch-fleet-settings-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.

**Default:** `10s`
| +| $$$output-elasticsearch-fleet-settings-timeout-setting$$$
`timeout`
| (string) The HTTP request timeout in seconds for the {{es}} request.

**Default:** `90s`
| +| $$$output-elasticsearch-fleet-settings-worker-setting$$$
`worker`
| (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

**Default:** `1`
|

## Performance tuning settings [es-output-settings-performance-tuning-settings]

| Configuration | Balanced | Optimized for Throughput | Optimized for Scale | Optimized for Latency |
| --- | --- | --- | --- | --- |
| `bulk_max_size` | 1600 | 1600 | 1600 | 50 |
| `worker` | 1 | 4 | 1 | 1 |
| `queue.mem.events` | 3200 | 12800 | 3200 | 4100 |
| `queue.mem.flush.min_events` | 1600 | 1600 | 1600 | 2050 |
| `queue.mem.flush.timeout` | 10 | 5 | 20 | 1 |
| `compression_level` | 1 | 1 | 1 | 1 |
| `idle_connection_timeout` | 3 | 15 | 1 | 60 |

For descriptions of each setting, refer to [Advanced YAML configuration](#es-output-settings-yaml-config). For the `queue.mem.events`, `queue.mem.flush.min_events` and `queue.mem.flush.timeout` settings, refer to the [internal queue configuration settings](beats://docs/reference/filebeat/configuring-internal-queue.md) in the {{filebeat}} documentation.

`Balanced` represents the new default setting (out-of-the-box behavior). Relative to `Balanced`, the `Optimized for Throughput` setting will improve EPS by 4 times, `Optimized for Scale` will perform on par, and `Optimized for Latency` will show a 20% degradation in EPS (events per second). These relative performance numbers were calculated from a performance testbed which operates in a controlled setting ingesting a large log file.

As mentioned, the `custom` preset allows you to input your own set of parameters for a finer tuning of performance. The following table is a summary of a few data points and how the resulting EPS compares to the `Balanced` setting mentioned above.

These presets apply only to agents on version 8.12.0 or later.

| worker | bulk_max_size | queue.mem_events | queue.mem.flush.min_events | queue.mem.flush.timeout | idle_connection_timeout | Relative EPS |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1600 | 3200 | 1600 | 5 | 15 | 1x |
| 1 | 2048 | 4096 | 2048 | 5 | 15 | 1x |
| 1 | 4096 | 8192 | 4096 | 5 | 15 | 1x |
| 2 | 1600 | 6400 | 1600 | 5 | 15 | 2x |
| 2 | 2048 | 8192 | 2048 | 5 | 15 | 2x |
| 2 | 4096 | 16384 | 4096 | 5 | 15 | 2x |
| 4 | 1600 | 12800 | 1600 | 5 | 15 | 3.6x |
| 4 | 2048 | 16384 | 2048 | 5 | 15 | 3.6x |
| 4 | 4096 | 32768 | 4096 | 5 | 15 | 3.6x |
| 8 | 1600 | 25600 | 1600 | 5 | 15 | 5.3x |
| 8 | 2048 | 32768 | 2048 | 5 | 15 | 5.1x |
| 8 | 4096 | 65536 | 4096 | 5 | 15 | 5.2x |
| 16 | 1600 | 51200 | 1600 | 5 | 15 | 5.3x |
| 16 | 2048 | 65536 | 2048 | 5 | 15 | 5.2x |
| 16 | 4096 | 131072 | 4096 | 5 | 15 | 5.3x |

diff --git a/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md b/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md new file mode 100644 index 0000000000..ce8448e824 --- /dev/null +++ b/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md @@ -0,0 +1,156 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/example-kubernetes-fleet-managed-agent-helm.html
---

# Example: Install Fleet-managed Elastic Agent on Kubernetes using Helm [example-kubernetes-fleet-managed-agent-helm]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::

This example demonstrates how to install {{fleet}}-managed {{agent}} on a {{k8s}} system using a Helm chart, gather {{k8s}} metrics and send them to an {{es}} cluster in {{ecloud}}, and then view visualizations of those metrics in {{kib}}.

For an overview of the {{agent}} Helm chart and its benefits, refer to [Install {{agent}} on Kubernetes using Helm](/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md).

This guide takes you through these steps:

* [Install {{agent}}](#agent-fleet-managed-helm-example-install-agent)
* [Install the Kubernetes integration](#agent-fleet-managed-helm-example-install-integration)
* [Tidy up](#agent-fleet-managed-helm-example-tidy-up)


## Prerequisites [agent-fleet-managed-helm-example-prereqs]

To get started, you need:

* A local install of the [Helm](https://helm.sh/) {{k8s}} package manager.
* An [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) hosted {{es}} cluster on version 8.16 or higher.
* An active {{k8s}} cluster.
* A local clone of the [elastic/elastic-agent](https://github.com/elastic/elastic-agent/tree/8.16) GitHub repository. Make sure to use the `8.16` branch to ensure that {{agent}} has full compatibility with the Helm chart.


## Install {{agent}} [agent-fleet-managed-helm-example-install-agent]

1. Open your {{ecloud}} deployment, and from the navigation menu select **Fleet**.
2. From the **Agents** tab, select **Add agent**.
3. In the **Add agent** UI, specify a policy name and select **Create policy**. Leave the **Collect system logs and metrics** option selected.
4. Scroll down in the **Add agent** flyout to the **Install Elastic Agent on your host** section.
5. Select the **Linux TAR** tab and copy the values for `url` and `enrollment-token`. You'll use these when you run the `helm install` command.
6. Open a terminal shell and change into a directory in your local clone of the `elastic-agent` repo.
7. Copy this command.

    ```sh
    helm install demo ./deploy/helm/elastic-agent \
      --set agent.fleet.enabled=true \
      --set agent.fleet.url=<fleet-url> \
      --set agent.fleet.token=<enrollment-token> \
      --set agent.fleet.preset=perNode
    ```

    Note that the command has these properties:

    * `helm install` runs the Helm CLI install tool.
    * `demo` gives a name to the installed chart. You can choose any name.
    * `./deploy/helm/elastic-agent` is a local path to the Helm chart to install (in time it's planned to have a public URL for the chart).
    * `--set agent.fleet.enabled=true` enables {{fleet}}-managed {{agent}}. The CLI parameter overrides the default `false` value for `agent.fleet.enabled` in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file.
    * `--set agent.fleet.url=<fleet-url>` sets the address where {{agent}} will connect to {{fleet}} in your {{ecloud}} deployment, over port 443 (again, overriding the value set by default in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file).
    * `--set agent.fleet.token=<enrollment-token>` sets the enrollment token that {{agent}} uses to authenticate with {{fleet}}.
    * `--set agent.fleet.preset=perNode` enables {{k8s}} metrics on a `per node` basis. You can alternatively set cluster wide metrics (`clusterWide`) or kube-state-metrics (`ksmSharded`). If you prefer to keep these settings in a file rather than passing them as `--set` flags, a values-file sketch appears at the end of this example, before the [Tidy up](#agent-fleet-managed-helm-example-tidy-up) section.
    ::::{tip}
    For a full list of all available YAML settings and descriptions, refer to the [{{agent}} Helm Chart Readme](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent).
    ::::

8. Update the command to replace:

    1. `<fleet-url>` with the URL that you copied earlier.
    2. `<enrollment-token>` with the enrollment token that you copied earlier.

    After your updates, the command should look something like this:

    ```sh
    helm install demo ./deploy/helm/elastic-agent \
      --set agent.fleet.enabled=true \
      --set agent.fleet.url=https://256575858845283fxxxxxxxd5265d2b4.fleet.us-central1.gcp.foundit.no:443 \
      --set agent.fleet.token=eSVvFDUvSUNPFldFdhhZNFwvS5xxxxxxxxxxxxFEWB1eFF1YedUQ1NWFXwr== \
      --set agent.fleet.preset=perNode
    ```

9. Run the command.

    The command output should confirm that {{agent}} has been installed:

    ```sh
    ...
    Installed agent:
     - perNode [daemonset - managed mode]
    ...
    ```

10. Run the `kubectl get pods -n default` command to confirm that the {{agent}} pod is running:

    ```sh
    NAME                       READY   STATUS    RESTARTS   AGE
    agent-pernode-demo-86mst   1/1     Running   0          12s
    ```

11. In the **Add agent** flyout, wait a minute or so for confirmation that {{agent}} has successfully enrolled with {{fleet}} and that data is flowing:

    :::{image} images/helm-example-nodes-enrollment-confirmation.png
    :alt: Screen capture of Add Agent UI showing that the agent has enrolled in Fleet
    :class: screenshot
    :::

12. In {{fleet}}, open the **Agents** tab and see that an **Agent-pernode-demo-#** agent is running.
13. Select the agent to view its details.
14. On the **Agent details** tab, on the **Integrations** pane, expand `system-1` to confirm that logs and metrics are incoming. You can click either the `Logs` or `Metrics` link to view details.

    :::{image} images/helm-example-nodes-logs-and-metrics.png
    :alt: Screen capture of the Logs and Metrics view on the Integrations pane
    :class: screenshot
    :::


## Install the Kubernetes integration [agent-fleet-managed-helm-example-install-integration]

Now that you've installed {{agent}} and data is flowing, you can set up the {{k8s}} integration.

1. In your {{ecloud}} deployment, from the {{kib}} menu open the **Integrations** page.
2. Run a search for `Kubernetes` and then select the {{k8s}} integration card.
3. On the {{k8s}} integration page, click **Add Kubernetes** to add the integration to your {{agent}} policy.
4. Scroll to the bottom of the **Add Kubernetes integration** page. Under **Where to add this integration?**, select the **Existing hosts** tab. On the **Agent policies** menu, select the agent policy that you created previously in the [Install {{agent}}](#agent-fleet-managed-helm-example-install-agent) steps.

    You can leave all of the other integration settings at their default values.

5. Click **Save and continue**. When prompted, select to **Add Elastic Agent later** since you've already added it using Helm.
6. On the {{k8s}} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Pods** dashboard.

    On the dashboard, you can view the status of your {{k8s}} pods, including metrics on memory usage, CPU usage, and network throughput.

    :::{image} images/helm-example-fleet-metrics-dashboard.png
    :alt: Screen capture of the Metrics Kubernetes pods dashboard
    :class: screenshot
    :::

You've successfully installed {{agent}} using Helm, and your {{k8s}} metrics data is available for viewing in {{kib}}.
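As mentioned in the install step, if you prefer not to repeat `--set` flags, the same settings can be collected in a Helm values file. A sketch, assuming the chart keys shown earlier (the file name and values are illustrative):

```yaml
# demo-values.yaml: equivalent to the --set flags used in the install step
agent:
  fleet:
    enabled: true
    url: "<fleet-url>"
    token: "<enrollment-token>"
    preset: perNode
```

You would then install with `helm install demo ./deploy/helm/elastic-agent -f demo-values.yaml`.
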
+ + +## Tidy up [agent-fleet-managed-helm-example-tidy-up] + +After you’ve run through this example, run the `helm uninstall` command to uninstall {{agent}}. + +```sh +helm uninstall demo +``` + +The uninstall should be confirmed as shown: + +```sh +release "demo" uninstalled +``` + +As a reminder, for full details about using the {{agent}} Helm chart refer to the [{{agent}} Helm Chart Readme](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent). diff --git a/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md b/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md new file mode 100644 index 0000000000..0941ef411a --- /dev/null +++ b/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md @@ -0,0 +1,281 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/example-kubernetes-standalone-agent-helm.html +--- + +# Example: Install standalone Elastic Agent on Kubernetes using Helm [example-kubernetes-standalone-agent-helm] + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +This example demonstrates how to install standalone {{agent}} on a Kubernetes system using a Helm chart, gather Kubernetes metrics and send them to an {{es}} cluster in {{ecloud}}, and then view visualizations of those metrics in {{kib}}. + +For an overview of the {{agent}} Helm chart and its benefits, refer to [Install {{agent}} on Kubernetes using Helm](/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md). + +This guide takes you through these steps: + +* [Install {{agent}}](#agent-standalone-helm-example-install) +* [Upgrade your {{agent}} configuration](#agent-standalone-helm-example-upgrade) +* [Change {{agent}}'s running mode](#agent-standalone-helm-example-change-mode) +* [Tidy up](#agent-standalone-helm-example-tidy-up) + + +## Prerequisites [agent-standalone-helm-example-prereqs] + +To get started, you need: + +* A local install of the [Helm](https://helm.sh/) {{k8s}} package manager. +* An [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) hosted {{es}} cluster on version 8.16 or higher. +* An [{{es}} API key](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent). +* An active {{k8s}} cluster. +* A local clone of the [elastic/elastic-agent](https://github.com/elastic/elastic-agent/tree/8.16) GitHub repository. Make sure to use the `8.16` branch to ensure that {{agent}} has full compatibility with the Helm chart. + + +## Install {{agent}} [agent-standalone-helm-example-install] + +1. Open your {{ecloud}} deployment, and from the navigation menu select **Manage this deployment**. +2. In the **Applications** section, copy the {{es}} endpoint and make a note of the endpoint value. +3. Open a terminal shell and change into a directory in your local clone of the `elastic-agent` repo. +4. Copy this command. + + ```sh + helm install demo ./deploy/helm/elastic-agent \ + --set kubernetes.enabled=true \ + --set outputs.default.type=ESPlainAuthAPI \ + --set outputs.default.url=:443 \ + --set outputs.default.api_key="API_KEY" + ``` + + Note that the command has these properties: + + * `helm install` runs the Helm CLI install tool. + * `demo` gives a name to the installed chart. You can choose any name. 
    * `./deploy/helm/elastic-agent` is a local path to the Helm chart to install (a public URL for the chart is planned).
    * `--set kubernetes.enabled=true` enables the {{k8s}} integration. The CLI parameter overrides the default `false` value for `kubernetes.enabled` in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file.
    * `--set outputs.default.type=ESPlainAuthAPI` sets the authentication method for the {{es}} output to require an API key (again, overriding the value set by default in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file).
    * `--set outputs.default.url=:443` sets the address of your {{ecloud}} deployment, where {{agent}} will send its output over port 443.
    * `--set outputs.default.api_key="API_KEY"` sets the API key that {{agent}} will use to authenticate with your {{es}} cluster.

    ::::{tip}
    For a full list of all available YAML settings and descriptions, refer to the [{{agent}} Helm Chart Readme](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent).
    ::::

5. Update the command to replace:

    1. `` with the {{es}} endpoint value that you copied earlier.
    2. `` with your API key.

    After your updates, the command should look something like this:

    ```sh
    helm install demo ./deploy/helm/elastic-agent \
    --set kubernetes.enabled=true \
    --set outputs.default.type=ESPlainAuthAPI \
    --set outputs.default.url=https://demo.es.us-central1.gcp.foundit.no:443 \
    --set outputs.default.api_key="A6ecaHNTJUFFcJI6esf4:5HJPxxxxxxxPS4KwSBeVEs"
    ```

6. Run the command.

    The command output should confirm that three {{agents}} have been installed as well as the {{k8s}} integration:

    ```sh
    ...
    Installed agents:
    - clusterWide [deployment - standalone mode]
    - ksmSharded [statefulset - standalone mode]
    - perNode [daemonset - standalone mode]

    Installed integrations:
    - kubernetes [built-in chart integration]
    ...
    ```

7. Run the `kubectl get pods -n default` command to confirm that the {{agent}} pods are running:

    ```sh
    NAME                                      READY   STATUS    RESTARTS   AGE
    agent-clusterwide-demo-77c65f6c7b-trdms   1/1     Running   0          5m18s
    agent-ksmsharded-demo-0                   2/2     Running   0          5m18s
    agent-pernode-demo-c7d75                  1/1     Running   0          5m18s
    ```

8. In your {{ecloud}} deployment, from the {{kib}} menu open the **Integrations** page.
9. Run a search for `Kubernetes` and then select the {{k8s}} integration card.
10. On the {{k8s}} integration page, select **Install Kubernetes assets**. This installs the dashboards, {{es}} indexes, and other assets used to monitor your {{k8s}} cluster.
11. On the {{k8s}} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Nodes** dashboard.

    On the dashboard, you can view the status of your {{k8s}} nodes, including metrics on memory, CPU, and filesystem usage, network throughput, and more.

    :::{image} images/helm-example-nodes-metrics-dashboard.png
    :alt: Screen capture of the Metrics Kubernetes nodes dashboard
    :class: screenshot
    :::

12. On the {{k8s}} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Pods** dashboard. As with the nodes dashboard, on this dashboard you can view the status of your {{k8s}} pods, including various metrics on memory, CPU, and network throughput.

    :::{image} images/helm-example-pods-metrics-dashboard.png
    :alt: Screen capture of the Metrics Kubernetes pods dashboard
    :class: screenshot
    :::
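If you’d like to confirm from the command line that metrics are actually arriving, you can query {{es}} directly. This is a hedged sketch: it reuses the placeholder endpoint and API key from the example above, and it assumes the default data stream names used by the {{k8s}} integration. Note that for HTTP authentication the API key must be supplied as the base64-encoded `id:key` pair.

```sh
# List the Kubernetes metrics data streams (the endpoint and key are the
# placeholder values from this example; replace them with your own)
curl -s -H "Authorization: ApiKey $(echo -n 'A6ecaHNTJUFFcJI6esf4:5HJPxxxxxxxPS4KwSBeVEs' | base64)" \
  "https://demo.es.us-central1.gcp.foundit.no:443/_data_stream/metrics-kubernetes.*"
```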
## Upgrade your {{agent}} configuration [agent-standalone-helm-example-upgrade]

Now that you have {{agent}} installed and successfully collecting and sending data, let’s try changing the agent configuration settings.

In the previous install example, three {{agent}} nodes were installed. One of these nodes, `agent-ksmsharded-demo-0`, is installed to enable the [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) service. Let’s suppose that you don’t need those metrics and would like to upgrade your configuration accordingly.

1. Copy the command that you used earlier to install {{agent}}:

    ```sh
    helm install demo ./deploy/helm/elastic-agent \
    --set kubernetes.enabled=true \
    --set outputs.default.type=ESPlainAuthAPI \
    --set outputs.default.url=:443 \
    --set outputs.default.api_key="API_KEY"
    ```

2. Update the command as follows:

    1. Change `install` to `upgrade`.
    2. Add a parameter `--set kubernetes.state.enabled=false`. This will override the default `true` value for the setting `kubernetes.state` in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file.

    ```sh
    helm upgrade demo ./deploy/helm/elastic-agent \
    --set kubernetes.enabled=true \
    --set kubernetes.state.enabled=false \
    --set outputs.default.type=ESPlainAuthAPI \
    --set outputs.default.url=:443 \
    --set outputs.default.api_key="API_KEY"
    ```

3. Run the command.

    The command output should confirm that now only two {{agents}} are installed together with the {{k8s}} integration:

    ```sh
    ...
    Installed agents:
    - clusterWide [deployment - standalone mode]
    - perNode [daemonset - standalone mode]

    Installed integrations:
    - kubernetes [built-in chart integration]
    ...
    ```


You’ve upgraded your configuration to run only two {{agents}}, without the kube-state-metrics service. You can similarly upgrade your agent to change other settings defined in the {{agent}} [values.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/helm/elastic-agent/values.yaml) file.
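Before or after an upgrade like this, it can be useful to see exactly which values are applied to the release. Helm can show both the overrides you supplied and the fully computed set; the release name `demo` is the one used in this example:

```sh
# Values you supplied with --set or -f for the "demo" release
helm get values demo

# The fully computed values, including the chart defaults
helm get values demo --all
```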
## Change {{agent}}'s running mode [agent-standalone-helm-example-change-mode]

By default {{agent}} runs under the `elastic` user account. For some use cases you may want to temporarily change an agent to run with higher privileges.

1. Run the `kubectl get pods -n default` command to view the running {{agent}} pods:

    ```sh
    NAME                                      READY   STATUS    RESTARTS   AGE
    agent-clusterwide-demo-77c65f6c7b-trdms   1/1     Running   0          5m18s
    agent-pernode-demo-c7d75                  1/1     Running   0          5m18s
    ```

2. Now, run the `kubectl exec` command to enter one of the running {{agents}}, substituting the correct pod name returned from the previous command. For example:

    ```sh
    kubectl exec -it pods/agent-pernode-demo-c7d75 -- bash
    ```

3. From inside the pod, run the Linux `ps aux` command to view the running processes.

    ```sh
    ps aux
    ```

    The results should be similar to the following:

    ```sh
    USER       PID %CPU %MEM     VSZ    RSS TTY   STAT START  TIME COMMAND
    elastic+     1  0.0  0.0    1936    416 ?     Ss   21:04  0:00 /usr/bin/tini -- /usr/local/bin/docker-entrypoint -c /etc/elastic-agent/agent.yml -e
    elastic+    10  0.2  1.3 2555252 132804 ?     Sl   21:04  0:13 elastic-agent container -c /etc/elastic-agent/agent.yml -e
    elastic+    37  0.6  2.0 2330112 208468 ?     Sl   21:04  0:37 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    elastic+    38  0.2  1.7 2190072 177780 ?     Sl   21:04  0:13 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se
    elastic+    56  0.1  1.7 2190136 175896 ?     Sl   21:04  0:11 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    elastic+    68  0.1  1.8 2190392 184140 ?     Sl   21:04  0:12 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    elastic+    78  0.7  2.0 2330496 204964 ?     Sl   21:04  0:48 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se
    elastic+   535  0.0  0.0    3884   3012 pts/0 Ss   22:47  0:00 bash
    elastic+   543  0.0  0.0    5480   2360 pts/0 R+   22:47  0:00 ps aux
    ```

4. In the command output, note that {{agent}} is currently running as the `elastic` user:

    ```sh
    elastic+    10  0.2  1.3 2555252 132804 ?     Sl   21:04  0:13 elastic-agent container -c /etc/elastic-agent/agent.yml -e
    ```

5. Run `exit` to leave the {{agent}} pod.
6. Run the `helm upgrade` command again, this time adding the parameter `--set agent.unprivileged=false` to override the default `true` value for that setting.

    ```sh
    helm upgrade demo ./deploy/helm/elastic-agent \
    --set kubernetes.enabled=true \
    --set kubernetes.state.enabled=false \
    --set outputs.default.type=ESPlainAuthAPI \
    --set outputs.default.url=:443 \
    --set outputs.default.api_key="API_KEY" \
    --set agent.unprivileged=false
    ```

7. Run the `kubectl get pods -n default` command to view the running {{agent}} pods:

    ```sh
    NAME                                      READY   STATUS    RESTARTS   AGE
    agent-clusterwide-demo-77c65f6c7b-trdms   1/1     Running   0          5m18s
    agent-pernode-demo-s6s7z                  1/1     Running   0          5m18s
    ```

8. Re-run the `kubectl exec` command to enter one of the running {{agents}}, substituting the correct pod name. For example:

    ```sh
    kubectl exec -it pods/agent-pernode-demo-s6s7z -- bash
    ```

9. From inside the pod, run the Linux `ps aux` command to view the running processes.

    ```sh
    USER       PID %CPU %MEM     VSZ    RSS TTY   STAT START  TIME COMMAND
    root         1  0.0  0.0    1936    452 ?     Ss   23:10  0:00 /usr/bin/tini -- /usr/local/bin/docker-entrypoint -c /etc/elastic-agent/agent.yml -e
    root         9  0.9  1.3 2488368 135920 ?     Sl   23:10  0:01 elastic-agent container -c /etc/elastic-agent/agent.yml -e
    root        27  0.9  1.9 2255804 203128 ?     Sl   23:10  0:01 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    root        44  0.3  1.8 2116148 187432 ?     Sl   23:10  0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    root        64  0.3  1.8 2263868 188892 ?     Sl   23:10  0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat metricbeat -E
    root        76  0.4  1.8 2190136 190972 ?     Sl   23:10  0:00 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se
    root       100  1.2  2.0 2256316 207692 ?     Sl   23:10  0:01 /usr/share/elastic-agent/data/elastic-agent-d99b09/components/agentbeat filebeat -E se
    root       142  0.0  0.0    3752   3068 pts/0 Ss   23:12  0:00 bash
    root       149  0.0  0.0    5480   2376 pts/0 R+   23:13  0:00 ps aux
    ```

10. Run `exit` to leave the {{agent}} pod.

You’ve upgraded the {{agent}} privileges to run as `root`. To change the settings back, you can re-run the `helm upgrade` command with `--set agent.unprivileged=true` to return to the default `unprivileged` mode.
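For reference, the revert described above would look like the following, keeping the same placeholder URL and API key conventions as the earlier commands in this example:

```sh
helm upgrade demo ./deploy/helm/elastic-agent \
--set kubernetes.enabled=true \
--set kubernetes.state.enabled=false \
--set outputs.default.type=ESPlainAuthAPI \
--set outputs.default.url=:443 \
--set outputs.default.api_key="API_KEY" \
--set agent.unprivileged=true
```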
+ + +## Tidy up [agent-standalone-helm-example-tidy-up] + +After you’ve run through this example, run the `helm uninstall` command to uninstall {{agent}}. + +```sh +helm uninstall demo +``` + +The uninstall should be confirmed as shown: + +```sh +release "demo" uninstalled +``` + +As a reminder, for full details about using the {{agent}} Helm chart refer to the [{{agent}} Helm Chart Readme](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent). diff --git a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md new file mode 100644 index 0000000000..1a83419bbb --- /dev/null +++ b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md @@ -0,0 +1,314 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/example-standalone-monitor-nginx-serverless.html +--- + +# Example: Use standalone Elastic Agent with Elastic Cloud Serverless to monitor nginx [example-standalone-monitor-nginx-serverless] + +This guide walks you through a simple monitoring scenario so you can learn the basics of setting up standalone {{agent}}, using it to work with {{serverless-full}} and an Elastic integration. + +Following these steps, you’ll deploy the {{stack}}, install a standalone {{agent}} on a host to monitor an nginx web server instance, and access visualizations based on the collected logs. + +1. [Install nginx](#nginx-guide-install-nginx-serverless). +2. [Create an {{serverless-full}} project](#nginx-guide-sign-up-serverless). +3. [Create an API key](#nginx-guide-create-api-key-serverless). +4. [Create an {{agent}} policy](#nginx-guide-create-policy-serverless). +5. [Add the Nginx Integration](#nginx-guide-add-integration-serverless). +6. [Configure standalone {{agent}}](#nginx-guide-configure-standalone-agent-serverless). +7. [Confirm that your {{agent}} data is flowing](#nginx-guide-confirm-agent-data-serverless). +8. [View your system data](#nginx-guide-view-system-data-serverless). +9. [View your nginx logging data](#nginx-guide-view-nginx-data-serverless). + + +## Prerequisites [nginx-guide-prereqs-serverless] + +To get started, you need: + +1. An internet connection and an email address for your {{ecloud}} trial. +2. A Linux host machine on which you’ll install an nginx web server. The commands in this guide use an Ubuntu image but any Linux distribution should be fine. + + +## Step 1: Install nginx [nginx-guide-install-nginx-serverless] + +To start, we’ll set up a basic [nginx web server](https://docs.nginx.com/nginx/admin-guide/web-server/). + +1. Run the following command on an Ubuntu Linux host, or refer to the [nginx install documentation](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) for the command appropriate to your operating system. + + ```sh + sudo apt install nginx + ``` + +2. Open a web browser and visit your host machine’s external URL, for example `http://192.168.64.17/`. You should see the nginx welcome message. + + :::{image} images/guide-nginx-welcome.png + :alt: Browser window showing Welcome to nginx! + :class: screenshot + ::: + + + +## Step 2: Create an {{serverless-full}} project [nginx-guide-sign-up-serverless] + +::::{note} +If you’ve already signed up for a trial deployment you can skip this step. +:::: + + +Now that your web server is running, let’s get set up to monitor it in {{ecloud}}. 
An {{ecloud}} {{serverless-short}} project offers you all of the features of the {{stack}} as a hosted service. To test drive your first deployment, sign up for a free {{ecloud}} trial: + +1. Go to our [{{ecloud}} Trial](https://cloud.elastic.co/registration?elektra=guide-welcome-cta) page. +2. Enter your email address and a password. + + :::{image} images/guide-sign-up-trial.png + :alt: Start your free Elastic Cloud trial + :class: screenshot + ::: + +3. After you’ve [logged in](https://cloud.elastic.co/login), select **Create project**. +4. On the **Observability** tab, select **Next**. The **Observability** and **Security** projects both include {{fleet}}, which you can use to create a policy for the {{agent}} that will monitor your nginx installation. +5. Give your project a name. You can leave the default options or select a different cloud provider and region. +6. Select **Create project**, and then wait a few minutes for the new project to set up. +7. Once the project is ready, select **Continue**. At this point, you access {{kib}} and a selection of setup guides. + + +## Step 3: Create an {{es}} API key [nginx-guide-create-api-key-serverless] + +1. When your {{serverless-short}} project is ready, open the {{kib}} menu and go to **Project settings** → **Management → API keys**. +2. Select **Create API key**. +3. Give the key a name, for example `nginx example API key`. +4. Leave the other default options and select **Create API key**. +5. In the **Create API key** confirmation dialog, change the dropdown menu setting from `Encoded` to `Beats`. This sets the API key to the format used for communication between {{agent}} and {{es}}. +6. Copy the generated API key and store it in a safe place. You’ll use it in a later step. + + +## Step 4: Create an {{agent}} policy [nginx-guide-create-policy-serverless] + +{{agent}} is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, and more. A single agent makes it easy and fast to deploy monitoring across your infrastructure. Each agent has a single policy (a collection of input settings) that you can update to add integrations for new data sources, security protections, and more. + +1. Open the {{kib}} menu and go to **Project settings** → **{{fleet}} → Agent policies**. + + :::{image} images/guide-agent-policies.png + :alt: Agent policies tab in Fleet + ::: + +2. Click **Create agent policy**. +3. Give your policy a name. For this example we’ll call it `nginx-policy`. +4. Leave **Collect system logs and metrics** selected. +5. Click **Create agent policy**. + + :::{image} images/guide-create-agent-policy.png + :alt: Create agent policy UI + ::: + + + +## Step 5: Add the Nginx Integration [nginx-guide-add-integration-serverless] + +Elastic integrations are a streamlined way to connect your data from popular services and platforms to the {{stack}}, including nginx. + +1. From the **{{fleet}} → Agent policies** tab, click the link for your new `nginx-policy`. + + :::{image} images/guide-nginx-policy.png + :alt: The nginx-policy UI with integrations tab selected + ::: + +2. Note that the System integration (`system-1`) is included because you opted earlier to collect system logs and metrics. +3. Click **Add integration**. +4. On the Integrations page search for "nginx". + + :::{image} images/guide-integrations-page.png + :alt: Integrations page with nginx in the search bar + ::: + +5. Select the **Nginx** card. +6. 
Click **Add Nginx**. +7. Click the link to **Add integration only (skip agent installation)**. You’ll install standalone {{agent}} in a later step. +8. Here, you can select options such as the paths to where your nginx logs are stored, whether or not to collect metrics data, and various other settings. + + For now, leave all of the default settings and click **Save and continue** to add the Nginx integration to your `nginx-policy` policy. + + :::{image} images/guide-add-nginx-integration.png + :alt: Add Nginx Integration UI + ::: + +9. In the confirmation dialog, select to **Add {{agent}} later**. + + :::{image} images/guide-nginx-integration-added.png + :alt: Nginx Integration added confirmation UI with Add {{agent}} later selected. + ::: + + + +## Step 6: Configure standalone {{agent}} [nginx-guide-configure-standalone-agent-serverless] + +Rather than opt for {{fleet}} to centrally manage {{agent}}, you’ll configure an agent to run in standalone mode, so it will be managed by hand. + +1. Open the {{kib}} menu and go to **{{fleet}} → Agents** and click **Add agent**. +2. For the **What type of host are you adding?** step, select `nginx-policy` from the drop-down menu if it’s not already selected. +3. For the **Enroll in {{fleet}}?** step, select **Run standalone**. + + :::{image} images/guide-add-agent-standalone01.png + :alt: Add agent UI with nginx-policy and Run-standalone selected. + ::: + +4. For the **Configure the agent** step, choose **Download Policy**. Save the `elastic-agent.yml` file to a directory on the host where you’ll install nginx for monitoring. + + Have a look inside the policy file and notice that it contains all of the input, output, and other settings for the Nginx and System integrations. If you already have a standalone agent installed on a host with an existing {{agent}} policy, you can use the method described here to add a new integration. Just add the settings from the **Configure the agent** step to your existing `elastic-agent.yml` file. + +5. For the **Install {{agent}} on your host** step, select the tab for your host operating system and run the commands on your host. + + :::{image} images/guide-install-agent-on-host.png + :alt: Install {{agent}} on your host step, showing tabs with the commands for different operating systems. + ::: + + ::::{note} + {{agent}} commands should be run as `root`. You can prefix each agent command with `sudo` or you can start a new shell as `root` by running `sudo su`. If you need to run {{agent}} commands without `root` access, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md). + + :::: + + + If you’re prompted with `Elastic Agent will be installed at {installation location} and will run as a service. Do you want to continue?` answer `Yes`. + + If you’re prompted with `Do you want to enroll this Agent into Fleet?` answer `no`. + +6. You can run the `status` command to confirm that {{agent}} is running. + + ```cmd + elastic-agent status + + ┌─ fleet + │ └─ status: (STOPPED) Not enrolled into Fleet + └─ elastic-agent + └─ status: (HEALTHY) Running + ``` + + Since you’re running the agent in standalone mode the `Not enrolled into Fleet` message is expected. + +7. Open the `elastic-agent.yml` policy file that you saved. +8. 
Near the top of the file, replace:

    ```yaml
    username: '${ES_USERNAME}'
    password: '${ES_PASSWORD}'
    ```

    with:

    ```yaml
    api_key: '<your-api-key>'
    ```

    where `your-api-key` is the API key that you generated in [Step 3: Create an {{es}} API key](#nginx-guide-create-api-key-serverless).

9. Find the location of the default `elastic-agent.yml` policy file that is included in your {{agent}} install. Install directories for each platform are described in [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md). In our example Ubuntu image the default policy file can be found in `/etc/elastic-agent/elastic-agent.yml`.
10. Replace the default policy file with the version that you downloaded and updated. For example:

    ```sh
    cp /home/ubuntu/homedir/downloads/elastic-agent.yml /etc/elastic-agent/elastic-agent.yml
    ```

    ::::{note}
    You may need to prefix the `cp` command with `sudo` for the permission required to replace the default file.
    ::::


    By default, {{agent}} monitors the configuration file and reloads the configuration automatically when `elastic-agent.yml` is updated.

11. Run the `status` command again, this time with the `--output yaml` option, which provides structured and much more detailed output. See the [`elastic-agent status`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-status-command) command documentation for more details.

    ```shell
    elastic-agent status --output yaml
    ```

    The results show you the agent status together with details about the running components, which correspond to the inputs and outputs defined for the integrations that have been added to the {{agent}} policy, in this case the System and Nginx Integrations.

12. At the top of the command output, the `info` section contains details about the agent instance. Make a note of the agent ID. In this example the ID is `4779b439-1130-4841-a878-e3d7d1a457d0`. You’ll use that ID in the next section.

    ```yaml
    elastic-agent status --output yaml

    info:
      id: 4779b439-1130-4841-a878-e3d7d1a457d0
      version: 8.9.1
      commit: 5640f50143410fe33b292c9f8b584117c7c8f188
      build_time: 2023-08-10 17:04:04 +0000 UTC
      snapshot: false
      state: 2
      message: Running
    ```



## Step 7: Confirm that your {{agent}} data is flowing [nginx-guide-confirm-agent-data-serverless]

Now that {{agent}} is running, it’s time to confirm that the agent data is flowing into {{es}}.

1. Check that {{agent}} logs are flowing.

    1. Open the {{kib}} menu and go to **Observability → Discover**.
    2. In the KQL query bar, enter the query `agent.id : "{{agent-id}}"` where `{{agent-id}}` is the ID you retrieved from the `elastic-agent status --output yaml` command. For example: `agent.id : "4779b439-1130-4841-a878-e3d7d1a457d0"`.

    If {{agent}} has connected successfully with your {{ecloud}} deployment, the agent logs should be flowing into {{es}} and visible in {{kib}} Discover.

    :::{image} images/guide-agent-logs-flowing.png
    :alt: Kibana Discover shows agent logs are flowing into Elasticsearch.
    :::

2. Check that {{agent}} metrics are flowing.

    1. Open the {{kib}} menu and go to **Observability → Dashboards**.
    2. In the search field, search for `Elastic Agent` and select `[Elastic Agent] Agent metrics` in the results.

    Like the agent logs, the agent metrics should be flowing into {{es}} and visible in {{kib}} Dashboard. You can view metrics on CPU usage, memory usage, open handles, events rate, and more.

    :::{image} images/guide-agent-metrics-flowing.png
    :alt: Kibana Dashboard shows agent metrics are flowing into Elasticsearch.
    :::
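As one more optional check on the host itself, you can ask {{agent}} to print the configuration it is actually running, which is a quick way to verify that your edited policy file, including the `api_key` output setting, has been picked up. Run it as `root`, like the other agent commands:

```sh
# Print the configuration Elastic Agent is currently running
sudo elastic-agent inspect
```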
## Step 8: View your system data [nginx-guide-view-system-data-serverless]

In the step to [create an {{agent}} policy](#nginx-guide-create-policy-serverless) you chose to collect system logs and metrics, so you can access those now.

1. View your system logs.

    1. Open the {{kib}} menu and go to **Project settings → Integrations → Installed integrations**.
    2. Select the **System** card and open the **Assets** tab. This is a quick way to access all of the dashboards, saved searches, and visualizations that come with each integration.
    3. Select `[Logs System] Syslog dashboard`.
    4. Select the calendar icon and change the time setting to `Today`. The {{kib}} Dashboard shows visualizations of Syslog events, hostnames and processes, and more.

2. View your system metrics.

    1. Return to **Project settings → Integrations → Installed integrations**.
    2. Select the **System** card and open the **Assets** tab.
    3. This time, select `[Metrics System] Host overview`.
    4. Select the calendar icon and change the time setting to `Today`. The {{kib}} Dashboard shows visualizations of host metrics including CPU usage, memory usage, running processes, and others.

    :::{image} images/guide-system-metrics-dashboard.png
    :alt: The System metrics host overview showing CPU usage, memory usage, and other visualizations
    :::



## Step 9: View your nginx logging data [nginx-guide-view-nginx-data-serverless]

Now let’s view your nginx logging data.

1. Open the {{kib}} menu and go to **Project settings → Integrations → Installed integrations**.
2. Select the **Nginx** card and open the **Assets** tab.
3. Select `[Logs Nginx] Overview`. The {{kib}} Dashboard opens with geographical log details, response codes and errors over time, top pages, and more.
4. Refresh your nginx web page several times to update the logging data. You can also try accessing the nginx page from different web browsers. After a minute or so, the `Browsers breakdown` visualization shows the respective volume of requests from the different browser types.

    :::{image} images/guide-nginx-browser-breakdown.png
    :alt: Kibana Dashboard showing the Browsers breakdown visualization for nginx logs.
    :::


Congratulations! You have successfully set up monitoring for nginx using standalone {{agent}} and an {{serverless-full}} project.


## What’s next? [_whats_next]

* Learn more about [{{fleet}} and {{agent}}](/reference/ingestion-tools/fleet/index.md).
* Learn more about [{{integrations}}](integration-docs://docs/reference/index.md).

diff --git a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md
new file mode 100644
index 0000000000..29cef1d53a
--- /dev/null
+++ b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md
@@ -0,0 +1,313 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/example-standalone-monitor-nginx.html
---

# Example: Use standalone Elastic Agent with Elasticsearch Service to monitor nginx [example-standalone-monitor-nginx]

This guide walks you through a simple monitoring scenario so you can learn the basics of setting up standalone {{agent}}, using it to work with {{ess}} and an Elastic integration.
Following these steps, you’ll deploy the {{stack}}, install a standalone {{agent}} on a host to monitor an nginx web server instance, and access visualizations based on the collected logs.

1. [Install nginx](#nginx-guide-install-nginx-ess).
2. [Create an {{ecloud}} deployment](#nginx-guide-sign-up-ess).
3. [Create an {{es}} API key](#nginx-guide-create-api-key-ess).
4. [Create an {{agent}} policy](#nginx-guide-create-policy-ess).
5. [Add the Nginx Integration](#nginx-guide-add-integration-ess).
6. [Configure standalone {{agent}}](#nginx-guide-configure-standalone-agent-ess).
7. [Confirm that your {{agent}} data is flowing](#nginx-guide-confirm-agent-data-ess).
8. [View your system data](#nginx-guide-view-system-data-ess).
9. [View your nginx logging data](#nginx-guide-view-nginx-data-ess).


## Prerequisites [nginx-guide-prereqs-ess]

To get started, you need:

1. An internet connection and an email address for your {{ecloud}} trial.
2. A Linux host machine on which you’ll install an nginx web server. The commands in this guide use an Ubuntu image but any Linux distribution should be fine.


## Step 1: Install nginx [nginx-guide-install-nginx-ess]

To start, we’ll set up a basic [nginx web server](https://docs.nginx.com/nginx/admin-guide/web-server/).

1. Run the following command on an Ubuntu Linux host, or refer to the [nginx install documentation](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) for the command appropriate to your operating system.

    ```sh
    sudo apt install nginx
    ```

2. Open a web browser and visit your host machine’s external URL, for example `http://192.168.64.17/`. You should see the nginx welcome message.

    :::{image} images/guide-nginx-welcome.png
    :alt: Browser window showing Welcome to nginx!
    :class: screenshot
    :::



## Step 2: Create an {{ecloud}} deployment [nginx-guide-sign-up-ess]

::::{note}
If you’ve already signed up for a trial deployment you can skip this step.
::::


Now that your web server is running, let’s get set up to monitor it in {{ecloud}}. An {{ecloud}} {{ess}} deployment offers you all of the features of the {{stack}} as a hosted service. To test drive your first deployment, sign up for a free {{ecloud}} trial:

1. Go to our [{{ecloud}} Trial](https://cloud.elastic.co/registration?elektra=guide-welcome-cta) page.
2. Enter your email address and a password.

    :::{image} images/guide-sign-up-trial.png
    :alt: Start your free Elastic Cloud trial
    :class: screenshot
    :::

3. After you’ve [logged in](https://cloud.elastic.co/login), select **Create deployment** and give your deployment a name. You can leave the default options or select a different cloud provider, region, hardware profile, or version.
4. Select **Create deployment**.
5. While the deployment sets up, make a note of your `elastic` superuser password and keep it in a safe place.
6. Once the deployment is ready, select **Continue**. At this point, you access {{kib}} and a selection of setup guides.


## Step 3: Create an {{es}} API key [nginx-guide-create-api-key-ess]

1. From the {{kib}} menu, go to **Stack Management** → **API keys**.
2. Select **Create API key**.
3. Give the key a name, for example `nginx example API key`.
4. Leave the other default options and select **Create API key**.
5. In the **Create API key** confirmation dialog, change the dropdown menu setting from `Encoded` to `Beats`.
This sets the API key format for communication between {{agent}} (which is based on {{beats}}) and {{es}}. +6. Copy the generated API key and store it in a safe place. You’ll use it in a later step. + + +## Step 4: Create an {{agent}} policy [nginx-guide-create-policy-ess] + +{{agent}} is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, and more. A single agent makes it easy and fast to deploy monitoring across your infrastructure. Each agent has a single policy (a collection of input settings) that you can update to add integrations for new data sources, security protections, and more. + +1. When your {{ecloud}} deployment is ready, open the {{kib}} menu and go to **{{fleet}} → Agent policies**. + + :::{image} images/guide-agent-policies.png + :alt: Agent policies tab in Fleet + ::: + +2. Click **Create agent policy**. +3. Give your policy a name. For this example we’ll call it `nginx-policy`. +4. Leave **Collect system logs and metrics** selected. +5. Click **Create agent policy**. + + :::{image} images/guide-create-agent-policy.png + :alt: Create agent policy UI + ::: + + + +## Step 5: Add the Nginx Integration [nginx-guide-add-integration-ess] + +Elastic integrations are a streamlined way to connect your data from popular services and platforms to the {{stack}}, including nginx. + +1. From the **{{fleet}} → Agent policies** tab, click the link for your new `nginx-policy`. + + :::{image} images/guide-nginx-policy.png + :alt: The nginx-policy UI with integrations tab selected + ::: + +2. Note that the System integration (`system-1`) is included because you opted earlier to collect system logs and metrics. +3. Click **Add integration**. +4. On the Integrations page search for "nginx". + + :::{image} images/guide-integrations-page.png + :alt: Integrations page with nginx in the search bar + ::: + +5. Select the **Nginx** card. +6. Click **Add Nginx**. +7. Click the link to **Add integration only (skip agent installation)**. You’ll install standalone {{agent}} in a later step. +8. Here, you can select options such as the paths to where your nginx logs are stored, whether or not to collect metrics data, and various other settings. + + For now, leave all of the default settings and click **Save and continue** to add the Nginx integration to your `nginx-policy` policy. + + :::{image} images/guide-add-nginx-integration.png + :alt: Add Nginx Integration UI + ::: + +9. In the confirmation dialog, select to **Add {{agent}} later**. + + :::{image} images/guide-nginx-integration-added.png + :alt: Nginx Integration added confirmation UI with Add {{agent}} later selected. + ::: + + + +## Step 6: Configure standalone {{agent}} [nginx-guide-configure-standalone-agent-ess] + +Rather than opt for {{fleet}} to centrally manage {{agent}}, you’ll configure an agent to run in standalone mode, so it will be managed by hand. + +1. In {{fleet}}, open the **Agents** tab and click **Add agent**. +2. For the **What type of host are you adding?** step, select `nginx-policy` from the drop-down menu if it’s not already selected. +3. For the **Enroll in {{fleet}}?** step, select **Run standalone**. + + :::{image} images/guide-add-agent-standalone01.png + :alt: Add agent UI with nginx-policy and Run-standalone selected. + ::: + +4. For the **Configure the agent** step, choose **Download Policy**. 
Save the `elastic-agent.yml` file to a directory on the host where you’ll install nginx for monitoring. + + Have a look inside the policy file and notice that it contains all of the input, output, and other settings for the Nginx and System integrations. If you already have a standalone agent installed on a host with an existing {{agent}} policy, you can use the method described here to add a new integration. Just add the settings from the **Configure the agent** step to your existing `elastic-agent.yml` file. + +5. For the **Install {{agent}} on your host** step, select the tab for your host operating system and run the commands on your host. + + :::{image} images/guide-install-agent-on-host.png + :alt: Install {{agent}} on your host step, showing tabs with the commands for different operating systems. + ::: + + ::::{note} + {{agent}} commands should be run as `root`. You can prefix each agent command with `sudo` or you can start a new shell as `root` by running `sudo su`. If you need to run {{agent}} commands without `root` access, refer to [Run {{agent}} without administrative privileges](/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md). + + :::: + + + If you’re prompted with `Elastic Agent will be installed at {installation location} and will run as a service. Do you want to continue?` answer `Yes`. + + If you’re prompted with `Do you want to enroll this Agent into Fleet?` answer `no`. + +6. You can run the `status` command to confirm that {{agent}} is running. + + ```cmd + elastic-agent status + + ┌─ fleet + │ └─ status: (STOPPED) Not enrolled into Fleet + └─ elastic-agent + └─ status: (HEALTHY) Running + ``` + + Since you’re running the agent in standalone mode the `Not enrolled into Fleet` message is expected. + +7. Open the `elastic-agent.yml` policy file that you saved. +8. Near the top of the file, replace: + + ```yaml + username: '${ES_USERNAME}' + password: '${ES_PASSWORD}' + ``` + + with: + + ```yaml + api_key: '' + ``` + + where `your-api-key` is the API key that you generated in [Step 3: Create an {{es}} API key](#nginx-guide-create-api-key-ess). + +9. Find the location of the default `elastic-agent.yml` policy file that is included in your {{agent}} install. Install directories for each platform are described in [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md). In our example Ubuntu image the default policy file can be found in `/etc/elastic-agent/elastic-agent.yml`. +10. Replace the default policy file with the version that you downloaded and updated. For example: + + ```sh + cp /home/ubuntu/homedir/downloads/elastic-agent.yml /etc/elastic-agent/elastic-agent.yml + ``` + + ::::{note} + You may need to prefix the `cp` command with `sudo` for the permission required to replace the default file. + :::: + + + By default, {{agent}} monitors the configuration file and reloads the configuration automatically when `elastic-agent.yml` is updated. + +11. Run the `status` command again, this time with the `--output yaml` option which provides structured and much more detailed output. See the [`elastic-agent status`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-status-command) command documentation for more details. 
    ```shell
    elastic-agent status --output yaml
    ```

    The results show you the agent status together with details about the running components, which correspond to the inputs and outputs defined for the integrations that have been added to the {{agent}} policy, in this case the System and Nginx Integrations.

12. At the top of the command output, the `info` section contains details about the agent instance. Make a note of the agent ID. In this example the ID is `4779b439-1130-4841-a878-e3d7d1a457d0`. You’ll use that ID in the next section.

    ```yaml
    elastic-agent status --output yaml

    info:
      id: 4779b439-1130-4841-a878-e3d7d1a457d0
      version: 8.9.1
      commit: 5640f50143410fe33b292c9f8b584117c7c8f188
      build_time: 2023-08-10 17:04:04 +0000 UTC
      snapshot: false
      state: 2
      message: Running
    ```



## Step 7: Confirm that your {{agent}} data is flowing [nginx-guide-confirm-agent-data-ess]

Now that {{agent}} is running, it’s time to confirm that the agent data is flowing into {{es}}.

1. Check that {{agent}} logs are flowing.

    1. Open the {{kib}} menu and go to **Analytics → Discover**.
    2. In the KQL query bar, enter the query `agent.id : "{{agent-id}}"` where `{{agent-id}}` is the ID you retrieved from the `elastic-agent status --output yaml` command. For example: `agent.id : "4779b439-1130-4841-a878-e3d7d1a457d0"`.

    If {{agent}} has connected successfully with your {{ecloud}} deployment, the agent logs should be flowing into {{es}} and visible in {{kib}} Discover.

    :::{image} images/guide-agent-logs-flowing.png
    :alt: Kibana Discover shows agent logs are flowing into Elasticsearch.
    :::

2. Check that {{agent}} metrics are flowing.

    1. Open the {{kib}} menu and go to **Analytics → Dashboard**.
    2. In the search field, search for `Elastic Agent` and select `[Elastic Agent] Agent metrics` in the results.

    Like the agent logs, the agent metrics should be flowing into {{es}} and visible in {{kib}} Dashboard. You can view metrics on CPU usage, memory usage, open handles, events rate, and more.

    :::{image} images/guide-agent-metrics-flowing.png
    :alt: Kibana Dashboard shows agent metrics are flowing into Elasticsearch.
    :::



## Step 8: View your system data [nginx-guide-view-system-data-ess]

In the step to [create an {{agent}} policy](#nginx-guide-create-policy-ess) you chose to collect system logs and metrics, so you can access those now.

1. View your system logs.

    1. Open the {{kib}} menu and go to **Management → Integrations → Installed integrations**.
    2. Select the **System** card and open the **Assets** tab. This is a quick way to access all of the dashboards, saved searches, and visualizations that come with each integration.
    3. Select `[Logs System] Syslog dashboard`.
    4. Select the calendar icon and change the time setting to `Today`. The {{kib}} Dashboard shows visualizations of Syslog events, hostnames and processes, and more.

2. View your system metrics.

    1. Return to **Management → Integrations → Installed integrations**.
    2. Select the **System** card and open the **Assets** tab.
    3. This time, select `[Metrics System] Host overview`.
    4. Select the calendar icon and change the time setting to `Today`. The {{kib}} Dashboard shows visualizations of host metrics including CPU usage, memory usage, running processes, and others.
    :::{image} images/guide-system-metrics-dashboard.png
    :alt: The System metrics host overview showing CPU usage, memory usage, and other visualizations
    :::



## Step 9: View your nginx logging data [nginx-guide-view-nginx-data-ess]

Now let’s view your nginx logging data.

1. Open the {{kib}} menu and go to **Management → Integrations → Installed integrations**.
2. Select the **Nginx** card and open the **Assets** tab.
3. Select `[Logs Nginx] Overview`. The {{kib}} Dashboard opens with geographical log details, response codes and errors over time, top pages, and more.
4. Refresh your nginx web page several times to update the logging data. You can also try accessing the nginx page from different web browsers. After a minute or so, the `Browsers breakdown` visualization shows the respective volume of requests from the different browser types.

    :::{image} images/guide-nginx-browser-breakdown.png
    :alt: Kibana Dashboard showing the Browsers breakdown visualization for nginx logs.
    :::


Congratulations! You have successfully set up monitoring for nginx using standalone {{agent}} and an {{ecloud}} deployment.


## What’s next? [_whats_next_2]

* Learn more about [{{fleet}} and {{agent}}](/reference/ingestion-tools/fleet/index.md).
* Learn more about [{{integrations}}](integration-docs://docs/reference/index.md).

diff --git a/reference/ingestion-tools/fleet/extract_array-processor.md b/reference/ingestion-tools/fleet/extract_array-processor.md
new file mode 100644
index 0000000000..d0a64e4690
--- /dev/null
+++ b/reference/ingestion-tools/fleet/extract_array-processor.md
@@ -0,0 +1,47 @@
---
navigation_title: "extract_array"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/extract_array-processor.html
---

# Extract array [extract_array-processor]


::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


The `extract_array` processor populates fields with values read from an array field.


## Example [_example_25]

The following example populates `source.ip` with the first element of the `my_array` field, `destination.ip` with the second element, and `network.transport` with the third.

```yaml
  - extract_array:
      field: my_array
      mappings:
        source.ip: 0
        destination.ip: 1
        network.transport: 2
```


## Configuration settings [_configuration_settings_30]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | Yes | | The array field whose elements are to be extracted. |
| `mappings` | Yes | | Maps each field name to an array index. Use 0 for the first element in the array. Multiple fields can be mapped to the same array element. |
| `ignore_missing` | No | `false` | Whether to ignore events where the array field is missing. If `false`, processing of an event fails if the specified field does not exist. |
| `overwrite_keys` | No | `false` | Whether to overwrite target fields specified in the mapping if the fields already exist. If `false`, processing fails if a target field already exists. |
| `fail_on_error` | No | `true` | If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues despite errors. |
| `omit_empty` | No | `false` | Whether empty values are extracted from the array. If `true`, instead of the target field being set to an empty value, it is left unset. The empty string (`""`), an empty array (`[]`), or an empty object (`{}`) are considered empty values. |
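To make the mappings concrete, here is a sketch of the example above with the optional settings applied. The sample event is illustrative only:

```yaml
  # Given an event such as:
  #   {"my_array": ["10.0.0.1", "10.0.0.2", "tcp"]}
  # this populates source.ip, destination.ip, and network.transport,
  # skips events that have no my_array field, and leaves empty values unset.
  - extract_array:
      field: my_array
      mappings:
        source.ip: 0
        destination.ip: 1
        network.transport: 2
      ignore_missing: true
      omit_empty: true
```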
diff --git a/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md b/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md
new file mode 100644
index 0000000000..593078c455
--- /dev/null
+++ b/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md
@@ -0,0 +1,79 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/filter-agent-list-by-tags.html
---

# Add tags to filter the Agents list [filter-agent-list-by-tags]

You can add tags to {{agent}} during or after enrollment, then use the tags to filter the Agents list shown in {{fleet}}.

Tags are useful for capturing information that is specific to the installation environment, such as machine type, location, operating system, environment, and so on. Tags can be any arbitrary information that will help you filter and perform operations on {{agent}}s with the same attributes.

To filter the Agents list by tag, in {{kib}}, go to **{{fleet}} > Agents** and click **Tags**. Select the tags to filter on. The tags are also available in the KQL field for autocompletion.

:::{image} images/agent-tags.png
:alt: Agents list filtered to show agents with the staging tag
:class: screenshot
:::

If you haven’t added tags to any {{agent}}s yet, the list will be empty.


## Add, remove, rename, or delete tags in {{fleet}} [add-tags-in-fleet]

You can use {{fleet}} to add, remove, or rename tags applied to one or more {{agent}}s.

Want to add tags when enrolling from a host instead? See [Add tags during agent enrollment](#add-tags-at-enrollment).

To manage tags in {{fleet}}:

1. On the **Agents** tab, select one or more agents.
2. From the **Actions** menu, click **Add / remove tags**.

    :::{image} images/add-remove-tags.png
    :alt: Screenshot of add / remove tags menu
    :class: screenshot
    :::

    ::::{tip}
    Make sure you use the correct **Actions** menu. To manage tags for a single agent, click the ellipsis button under the **Actions** column. To manage tags for multiple agents, click the **Actions** button to open the bulk actions menu.
    ::::

3. In the tags menu, perform an action:

    | To…​ | Do this…​ |
    | --- | --- |
    | Create a new tag | Type the tag name and click **Create new tag…​**. Notice the tag name has a check mark to show that the tag has been added to the selected agents. |
    | Rename a tag | Hover over the tag name and click the ellipsis button. Type a new name and press Enter. The tag will be renamed in all agents that use it, even agents that are not selected. |
    | Delete a tag | Hover over the tag name and click the ellipsis button. Click **Delete tag**. The tag will be deleted from all agents, even agents that are not selected. |
    | Add or remove a tag from an agent | Click the tag name to add or clear the check mark. In the **Tags** column, notice that the tags are added or removed. Note that the menu only shows tags that are common to all selected agents. |
## Add tags during agent enrollment [add-tags-at-enrollment]

When you install or enroll {{agent}} in {{fleet}}, you can specify a comma-separated list of tags to apply to the agent, then use the tags to filter the Agents list shown in {{fleet}}.

The following command applies the `macOS` and `staging` tags during installation:

```shell
sudo ./elastic-agent install \
  --url= \
  --enrollment-token= \
  --tag macOS,staging
```

For the full command synopsis, refer to [elastic-agent install](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) and [elastic-agent enroll](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-enroll-command).

The following command applies the `docker` and `dev` tags to {{agent}} running in a Docker container:

```sh
docker run \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL= \
  --env FLEET_ENROLLMENT_TOKEN= \
  --env ELASTIC_AGENT_TAGS=docker,dev \
  --rm docker.elastic.co/elastic-agent/elastic-agent:9.0.0-beta1
```

For more information about running on containers, refer to the guides under [Install {{agent}}s in a containerized environment](/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md).

diff --git a/reference/ingestion-tools/fleet/fingerprint-processor.md b/reference/ingestion-tools/fleet/fingerprint-processor.md
new file mode 100644
index 0000000000..a88fcdf96b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fingerprint-processor.md
@@ -0,0 +1,39 @@
---
navigation_title: "fingerprint"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fingerprint-processor.html
---

# Generate a fingerprint of an event [fingerprint-processor]


The `fingerprint` processor generates a fingerprint of an event based on a specified subset of its fields.

The value that is hashed is constructed as a concatenation of the field name and field value separated by `|`. For example `|field1|value1|field2|value2|`.

Nested fields are supported in the following format: `"field1.field2"`, for example: `["log.path.file", "foo"]`


## Example [_example_26]

```yaml
  - fingerprint:
      fields: ["field1", "field2", ...]
```


## Configuration settings [_configuration_settings_31]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of fields to use as the source for the fingerprint. The list will be alphabetically sorted by the processor. |
| `ignore_missing` | No | `false` | Whether to ignore missing fields. |
| `target_field` | No | `fingerprint` | Field in which the generated fingerprint should be stored. |
| `method` | No | `sha256` | Algorithm to use for computing the fingerprint. Must be one of: `md5`, `sha1`, `sha256`, `sha384`, `sha512`, or `xxhash`. |
| `encoding` | No | `hex` | Encoding to use on the fingerprint value. Must be one of: `hex`, `base32`, or `base64`. |
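As a slightly fuller sketch of the settings described above, the following hashes two fields into a custom target field. The field names here are illustrative, not required:

```yaml
  # Hash event.dataset and host.name into event.hash using SHA-256,
  # base64-encoded, ignoring events where either field is missing
  - fingerprint:
      fields: ["event.dataset", "host.name"]
      target_field: "event.hash"
      method: "sha256"
      encoding: "base64"
      ignore_missing: true
```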
diff --git a/reference/ingestion-tools/fleet/fleet-agent-environment-variables.md b/reference/ingestion-tools/fleet/fleet-agent-environment-variables.md
new file mode 100644
index 0000000000..77a70b0445
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-agent-environment-variables.md
@@ -0,0 +1,11 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-agent-environment-variables.html
---

# Set environment variables in an Elastic Agent policy [fleet-agent-environment-variables]

As an advanced use case, you may wish to configure environment variables in your {{agent}} policy. This is useful, for example, if there are configuration details about the system on which {{agent}} is running that you may not know in advance. As a solution, you may want to configure environment variables to be interpreted by {{agent}} at runtime, using information from the running environment.

For {{fleet}}-managed {{agents}}, you can configure environment variables using the [Env Provider](/reference/ingestion-tools/fleet/env-provider.md). Refer to [Variables and conditions in input configurations](/reference/ingestion-tools/fleet/dynamic-input-configuration.md) in the standalone {{agent}} documentation for more detail.

diff --git a/reference/ingestion-tools/fleet/fleet-agent-proxy-managed.md b/reference/ingestion-tools/fleet/fleet-agent-proxy-managed.md
new file mode 100644
index 0000000000..4f3d81f362
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-agent-proxy-managed.md
@@ -0,0 +1,202 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-agent-proxy-managed.html
---

# Fleet managed Elastic Agent connectivity using a proxy server [fleet-agent-proxy-managed]

Proxy settings in the {{agent}} policy override proxy settings specified by environment variables. This means you can specify proxy settings for {{agent}} that are different from host or system-level environment settings.

This page describes where a proxy server is allowed in your deployment and how to configure proxy settings for {{agent}} and {{fleet}}. The steps for deploying the proxy server itself are beyond the scope of this article.

{{agents}} generally make two sets of outbound connections: control plane traffic to the {{fleet-server}}, and data plane traffic to an output such as {{es}}. Operators can place {{agent}} behind a proxy server and have it forward both the control plane and data plane traffic to their final destinations.

{{fleet}} central management enables you to define your proxy servers and then configure an output or the {{fleet-server}} to be reachable through any of these proxies. This also enables you to modify the proxy server details if needed without having to re-install {{agents}}.

:::{image} images/agent-proxy-server-managed-deployment.png
:alt: Image showing connections between {{fleet}} managed {{agent}}
:::

In this scenario, {{fleet-server}} and {{es}} are deployed in {{ecloud}} and reachable on port 443.

## Configuring proxy servers in {{fleet}} for managed agents [fleet-agent-proxy-server-managed-agents]

These steps describe how to set up {{fleet}} components to use a proxy.

1. **Globally add proxy server details to {{fleet}}.**

    1. In {{fleet}}, open the **Settings** tab.
    2. Select **Add proxy**. The **Add proxy** or **Edit proxy** flyout opens.

        :::{image} images/elastic-agent-proxy-edit-proxy.png
        :alt: Screen capture of the Edit Proxy UI in Fleet
        :::
Add a name for the proxy (in this example `Proxy-A`) and specify the Proxy URL. + 4. Add any other optional settings. + 5. Select **Save and apply settings**. The proxy information is saved and that proxy is ready for other components in {{fleet}} to reference. + +2. **Attach the proxy to a {{fleet-server}}.** + + If the control plane traffic to/from the Fleet Server needs to also go through the proxy server, the proxy created needs to also be added to the definition of that Fleet Server. + + 1. In {{fleet}}, open the **Settings** tab. + 2. In the list of **Fleet Server Hosts**, choose a host and select the edit button to configure it. + 3. In the **Proxy** section dropdown list, select the proxy that you configured. + + :::{image} images/elastic-agent-proxy-edit-fleet-server.png + :alt: Screen capture of the Edit Fleet Server UI + ::: + + In this example, All the {{agents}} in a policy that uses this {{fleet-server}} will now connect to the {{fleet-server}} through the proxy server defined in `Proxy-A`. + + + :::::{admonition} + ::::{warning} + Any invalid changes to the {{fleet-server}} definition that may cause connectivity issues between the {{agents}} and the {{fleet-server}} will cause them to disconnect. The only remedy would be to re-install the affected agents. This is because the connectivity to the {{fleet-server}} ensures policy updates reach the agents. If a policy with an invalid host address reaches the agent it will no longer be able to connect and therefore won’t receive any other updates from the {{fleet-server}} (including the corrected setting). In this regard, adding a proxy server that is not reachable by the agents will break connectivity to the {{fleet-server}}. + :::: + + + ::::: + +3. **Attach the proxy to the output** + + Similarly, if the data plane traffic to an output is to traverse via a proxy, that proxy definition would need to be added to the output defined in the Fleet. + + 1. In {{fleet}}, open the **Settings** tab. + 2. In the list of **Outputs**, choose an output and select the edit button to configure it. + 3. In the **Proxy** section dropdown list, select the proxy that you configured. + + :::{image} images/elastic-agent-proxy-edit-output.png + :alt: Screen capture of the Edit output UI in Fleet + ::: + + In this example, All the {{agents}} in a policy that is configured to write to the chosen output will now write to that output through the proxy server defined in `Proxy-A`. + + + :::::{admonition} + ::::{warning} + If agents are unable to reach the configured proxy server, they will not be able to write data to the output that has the proxy server configured. When changing the proxy of an output, please ensure that the affected agents all have connectivity to the proxy itself. + :::: + + + ::::: + +4. **Attach the proxy to the agent download source** + + Likewise, if the download traffic to or from the artifact registry needs to go through the proxy server, that proxy definition also needs to be added to the agent binary source defined in {{Fleet}}. + + 1. In {{fleet}}, open the **Settings** tab. + 2. In the **Agent Binary Download** list, choose an agent binary source and select the edit button to configure it. + 3. In the **Proxy** section dropdown list, select the proxy that you configured. 
        :::{image} images/elastic-agent-proxy-edit-agent-binary-source.png
        :alt: Screen capture of the Edit agent binary source UI in Fleet
        :::

    In this example, all of the {{agents}} enrolled in a policy that is configured to download from the chosen agent download source will now download from that agent download source through the proxy server defined in `Proxy-A`.

    :::::{admonition}
    ::::{warning}
    If agents are unable to reach the configured proxy server, they will not be able to download binaries from the agent download source that has the proxy server configured. When changing the proxy of an agent binary source, ensure that the affected agents all have connectivity to the proxy itself.
    ::::
    :::::

5. **Configure the {{agent}} policy**

    You can now configure the {{agent}} policy to use the {{fleet-server}} and the outputs that are reachable through a proxy server.

    * If the policy is configured with a {{fleet-server}} that has a proxy attached to it, all the control plane traffic from the agents in that policy will reach the {{fleet-server}} through that proxy.
    * Similarly, if the output definition has a proxy attached to it, all the agents in that policy will write (data plane) to the output through the proxy.

6. **Enroll the {{agents}}**

    Now that {{fleet}} is configured, all policy downloads will update the agent with the latest configured proxies. When the agent is first installed, it needs to communicate with {{fleet}} (through {{fleet-server}}) in order to download its first policy configuration.

### Set the proxy for retrieving agent policies from {{fleet}} [cli-proxy-settings]

If there is a proxy between {{agent}} and {{fleet}}, specify proxy settings on the command line when you install {{agent}} and enroll in {{fleet}}. The settings you specify at the command line are added to the `fleet.yml` file installed on the system where the {{agent}} is running.

::::{note}
If the initial agent communication with {{fleet}} (that is, the control plane) needs to traverse the proxy server, the agent must be configured to do so using the `--proxy-url` command line flag, which is applied during the agent installation. Once connectivity to {{fleet}} is established, proxy server details can be managed through the UI.
::::

::::{note}
If {{kib}} is behind a proxy server, you'll still need to [configure {{kib}} settings](/reference/ingestion-tools/fleet/epr-proxy-setting.md) to access the package registry.
::::

The `enroll` and `install` commands accept the following flags:

| CLI flag | Description |
| --- | --- |
| `--proxy-url <url>` | URL of the proxy server. The value may be either a complete URL or a `host[:port]`, in which case the `http` scheme is assumed. The URL accepts optional username and password settings for authenticating with the proxy. For example: `http://<username>:<password>@<host>/`. |
| `--proxy-disabled` | If specified, all proxy settings, including the `HTTP_PROXY` and `HTTPS_PROXY` environment variables, are ignored. |
| `--proxy-header <key>=<value>` | Additional header to send to the proxy during CONNECT requests. Use the `--proxy-header` flag multiple times to add additional headers. You can use this setting to pass keys/tokens required for authenticating with the proxy. |

For example:

```sh
elastic-agent install --url="https://10.0.1.6:8220" --enrollment-token=TOKEN --proxy-url="http://10.0.1.7:3128" --fleet-server-es-ca="/usr/local/share/ca-certificates/es-ca.crt" --certificate-authorities="/usr/local/share/ca-certificates/fleet-ca.crt"
```

The command in the previous example adds the following settings to the `fleet.yml` policy on the host where {{agent}} is installed:

```yaml
fleet:
  enabled: true
  access_api_key: API-KEY
  hosts:
    - https://10.0.1.6:8220
  ssl:
    verification_mode: ""
    certificate_authorities:
      - /usr/local/share/ca-certificates/es-ca.crt
    renegotiation: never
  timeout: 10m0s
  proxy_url: http://10.0.1.7:3128
  reporting:
    threshold: 10000
    check_frequency_sec: 30
  agent:
    id: ""
```

::::{note}
When {{agent}} runs, the `fleet.yml` file gets encrypted and renamed to `fleet.enc`.
::::

## {{agent}} connectivity using a secure proxy gateway [fleet-agent-proxy-server-secure-gateway]

Many secure proxy gateways are configured to perform mutual TLS and expect all connections to present their certificate. In these instances the client (in this case, {{agent}}) needs to present a certificate and key to the server (the secure proxy). In return, the client expects to see a certificate authority chain from the server to ensure it is communicating with a trusted entity.

:::{image} images/elastic-agent-proxy-gateway-secure.png
:alt: Image showing data flow between the proxy server and the Certificate Authority
:::

If mTLS is a requirement when connecting to your proxy server, then you have the option to add the client certificate and client certificate key to the proxy. Once configured, all the {{agents}} in a policy that connect to this secure proxy (via an output or {{fleet-server}}) will use the nominated certificates to establish connections to the proxy server.

Note that you can define a local path to the certificate and key, since in many common scenarios the certificate and key will be unique for each {{agent}}.

Equally important is the certificate authority that the agents need to use to validate the certificate they receive from the secure proxy server. This can also be added when creating the proxy definition in the {{fleet}} settings.

:::{image} images/elastic-agent-edit-proxy-secure-settings.png
:alt: Screen capture of the Edit Proxy UI
:::

::::{note}
Currently {{agents}} will not present a certificate for control plane traffic to the {{fleet-server}}. Some proxy servers are set up to mandate that a client presents a certificate to them before allowing that client to connect. This issue will be resolved by [issue #2248](https://github.com/elastic/elastic-agent/issues/2248). Our recommendation is to avoid adding such a secure proxy in a {{fleet-server}} configuration flyout.
::::

::::{note}
In case {{kib}} is behind a proxy server or is otherwise unable to access the {{package-registry}} to download package metadata and content, refer to [Set the proxy URL of the {{package-registry}}](/reference/ingestion-tools/fleet/epr-proxy-setting.md).
::::

diff --git a/reference/ingestion-tools/fleet/fleet-agent-proxy-standalone.md b/reference/ingestion-tools/fleet/fleet-agent-proxy-standalone.md
new file mode 100644
index 0000000000..87834aaa31
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-agent-proxy-standalone.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-agent-proxy-standalone.html
---

# Standalone Elastic Agent connectivity using a proxy server [fleet-agent-proxy-standalone]

Proxy settings in the {{agent}} policy override proxy settings specified by environment variables. This means you can specify proxy settings for {{agent}} that are different from host or system-level environment settings.

The following proxy settings are valid in the agent policy:

| Setting | Description |
| --- | --- |
| `proxy_url` | (string) URL of the proxy server. If set, the configured URL is used as a proxy for all connection attempts by the component. The value may be either a complete URL or a `host[:port]`, in which case the `http` scheme is assumed. If a value is not specified through the configuration, then proxy environment variables are used. The URL accepts optional `username` and `password` settings for authenticating with the proxy. For example: `http://<username>:<password>@<host>/`. |
| `proxy_headers` | (string) Additional headers to send to the proxy during CONNECT requests. You can use this setting to pass keys/tokens required for authenticating with the proxy. |
| `proxy_disable` | (boolean) If set to `true`, all proxy settings, including the `HTTP_PROXY` and `HTTPS_PROXY` environment variables, are ignored. |

## Set the proxy for communicating with {{es}} [_set_the_proxy_for_communicating_with_es]

For standalone agents, to set the proxy for communicating with {{es}}, specify proxy settings in the `elastic-agent.yml` file. For example:

```yaml
outputs:
  default:
    api_key: API-KEY
    hosts:
      - https://10.0.1.2:9200
    proxy_url: http://10.0.1.7:3128
    type: elasticsearch
```

For more information, refer to [*Configure standalone {{agent}}s*](/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md).

diff --git a/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md b/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md
new file mode 100644
index 0000000000..606d925856
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-agent-proxy-support.html
---

# Using a proxy server with Elastic Agent and Fleet [fleet-agent-proxy-support]

Many enterprises secure their assets by placing a proxy server between them and the internet. The main role of the proxy server is to filter content and provide a single gateway through which all traffic traverses in and out of a data center. These proxy servers provide varying degrees of functionality, security, and privacy.

Your organization's security strategy and other considerations may require you to use a proxy server between some components in your deployment. For example, you may have a firewall rule that prevents endpoints from connecting directly to {{es}}. In this scenario, you can set up the {{agent}} to connect to a proxy, then the proxy can connect to {{es}} through the firewall.

Support is available in {{agent}} and {{fleet}} for connections through HTTP Connect (HTTP 1 only) and SOCKS5 proxy servers.

::::{note}
Some environments require users to authenticate with the proxy.
There are no explicit settings for proxy authentication in {{agent}} or {{fleet}}, except the ability to pass credentials in the URL or as keys/tokens in headers, as described later.
::::

Refer to [When to configure proxy settings](/reference/ingestion-tools/fleet/elastic-agent-proxy-config.md) for more detail, or jump into one of the following guides:

* [Proxy Server connectivity using default host variables](/reference/ingestion-tools/fleet/host-proxy-env-vars.md)
* [Fleet managed {{agent}} connectivity using a proxy server](/reference/ingestion-tools/fleet/fleet-agent-proxy-managed.md)
* [Standalone {{agent}} connectivity using a proxy server](/reference/ingestion-tools/fleet/fleet-agent-proxy-standalone.md)

diff --git a/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md b/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md
new file mode 100644
index 0000000000..e2d805f06b
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md
---
navigation_title: "Restrictions for {{serverless-full}}"
applies_to:
  serverless: all
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-agent-serverless-restrictions.html
---

# {{fleet}} and {{agent}} restrictions for {{serverless-full}} [fleet-agent-serverless-restrictions]

## {{agent}} [elastic-agent-serverless-restrictions]

If you are using {{agent}} with [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), note these differences from use with {{ess}} and self-managed {{es}}:

* The number of {{agents}} that may be connected to an {{serverless-full}} project is limited to 10,000.
* The minimum version of {{agent}} supported for use with {{serverless-full}} is 8.11.0.

$$$outputs-serverless-restrictions$$$
**Outputs**

* On {{serverless-short}}, you can configure new {{es}} outputs to use a proxy, with the restriction that the output URL is fixed. Any new {{es}} outputs must use the default {{es}} host URL.

## {{fleet}} [fleet-serverless-restrictions]

The path to get to the {{fleet}} application in {{kib}} differs across projects:

* In {{ess}} deployments, navigate to **Management > Fleet**.
* In {{serverless-short}} {{observability}} projects, navigate to **Project settings > Fleet**.
* In {{serverless-short}} Security projects, navigate to **Assets > Fleet**.

## {{fleet-server}} [fleet-server-serverless-restrictions]

Note the following restrictions with using {{fleet-server}} on {{serverless-short}}:

* On-premises {{fleet-server}} is not currently available for use in a {{serverless-short}} environment. We recommend using the hosted {{fleet-server}} that is included and configured automatically in {{serverless-short}} {{observability}} and Security projects.
* On {{serverless-short}}, you can configure {{fleet-server}} to use a proxy, with the restriction that the {{fleet-server}} host URL is fixed. Any new {{fleet-server}} hosts must use the default {{fleet-server}} host URL.
diff --git a/reference/ingestion-tools/fleet/fleet-api-docs.md b/reference/ingestion-tools/fleet/fleet-api-docs.md new file mode 100644 index 0000000000..373efcb1e1 --- /dev/null +++ b/reference/ingestion-tools/fleet/fleet-api-docs.md @@ -0,0 +1,407 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/fleet-api-docs.html +--- + +# Kibana Fleet APIs [fleet-api-docs] + +You can find details for all available {{fleet}} API endpoints in our generated [Kibana API docs](https://www.elastic.co/docs/api/doc/kibana). + +In this section, we provide examples of some commonly used {{fleet}} APIs. + + +## Using the Console [using-the-console] + +You can run {{fleet}} API requests through the {{kib}} Console. + +1. Open the {{kib}} menu and go to **Management → Dev Tools**. +2. In your request, prepend your {{fleet}} API endpoint with `kbn:`, for example: + + ```sh + GET kbn:/api/fleet/agent_policies + ``` + + +For more detail about using the {{kib}} Console refer to [Run API requests](/explore-analyze/query-filter/tools/console.md). + + +## Authentication [authentication] + +Authentication is required to send {{fleet}} API requests. For more information, refer to [Authentication](https://www.elastic.co/docs/api/doc/kibana/authentication). + + +## Create agent policy [create-agent-policy-api] + +To create a new agent policy in {{fleet}}, call `POST /api/fleet/agent_policies`. + +This cURL example creates an agent policy called `Agent policy 1` in the default namespace. + +```shell +curl --request POST \ + --url 'https://my-kibana-host:9243/api/fleet/agent_policies?sys_monitoring=true' \ + --header 'Accept: */*' \ + --header 'Authorization: ApiKey yourbase64encodedkey' \ + --header 'Cache-Control: no-cache' \ + --header 'Connection: keep-alive' \ + --header 'Content-Type: application/json' \ + --header 'kbn-xsrf: xxx' \ + --data '{ + "name": "Agent policy 1", + "description": "", + "namespace": "default", + "monitoring_enabled": [ + "logs", + "metrics" + ] +}' +``` + +::::{admonition} +To save time, you can use {{kib}} to generate the API request, then run it from the Dev Tools console. + +1. Go to **{{fleet}} → Agent policies**. +2. Click **Create agent policy** and give the policy a name. +3. Click **Preview API request**. +4. Click **Open in Console** and run the request. + +:::: + + +Example response: + +```shell +{ + "item": { + "id": "2b820230-4b54-11ed-b107-4bfe66d759e4", <1> + "name": "Agent policy 1", + "description": "", + "namespace": "default", + "monitoring_enabled": [ + "logs", + "metrics" + ], + "status": "active", + "is_managed": false, + "revision": 1, + "updated_at": "2022-10-14T00:07:19.763Z", + "updated_by": "1282607447", + "schema_version": "1.0.0" + } +} +``` + +1. Make a note of the policy ID. You’ll need the policy ID to add integration policies. + + + +## Create integration policy [create-integration-policy-api] + +To create an integration policy (also known as a package policy) and add it to an existing agent policy, call `POST /api/fleet/package_policies`. + +::::{tip} +You can use the {{fleet}} API to [Create and customize an {{elastic-defend}} policy](/reference/security/elastic-defend/create-defend-policy-api.md). 
::::

This cURL example creates an integration policy for Nginx and adds it to the agent policy created in the previous example:

```shell
curl --request POST \
  --url 'https://my-kibana-host:9243/api/fleet/package_policies' \
  --header 'Authorization: ApiKey yourbase64encodedkey' \
  --header 'Content-Type: application/json' \
  --header 'kbn-xsrf: xx' \
  --data '{
  "name": "nginx-demo-123",
  "policy_id": "2b820230-4b54-11ed-b107-4bfe66d759e4",
  "package": {
    "name": "nginx",
    "version": "1.5.0"
  },
  "inputs": {
    "nginx-logfile": {
      "streams": {
        "nginx.access": {
          "vars": {
            "tags": [
              "test"
            ]
          }
        },
        "nginx.error": {
          "vars": {
            "tags": [
              "test"
            ]
          }
        }
      }
    }
  }
}'
```

::::{admonition}
* To save time, you can use {{kib}} to generate the API call, then run it from the Dev Tools console.

    1. Go to **Integrations**, select an {{agent}} integration, and click **Add <integration name>**.
    2. Configure the integration settings and select which agent policy to use.
    3. Click **Preview API request**.

        If you're creating the integration policy for a new agent policy, the preview shows two requests: one to create the agent policy, and another to create the integration policy.

    4. Click **Open in Console** and run the request (or requests).

* To find out which inputs, streams, and variables are available for an integration, go to **Integrations**, select an {{agent}} integration, and click **API reference**.

::::

Example response (truncated for readability):

```shell
{
  "item" : {
    "created_at" : "2022-10-15T00:41:28.594Z",
    "created_by" : "1282607447",
    "enabled" : true,
    "id" : "92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd",
    "inputs" : [
      {
        "enabled" : true,
        "policy_template" : "nginx",
        "streams" : [
          {
            "compiled_stream" : {
              "exclude_files" : [
                ".gz$"
              ],
              "ignore_older" : "72h",
              "paths" : [
                "/var/log/nginx/access.log*"
              ],
              "processors" : [
                {
                  "add_locale" : null
                }
              ],
              "tags" : [
                "test"
              ]
            },
            "data_stream" : {
              "dataset" : "nginx.access",
              "type" : "logs"
            },
            "enabled" : true,
            "id" : "logfile-nginx.access-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd",
            "release" : "ga",
            "vars" : {
              "ignore_older" : {
                "type" : "text",
                "value" : "72h"
              },
              "paths" : {
                "type" : "text",
                "value" : [
                  "/var/log/nginx/access.log*"
                ]
              },
              "preserve_original_event" : {
                "type" : "bool",
                "value" : false
              },
              "processors" : {
                "type" : "yaml"
              },
              "tags" : {
                "type" : "text",
                "value" : [
                  "test"
                ]
              }
            }
          },
          {
            "compiled_stream" : {
              "exclude_files" : [
                ".gz$"
              ],
              "ignore_older" : "72h",
              "multiline" : {
                "match" : "after",
                "negate" : true,
                "pattern" : "^\\d{4}\\/\\d{2}\\/\\d{2} "
              },
              "paths" : [
                "/var/log/nginx/error.log*"
              ],
              "processors" : [
                {
                  "add_locale" : null
                }
              ],
              "tags" : [
                "test"
              ]
            },
            "data_stream" : {
              "dataset" : "nginx.error",
              "type" : "logs"
            },
            "enabled" : true,
            "id" : "logfile-nginx.error-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd",
            "release" : "ga",
            "vars" : {
              "ignore_older" : {
                "type" : "text",
                "value" : "72h"
              },
              "paths" : {
                "type" : "text",
                "value" : [
                  "/var/log/nginx/error.log*"
                ]
              },
              "preserve_original_event" : {
                "type" : "bool",
                "value" : false
              },
              "processors" : {
                "type" : "yaml"
              },
              "tags" : {
                "type" : "text",
                "value" : [
                  "test"
                ]
              }
            }
          }
        ],
        "type" : "logfile"
      },
      ...
+ { + "enabled" : true, + "policy_template" : "nginx", + "streams" : [ + { + "compiled_stream" : { + "hosts" : [ + "http://127.0.0.1:80" + ], + "metricsets" : [ + "stubstatus" + ], + "period" : "10s", + "server_status_path" : "/nginx_status" + }, + "data_stream" : { + "dataset" : "nginx.stubstatus", + "type" : "metrics" + }, + "enabled" : true, + "id" : "nginx/metrics-nginx.stubstatus-92f33e57-3165-4dcd-a1d5-f01c8ffdcbcd", + "release" : "ga", + "vars" : { + "period" : { + "type" : "text", + "value" : "10s" + }, + "server_status_path" : { + "type" : "text", + "value" : "/nginx_status" + } + } + } + ], + "type" : "nginx/metrics", + "vars" : { + "hosts" : { + "type" : "text", + "value" : [ + "http://127.0.0.1:80" + ] + } + } + } + ], + "name" : "nginx-demo-123", + "namespace" : "default", + "package" : { + "name" : "nginx", + "title" : "Nginx", + "version" : "1.5.0" + }, + "policy_id" : "d625b2e0-4c21-11ed-9426-31f0877749b7", + "revision" : 1, + "updated_at" : "2022-10-15T00:41:28.594Z", + "updated_by" : "1282607447", + "version" : "WzI5OTAsMV0=" + } +} +``` + + +## Get enrollment tokens [get-enrollment-token-api] + +To get a list of valid enrollment tokens from {{fleet}}, call `GET /api/fleet/enrollment_api_keys`. + +This cURL example returns a list of enrollment tokens. + +```shell +curl --request GET \ + --url 'https://my-kibana-host:9243/api/fleet/enrollment_api_keys' \ + --header 'Authorization: ApiKey N2VLRDA0TUJIQ05MaGYydUZrN1Y6d2diMUdwSkRTWGFlSm1rSVZlc2JGQQ==' \ + --header 'Content-Type: application/json' \ + --header 'kbn-xsrf: xx' +``` + +Example response (formatted for readability): + +```shell +{ + "items" : [ + { + "active" : true, + "api_key" : "QlN2UaA0TUJlMGFGbF8IVkhJaHM6eGJjdGtyejJUUFM0a0dGSwlVSzdpdw==", + "api_key_id" : "BSvR04MBe0aFl_HVHIhs", + "created_at" : "2022-10-14T00:07:21.420Z", + "id" : "39703af4-5945-4232-90ae-3161214512fa", + "name" : "Default (39703af4-5945-4232-90ae-3161214512fa)", + "policy_id" : "2b820230-4b54-11ed-b107-4bfe66d759e4" + }, + { + "active" : true, + "api_key" : "Yi1MSTA2TUJIQ05MaGYydV9kZXQ5U2dNWFkyX19sWEdSemFQOUfzSDRLZw==", + "api_key_id" : "b-LI04MBHCNLhf2u_det", + "created_at" : "2022-10-13T23:58:29.266Z", + "id" : "e4768bf2-55a6-433f-a540-51d4ca2d34be", + "name" : "Default (e4768bf2-55a6-433f-a540-51d4ca2d34be)", + "policy_id" : "ee37a8e0-4b52-11ed-b107-4bfe66d759e4" + }, + { + "active" : true, + "api_key" : "b3VLbjA0TUJIQ04MaGYydUk1Z3Q6VzhMTTBITFRTmnktRU9IWDaXWnpMUQ==", + "api_key_id" : "luKn04MBHCNLhf2uI5d4", + "created_at" : "2022-10-13T23:21:30.707Z", + "id" : "d18d2918-bb10-44f2-9f98-df5543e21724", + "name" : "Default (d18d2918-bb10-44f2-9f98-df5543e21724)", + "policy_id" : "c3e31e80-4b4d-11ed-b107-4bfe66d759e4" + }, + { + "active" : true, + "api_key" : "V3VLRTa0TUJIQ05MaGYydVMx4S06WjU5dsZ3YzVRSmFUc5xjSThImi1ydw==", + "api_key_id" : "WuKE04MBHCNLhf2uS1E-", + "created_at" : "2022-10-13T22:43:27.139Z", + "id" : "aad31121-df89-4f57-af84-7c43f72640ee", + "name" : "Default (aad31121-df89-4f57-af84-7c43f72640ee)", + "policy_id" : "72fcc4d0-4b48-11ed-b107-4bfe66d759e4" + }, + ], + "page" : 1, + "perPage" : 20, + "total" : 4 +} +``` diff --git a/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md b/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md new file mode 100644 index 0000000000..0b465a0dae --- /dev/null +++ b/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md @@ -0,0 +1,87 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/fleet-enrollment-tokens.html +--- + +# Fleet enrollment tokens 
[fleet-enrollment-tokens]

A {{fleet}} enrollment token (referred to as an `enrollment API key` in the {{fleet}} API documentation) is an {{es}} API key that you use to enroll one or more {{agent}}s in {{fleet}}. The enrollment token enrolls the {{agent}} in a specific agent policy that defines the data to be collected by the agent. You can use the token as many times as required. It will remain valid until you revoke it.

The enrollment token is used for the initial communication between {{agent}} and {{fleet-server}}. After the initial connection request from the {{agent}}, the {{fleet-server}} passes two API keys to the {{agent}}:

* An output API key

    This API key is used to send data to {{es}}. It has the minimal permissions needed to ingest all the data specified by the agent policy. If the API key is invalid, the {{agent}} stops ingesting data into {{es}}.

* A communication API key

    This API key is used to communicate with the {{fleet-server}}. It has only the permissions needed to communicate with the {{fleet-server}}. If the API key is invalid, {{fleet-server}} stops communicating with the {{agent}}.

## Create enrollment tokens [create-fleet-enrollment-tokens]

Create enrollment tokens and use them to enroll {{agent}}s in specific policies.

::::{tip}
When you use the {{fleet}} UI to add an agent or create a new policy, {{fleet}} creates an enrollment token for you automatically.
::::

To create an enrollment token:

1. In {{kib}}, go to **Management → {{fleet}} → Enrollment tokens**.
2. Click **Create enrollment token**. Name your token and select an agent policy.

    Note that the token name you specify must be unique to avoid conflicts with any existing API keys.

    :::{image} images/create-token.png
    :alt: Enrollment tokens tab in {{fleet}}
    :class: screenshot
    :::

3. Click **Create enrollment token**.
4. In the list of tokens, click the **Show token** icon to see the token secret.

    :::{image} images/show-token.png
    :alt: Enrollment tokens tab with Show token icon highlighted
    :class: screenshot
    :::

All {{agent}}s enrolled through this token will use the selected policy unless you assign or enroll them in a different policy.

To learn how to install {{agent}}s and enroll them in {{fleet}}, refer to [*Install {{agent}}s*](/reference/ingestion-tools/fleet/install-elastic-agents.md).

::::{tip}
You can use the {{fleet}} API to get a list of enrollment tokens. For more information, refer to [{{kib}} {{fleet}} APIs](/reference/ingestion-tools/fleet/fleet-api-docs.md).
::::

## Revoke enrollment tokens [revoke-fleet-enrollment-tokens]

You can revoke an enrollment token that you no longer wish to use to enroll {{agents}} in an agent policy in {{fleet}}. Revoking an enrollment token essentially invalidates the API key used by agents to communicate with {{fleet-server}}.

To revoke an enrollment token:

1. In {{fleet}}, click **Enrollment tokens**.
2. Find the token you want to revoke in the list and click the **Revoke token** icon.

    :::{image} images/revoke-token.png
    :alt: Enrollment tokens tab with Revoke token highlighted
    :class: screenshot
    :::

3. Click **Revoke enrollment token**. You can no longer use this token to enroll {{agent}}s. However, the currently enrolled agents will continue to function.

    To re-enroll your {{agent}}s, use an active enrollment token.

Note that when an enrollment token is revoked, it is not immediately deleted.
Deletion occurs automatically after the duration specified in the {{es}} [`xpack.security.authc.api_key.delete.retention_period`](elasticsearch://docs/reference/elasticsearch/configuration-reference/security-settings.md#api-key-service-settings-delete-retention-period) setting has expired (see [Invalidate API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-invalidate-api-key) for details).

Until the enrollment token has been deleted:

* The token name may not be re-used when you [create an enrollment token](#create-fleet-enrollment-tokens).
* The token continues to be visible in the {{fleet}} UI.
* The token continues to be returned by a `GET /api/fleet/enrollment_api_keys` API request. Revoked enrollment tokens are identified as `"active": false`.

diff --git a/reference/ingestion-tools/fleet/fleet-roles-privileges.md b/reference/ingestion-tools/fleet/fleet-roles-privileges.md
new file mode 100644
index 0000000000..bbb3770849
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-roles-privileges.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-roles-and-privileges.html
---

# Required roles and privileges [fleet-roles-and-privileges]

Beginning with {{stack}} version 8.1, you no longer require the built-in `elastic` superuser credentials to use {{fleet}} and Integrations.

Assigning the {{kib}} feature privileges `Fleet` and `Integrations` grants access to these features:

`all`
: Grants full read-write access.

`read`
: Grants read-only access.

The built-in `editor` role grants the following privileges, supporting full read-write access to {{fleet}} and Integrations:

* {{fleet}}: `All`
* Integrations: `All`

The built-in `viewer` role grants the following privileges, supporting read-only access to {{fleet}} and Integrations:

* {{fleet}}: `None`
* Integrations: `Read`

You can also create a new role that can be assigned to a user to grant access to {{fleet}} and Integrations.

## Create a role for {{fleet}} [fleet-roles-and-privileges-create]

To create a new role with full access to use and manage {{fleet}} and Integrations:

1. In {{kib}}, go to **Management → Stack Management**.
2. In the **Security** section, select **Roles**.
3. Select **Create role**.
4. Specify a name for the role.
5. Leave the {{es}} settings at their defaults, or refer to [Security privileges](elasticsearch://docs/reference/elasticsearch/security-privileges.md) for descriptions of the available settings.
6. In the {{kib}} section, select **Add Kibana privilege**.
7. In the **Spaces** menu, select **All Spaces**. Since many Integrations assets are shared across spaces, the user needs the {{kib}} privileges in all spaces.
8. Expand the **Management** section.
9. Set **Fleet** privileges to **All**.
10. Set **Integrations** privileges to **All**.

:::{image} images/kibana-fleet-privileges.png
:alt: Kibana privileges flyout showing Fleet and Integrations set to All
:class: screenshot
:::

To create a read-only user for Integrations, follow the same steps as above but set the **Fleet** privileges to **None** and the **Integrations** privileges to **Read**.

Read-only access to {{fleet}} is not currently supported but is planned for development in a later release.
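If you prefer to manage roles programmatically, the same privileges can be sketched as a request to the {{kib}} roles API. This is a minimal, illustrative sketch: the role name `fleet-all-role` is arbitrary, and the feature privilege IDs (`fleetv2` for {{fleet}}, `fleet` for Integrations) are an assumption that you should verify against your {{kib}} version before use.

```shell
# Create (or update) a role granting full Fleet and Integrations access in all spaces.
curl --request PUT \
  --url 'https://my-kibana-host:9243/api/security/role/fleet-all-role' \
  --header 'Authorization: ApiKey yourbase64encodedkey' \
  --header 'Content-Type: application/json' \
  --header 'kbn-xsrf: xx' \
  --data '{
  "elasticsearch": { "cluster": [], "indices": [] },
  "kibana": [
    {
      "base": [],
      "feature": {
        "fleetv2": ["all"],
        "fleet": ["all"]
      },
      "spaces": ["*"]
    }
  ]
}'
```

For a read-only Integrations role, the same request body would use `"fleetv2": ["none" is not a valid value, so omit the key]` style adjustments; in practice, omit `fleetv2` and set `"fleet": ["read"]`.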
diff --git a/reference/ingestion-tools/fleet/fleet-server-monitoring.md b/reference/ingestion-tools/fleet/fleet-server-monitoring.md new file mode 100644 index 0000000000..06d0fbebf8 --- /dev/null +++ b/reference/ingestion-tools/fleet/fleet-server-monitoring.md @@ -0,0 +1,46 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/fleet-server-monitoring.html +--- + +# Monitor a self-managed Fleet Server [fleet-server-monitoring] + +For self-managed {{fleet-server}}s, monitoring is key because the operation of the {{fleet-server}} is paramount to the health of the deployed agents and the services they offer. When {{fleet-server}} is not operating correctly, it may lead to delayed check-ins, status information, and updates for the agents it manages. The monitoring data will tell you when to add capacity for {{fleet-server}}, and provide error logs and information to troubleshoot other issues. + +For self-managed clusters, monitoring is on by default when you create a new agent policy or use the existing Default {{fleet-server}} agent policy. + +To monitor {{fleet-server}}: + +1. In {{fleet}}, open the **Agent policies** tab. +2. Click the {{fleet-server}} policy name to edit the policy. +3. Click the **Settings** tab and verify that **Collect agent logs** and **Collect agent metrics** are selected. +4. Next, set the **Default namespace** to something like `fleetserver`. + + Setting the default namespace lets you segregate {{fleet-server}} monitoring data from other collected data. This makes it easier to search and visualize the monitoring data. + + :::{image} images/fleet-server-agent-policy-page.png + :alt: {{fleet-server}} agent policy + :class: screenshot + ::: + +5. To confirm your change, click **Save changes**. + +To see the metrics collected for the agent running {{fleet-server}}, go to **Analytics > Discover**. + +In the following example, `fleetserver` was configured as the namespace, and you can see the metrics collected: + +:::{image} images/datastream-namespace.png +:alt: Data stream +:class: screenshot +::: + +Go to **Analytics > Dashboard** and search for the predefined dashboard called **[Elastic Agent] Agent metrics**. Choose this dashboard, and run a query based on the `fleetserver` namespace. + +The following dashboard shows data for the query `data_stream.namespace: "fleetserver"`. In this example, you can observe CPU and memory usage as a metric and then resize the {{fleet-server}}, if necessary. + +:::{image} images/dashboard-datastream01.png +:alt: Dashboard Data stream +:class: screenshot +::: + +Note that as an alternative to running the query, you can hide all metrics except `fleet_server` in the dashboard. diff --git a/reference/ingestion-tools/fleet/fleet-server-scalability.md b/reference/ingestion-tools/fleet/fleet-server-scalability.md new file mode 100644 index 0000000000..07f03f466a --- /dev/null +++ b/reference/ingestion-tools/fleet/fleet-server-scalability.md @@ -0,0 +1,221 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/fleet-server-scalability.html +--- + +# Fleet Server scalability [fleet-server-scalability] + +This page summarizes the resource and {{fleet-server}} configuration requirements needed to scale your deployment of {{agent}}s. To scale {{fleet-server}}, you need to modify settings in your deployment and the {{fleet-server}} agent policy. + +::::{tip} +Refer to the [Scaling recommendations](#agent-policy-scaling-recommendations) section for specific recommendations about using {{fleet-server}} at scale. 
::::

First modify your {{fleet}} deployment settings in {{ecloud}}:

1. Log in to {{ecloud}} and go to your deployment.
2. Under **Deployments > *deployment name***, click **Edit**.
3. Under {{integrations-server}}:

    * Modify the compute resources available to the server to accommodate a higher scale of {{agent}}s
    * Modify the availability zones to satisfy fault tolerance requirements

    For recommended settings, refer to [Scaling recommendations ({{ecloud}})](#scaling-recommendations).

    :::{image} images/fleet-server-hosted-container.png
    :alt: {{fleet-server}} hosted agent
    :class: screenshot
    :::

Next modify the {{fleet-server}} configuration by editing the agent policy:

1. In {{fleet}}, open the **Agent policies** tab. Click the name of the **{{ecloud}} agent policy** to edit the policy.
2. Open the **Actions** menu next to the {{fleet-server}} integration and click **Edit integration**.

    :::{image} images/elastic-cloud-agent-policy.png
    :alt: {{ecloud}} policy
    :class: screenshot
    :::

3. Under {{fleet-server}}, modify **Max Connections** and other [advanced settings](#fleet-server-configuration) as described in [Scaling recommendations ({{ecloud}})](#scaling-recommendations).

    :::{image} images/fleet-server-configuration.png
    :alt: {{fleet-server}} configuration
    :class: screenshot
    :::

## Advanced {{fleet-server}} options [fleet-server-configuration]

The following advanced settings are available to fine-tune your {{fleet-server}} deployment.

`cache.num_counters`
: Size of the hash table. Best practice is to have this set to 10 times the max connections.

`cache.max_cost`
: Total size of the cache.

`server.timeouts.checkin_timestamp`
: How often {{fleet-server}} updates the "last activity" field for each agent. Defaults to `30s`. In a large-scale deployment, increasing this setting may improve performance. If this setting is higher than `2m`, most agents will be shown as "offline" in the Fleet UI. For a typical setup, it's recommended that you set this value to less than `2m`.

`server.timeouts.checkin_long_poll`
: How long {{fleet-server}} allows a long poll request from an agent before timing out. Defaults to `5m`. In a large-scale deployment, increasing this setting may improve performance.

The rate-limit settings that follow are nested under `server.limits`.

`policy_throttle`
: How often a new policy is rolled out to the agents. Deprecated: use the `action_limit` settings instead.

`action_limit.interval`
: How quickly {{fleet-server}} dispatches pending actions to the agents.

`action_limit.burst`
: Burst of actions that may be dispatched before falling back to the rate limit defined by `interval`.

`checkin_limit.max`
: Maximum number of agents that can call the checkin API concurrently.

`checkin_limit.interval`
: How fast the agents can check in to the {{fleet-server}}.

`checkin_limit.burst`
: Burst of check-ins allowed before falling back to the rate defined by `interval`.

`checkin_limit.max_body_byte_size`
: Maximum size in bytes of the checkin API request body.

`artifact_limit.max`
: Maximum number of agents that can call the artifact API concurrently. It allows the user to avoid overloading the {{fleet-server}} from artifact API calls.

`artifact_limit.interval`
: How often artifacts are rolled out. Default of `100ms` allows 10 artifacts to be rolled out per second.

`artifact_limit.burst`
: Number of transactions allowed for a burst, controlling oversubscription on the outbound buffer.
`artifact_limit.max_body_byte_size`
: Maximum size in bytes of the artifact API request body.

`ack_limit.max`
: Maximum number of agents that can call the ack API concurrently. It allows the user to avoid overloading the {{fleet-server}} from ack API calls.

`ack_limit.interval`
: How often an acknowledgment (ACK) is sent. Default value of `10ms` enables 100 ACKs per second to be sent.

`ack_limit.burst`
: Burst of ACKs to accommodate (default of 20) before falling back to the rate defined in `interval`.

`ack_limit.max_body_byte_size`
: Maximum size in bytes of the ack API request body.

`enroll_limit.max`
: Maximum number of agents that can call the enroll API concurrently. This setting allows the user to avoid overloading the {{fleet-server}} from enrollment API calls.

`enroll_limit.interval`
: Interval between processing enrollment requests. Enrollment is both CPU and RAM intensive, so the number of enrollment requests needs to be limited for overall system health. Default value of `100ms` allows 10 enrollments per second.

`enroll_limit.burst`
: Burst of enrollments to accept before falling back to the rate defined by `interval`.

`enroll_limit.max_body_byte_size`
: Maximum size in bytes of the enroll API request body.

`status_limit.max`
: Maximum number of agents that can call the status API concurrently. This setting allows the user to avoid overloading the {{fleet-server}} from status API calls.

`status_limit.interval`
: How frequently agents can submit status requests to the {{fleet-server}}.

`status_limit.burst`
: Burst of status requests to accommodate before falling back to the rate defined by `interval`.

`status_limit.max_body_byte_size`
: Maximum size in bytes of the status API request body.

`upload_start_limit.max`
: Maximum number of agents that can call the uploadStart API concurrently. This setting allows the user to avoid overloading the {{fleet-server}} from uploadStart API calls.

`upload_start_limit.interval`
: How frequently agents can submit file start upload requests to the {{fleet-server}}.

`upload_start_limit.burst`
: Burst of file start upload requests to accommodate before falling back to the rate defined by `interval`.

`upload_start_limit.max_body_byte_size`
: Maximum size in bytes of the uploadStart API request body.

`upload_end_limit.max`
: Maximum number of agents that can call the uploadEnd API concurrently. This setting allows the user to avoid overloading the {{fleet-server}} from uploadEnd API calls.

`upload_end_limit.interval`
: How frequently agents can submit file end upload requests to the {{fleet-server}}.

`upload_end_limit.burst`
: Burst of file end upload requests to accommodate before falling back to the rate defined by `interval`.

`upload_end_limit.max_body_byte_size`
: Maximum size in bytes of the uploadEnd API request body.

`upload_chunk_limit.max`
: Maximum number of agents that can call the uploadChunk API concurrently. This setting allows the user to avoid overloading the {{fleet-server}} from uploadChunk API calls.

`upload_chunk_limit.interval`
: How frequently agents can submit file chunk upload requests to the {{fleet-server}}.

`upload_chunk_limit.burst`
: Burst of file chunk upload requests to accommodate before falling back to the rate defined by `interval`.

`upload_chunk_limit.max_body_byte_size`
: Maximum size in bytes of the uploadChunk API request body.
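As a quick orientation, the sketch below shows how a few of these settings might be grouped when supplied as YAML, for example in the {{fleet-server}} integration's advanced configuration. The values and the grouped key layout are illustrative only, inferred from the setting names above; size the values for your own deployment and confirm the exact schema against the {{fleet-server}} reference for your version.

```yaml
# Illustrative values only -- not sizing guidance.
cache:
  num_counters: 40000       # ~10x the expected max connections
  max_cost: 50000000        # total cache size
server.timeouts:
  checkin_timestamp: 30s    # keep below 2m so agents show as online
  checkin_long_poll: 5m
server.limits:
  checkin_limit:
    max: 5000               # concurrent checkin API callers
    interval: 1ms
    burst: 1000
```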
## Scaling recommendations ({{ecloud}}) [scaling-recommendations]

The following tables provide the minimum resource requirements and scaling guidelines based on the number of agents required by your deployment. Note that these compute resources can be spread across multiple availability zones (for example, a 32GB RAM requirement can be satisfied with 16GB of RAM in 2 different zones).

* [Resource requirements by number of agents](#resource-requirements-by-number-agents)

### Resource requirements by number of agents [resource-requirements-by-number-agents]

| Number of Agents | {{fleet-server}} Memory | {{fleet-server}} vCPU | {{es}} Hot Tier RAM | {{es}} Hot Tier vCPU |
| --- | --- | --- | --- | --- |
| 2,000 | 2GB | up to 8 vCPU | 32GB RAM | 8 vCPU |
| 5,000 | 4GB | up to 8 vCPU | 32GB RAM | 8 vCPU |
| 10,000 | 8GB | up to 8 vCPU | 128GB RAM | 32 vCPU |
| 15,000 | 8GB | up to 8 vCPU | 256GB RAM | 64 vCPU |
| 25,000 | 8GB | up to 8 vCPU | 256GB RAM | 64 vCPU |
| 50,000 | 8GB | up to 8 vCPU | 384GB RAM | 96 vCPU |
| 75,000 | 8GB | up to 8 vCPU | 384GB RAM | 96 vCPU |
| 100,000 | 16GB | 16 vCPU | 512GB RAM | 128 vCPU |

A series of scale performance tests is regularly executed to verify the above requirements and the ability of {{fleet}} to manage the advertised scale of {{agent}}s. These tests go through a set of acceptance criteria that mimic a typical platform operator workflow: the test cases perform agent installations, version upgrades, policy modifications, and adding/removing integrations, tags, and policies. The acceptance criteria are met when the {{agent}}s reach a `Healthy` state after any of these operations.

## Scaling recommendations [agent-policy-scaling-recommendations]

**{{agent}} policies**

A single instance of {{fleet}} supports a maximum of 1000 {{agent}} policies. If more policies are configured, UI performance might be impacted. The maximum number of policies is not affected by the number of spaces in which the policies are used.

If you are using {{agent}} with [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), the maximum supported number of {{agent}} policies is 500.

**{{agents}}**

When you use {{fleet}} to manage a large volume (10,000 or more) of {{agents}}, each agent check-in triggers an {{es}} authentication request. To help reduce the possibility of cache eviction and to speed up propagation of {{agent}} policy changes and actions, we recommend setting the [API key cache size](elasticsearch://docs/reference/elasticsearch/configuration-reference/security-settings.md#api-key-service-settings) in your {{es}} configuration to 2x the maximum number of agents.

For example, with 25,000 running {{agents}} you could set the cache value to `50000`:

```yaml
xpack.security.authc.api_key.cache.max_keys: 50000
```

diff --git a/reference/ingestion-tools/fleet/fleet-server-secrets.md b/reference/ingestion-tools/fleet/fleet-server-secrets.md
new file mode 100644
index 0000000000..35d5323dd1
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-server-secrets.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-server-secrets.html
---

# Fleet Server Secrets [fleet-server-secrets]

{{fleet-server}} configuration can contain secret values. You may specify these values directly in the configuration or through secret files.
You can use command line arguments to pass the values or file paths when you are running under {{agent}}, or you can use environment variables if {{agent}} is running in a container.

For examples of how to deploy secret files, refer to our [Secret files guide](/reference/ingestion-tools/fleet/secret-files-guide.md).

::::{note}
Stand-alone {{fleet-server}} is under active development.
::::

## Secret values [_secret_values]

The following secret values may be used when configuring {{fleet-server}}.

Note that the configuration fragments shown below are specified either in the UI as part of the output specification or as part of the {{fleet-server}} integration settings.

`service_token`
: The `service_token` is used to communicate with {{es}}.

    It may be specified in the configuration directly as:

    ```yaml
    output.elasticsearch.service_token: my-service-token
    ```

    Or by a file with:

    ```yaml
    output.elasticsearch.service_token_path: /path/to/token-file
    ```

    When you are running {{fleet-server}} under {{agent}}, you can specify it with either the `--fleet-server-service-token` or the `--fleet-server-service-token-path` flag. See [{{agent}} command reference](/reference/ingestion-tools/fleet/agent-command-reference.md) for more details.

    If you are [running {{fleet-server}} under {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md), you can use the environment variables `FLEET_SERVER_SERVICE_TOKEN` or `FLEET_SERVER_SERVICE_TOKEN_PATH`.

TLS private key
: Use the TLS private key to encrypt communications between {{fleet-server}} and {{agent}}. See [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md) for more details.

    Although it is not recommended, you may specify the private key directly in the configuration as:

    ```yaml
    inputs:
      - type: fleet-server
        ssl.key: |
          -----BEGIN PRIVATE KEY-----
          ....
          -----END PRIVATE KEY-----
    ```

    Alternatively, you can provide the path to the private key with the same attribute:

    ```yaml
    inputs:
      - type: fleet-server
        ssl.key: /path/to/cert.key
    ```

    When you are running {{fleet-server}} under {{agent}}, you can provide the private key path using the `--fleet-server-cert-key` flag. See [{{agent}} command reference](/reference/ingestion-tools/fleet/agent-command-reference.md) for more details.

    If you are [running {{fleet-server}} under {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md), you can use the environment variable `FLEET_SERVER_CERT_KEY` to specify the private key path.

TLS private key passphrase
: The private key passphrase is used to decrypt an encrypted private key file.

    You can specify the passphrase as a secret file in the configuration with:

    ```yaml
    inputs:
      - type: fleet-server
        ssl.key_passphrase_path: /path/to/passphrase
    ```

    When you are running {{fleet-server}} under {{agent}}, you can provide the passphrase path using the `--fleet-server-cert-key-passphrase` flag. See [{{agent}} command reference](/reference/ingestion-tools/fleet/agent-command-reference.md) for more details.

    If you are [running {{fleet-server}} under {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md), you can use the environment variable `FLEET_SERVER_CERT_KEY_PASSPHRASE` to specify the file path.

APM API Key
: The APM API Key may be used to gather APM data from {{fleet-server}}.
    You can specify it directly in the instrumentation segment of the configuration:

    ```yaml
    inputs:
      - type: fleet-server
        instrumentation.api_key: my-apm-api-key
    ```

    Or by a file with:

    ```yaml
    inputs:
      - type: fleet-server
        instrumentation.api_key_file: /path/to/apmAPIKey
    ```

    You may specify the API key by value using the environment variable `ELASTIC_APM_API_KEY`.

APM secret token
: The APM secret token may be used to gather APM data from {{fleet-server}}.

    You can specify the secret token directly in the instrumentation segment of the configuration:

    ```yaml
    inputs:
      - type: fleet-server
        instrumentation.secret_token: my-apm-secret-token
    ```

    Or by a file with:

    ```yaml
    inputs:
      - type: fleet-server
        instrumentation.secret_token_file: /path/to/apmSecretToken
    ```

    You may also specify the token by value using the environment variable `ELASTIC_APM_SECRET_TOKEN`.

diff --git a/reference/ingestion-tools/fleet/fleet-server.md b/reference/ingestion-tools/fleet/fleet-server.md
new file mode 100644
index 0000000000..edbe1e963f
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-server.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-server.html
---

# What is Fleet Server? [fleet-server]

{{fleet-server}} is a component that connects {{agent}}s to {{fleet}}. It supports many {{agent}} connections and serves as a control plane for updating agent policies, collecting status information, and coordinating actions across {{agent}}s. It also provides a scalable architecture. As the size of your agent deployment grows, you can deploy additional {{fleet-server}}s to manage the increased workload.

On-premises {{fleet-server}} is not currently available for use in an [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) environment. We recommend using the hosted {{fleet-server}} that is included and configured automatically in {{serverless-short}} {{observability}} and Security projects.

The following diagram shows how {{agent}}s communicate with {{fleet-server}} to retrieve agent policies:

:::{image} images/fleet-server-agent-policies-diagram.png
:alt: {{fleet-server}} Cloud deployment model
:::

1. When a new agent policy is created, the {{fleet}} UI saves the policy to a {{fleet}} index in {{es}}.
2. To enroll in the policy, {{agent}}s send a request to {{fleet-server}}, using the enrollment token generated for authentication.
3. {{fleet-server}} monitors {{fleet}} indices, picks up the new agent policy from {{es}}, then ships the policy to all {{agent}}s enrolled in that policy. {{fleet-server}} may also write updated policies to the {{fleet}} index to manage coordination between agents.
4. {{agent}} uses configuration information in the policy to collect and send data to {{es}}.
5. {{agent}} checks in with {{fleet-server}} for updates, maintaining an open connection.
6. When a policy is updated, {{fleet-server}} retrieves the updated policy from {{es}} and sends it to the connected {{agent}}s.
7. To communicate with {{fleet}} about the status of {{agent}}s and the policy rollout, {{fleet-server}} writes updates to {{fleet}} indices.

::::{admonition}
**Does {{fleet-server}} run inside of {{agent}}?**

{{fleet-server}} is a subprocess that runs inside a deployed {{agent}}. This means the deployment steps are similar to any {{agent}}, except that you enroll the agent in a special {{fleet-server}} policy.
Typically, especially in large-scale deployments, this agent is dedicated to running {{fleet-server}} as an {{agent}} communication host and is not configured for data collection.

::::

## Service account [fleet-security-account]

{{fleet-server}} uses a service token to communicate with {{es}}. The service token is associated with the `fleet-server` service account. Each {{fleet-server}} can use its own service token, or you can share a token across multiple servers (not recommended). The advantage of using a separate token for each server is that you can invalidate each one separately.

You can create a service token by using either the {{fleet}} UI or the {{es}} API. For more information, refer to [Deploy {{fleet-server}} on-premises and {{es}} on Cloud](/reference/ingestion-tools/fleet/add-fleet-server-mixed.md) or [Deploy on-premises and self-managed](/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md), depending on your deployment model.

## {{fleet-server}} high-availability operations [fleet-server-HA-operations]

{{fleet-server}} is stateless. Connections to {{fleet-server}} can therefore be load balanced, as long as the {{fleet-server}} has capacity to accept more connections. Load balancing is done on a round-robin basis.

How you handle high-availability, fault-tolerance, and lifecycle management of {{fleet-server}} depends on the deployment model you use.

## Learn more [_learn_more]

To learn more about deploying and scaling {{fleet-server}}, refer to:

* [Deploy on {{ecloud}}](/reference/ingestion-tools/fleet/add-fleet-server-cloud.md)
* [Deploy {{fleet-server}} on-premises and {{es}} on Cloud](/reference/ingestion-tools/fleet/add-fleet-server-mixed.md)
* [Deploy on-premises and self-managed](/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md)
* [{{fleet-server}} scalability](/reference/ingestion-tools/fleet/fleet-server-scalability.md)
* [Monitor a self-managed {{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server-monitoring.md)

## {{fleet-server}} secrets configuration [fleet-server-secrets-config]

Secrets used to configure {{fleet-server}} can either be directly specified in configuration or provided through secret files. See [{{fleet-server}} Secrets](/reference/ingestion-tools/fleet/fleet-server-secrets.md) for more information.

diff --git a/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md b/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md
new file mode 100644
index 0000000000..7c2b1efe2a
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-settings-changing-outputs.html
---

# Considerations when changing outputs [fleet-settings-changing-outputs]

{{fleet}} provides the capability to update your [output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings) to add new outputs, and then to assign those new outputs to an {{agent}} policy. However, changing outputs should be done with caution.

When you change the output configuration within a policy applied to one or more agents, there's a high likelihood of those agents re-ingesting previously processed logs:

* Changing the output will cause the agents to remove and recreate all existing integrations associated with the new output; as a result of the change, each integration receives a new UUID.
diff --git a/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md b/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md
new file mode 100644
index 0000000000..7c2b1efe2a
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-settings-changing-outputs.md
@@ -0,0 +1,18 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-settings-changing-outputs.html
---

# Considerations when changing outputs [fleet-settings-changing-outputs]

{{fleet}} lets you update your [output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings) to add new outputs and assign those new outputs to an {{agent}} policy. However, changing outputs should be done with caution.

When you change the output configuration within a policy applied to one or more agents, there’s a high likelihood of those agents re-ingesting previously processed logs:

* Changing the output causes the agents to remove and recreate all existing integrations associated with the output, which receives a new UUID as a result of the change.
* As a consequence of the newly generated output UUID, the agents retransmit all events and logs they are configured to collect, because the data registry is re-created.

In cases where an update to an output is required, it’s generally preferable to update your existing output rather than create a new one.

For example, an update is appropriate when switching from a static IP address to a global load balancer (where both endpoints point to the same underlying cluster). In this situation, changing to a new output would result in data being re-collected, while updating the existing output would not.

diff --git a/reference/ingestion-tools/fleet/fleet-settings.md b/reference/ingestion-tools/fleet/fleet-settings.md
new file mode 100644
index 0000000000..06b11f3847
--- /dev/null
+++ b/reference/ingestion-tools/fleet/fleet-settings.md
@@ -0,0 +1,130 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/fleet-settings.html
---

# Fleet settings [fleet-settings]

::::{note}
The settings described here are configurable through the {{fleet}} UI. Refer to [{{fleet}} settings in {{kib}}](kibana://docs/reference/configuration-reference/fleet-settings.md) for a list of settings that you can configure in the `kibana.yml` configuration file.
::::


On the **Settings** tab in **Fleet**, you can configure global settings available to all {{agent}}s enrolled in {{fleet}}. This includes {{fleet-server}} hosts and output settings.


## {{fleet-server}} host settings [fleet-server-hosts-setting]

Click **Edit hosts** and specify the host URLs your {{agent}}s will use to connect to a {{fleet-server}}.

::::{tip}
If the **Edit hosts** option is grayed out, {{fleet-server}} hosts are configured outside of {{fleet}}. For more information, refer to [{{fleet}} settings in {{kib}}](kibana://docs/reference/configuration-reference/fleet-settings.md).
::::


Not sure if {{fleet-server}} is running? Refer to [What is {{fleet-server}}?](/reference/ingestion-tools/fleet/fleet-server.md).

On self-managed clusters, you must specify one or more URLs.

On {{ecloud}}, this field is populated automatically. If you are using Azure Private Link, GCP Private Service Connect, or AWS PrivateLink and enrolling the {{agent}} with a private link URL, ensure that this setting is configured. Otherwise, {{agent}} will reset to use a default address instead of the private link URL.

::::{note}
If a URL is specified without a port, {{kib}} sets the port to `80` (http) or `443` (https).
::::


By default, {{fleet-server}} is typically exposed on the following ports:

`8220`
:   Default {{fleet-server}} port for self-managed clusters

`443` or `9243`
:   Default {{fleet-server}} port for {{ecloud}}. View the {{fleet}} **Settings** tab to find the actual port that’s used.

::::{important}
The exposed ports must be open for ingress and egress in the firewall and networking rules on the host to allow {{agent}}s to communicate with {{fleet-server}}.
::::


Specify multiple URLs (click **Add row**) to scale out your deployment and provide automatic failover. If multiple URLs exist, {{fleet}} shows the first provided URL for enrollment purposes. Enrolled {{agent}}s will connect to the URLs in round-robin order until they connect successfully.

When a {{fleet-server}} is added or removed from the list, all agent policies are updated automatically.
+ +**Examples:** + +* `https://192.0.2.1:8220` +* `https://abae718c1276457893b1096929e0f557.fleet.eu-west-1.aws.qa.cld.elstc.co:443` +* `https://[2001:db8::1]:8220` + + +## Output settings [output-settings] + +Add or edit output settings to specify where {{agent}}s send data. {{agent}}s use the default output if you don’t select an output in the agent policy. + +::::{tip} +If you have an `Enterprise` [{{stack}} subscription](https://www.elastic.co/subscriptions), you can configure {{agent}} to [send data to different outputs for different integration policies](/reference/ingestion-tools/fleet/integration-level-outputs.md). +:::: + + +::::{note} +The {{ecloud}} internal output is locked and cannot be edited. This output is used for internal routing to reduce external network charges when using the {{ecloud}} agent policy. It also provides visibility for troubleshooting on {{ece}}. +:::: + + +To add or edit an output: + +1. Go to **{{fleet}} → Settings**. +2. Under **Outputs**, click **Add output** or **Edit**. + + :::{image} images/fleet-add-output-button.png + :alt: {{fleet}} Add output button + ::: + + The **Add new output** UI opens. + +3. Set the output name and type. +4. Specify settings for the output type you selected: + + * [{{es}} output settings](/reference/ingestion-tools/fleet/es-output-settings.md) + * [{{ls}} output settings](/reference/ingestion-tools/fleet/ls-output-settings.md) + * [Kafka output settings](/reference/ingestion-tools/fleet/kafka-output-settings.md) + * [Remote {{es}} output](/reference/ingestion-tools/fleet/remote-elasticsearch-output.md) + +5. Click **Save and apply settings**. + +::::{tip} +If the options for editing an output are grayed out, outputs are configured outside of {{fleet}}. For more information, refer to [{{fleet}} settings in {{kib}}](kibana://docs/reference/configuration-reference/fleet-settings.md). +:::: + + + +## Agent binary download settings [fleet-agent-binary-download-settings] + +{{agent}}s must be able to access the {{artifact-registry}} to download binaries during upgrades. By default {{agent}}s download artifacts from the artifact registry at `https://artifacts.elastic.co/downloads/`. + +For {{agent}}s that cannot access the internet, you can specify agent binary download settings, and then configure agents to download their artifacts from the alternate location. For more information about running {{agent}}s in a restricted environment, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md). + +To add or edit the source of binary downloads: + +1. Go to **{{fleet}} → Settings**. +2. Under **Agent Binary Download**, click **Add agent binary source** or **Edit**. +3. Set the agent binary source name. +4. For **Host**, specify the address where you are hosting the artifacts repository. +5. (Optional) To make this location the default, select **Make this host the default for all agent policies**. {{agent}}s use the default location if you don’t select a different agent binary source in the agent policy. + + +## Proxies [proxy-settings] + +You can specify a proxy server to be used in {{fleet-server}}, {{agent}} outputs, or for any agent binary download sources. For full details about proxy configuration refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md). 
## Delete unenrolled agents [delete-unenrolled-agents-setting]

After an {{agent}} has been unenrolled in {{fleet}}, a number of documents about the agent are retained in case the agent needs to be recovered later. You can choose to have all data related to an unenrolled agent deleted automatically.

Note that this option can also be enabled by adding the `xpack.fleet.enableDeleteUnenrolledAgents: true` setting to the [{{kib}} settings file](/get-started/the-stack.md).

To enable automatic deletion of unenrolled agents:

1. Go to **{{fleet}} → Settings**.
2. Under **Advanced Settings**, enable the **Delete unenrolled agents** option.
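For reference, enabling the same behavior in the `kibana.yml` configuration file is a one-line change:

```yaml
# kibana.yml: automatically delete documents for unenrolled agents
xpack.fleet.enableDeleteUnenrolledAgents: true
```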
diff --git a/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md b/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md
new file mode 100644
index 0000000000..26c06a543e
--- /dev/null
+++ b/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md
@@ -0,0 +1,134 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/grant-access-to-elasticsearch.html
---

# Grant standalone Elastic Agents access to Elasticsearch [grant-access-to-elasticsearch]

You can use either API keys or user credentials to grant standalone {{agent}}s access to {{es}} resources. The following minimal permissions are required to send logs, metrics, traces, and synthetics to {{es}}:

* `monitor` cluster privilege
* `auto_configure` and `create_doc` index privileges on `logs-*-*`, `metrics-*-*`, `traces-*-*`, and `synthetics-*-*`.

It’s recommended that you use API keys to avoid exposing usernames and passwords in configuration files.

If you’re using {{fleet}}, refer to [{{fleet}} enrollment tokens](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md).


## Create API keys for standalone agents [create-api-key-standalone-agent]

::::{note}
API keys are sent in plain text, so they only provide security when used in combination with Transport Layer Security (TLS). Our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body) on {{ecloud}} provides secure, encrypted connections out of the box! For self-managed {{es}} clusters, refer to [Public Key Infrastructure (PKI) certificates](/reference/ingestion-tools/fleet/elasticsearch-output.md#output-elasticsearch-pki-certs-authentication-settings).
::::


You can set API keys to expire at a certain time, and you can explicitly invalidate them. Any user with the `manage_api_key` or `manage_own_api_key` cluster privilege can create API keys.

For security reasons, we recommend using a unique API key per {{agent}}. You can create as many API keys per user as necessary.

If you are using [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), API key authentication is required.

To create an API key for {{agent}}:

1. In an {{ecloud}} or on-premises environment, in {{kib}} navigate to **{{stack-manage-app}} > API keys** and click **Create API key**.

    In a {{serverless-short}} environment, in {{kib}} navigate to **Project settings** > **Management** > **API keys** and click **Create API key**.

2. Enter a name for your API key and select **Control security privileges**.
3. In the role descriptors box, copy and paste the following JSON. This example creates an API key with privileges for ingesting logs, metrics, traces, and synthetics:

    ```json
    {
      "standalone_agent": {
        "cluster": [
          "monitor"
        ],
        "indices": [
          {
            "names": [
              "logs-*-*", "metrics-*-*", "traces-*-*", "synthetics-*-*" <1>
            ],
            "privileges": [
              "auto_configure", "create_doc"
            ]
          }
        ]
      }
    }
    ```

    1. Adjust this list to match the data you want to collect. For example, if you aren’t using APM or synthetics, remove `"traces-*-*"` and `"synthetics-*-*"` from this list.

4. To set an expiration date for the API key, select **Expire after time** and input the lifetime of the API key in days.
5. Click **Create API key**.

    You’ll see a message indicating that the key was created, along with the encoded key. By default, the API key is Base64 encoded, but that won’t work for {{agent}}.

6. Click the down arrow next to Base64 and select **Beats**.

    :::{image} images/copy-api-key.png
    :alt: Message with field for copying API key
    :class: screenshot
    :::

7. Copy the API key. You will need this for the next step, and you will not be able to view it again.
8. To use the API key, specify the `api_key` setting in the `elastic-agent.yml` file. For example:

    ```yaml
    [...]
    outputs:
      default:
        type: elasticsearch
        hosts:
          - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443'
        api_key: _Nj4oH0aWZVGqM7MGop8:349p_U1ERHyIc4Nm8_AYkw <1>
    [...]
    ```

    1. The format of this key is `<id>:<api_key>`. Base64-encoded API keys are not currently supported in this configuration.


For more information about creating API keys in {{kib}}, see [API Keys](/deploy-manage/api-keys/elasticsearch-api-keys.md).


## Create a standalone agent role [create-role-standalone-agent]

Although it’s recommended that you use an API key instead of a username and password to access {{es}} (and an API key is required in a {{serverless-short}} environment), you can create a role with the required privileges, assign it to a user, and specify the user’s credentials in the `elastic-agent.yml` file.

1. In {{kib}}, go to **{{stack-manage-app}} > Roles**.
2. Click **Create role** and enter a name for the role.
3. In **Cluster privileges**, enter `monitor`.
4. In **Index privileges**, enter:

    1. `logs-*-*`, `metrics-*-*`, `traces-*-*`, and `synthetics-*-*` in the **Indices** field.

        ::::{note}
        Adjust this list to match the data you want to collect. For example, if you aren’t using APM or synthetics, remove `traces-*-*` and `synthetics-*-*` from this list.
        ::::

    2. `auto_configure` and `create_doc` in the **Privileges** field.

        :::{image} images/create-standalone-agent-role.png
        :alt: Create role settings for a standalone agent role
        :class: screenshot
        :::

5. Create the role and assign it to a user. For more information about creating roles, refer to [{{kib}} role management](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md).
6. To use these credentials, set the username and password in the `elastic-agent.yml` file:

    ```yaml
    [...]
    outputs:
      default:
        type: elasticsearch
        hosts:
          - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443'
        username: ES_USERNAME <1>
        password: ES_PASSWORD
    [...]
    ```

    1. For security reasons, specify a user with the minimal privileges described here. It’s recommended that you do not use the `elastic` superuser.
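If you prefer to create the API key programmatically instead of through the {{kib}} UI, the same role descriptor can be passed to the {{es}} create API key endpoint. This is a minimal sketch; the key name, expiration, and host are illustrative:

```sh
curl -u elastic -X POST "https://localhost:9200/_security/api_key" \
  -H 'Content-Type: application/json' -d '
{
  "name": "standalone-agent-key",
  "expiration": "90d",
  "role_descriptors": {
    "standalone_agent": {
      "cluster": ["monitor"],
      "indices": [
        {
          "names": ["logs-*-*", "metrics-*-*", "traces-*-*", "synthetics-*-*"],
          "privileges": ["auto_configure", "create_doc"]
        }
      ]
    }
  }
}'
```

The response includes separate `id` and `api_key` fields; join them as `id:api_key` to produce the value expected by the `api_key` setting in `elastic-agent.yml`.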
diff --git a/reference/ingestion-tools/fleet/hints-annotations-autodiscovery.md b/reference/ingestion-tools/fleet/hints-annotations-autodiscovery.md
new file mode 100644
index 0000000000..afee96a4b3
--- /dev/null
+++ b/reference/ingestion-tools/fleet/hints-annotations-autodiscovery.md
@@ -0,0 +1,414 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/hints-annotations-autodiscovery.html
---

# Hints annotations based autodiscover [hints-annotations-autodiscovery]

::::{warning}
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::


::::{note}
Make sure you are using {{agent}} 8.5+.
::::


::::{note}
Hints autodiscovery only works with {{agent}} Standalone.
::::


Standalone {{agent}} supports autodiscover based on hints from the [provider](/reference/ingestion-tools/fleet/kubernetes-provider.md). The hints mechanism looks for hints in Kubernetes Pod annotations that have the prefix `co.elastic.hints`. As soon as the container starts, {{agent}} checks it for hints and launches the proper configuration for the container. Hints tell {{agent}} how to monitor the container by using the proper integration. This is the full list of supported hints:


## Required hints: [_required_hints]


### `co.elastic.hints/package` [_co_elastic_hintspackage]

The package to use for monitoring.


## Optional hints available: [_optional_hints_available]


### `co.elastic.hints/host` [_co_elastic_hintshost]

The host to use for metrics retrieval. If not defined, the host is set to the default `<pod_ip>:<port>`.


### `co.elastic.hints/data_streams` [_co_elastic_hintsdata_stream]

The list of data streams to enable. If not specified, the integration’s default data streams are used. To find the defaults, refer to the [Elastic integrations documentation](integration-docs://docs/reference/index.md).

If data streams are specified, additional hints can be defined per data stream. For example, `co.elastic.hints/info.period: 5m` if the data stream specified is `info` for the [Redis module](beats://docs/reference/metricbeat/metricbeat-module-redis.md).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    co.elastic.hints/package: redis
    co.elastic.hints/data_streams: info
    co.elastic.hints/info.period: 5m
```

If data stream hints are not specified, the top-level hints are used in its configuration.


### `co.elastic.hints/metrics_path` [_co_elastic_hintsmetrics_path]

The path to retrieve the metrics from.


### `co.elastic.hints/period` [_co_elastic_hintsperiod]

The time interval for metrics retrieval, for example, `10s`.


### `co.elastic.hints/timeout` [_co_elastic_hintstimeout]

Metrics retrieval timeout, for example, `3s`.


### `co.elastic.hints/username` [_co_elastic_hintsusername]

The username to use for authentication.


### `co.elastic.hints/password` [_co_elastic_hintspassword]

The password to use for authentication. It is recommended to retrieve this sensitive information from an environment variable and avoid placing passwords in plain text.


### `co.elastic.hints/stream` [_co_elastic_hintsstream]

The stream to use for logs collection, for example, stdout/stderr.

If the specified package has no logs support, a generic container’s logs input will be used as a fallback. See the `Hints autodiscovery for kubernetes log collection` example below.
### `co.elastic.hints/processors` [_co_elastic_hintsprocessors]

Define a processor to be added to the input configuration. See [*Define processors*](/reference/ingestion-tools/fleet/agent-processors.md) for the list of supported processors.

If the processors configuration uses a list data structure, object fields must be enumerated. For example, hints for the rename processor configuration below

```yaml
processors:
  - rename:
      fields:
        - from: "a.g"
          to: "e.d"
      fail_on_error: true
```

will look like:

```yaml
co.elastic.hints/processors.rename.fields.0.from: "a.g"
co.elastic.hints/processors.rename.fields.1.to: "e.d"
co.elastic.hints/processors.rename.fail_on_error: 'true'
```

If the processors configuration uses a map data structure, enumeration is not needed. For example, the equivalent to the `add_fields` configuration below

```yaml
processors:
  - add_fields:
      target: project
      fields:
        name: myproject
```

is

```yaml
co.elastic.hints/processors.1.add_fields.target: "project"
co.elastic.hints/processors.1.add_fields.fields.name: "myproject"
```

To control the order in which processor definitions are applied, include a number in the hint. If no numbers are provided, the hints builder applies an arbitrary order:

```yaml
co.elastic.hints/processors.1.dissect.tokenizer: "%{key1} %{key2}"
co.elastic.hints/processors.dissect.tokenizer: "%{key2} %{key1}"
```

In the above sample, the processor definition tagged with `1` would be executed first.

::::{important}
Processor configuration is not supported at the data stream level, so annotations like `co.elastic.hints/<data_stream>.processors` are ignored.
::::



## Multiple containers [_multiple_containers]

When a pod has multiple containers, the settings are shared unless you put the container name in the hint. For example, these hints configure `processors.decode_json_fields` for all containers in the pod, but set a specific `stream` hint for the container called `sidecar`.

```yaml
annotations:
  co.elastic.hints/processors.decode_json_fields.fields: "message"
  co.elastic.hints/processors.decode_json_fields.add_error_key: true
  co.elastic.hints/processors.decode_json_fields.overwrite_keys: true
  co.elastic.hints/processors.decode_json_fields.target: "team"
  co.elastic.hints.sidecar/stream: "stderr"
```


## Available packages that support hints autodiscovery [_available_packages_that_support_hints_autodiscovery]

The available packages that are supported through hints can be found [here](https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes/elastic-agent-standalone/templates.d).


## Configure hints autodiscovery [_configure_hints_autodiscovery]

To enable hints autodiscovery, you must add `hints.enabled: true` to the provider’s configuration:

```yaml
providers:
  kubernetes:
    hints.enabled: true
```

Then ensure that an init container is specified by uncommenting the respective sections in the {{agent}} manifest. An init container is required to download the hints templates.
+ +```yaml +initContainers: +- name: k8s-templates-downloader + image: docker.elastic.co/elastic-agent/elastic-agent:master + command: ['bash'] + args: + - -c + - >- + mkdir -p /usr/share/elastic-agent/state/inputs.d && + curl -sL https://github.com/elastic/elastic-agent/archive/master.tar.gz | tar xz -C /usr/share/elastic-agent/state/inputs.d --strip=5 "elastic-agent-master/deploy/kubernetes/elastic-agent-standalone/templates.d" + securityContext: + runAsUser: 0 + volumeMounts: + - name: elastic-agent-state + mountPath: /usr/share/elastic-agent/state +``` + +::::{note} +The {{agent}} can load multiple configuration files from `{path.config}/inputs.d` and finally produce a unified one (refer to [*Configure standalone {{agent}}s*](/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md)). Users have the ability to manually mount their own templates under `/usr/share/elastic-agent/state/inputs.d` **if they want to skip enabling initContainers section**. +:::: + + + +## Examples: [_examples] + + +### Hints autodiscovery for redis [_hints_autodiscovery_for_redis] + +Enabling hints allows users deploying Pods on the cluster to automatically turn on Elastic monitoring at Pod deployment time. For example, to deploy a Redis Pod on the cluster and automatically enable Elastic monitoring, add the proper hints as annotations on the Pod manifest file: + +```yaml +... +apiVersion: v1 +kind: Pod +metadata: + name: redis + annotations: + co.elastic.hints/package: redis + co.elastic.hints/data_streams: info + co.elastic.hints/host: '${kubernetes.pod.ip}:6379' + co.elastic.hints/info.period: 5s + labels: + k8s-app: redis + app: redis +... +``` + +After deploying this Pod, the data will start flowing in automatically. You can find it on the index `metrics-redis.info-default`. + +::::{note} +All assets (dashboards, ingest pipelines, and so on) related to the Redis integration are not installed. You need to explicitly [install them through {{kib}}](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md). +:::: + + + +### Hints autodiscovery for kubernetes log collection [_hints_autodiscovery_for_kubernetes_log_collection] + +The log collection for Kubernetes autodiscovered pods can be supported by using [container_logs.yml template](https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes/elastic-agent-standalone/templates.d/container_logs.yml). Elastic Agent needs to emit a container_logs mapping so as to start collecting logs for all the discovered containers **even if no annotations are present in the containers**. + +1. Follow steps described above to enable Hints Autodiscover +2. Make sure that relevant `container_logs.yml` template will be mounted under /usr/share/elastic-agent/state/inputs.d/ folder of Elastic Agent +3. Deploy Elastic Agent Manifest +4. Elastic Agent should be able to discover all containers inside kuernetes cluster and to collect available logs. + +The previous default behaviour can be disabled with `hints.default_container_logs: false`. So this will disable the automatic logs collection from all discovered pods. Users need specifically to annotate their pod with following annotations: + +```yaml +annotations: + co.elastic.hints/package: "container_logs" +``` + +```yaml +providers.kubernetes: + node: ${NODE_NAME} + scope: node + hints: + enabled: true + default_container_logs: false +... 
+``` + +In the following sample nginx manifest, we will additionally provide specific stream annotation, in order to configure the filestream input to read only stderr stream: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: nginx + name: nginx + namespace: default +spec: + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + annotations: + co.elastic.hints/package: "container_logs" + co.elastic.hints/stream: "stderr" + spec: + containers: + - image: nginx + name: nginx +... +``` + +Users can monitor the final rendered Elastic Agent configuration: + +```bash +kubectl exec -ti -n kube-system elastic-agent-7fkzm -- bash + + +/usr/share/elastic-agent# /elastic-agent inspect -v --variables --variables-wait 2s + +inputs: +- data_stream.namespace: default + id: hints-container-logs-3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c-kubernetes-1780aca0-3741-4c8c-aced-b9776ba3fa81.nginx + name: filestream-generic + original_id: hints-container-logs-3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c + [output truncated ....] + streams: + - data_stream: + dataset: kubernetes.container_logs + type: logs + exclude_files: [] + exclude_lines: [] + parsers: + - container: + format: auto + stream: stderr + paths: + - /var/log/containers/*3f69573a1af05c475857c1d0f98fc55aa01b5650f146d61e9653a966cd50bd9c.log + prospector: + scanner: + symlinks: true + tags: [] + type: filestream + use_output: default +outputs: + default: + hosts: + - https://elasticsearch:9200 + password: changeme + type: elasticsearch + username: elastic +providers: + kubernetes: + hints: + default_container_logs: false + enabled: true + node: control-plane + scope: node +``` + + +### Hints autodiscovery for kubernetes logs with JSON decoding [_hints_autodiscovery_for_kubernetes_logs_with_json_decoding] + +Based on the previous example, users might want to perform extra processing on specific logs, for example to decode specific fields containing JSON strings. Use of [decode_json_fields](/reference/ingestion-tools/fleet/decode-json-fields.md) is advisable as follows: + +You need to have enabled hints autodiscovery, as described in the previous `Hints autodiscovery for Kubernetes log collection` example. + +The pod that will produce JSON logs needs to be annotated with: + +```yaml + annotations: + co.elastic.hints/package: "container_logs" + co.elastic.hints/processors.decode_json_fields.fields: "message" + co.elastic.hints/processors.decode_json_fields.add_error_key: 'true' + co.elastic.hints/processors.decode_json_fields.overwrite_keys: 'true' + co.elastic.hints/processors.decode_json_fields.target: "team" +``` + +:::{note} +These parameters for the decode_json_fields processor are just an example. +::: + +The following log entry: + +```json +{"myteam": "ole"} +``` + +Will produce both fields: the original `message` field and also the target field `team`. + +```json +"team": { + "myteam": "ole" + }, + +"message": "{\"myteam\": \"ole\"}", +``` + + +## Troubleshooting [_troubleshooting] + +When things do not work as expected, you may need to troubleshoot your setup. Here we provide some directions to speed up your investigation: + +1. Exec inside an Agent’s Pod and run the `inspect` command to verify how inputs are constructed dynamically: + + ```sh + ./elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml + ``` + + Specifically, examine how the inputs are being populated. + +2. 
View the {{agent}} logs:

    ```sh
    tail -f /etc/elastic-agent/data/logs/elastic-agent-*.ndjson
    ```

    Verify that the hints feature is enabled in the config and look for hints-related logs like "Generated hints mappings are ...". In these logs, you can find the mappings that are extracted out of the annotations and determine whether the values can populate a specific input.

3. View the {{metricbeat}} logs:

    ```sh
    tail -f /etc/elastic-agent/data/logs/default/metricbeat-*.ndjson
    ```

4. View the {{filebeat}} logs:

    ```sh
    tail -f /etc/elastic-agent/data/logs/default/filebeat-*.ndjson
    ```

5. View the target input template. For the Redis example:

    ```sh
    cat /usr/share/elastic-agent/state/inputs.d/redis.yml
    ```


diff --git a/reference/ingestion-tools/fleet/host-provider.md b/reference/ingestion-tools/fleet/host-provider.md
new file mode 100644
index 0000000000..e75e2823f3
--- /dev/null
+++ b/reference/ingestion-tools/fleet/host-provider.md
@@ -0,0 +1,17 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/host-provider.html
---

# Host provider [host-provider]

Provides information about the current host. The available keys are:

| Key | Type | Description |
| --- | --- | --- |
| `host.name` | `string` | Host name |
| `host.platform` | `string` | Host platform |
| `host.architecture` | `string` | Host architecture |
| `host.ip[]` | `[]string` | Host IP addresses |
| `host.mac[]` | `[]string` | Host MAC addresses |
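As an illustration, these keys can be referenced through variable substitution in a standalone {{agent}} policy. This is a minimal sketch; the input, stream, and field names are arbitrary:

```yaml
inputs:
  - type: filestream
    id: host-logs-${host.name}  # expands to the current host name
    streams:
      - id: host-logs
        paths:
          - /var/log/*.log
    processors:
      - add_fields:
          target: host_meta
          fields:
            platform: ${host.platform}
            architecture: ${host.architecture}
```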
diff --git a/reference/ingestion-tools/fleet/host-proxy-env-vars.md b/reference/ingestion-tools/fleet/host-proxy-env-vars.md
new file mode 100644
index 0000000000..61aabc9ad8
--- /dev/null
+++ b/reference/ingestion-tools/fleet/host-proxy-env-vars.md
@@ -0,0 +1,72 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/host-proxy-env-vars.html
---

# Proxy Server connectivity using default host variables [host-proxy-env-vars]

Set environment variables on the host to configure default proxy settings. The {{agent}} uses host environment settings by default if no proxy settings are specified elsewhere. You can override host proxy settings later when you configure the {{agent}} and {{fleet}} settings. The following environment variables are available on the host:

| Variable | Description |
| --- | --- |
| `HTTP_PROXY` | URL of the proxy server for HTTP traffic. |
| `HTTPS_PROXY` | URL of the proxy server for HTTPS traffic. |
| `NO_PROXY` | IP addresses or domain names that should not use the proxy. Supports patterns. |

The proxy URL can be a complete URL or `host[:port]`, in which case the `http` scheme is assumed. An error is returned if the value is in a different form.


## Where to set proxy environment variables [where-to-set-proxy-env-vars]

The location where you set these environment variables is platform-specific and based on the system manager you’re using. Here are some examples to get you started. For more information about setting environment variables, refer to the documentation for your operating system.

* For Windows services, set environment variables for the service in the Windows registry.

    This PowerShell command sets the `HKLM\SYSTEM\CurrentControlSet\Services\Elastic Agent\Environment` registry key, then restarts {{agent}}:

    ```powershell
    $environment = [string[]]@(
      "HTTPS_PROXY=https://proxy-hostname:proxy-port",
      "HTTP_PROXY=http://proxy-hostname:proxy-port"
    )

    Set-ItemProperty "HKLM:SYSTEM\CurrentControlSet\Services\Elastic Agent" -Name Environment -Value $environment

    Restart-Service "Elastic Agent"
    ```

* For Linux services, the location depends on the distribution you’re using. For example, you can set environment variables in:

    * `/etc/systemd/system/elastic-agent.service` for systems that use `systemd` to manage the service. To edit the file, run:

        ```shell
        sudo systemctl edit --full elastic-agent.service
        ```

        Then add the environment variables under `[Service]`:

        ```shell
        [Service]

        Environment="HTTPS_PROXY=https://my.proxy:8443"
        Environment="HTTP_PROXY=http://my.proxy:8080"
        ```

    * `/etc/sysconfig/elastic-agent` for Red Hat-like distributions that don’t use `systemd`.
    * `/etc/default/elastic-agent` for Debian and Ubuntu distributions that don’t use `systemd`.

        For example:

        ```shell
        HTTPS_PROXY=https://my.proxy:8443
        HTTP_PROXY=http://my.proxy:8080
        ```


After adding environment variables, restart the service.

::::{note}
If you use a proxy server to download new agent versions from `artifacts.elastic.co` for upgrading, configure [Agent binary download settings](/reference/ingestion-tools/fleet/fleet-settings.md#fleet-agent-binary-download-settings).
::::


diff --git a/reference/ingestion-tools/fleet/images/add-agent-to-hosts.png b/reference/ingestion-tools/fleet/images/add-agent-to-hosts.png
new file mode 100644
index 0000000000..7ad181667c
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-agent-to-hosts.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-fleet-server-advanced.png b/reference/ingestion-tools/fleet/images/add-fleet-server-advanced.png
new file mode 100644
index 0000000000..d43b3dc3b9
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-fleet-server-advanced.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-fleet-server-to-policy.png b/reference/ingestion-tools/fleet/images/add-fleet-server-to-policy.png
new file mode 100644
index 0000000000..eb2976d5fb
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-fleet-server-to-policy.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-fleet-server.png b/reference/ingestion-tools/fleet/images/add-fleet-server.png
new file mode 100644
index 0000000000..6158bf5b57
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-fleet-server.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-integration-standalone.png b/reference/ingestion-tools/fleet/images/add-integration-standalone.png
new file mode 100644
index 0000000000..0ccb1a258d
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-integration-standalone.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-integration.png b/reference/ingestion-tools/fleet/images/add-integration.png
new file mode 100644
index 0000000000..86de6040a1
Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-integration.png differ
diff --git a/reference/ingestion-tools/fleet/images/add-logstash-output.png b/reference/ingestion-tools/fleet/images/add-logstash-output.png
new file mode 100644
index 0000000000..39eaf6e533
Binary files
/dev/null and b/reference/ingestion-tools/fleet/images/add-logstash-output.png differ diff --git a/reference/ingestion-tools/fleet/images/add-processor.png b/reference/ingestion-tools/fleet/images/add-processor.png new file mode 100644 index 0000000000..1d7330d401 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-processor.png differ diff --git a/reference/ingestion-tools/fleet/images/add-remove-tags.png b/reference/ingestion-tools/fleet/images/add-remove-tags.png new file mode 100644 index 0000000000..026568033d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add-remove-tags.png differ diff --git a/reference/ingestion-tools/fleet/images/add_resource_metadata.png b/reference/ingestion-tools/fleet/images/add_resource_metadata.png new file mode 100644 index 0000000000..67a6e2c2d4 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/add_resource_metadata.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-activity.png b/reference/ingestion-tools/fleet/images/agent-activity.png new file mode 100644 index 0000000000..1802173ef3 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-activity.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-architecture.png b/reference/ingestion-tools/fleet/images/agent-architecture.png new file mode 100644 index 0000000000..e667e5955a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-architecture.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-detail-integrations-health.png b/reference/ingestion-tools/fleet/images/agent-detail-integrations-health.png new file mode 100644 index 0000000000..ec0aa5b96c Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-detail-integrations-health.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-detail-overview.png b/reference/ingestion-tools/fleet/images/agent-detail-overview.png new file mode 100644 index 0000000000..bae0cf5e97 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-detail-overview.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-health-status.png b/reference/ingestion-tools/fleet/images/agent-health-status.png new file mode 100644 index 0000000000..34a35ab4e5 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-health-status.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/agent-metrics-dashboard.png new file mode 100644 index 0000000000..65f3754b80 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-monitoring-assets.png b/reference/ingestion-tools/fleet/images/agent-monitoring-assets.png new file mode 100644 index 0000000000..33649b1242 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-monitoring-assets.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-monitoring-settings.png b/reference/ingestion-tools/fleet/images/agent-monitoring-settings.png new file mode 100644 index 0000000000..f8fe9b74f0 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-monitoring-settings.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-output-settings.png b/reference/ingestion-tools/fleet/images/agent-output-settings.png new file mode 100644 index 0000000000..e9b2347106 Binary files /dev/null and 
b/reference/ingestion-tools/fleet/images/agent-output-settings.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-policy-custom-field.png b/reference/ingestion-tools/fleet/images/agent-policy-custom-field.png new file mode 100644 index 0000000000..5c66512fd7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-policy-custom-field.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-privilege-mode.png b/reference/ingestion-tools/fleet/images/agent-privilege-mode.png new file mode 100644 index 0000000000..b6596cb514 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-privilege-mode.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-proxy-server-managed-deployment.png b/reference/ingestion-tools/fleet/images/agent-proxy-server-managed-deployment.png new file mode 100644 index 0000000000..d4eca6801f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-proxy-server-managed-deployment.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-proxy-server.png b/reference/ingestion-tools/fleet/images/agent-proxy-server.png new file mode 100644 index 0000000000..ef94ca5613 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-proxy-server.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-set-logging-level.png b/reference/ingestion-tools/fleet/images/agent-set-logging-level.png new file mode 100644 index 0000000000..f90e446f03 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-set-logging-level.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-status-diagram.png b/reference/ingestion-tools/fleet/images/agent-status-diagram.png new file mode 100644 index 0000000000..abbdd2a03b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-status-diagram.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-status-filter.png b/reference/ingestion-tools/fleet/images/agent-status-filter.png new file mode 100644 index 0000000000..78624b971b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-status-filter.png differ diff --git a/reference/ingestion-tools/fleet/images/agent-tags.png b/reference/ingestion-tools/fleet/images/agent-tags.png new file mode 100644 index 0000000000..fcfc82ee4b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/agent-tags.png differ diff --git a/reference/ingestion-tools/fleet/images/apply-agent-policy.png b/reference/ingestion-tools/fleet/images/apply-agent-policy.png new file mode 100644 index 0000000000..694e7f150d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/apply-agent-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/ca-certs.png b/reference/ingestion-tools/fleet/images/ca-certs.png new file mode 100644 index 0000000000..a3c63403a2 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/ca-certs.png differ diff --git a/reference/ingestion-tools/fleet/images/ca.png b/reference/ingestion-tools/fleet/images/ca.png new file mode 100644 index 0000000000..02a8d4bd9d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/ca.png differ diff --git a/reference/ingestion-tools/fleet/images/certificate-rotation-agent-es.png b/reference/ingestion-tools/fleet/images/certificate-rotation-agent-es.png new file mode 100644 index 0000000000..9a7ff1f1a8 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/certificate-rotation-agent-es.png 
differ diff --git a/reference/ingestion-tools/fleet/images/client-certs.png b/reference/ingestion-tools/fleet/images/client-certs.png new file mode 100644 index 0000000000..c71b91d0bb Binary files /dev/null and b/reference/ingestion-tools/fleet/images/client-certs.png differ diff --git a/reference/ingestion-tools/fleet/images/collect-agent-diagnostics1.png b/reference/ingestion-tools/fleet/images/collect-agent-diagnostics1.png new file mode 100644 index 0000000000..b0c39fadc5 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/collect-agent-diagnostics1.png differ diff --git a/reference/ingestion-tools/fleet/images/collect-agent-diagnostics2.png b/reference/ingestion-tools/fleet/images/collect-agent-diagnostics2.png new file mode 100644 index 0000000000..b49d0734ab Binary files /dev/null and b/reference/ingestion-tools/fleet/images/collect-agent-diagnostics2.png differ diff --git a/reference/ingestion-tools/fleet/images/component-templates-list.png b/reference/ingestion-tools/fleet/images/component-templates-list.png new file mode 100644 index 0000000000..de5c9d26e7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/component-templates-list.png differ diff --git a/reference/ingestion-tools/fleet/images/copy-api-key.png b/reference/ingestion-tools/fleet/images/copy-api-key.png new file mode 100644 index 0000000000..a5b69c3ca9 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/copy-api-key.png differ diff --git a/reference/ingestion-tools/fleet/images/create-component-template.png b/reference/ingestion-tools/fleet/images/create-component-template.png new file mode 100644 index 0000000000..0e8600c274 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/create-component-template.png differ diff --git a/reference/ingestion-tools/fleet/images/create-index-template.png b/reference/ingestion-tools/fleet/images/create-index-template.png new file mode 100644 index 0000000000..f39eb2d96f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/create-index-template.png differ diff --git a/reference/ingestion-tools/fleet/images/create-standalone-agent-role.png b/reference/ingestion-tools/fleet/images/create-standalone-agent-role.png new file mode 100644 index 0000000000..c59ffe8963 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/create-standalone-agent-role.png differ diff --git a/reference/ingestion-tools/fleet/images/create-token.png b/reference/ingestion-tools/fleet/images/create-token.png new file mode 100644 index 0000000000..d139f8357f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/create-token.png differ diff --git a/reference/ingestion-tools/fleet/images/dashboard-datastream01.png b/reference/ingestion-tools/fleet/images/dashboard-datastream01.png new file mode 100644 index 0000000000..3d971b6eb9 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/dashboard-datastream01.png differ diff --git a/reference/ingestion-tools/fleet/images/data-stream-info.png b/reference/ingestion-tools/fleet/images/data-stream-info.png new file mode 100644 index 0000000000..ff57ec44c6 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/data-stream-info.png differ diff --git a/reference/ingestion-tools/fleet/images/datastream-namespace.png b/reference/ingestion-tools/fleet/images/datastream-namespace.png new file mode 100644 index 0000000000..8b47155e18 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/datastream-namespace.png differ diff --git 
a/reference/ingestion-tools/fleet/images/download-agent-policy.png b/reference/ingestion-tools/fleet/images/download-agent-policy.png new file mode 100644 index 0000000000..b7126d3953 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/download-agent-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-edit-proxy-secure-settings.png b/reference/ingestion-tools/fleet/images/elastic-agent-edit-proxy-secure-settings.png new file mode 100644 index 0000000000..29ae779992 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-edit-proxy-secure-settings.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-agent-binary-source.png b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-agent-binary-source.png new file mode 100644 index 0000000000..1a60e023c3 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-agent-binary-source.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-fleet-server.png b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-fleet-server.png new file mode 100644 index 0000000000..a389a242dd Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-fleet-server.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-output.png b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-output.png new file mode 100644 index 0000000000..3828393de4 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-output.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-proxy.png b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-proxy.png new file mode 100644 index 0000000000..abf40d5fac Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-edit-proxy.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-proxy-gateway-secure.png b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-gateway-secure.png new file mode 100644 index 0000000000..b40a2598a8 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-proxy-gateway-secure.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-agent-status-rule.png b/reference/ingestion-tools/fleet/images/elastic-agent-status-rule.png new file mode 100644 index 0000000000..b438ddb0f2 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-agent-status-rule.png differ diff --git a/reference/ingestion-tools/fleet/images/elastic-cloud-agent-policy.png b/reference/ingestion-tools/fleet/images/elastic-cloud-agent-policy.png new file mode 100644 index 0000000000..c71930c714 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/elastic-cloud-agent-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-add-output-button.png b/reference/ingestion-tools/fleet/images/fleet-add-output-button.png new file mode 100644 index 0000000000..c6193c7599 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-add-output-button.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-epr-proxy.png b/reference/ingestion-tools/fleet/images/fleet-epr-proxy.png new file mode 100644 index 0000000000..edf85e75d6 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-epr-proxy.png differ diff --git 
a/reference/ingestion-tools/fleet/images/fleet-policy-hidden-secret.png b/reference/ingestion-tools/fleet/images/fleet-policy-hidden-secret.png new file mode 100644 index 0000000000..a39fc34834 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-policy-hidden-secret.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-agent-policies-diagram.png b/reference/ingestion-tools/fleet/images/fleet-server-agent-policies-diagram.png new file mode 100644 index 0000000000..5b22b40c0f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-agent-policies-diagram.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-agent-policy-page.png b/reference/ingestion-tools/fleet/images/fleet-server-agent-policy-page.png new file mode 100644 index 0000000000..fed328d88e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-agent-policy-page.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-certs.png b/reference/ingestion-tools/fleet/images/fleet-server-certs.png new file mode 100644 index 0000000000..bfb60776c0 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-certs.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-cloud-deployment.png b/reference/ingestion-tools/fleet/images/fleet-server-cloud-deployment.png new file mode 100644 index 0000000000..cdbcfb2f41 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-cloud-deployment.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-configuration.png b/reference/ingestion-tools/fleet/images/fleet-server-configuration.png new file mode 100644 index 0000000000..a730b4c3c4 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-configuration.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-hosted-container.png b/reference/ingestion-tools/fleet/images/fleet-server-hosted-container.png new file mode 100644 index 0000000000..96332494fb Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-hosted-container.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-on-prem-deployment.png b/reference/ingestion-tools/fleet/images/fleet-server-on-prem-deployment.png new file mode 100644 index 0000000000..8df0929275 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-on-prem-deployment.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-server-on-prem-es-cloud.png b/reference/ingestion-tools/fleet/images/fleet-server-on-prem-es-cloud.png new file mode 100644 index 0000000000..5e05b0f2f7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-server-on-prem-es-cloud.png differ diff --git a/reference/ingestion-tools/fleet/images/fleet-start.png b/reference/ingestion-tools/fleet/images/fleet-start.png new file mode 100644 index 0000000000..4449517c44 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/fleet-start.png differ diff --git a/reference/ingestion-tools/fleet/images/green-check.svg b/reference/ingestion-tools/fleet/images/green-check.svg new file mode 100644 index 0000000000..23094411c8 --- /dev/null +++ b/reference/ingestion-tools/fleet/images/green-check.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/reference/ingestion-tools/fleet/images/gsub_cronjob.png b/reference/ingestion-tools/fleet/images/gsub_cronjob.png new file mode 100644 
index 0000000000..a5e1d2b89e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/gsub_cronjob.png differ diff --git a/reference/ingestion-tools/fleet/images/gsub_deployment.png b/reference/ingestion-tools/fleet/images/gsub_deployment.png new file mode 100644 index 0000000000..d3b166c7d0 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/gsub_deployment.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-add-agent-standalone01.png b/reference/ingestion-tools/fleet/images/guide-add-agent-standalone01.png new file mode 100644 index 0000000000..9c0f9635cc Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-add-agent-standalone01.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-add-nginx-integration.png b/reference/ingestion-tools/fleet/images/guide-add-nginx-integration.png new file mode 100644 index 0000000000..ba0a9c04f1 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-add-nginx-integration.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-agent-logs-flowing.png b/reference/ingestion-tools/fleet/images/guide-agent-logs-flowing.png new file mode 100644 index 0000000000..96101c1d95 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-agent-logs-flowing.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-agent-metrics-flowing.png b/reference/ingestion-tools/fleet/images/guide-agent-metrics-flowing.png new file mode 100644 index 0000000000..f8eb2ae0bc Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-agent-metrics-flowing.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-agent-policies.png b/reference/ingestion-tools/fleet/images/guide-agent-policies.png new file mode 100644 index 0000000000..651392026b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-agent-policies.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-create-agent-policy.png b/reference/ingestion-tools/fleet/images/guide-create-agent-policy.png new file mode 100644 index 0000000000..35a39a76c7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-create-agent-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-install-agent-on-host.png b/reference/ingestion-tools/fleet/images/guide-install-agent-on-host.png new file mode 100644 index 0000000000..a502cfdfff Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-install-agent-on-host.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-integrations-page.png b/reference/ingestion-tools/fleet/images/guide-integrations-page.png new file mode 100644 index 0000000000..8a817ed6dd Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-integrations-page.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-nginx-browser-breakdown.png b/reference/ingestion-tools/fleet/images/guide-nginx-browser-breakdown.png new file mode 100644 index 0000000000..7f93ef7fbe Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-nginx-browser-breakdown.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-nginx-integration-added.png b/reference/ingestion-tools/fleet/images/guide-nginx-integration-added.png new file mode 100644 index 0000000000..7db08404a5 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-nginx-integration-added.png differ diff --git 
a/reference/ingestion-tools/fleet/images/guide-nginx-policy.png b/reference/ingestion-tools/fleet/images/guide-nginx-policy.png new file mode 100644 index 0000000000..82a1680ad9 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-nginx-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-nginx-welcome.png b/reference/ingestion-tools/fleet/images/guide-nginx-welcome.png new file mode 100644 index 0000000000..82a137efbe Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-nginx-welcome.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-sign-up-trial.png b/reference/ingestion-tools/fleet/images/guide-sign-up-trial.png new file mode 100644 index 0000000000..10036e1fc1 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-sign-up-trial.png differ diff --git a/reference/ingestion-tools/fleet/images/guide-system-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/guide-system-metrics-dashboard.png new file mode 100644 index 0000000000..24c7faf663 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/guide-system-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/helm-example-fleet-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/helm-example-fleet-metrics-dashboard.png new file mode 100644 index 0000000000..f0f3ae7fa0 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/helm-example-fleet-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/helm-example-nodes-enrollment-confirmation.png b/reference/ingestion-tools/fleet/images/helm-example-nodes-enrollment-confirmation.png new file mode 100644 index 0000000000..c55a50bd62 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/helm-example-nodes-enrollment-confirmation.png differ diff --git a/reference/ingestion-tools/fleet/images/helm-example-nodes-logs-and-metrics.png b/reference/ingestion-tools/fleet/images/helm-example-nodes-logs-and-metrics.png new file mode 100644 index 0000000000..4d57e979d8 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/helm-example-nodes-logs-and-metrics.png differ diff --git a/reference/ingestion-tools/fleet/images/helm-example-nodes-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/helm-example-nodes-metrics-dashboard.png new file mode 100644 index 0000000000..9322eb8181 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/helm-example-nodes-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/helm-example-pods-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/helm-example-pods-metrics-dashboard.png new file mode 100644 index 0000000000..fa894a1de3 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/helm-example-pods-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/index-template-system-auth.png b/reference/ingestion-tools/fleet/images/index-template-system-auth.png new file mode 100644 index 0000000000..c4bedaff0e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/index-template-system-auth.png differ diff --git a/reference/ingestion-tools/fleet/images/ingest_pipeline_custom_k8s.png b/reference/ingestion-tools/fleet/images/ingest_pipeline_custom_k8s.png new file mode 100644 index 0000000000..9b97f7c30b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/ingest_pipeline_custom_k8s.png differ diff --git 
a/reference/ingestion-tools/fleet/images/integration-root-requirement.png b/reference/ingestion-tools/fleet/images/integration-root-requirement.png new file mode 100644 index 0000000000..597214d8d1 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/integration-root-requirement.png differ diff --git a/reference/ingestion-tools/fleet/images/integrations-server-hosted-container.png b/reference/ingestion-tools/fleet/images/integrations-server-hosted-container.png new file mode 100644 index 0000000000..da0603124f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/integrations-server-hosted-container.png differ diff --git a/reference/ingestion-tools/fleet/images/integrations.png b/reference/ingestion-tools/fleet/images/integrations.png new file mode 100644 index 0000000000..6f43f16e5a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/integrations.png differ diff --git a/reference/ingestion-tools/fleet/images/k8skibanaUI.png b/reference/ingestion-tools/fleet/images/k8skibanaUI.png new file mode 100644 index 0000000000..d1ea9a8db1 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/k8skibanaUI.png differ diff --git a/reference/ingestion-tools/fleet/images/k8sscaling.png b/reference/ingestion-tools/fleet/images/k8sscaling.png new file mode 100644 index 0000000000..6b219eae4d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/k8sscaling.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-agent-flyout.png b/reference/ingestion-tools/fleet/images/kibana-agent-flyout.png new file mode 100644 index 0000000000..c96b394cad Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-agent-flyout.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-fleet-agents copy.png b/reference/ingestion-tools/fleet/images/kibana-fleet-agents copy.png new file mode 100644 index 0000000000..6660000437 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-fleet-agents copy.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-fleet-agents.png b/reference/ingestion-tools/fleet/images/kibana-fleet-agents.png new file mode 100644 index 0000000000..6660000437 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-fleet-agents.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-fleet-datasets.png b/reference/ingestion-tools/fleet/images/kibana-fleet-datasets.png new file mode 100644 index 0000000000..c1d790b5e1 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-fleet-datasets.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-fleet-datastreams.png b/reference/ingestion-tools/fleet/images/kibana-fleet-datastreams.png new file mode 100644 index 0000000000..73aa22392b Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-fleet-datastreams.png differ diff --git a/reference/ingestion-tools/fleet/images/kibana-fleet-privileges.png b/reference/ingestion-tools/fleet/images/kibana-fleet-privileges.png new file mode 100644 index 0000000000..cea848dab7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kibana-fleet-privileges.png differ diff --git a/reference/ingestion-tools/fleet/images/kubernetes_metadata.png b/reference/ingestion-tools/fleet/images/kubernetes_metadata.png new file mode 100644 index 0000000000..b832cb358f Binary files /dev/null and b/reference/ingestion-tools/fleet/images/kubernetes_metadata.png differ diff --git 
a/reference/ingestion-tools/fleet/images/logstash-certs.png b/reference/ingestion-tools/fleet/images/logstash-certs.png new file mode 100644 index 0000000000..e3568606f6 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/logstash-certs.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-agents-offline.png b/reference/ingestion-tools/fleet/images/migrate-agent-agents-offline.png new file mode 100644 index 0000000000..bc63da15f2 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-agents-offline.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-deployment-id.png b/reference/ingestion-tools/fleet/images/migrate-agent-deployment-id.png new file mode 100644 index 0000000000..7240ca1866 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-deployment-id.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-elasticsearch-output.png b/reference/ingestion-tools/fleet/images/migrate-agent-elasticsearch-output.png new file mode 100644 index 0000000000..0f34e22a13 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-elasticsearch-output.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-fleet-server-host.png b/reference/ingestion-tools/fleet/images/migrate-agent-fleet-server-host.png new file mode 100644 index 0000000000..d88a4e5d00 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-fleet-server-host.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-host-output-settings.png b/reference/ingestion-tools/fleet/images/migrate-agent-host-output-settings.png new file mode 100644 index 0000000000..44d8fd866e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-host-output-settings.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-install-command-output.png b/reference/ingestion-tools/fleet/images/migrate-agent-install-command-output.png new file mode 100644 index 0000000000..2841b4e0de Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-install-command-output.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-install-command.png b/reference/ingestion-tools/fleet/images/migrate-agent-install-command.png new file mode 100644 index 0000000000..630fb32640 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-install-command.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-new-deployment.png b/reference/ingestion-tools/fleet/images/migrate-agent-new-deployment.png new file mode 100644 index 0000000000..2e0d7cf456 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-new-deployment.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-newly-enrolled-agents.png b/reference/ingestion-tools/fleet/images/migrate-agent-newly-enrolled-agents.png new file mode 100644 index 0000000000..895026eeb8 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-newly-enrolled-agents.png differ diff --git a/reference/ingestion-tools/fleet/images/migrate-agent-policy-settings.png b/reference/ingestion-tools/fleet/images/migrate-agent-policy-settings.png new file mode 100644 index 0000000000..32ca060fe4 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-policy-settings.png differ diff --git 
a/reference/ingestion-tools/fleet/images/migrate-agent-take-snapshot.png b/reference/ingestion-tools/fleet/images/migrate-agent-take-snapshot.png new file mode 100644 index 0000000000..518f0a7494 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migrate-agent-take-snapshot.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-add-integration-policy.png b/reference/ingestion-tools/fleet/images/migration-add-integration-policy.png new file mode 100644 index 0000000000..07ae2ace63 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-add-integration-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-add-nginx-integration.png b/reference/ingestion-tools/fleet/images/migration-add-nginx-integration.png new file mode 100644 index 0000000000..7986548a53 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-add-nginx-integration.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-add-processor.png b/reference/ingestion-tools/fleet/images/migration-add-processor.png new file mode 100644 index 0000000000..4d2715c2e7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-add-processor.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-agent-data-streams01.png b/reference/ingestion-tools/fleet/images/migration-agent-data-streams01.png new file mode 100644 index 0000000000..a9780a7fb2 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-agent-data-streams01.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-agent-details01.png b/reference/ingestion-tools/fleet/images/migration-agent-details01.png new file mode 100644 index 0000000000..81f246eed8 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-agent-details01.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-agent-status-healthy01.png b/reference/ingestion-tools/fleet/images/migration-agent-status-healthy01.png new file mode 100644 index 0000000000..12396b7eb5 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-agent-status-healthy01.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-event-from-agent.png b/reference/ingestion-tools/fleet/images/migration-event-from-agent.png new file mode 100644 index 0000000000..d663d2334a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-event-from-agent.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-event-from-filebeat.png b/reference/ingestion-tools/fleet/images/migration-event-from-filebeat.png new file mode 100644 index 0000000000..b89cc64d3a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-event-from-filebeat.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-index-lifecycle-policies.png b/reference/ingestion-tools/fleet/images/migration-index-lifecycle-policies.png new file mode 100644 index 0000000000..534424ba37 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-index-lifecycle-policies.png differ diff --git a/reference/ingestion-tools/fleet/images/migration-preserve-raw-event.png b/reference/ingestion-tools/fleet/images/migration-preserve-raw-event.png new file mode 100644 index 0000000000..daa4aa5c36 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/migration-preserve-raw-event.png differ diff --git 
a/reference/ingestion-tools/fleet/images/mutual-tls-cloud-proxy.png b/reference/ingestion-tools/fleet/images/mutual-tls-cloud-proxy.png new file mode 100644 index 0000000000..3b1805776e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-cloud-proxy.png differ diff --git a/reference/ingestion-tools/fleet/images/mutual-tls-cloud.png b/reference/ingestion-tools/fleet/images/mutual-tls-cloud.png new file mode 100644 index 0000000000..1064a3aeb6 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-cloud.png differ diff --git a/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem-proxy.png b/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem-proxy.png new file mode 100644 index 0000000000..cc2affef21 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem-proxy.png differ diff --git a/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem.png b/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem.png new file mode 100644 index 0000000000..c0f633f1d2 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-fs-onprem.png differ diff --git a/reference/ingestion-tools/fleet/images/mutual-tls-on-prem.png b/reference/ingestion-tools/fleet/images/mutual-tls-on-prem.png new file mode 100644 index 0000000000..0fababfa71 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-on-prem.png differ diff --git a/reference/ingestion-tools/fleet/images/mutual-tls-onprem-advanced-yaml.png b/reference/ingestion-tools/fleet/images/mutual-tls-onprem-advanced-yaml.png new file mode 100644 index 0000000000..ca7953ea8d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/mutual-tls-onprem-advanced-yaml.png differ diff --git a/reference/ingestion-tools/fleet/images/pod-latency.png b/reference/ingestion-tools/fleet/images/pod-latency.png new file mode 100644 index 0000000000..ed2a3de461 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/pod-latency.png differ diff --git a/reference/ingestion-tools/fleet/images/privileged-and-unprivileged-agents.png b/reference/ingestion-tools/fleet/images/privileged-and-unprivileged-agents.png new file mode 100644 index 0000000000..8b929a8a16 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/privileged-and-unprivileged-agents.png differ diff --git a/reference/ingestion-tools/fleet/images/red-x.svg b/reference/ingestion-tools/fleet/images/red-x.svg new file mode 100644 index 0000000000..5426fb2afd --- /dev/null +++ b/reference/ingestion-tools/fleet/images/red-x.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/reference/ingestion-tools/fleet/images/review-component-template01.png b/reference/ingestion-tools/fleet/images/review-component-template01.png new file mode 100644 index 0000000000..d057d1a151 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/review-component-template01.png differ diff --git a/reference/ingestion-tools/fleet/images/review-component-template02.png b/reference/ingestion-tools/fleet/images/review-component-template02.png new file mode 100644 index 0000000000..5f9309bad9 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/review-component-template02.png differ diff --git a/reference/ingestion-tools/fleet/images/revoke-token.png b/reference/ingestion-tools/fleet/images/revoke-token.png new file mode 100644 index 0000000000..b8a24eb296 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/revoke-token.png 
differ diff --git a/reference/ingestion-tools/fleet/images/root-integration-and-unprivileged-agents.png b/reference/ingestion-tools/fleet/images/root-integration-and-unprivileged-agents.png new file mode 100644 index 0000000000..cb7d3bd4c7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/root-integration-and-unprivileged-agents.png differ diff --git a/reference/ingestion-tools/fleet/images/schedule-upgrade.png b/reference/ingestion-tools/fleet/images/schedule-upgrade.png new file mode 100644 index 0000000000..6714e78188 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/schedule-upgrade.png differ diff --git a/reference/ingestion-tools/fleet/images/selected-agent-metrics-dashboard.png b/reference/ingestion-tools/fleet/images/selected-agent-metrics-dashboard.png new file mode 100644 index 0000000000..e4b545d801 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/selected-agent-metrics-dashboard.png differ diff --git a/reference/ingestion-tools/fleet/images/show-token.png b/reference/ingestion-tools/fleet/images/show-token.png new file mode 100644 index 0000000000..902b0e9046 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/show-token.png differ diff --git a/reference/ingestion-tools/fleet/images/state-pod.png b/reference/ingestion-tools/fleet/images/state-pod.png new file mode 100644 index 0000000000..ab70408374 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/state-pod.png differ diff --git a/reference/ingestion-tools/fleet/images/system-managed.png b/reference/ingestion-tools/fleet/images/system-managed.png new file mode 100644 index 0000000000..7786d958e9 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/system-managed.png differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-mutual-all.jpg b/reference/ingestion-tools/fleet/images/tls-overview-mutual-all.jpg new file mode 100644 index 0000000000..88ef37a68d Binary files /dev/null and b/reference/ingestion-tools/fleet/images/tls-overview-mutual-all.jpg differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-agent.png b/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-agent.png new file mode 100644 index 0000000000..4d05d5c1fb Binary files /dev/null and b/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-agent.png differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-es.png b/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-es.png new file mode 100644 index 0000000000..0cd7746bfd Binary files /dev/null and b/reference/ingestion-tools/fleet/images/tls-overview-mutual-fs-es.png differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-oneway-all.jpg b/reference/ingestion-tools/fleet/images/tls-overview-oneway-all.jpg new file mode 100644 index 0000000000..e65f24add7 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/tls-overview-oneway-all.jpg differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-agent.png b/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-agent.png new file mode 100644 index 0000000000..7d5326c270 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-agent.png differ diff --git a/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-es.png b/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-es.png new file mode 100644 index 0000000000..6b24773844 Binary files /dev/null and 
b/reference/ingestion-tools/fleet/images/tls-overview-oneway-fs-es.png differ diff --git a/reference/ingestion-tools/fleet/images/unprivileged-agent-warning.png b/reference/ingestion-tools/fleet/images/unprivileged-agent-warning.png new file mode 100644 index 0000000000..cb0f1da80e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/unprivileged-agent-warning.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-agent-custom.png b/reference/ingestion-tools/fleet/images/upgrade-agent-custom.png new file mode 100644 index 0000000000..32e443a831 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-agent-custom.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-available-indicator.png b/reference/ingestion-tools/fleet/images/upgrade-available-indicator.png new file mode 100644 index 0000000000..695b98d51a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-available-indicator.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-detailed-state01.png b/reference/ingestion-tools/fleet/images/upgrade-detailed-state01.png new file mode 100644 index 0000000000..0e2ed9bf9c Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-detailed-state01.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-detailed-state02.png b/reference/ingestion-tools/fleet/images/upgrade-detailed-state02.png new file mode 100644 index 0000000000..0eeac452ae Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-detailed-state02.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-failure.png b/reference/ingestion-tools/fleet/images/upgrade-failure.png new file mode 100644 index 0000000000..922fbc1d29 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-failure.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-integration-policies-automatically.png b/reference/ingestion-tools/fleet/images/upgrade-integration-policies-automatically.png new file mode 100644 index 0000000000..c22e6e124c Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-integration-policies-automatically.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-integration.png b/reference/ingestion-tools/fleet/images/upgrade-integration.png new file mode 100644 index 0000000000..17c0c69547 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-integration.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-non-detailed.png b/reference/ingestion-tools/fleet/images/upgrade-non-detailed.png new file mode 100644 index 0000000000..8e411d121a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-non-detailed.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-package-policy.png b/reference/ingestion-tools/fleet/images/upgrade-package-policy.png new file mode 100644 index 0000000000..4a4faf5989 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-package-policy.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-policy-editor.png b/reference/ingestion-tools/fleet/images/upgrade-policy-editor.png new file mode 100644 index 0000000000..9c848b10ca Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-policy-editor.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-resolve-conflicts.png 
b/reference/ingestion-tools/fleet/images/upgrade-resolve-conflicts.png new file mode 100644 index 0000000000..4c4ff77423 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-resolve-conflicts.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-single-agent.png b/reference/ingestion-tools/fleet/images/upgrade-single-agent.png new file mode 100644 index 0000000000..879c0af14a Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-single-agent.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-states.png b/reference/ingestion-tools/fleet/images/upgrade-states.png new file mode 100644 index 0000000000..70cf59a915 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-states.png differ diff --git a/reference/ingestion-tools/fleet/images/upgrade-view-previous-config.png b/reference/ingestion-tools/fleet/images/upgrade-view-previous-config.png new file mode 100644 index 0000000000..7202ca735e Binary files /dev/null and b/reference/ingestion-tools/fleet/images/upgrade-view-previous-config.png differ diff --git a/reference/ingestion-tools/fleet/images/view-agent-logs.png b/reference/ingestion-tools/fleet/images/view-agent-logs.png new file mode 100644 index 0000000000..dd457a1861 Binary files /dev/null and b/reference/ingestion-tools/fleet/images/view-agent-logs.png differ
diff --git a/reference/ingestion-tools/fleet/include_fields-processor.md b/reference/ingestion-tools/fleet/include_fields-processor.md
new file mode 100644
index 0000000000..546b3796e9
--- /dev/null
+++ b/reference/ingestion-tools/fleet/include_fields-processor.md
@@ -0,0 +1,35 @@
+---
+navigation_title: "include_fields"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/include_fields-processor.html
+---
+
+# Keep fields from events [include_fields-processor]
+
+
+The `include_fields` processor specifies which fields to export if a certain condition is fulfilled. The condition is optional. If it's missing, the specified fields are always exported. The `@timestamp`, `@metadata`, and `type` fields are always exported, even if they are not defined in the `include_fields` list.
+
+
+## Example [_example_27]
+
+```yaml
+  - include_fields:
+      when:
+        condition
+      fields: ["field1", "field2", ...]
+```
+
+See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.
+
+::::{note}
+{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
+::::
+
+
+You can specify multiple `include_fields` processors under the `processors` section.
+
+::::{note}
+If you define an empty list of fields under `include_fields`, only the required fields, `@timestamp` and `type`, are exported.
+::::
+
+
diff --git a/reference/ingestion-tools/fleet/index.md b/reference/ingestion-tools/fleet/index.md
new file mode 100644
index 0000000000..00dcae54c8
--- /dev/null
+++ b/reference/ingestion-tools/fleet/index.md
@@ -0,0 +1,145 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/fleet-and-elastic-agent.html
+  - https://www.elastic.co/guide/en/fleet/current/fleet-elastic-agent-quick-start.html
+  - https://www.elastic.co/guide/en/kibana/current/fleet.html
+  - https://www.elastic.co/guide/en/fleet/current/fleet-overview.html
+  - https://www.elastic.co/guide/en/fleet/current/index.html
+---
+
+# Fleet and Elastic Agent [fleet-and-elastic-agent]
+
+% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+$$$package-registry-intro$$$
+
+## {{agent}} [elastic-agent]
+
+{{agent}} is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. A single agent makes it easier and faster to deploy monitoring across your infrastructure. Each agent has a single policy you can update to add integrations for new data sources, security protections, and more.
+
+As the following diagram illustrates, {{agent}} can monitor the host where it's deployed, and it can collect and forward data from remote services and hardware where direct deployment is not possible.
+
+:::{image} images/agent-architecture.png
+:alt: Image showing {{agent}} collecting data from local host and remote services
+:::
+
+To learn about installation options, refer to [](/reference/ingestion-tools/fleet/install-elastic-agents.md).
+
+:::{note}
+Using {{fleet}} and {{agent}} with {{serverless-full}}? Please note these [restrictions](/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md).
+:::
+
+:::{tip}
+Looking for a general guide that explores all of your options for ingesting data? Check out [Adding data to Elasticsearch](/manage-data/ingest.md).
+:::
+
+## {{integrations}}
+
+[{{integrations}}](integration-docs://docs/reference/index.md) provide an easy way to connect Elastic to external services and systems, and quickly get insights or take action. They can collect new sources of data, and they often ship with out-of-the-box assets like dashboards, visualizations, and pipelines to extract structured fields out of logs and events. This makes it easier to get insights within seconds. Integrations are available for popular services and platforms like Nginx or AWS, as well as many generic input types like log files.
+
+{{kib}} provides a web-based UI to add and manage integrations. You can browse a unified view of available integrations that shows both {{agent}} and {{beats}} integrations.
+
+:::{image} images/integrations.png
+:alt: Integrations page
+:::
+
+## {{agent}} policies [configuring-integrations]
+
+Agent policies specify which integrations you want to run and on which hosts. You can apply an {{agent}} policy to multiple agents, making it even easier to manage configuration at scale.
+
+:::{image} images/add-integration.png
+:alt: Add integration page
+:::
+
+When you add an integration, you configure inputs for logs and metrics, such as the path to your Nginx access logs. When you're done, you save the integration to an {{agent}} policy. The next time enrolled agents check in, they receive the update. Having policies deployed automatically is more convenient than distributing them yourself over SSH, with Ansible playbooks, or with some other tool.
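+
+For a flavor of what a policy's input configuration looks like, here is a minimal, illustrative sketch of a log input in a standalone `elastic-agent.yml` (the input type, dataset name, and paths here are assumptions for this example; the exact structure depends on the integration and version):
+
+```yaml
+inputs:
+  # Collect Nginx access logs and route them to the nginx.access dataset
+  - type: logfile
+    id: nginx-access-logs
+    streams:
+      - data_stream:
+          dataset: nginx.access
+        paths:
+          - /var/log/nginx/access.log*
+```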
+
+For more information, refer to [](/reference/ingestion-tools/fleet/agent-policy.md).
+
+If you prefer infrastructure as code, you can use YAML files and APIs. {{fleet}} has an API-first design: anything you can do in the UI, you can also do through the API, which makes it easy to automate and integrate with other systems.
+
+## {{package-registry}} [package-registry-intro]
+
+The {{package-registry}} is an online package hosting service for the {{agent}} integrations available in {{kib}}.
+
+{{kib}} connects to the {{package-registry}} at `epr.elastic.co` using the Elastic Package Manager, downloads the latest integration package, and stores its assets in {{es}}. This process typically requires an internet connection because integrations are updated and released periodically. For information about running the {{package-registry}} in air-gapped environments, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md).
+
+## Elastic Artifact Registry [artifact-registry-intro]
+
+{{fleet}} and {{agent}} require access to the public Elastic Artifact Registry. The {{agent}} running on any of your internal hosts needs access to `artifacts.elastic.co` to perform self-upgrades and to install certain components required by some of the data integrations.
+
+Additionally, access to `artifacts.security.elastic.co` is needed for {{agent}} updates and security artifacts when using {{elastic-defend}}.
+
+For information about running these resources in air-gapped environments, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md).
+
+## Central management in {{fleet}} [central-management]
+
+{{fleet}} provides a web-based UI in {{kib}} for centrally managing {{agents}} and their policies.
+
+You can see the state of all your {{agents}} in {{fleet}}. On the **Agents** page, you can see which agents are healthy or unhealthy, and the last time they checked in. You can also see the version of the {{agent}} binary and policy.
+
+:::{image} images/kibana-fleet-agents.png
+:alt: Agents page
+:::
+
+{{fleet}} in {{kib}} enables you to manage {{elastic-agent}} installations in standalone or {{fleet}} mode.
+
+Standalone mode requires you to manually configure and manage the agent locally. It is recommended for advanced users only.
+
+{{fleet}} mode offers several advantages:
+
+* A central place to configure and monitor your {{agents}}.
+* The ability to trigger {{agent}} binary and policy upgrades remotely.
+* An overview of data ingestion into your {{es}} cluster.
+
+:::{image} images/fleet-start.png
+:alt: {{fleet}} app in {{kib}}
+:class: screenshot
+:::
+
+{{fleet}} serves as the communication channel back to the {{agents}}. Agents check in for the latest updates on a regular basis. You can have any number of agents enrolled into each agent policy, which allows you to scale up to thousands of hosts.
+
+When you make a change to an agent policy, all the agents receive the update during their next check-in. You no longer have to distribute policy updates yourself.
+
+When you're ready to upgrade your {{agent}} binaries or integrations, you can initiate upgrades in {{fleet}}, and the {{agents}} running on your hosts will upgrade automatically.
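+
+Because {{fleet}} is API-first, upgrades can also be triggered programmatically. A minimal sketch using the {{kib}} {{fleet}} API (host, credentials, agent ID, and target version are placeholders; verify the endpoint against the {{fleet}} API reference for your version):
+
+```shell
+# Trigger a binary upgrade for one enrolled agent through the Fleet API
+curl -X POST "https://<kibana-host>:5601/api/fleet/agents/<agent-id>/upgrade" \
+  -H "kbn-xsrf: true" \
+  -H "Content-Type: application/json" \
+  -u <username>:<password> \
+  -d '{"version": "8.15.0"}'
+```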
+
+### Roll out changes to many {{agents}} quickly [selective-agent-management]
+
+Some subscription levels support bulk select operations, including:
+
+* Selective binary updates
+* Selective agent policy reassignment
+* Selective agent unenrollment
+
+This capability enables you to apply changes and trigger updates across many {{agents}} so you can roll out changes quickly across your organization.
+
+For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions).
+
+## {{fleet-server}} [fleet-server-intro]
+
+{{fleet-server}} is the mechanism that connects {{agents}} to {{fleet}}. It allows for a scalable infrastructure and is supported in {{ecloud}} and self-managed clusters. {{fleet-server}} is a separate process that communicates with the deployed {{agents}}. It can be started from any available x64 architecture {{agent}} artifact.
+
+For more information, refer to [](/reference/ingestion-tools/fleet/fleet-server.md).
+
+:::{admonition} {{fleet-server}} with {{serverless-full}}
+On-premises {{fleet-server}} is not currently available for use with {{serverless-full}} projects. In a {{serverless-short}} environment we recommend using {{fleet-server}} on {{ecloud}}.
+:::
+
+## {{es}} as the communication layer [fleet-communication-layer]
+
+All communication between the {{fleet}} UI and {{fleet-server}} happens through {{es}}. {{fleet}} writes policies, actions, and any changes to the `fleet-*` indices in {{es}}. Each {{fleet-server}} monitors the indices, picks up changes, and ships them to the {{agents}}. To report the status of the {{agents}} and the policy rollout back to {{fleet}}, the {{fleet-server}}s write updates to the `fleet-*` indices.
+
+## {{agent}} self-protection [agent-self-protection]
+
+On macOS and Windows, when the {{elastic-defend}} integration is added to the agent policy, {{elastic-endpoint}} can prevent malware from executing on the host. For more information, refer to [{{elastic-endpoint}} self-protection](/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md).
+
+## Data streams make index management easier [data-streams-intro]
+
+The data collected by {{agent}} is stored in indices that are more granular than you'd get by default with the {{beats}} shippers or APM Server. This gives you more visibility into the sources of data volume, and control over lifecycle management policies and index permissions. These indices are called [data streams](/reference/ingestion-tools/fleet/data-streams.md).
+
+## Quick starts [fleet-elastic-agent-quick-start]
+
+Want to get up and running with {{fleet}} and {{agent}} quickly? Read our getting started guides:
+
+* [Get started with logs and metrics](/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md)
+* [Get started with APM](/solutions/observability/apps/get-started-with-apm.md)
diff --git a/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md
new file mode 100644
index 0000000000..ef607fadd7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md
@@ -0,0 +1,90 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/ingest-pipeline-kubernetes.html
+---
+
+# Using a custom ingest pipeline with the Kubernetes Integration [ingest-pipeline-kubernetes]
+
+This tutorial explains how to add a custom ingest pipeline to the {{k8s}} Integration in order to add deployment and cronjob metadata fields to pod events.
+
+Custom pipelines can be used to add custom data processing, like adding fields, obfuscating sensitive information, and more.
+
+## Metadata enrichment for Kubernetes [_metadata_enrichment_for_kubernetes]
+
+The [{{k8s}} Integration](integration-docs://docs/reference/kubernetes.md) is used to collect logs and metrics from Kubernetes clusters with {{agent}}. During collection, the integration enhances the collected information with extra context that users can correlate with different Kubernetes assets. This additional information added on top of the collected data, such as labels, annotations, and ancestor names of Kubernetes assets, is called metadata.
+
+The [{{k8s}} Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) offers the `add_resource_metadata` option for configuring metadata enrichment.
+
+For {{agent}} versions later than 8.10.4, the default configuration for metadata enrichment is `add_resource_metadata.deployment=false` and `add_resource_metadata.cronjob=false`. This means that pods created from replicasets that belong to specific deployments are not enriched with `kubernetes.deployment.name`, and pods created from jobs that belong to specific cronjobs are not enriched with `kubernetes.cronjob.name`.
+
+**Kubernetes Integration Policy > Collect Kubernetes metrics from Kube-state-metrics > Kubernetes Pod Metrics**
+
+:::{image} images/add_resource_metadata.png
+:alt: Configure add_resource_metadata
+:class: screenshot
+:::
+
+Example: Enabling the enrichment through `add_resource_metadata` in a managed {{agent}} policy.
+
+:::{note}
+Enabling deployment and cronjob metadata enrichment increases Elastic Agent's memory consumption, because Elastic Agent keeps a local cache of the Kubernetes assets it discovers.
+:::
+
+## Add deployment and cronjob for {{k8s}} pods through ingest pipelines [_add_deployment_and_cronjob_for_k8s_pods_through_ingest_pipelines]
+
+As an alternative to keeping the feature enabled and spending more memory resources on {{agent}}, you can use ingest pipelines to add the missing `kubernetes.deployment.name` and `kubernetes.cronjob.name` fields.
+
+Navigate to the `state_pod` datastream under: **Kubernetes Integration Policy > Collect Kubernetes metrics from Kube-state-metrics > Kubernetes Pod Metrics**.
+
+Create the following custom ingest pipeline with two processors:
+
+:::{image} images/ingest_pipeline_custom_k8s.png
+:alt: Custom ingest pipeline
+:class: screenshot
+:::
+
+### Processor for deployment [_processor_for_deployment]
+
+:::{image} images/gsub_deployment.png
+:alt: Gsub Processor for deployment
+:class: screenshot
+:::
+
+
+### Processor for cronjob [_processor_for_cronjob]
+
+:::{image} images/gsub_cronjob.png
+:alt: Gsub Processor for cronjob
+:class: screenshot
+:::
+
+The final `metrics-kubernetes.state_pod@custom` ingest pipeline:
+
+```json
+[
+  {
+    "gsub": {
+      "field": "kubernetes.replicaset.name",
+      "pattern": "(?:.(?!-))+$",
+      "replacement": "",
+      "target_field": "kubernetes.deployment.name",
+      "ignore_missing": true,
+      "ignore_failure": true
+    }
+  },
+  {
+    "gsub": {
+      "field": "kubernetes.job.name",
+      "pattern": "(?:.(?!-))+$",
+      "replacement": "",
+      "target_field": "kubernetes.cronjob.name",
+      "ignore_missing": true,
+      "ignore_failure": true
+    }
+  }
+]
```
+
+:::{note}
+The ingest pipeline does not check for the actual existence of a deployment or cronjob ancestor; it only adds the derived values.
+:::
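+
+Before relying on the pipeline, you can exercise it against a sample document with the {{es}} simulate API. A sketch (host and credentials are placeholders, and the sample replicaset name is invented for illustration):
+
+```shell
+# Run a sample document through the custom pipeline; the gsub processor
+# should strip the trailing pod-template hash, yielding
+# kubernetes.deployment.name: "nginx-main"
+curl -X POST "https://<elasticsearch-host>:9200/_ingest/pipeline/metrics-kubernetes.state_pod@custom/_simulate" \
+  -H "Content-Type: application/json" \
+  -u <username>:<password> \
+  -d '{
+        "docs": [
+          { "_source": { "kubernetes": { "replicaset": { "name": "nginx-main-85c9d5c6b7" } } } }
+        ]
+      }'
+```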
diff --git a/reference/ingestion-tools/fleet/install-agent-msi.md b/reference/ingestion-tools/fleet/install-agent-msi.md
new file mode 100644
index 0000000000..c141a17739
--- /dev/null
+++ b/reference/ingestion-tools/fleet/install-agent-msi.md
@@ -0,0 +1,74 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/install-agent-msi.html
+---
+
+# Install Elastic Agent from an MSI package [install-agent-msi]
+
+MSI is the file format and command line utility for the [Windows Installer](https://en.wikipedia.org/wiki/Windows_Installer). Windows Installer (previously known as Microsoft Installer) is an interface for Microsoft Windows that's used to install and manage software on Windows systems. This section covers installing Elastic Agent from an MSI package.
+
+The MSI package installer must be run by an administrator account. The installer won't start without Windows admin permissions.
+
+
+## Install {{agent}} [_install_agent]
+
+1. Download the latest Elastic Agent MSI binary from the [{{agent}} download page](https://www.elastic.co/downloads/elastic-agent).
+2. Run the installer. The command varies slightly depending on whether you're using the default Windows command prompt or PowerShell.
+
+   ::::{admonition}
+   * Using the default command prompt:
+
+     ```shell
+     elastic-agent-<VERSION>-windows-x86_64.msi INSTALLARGS="--url=<URL> --enrollment-token=<TOKEN>"
+     ```
+
+   * Using PowerShell:
+
+     ```shell
+     ./elastic-agent-<VERSION>-windows-x86_64.msi --% INSTALLARGS="--url=<URL> --enrollment-token=<TOKEN>"
+     ```
+
+   ::::
+
+   Where:
+
+   * `<VERSION>` is the {{stack}} version you're installing, indicated in the MSI package name. For example, `8.13.2`.
+   * `<URL>` is the {{fleet-server}} URL used to enroll the {{agent}} into {{fleet}}. You can find this on the {{fleet}} **Settings** tab in {{kib}}.
+   * `<TOKEN>` is the authentication token used to enroll the {{agent}} into {{fleet}}. You can find this on the {{fleet}} **Enrollment tokens** tab.
+
+   When you run the command, the value set for `INSTALLARGS` is passed to the [`elastic-agent install`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) command verbatim.
+
+3. If you need to troubleshoot, you can install using `msiexec` with the `-L*V "log.txt"` option to create installation logs:
+
+   ```shell
+   msiexec -i elastic-agent-<VERSION>-windows-x86_64.msi INSTALLARGS="--url=<URL> --enrollment-token=<TOKEN>" -L*V "log.txt"
+   ```
+
+
+## Installation notes [_installation_notes]
+
+Installing using an MSI package has the following behaviors:
+
+* If `INSTALLARGS` is not provided, the MSI copies the files to a temporary folder and finishes.
+* If `INSTALLARGS` is provided, the MSI copies the files to a temporary folder and then runs the [`elastic-agent install`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) command with the provided parameters. If the install flow is successful, the temporary folder is deleted.
+* If `INSTALLARGS` is provided but the `elastic-agent install` command fails, the top-level folder is NOT deleted, in order to allow for further troubleshooting.
+* If the `elastic-agent install` command fails for any reason, the MSI rolls back all changes.
+* If the {{agent}} enrollment fails, the install fails as well. To avoid this behavior you can add the [`--delay-enroll`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-install-command) option to the install command, as shown in the sketch after this list.
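+
+For example, a quiet install that defers enrollment until the service first starts might look like this (a sketch; `<VERSION>`, `<URL>`, and `<TOKEN>` are placeholders as above, and `-qn` is the standard `msiexec` silent-install flag):
+
+```shell
+msiexec -i elastic-agent-<VERSION>-windows-x86_64.msi INSTALLARGS="--url=<URL> --enrollment-token=<TOKEN> --delay-enroll" -qn -L*V "install.log"
+```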
+
+
+## Upgrading [_upgrading]
+
+The {{agent}} version can be upgraded via {{fleet}}, but the registered MSI version will display the initially installed version (this shortcoming will be addressed in future releases). Attempts to upgrade outside of {{fleet}} via the MSI require an uninstall and reinstall procedure. Also note that this MSI implementation relies on the tar {{agent}} binary to upgrade the installation. Therefore, if the {{agent}} is installed in an air-gapped environment, you must ensure that the tar image is available before an upgrade request is issued.
+
+
+## Installing in a custom location [_installing_in_a_custom_location]
+
+Starting in version 8.13, it's also possible to override the default installation folder by running the MSI from the command line, as shown:
+
+```shell
+elastic-agent-<VERSION>-windows-x86_64.msi INSTALLARGS="--url=<URL> --enrollment-token=<TOKEN>" INSTALLDIR="<DIRECTORY>"
+```
diff --git a/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md b/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md
new file mode 100644
index 0000000000..be3919e9c4
--- /dev/null
+++ b/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md
@@ -0,0 +1,39 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/install-elastic-agents-in-containers.html
+---
+
+# Install Elastic Agents in a containerized environment [install-elastic-agents-in-containers]
+
+You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes.
+
+To learn how to run {{agent}}s in a containerized environment, see:
+
+* [Run {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md)
+* [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md)
+
+  * [Advanced {{agent}} configuration managed by {{fleet}}](/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md)
+  * [Configuring Kubernetes metadata enrichment on {{agent}}](/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md)
+  * [Run {{agent}} on GKE managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md)
+  * [Run {{agent}} on Amazon EKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md)
+  * [Run {{agent}} on Azure AKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md)
+
+* [Run {{agent}} Standalone on Kubernetes](/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md)
+* [Scaling {{agent}} on {{k8s}}](/reference/ingestion-tools/fleet/scaling-on-kubernetes.md)
+* [Using a custom ingest pipeline with the {{k8s}} Integration](/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md)
+* [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/reference/ingestion-tools/fleet/install-elastic-agents.md b/reference/ingestion-tools/fleet/install-elastic-agents.md
new file mode 100644
index 0000000000..5edd1732e6
--- /dev/null
+++ b/reference/ingestion-tools/fleet/install-elastic-agents.md
@@ -0,0 +1,102 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html
+---
+
+# Install Elastic Agents [elastic-agent-installation]
+
+::::{admonition} Restrictions
+:class: important
+
+Note the following restrictions when installing {{agent}} on your system:
+
+* You can install only a single {{agent}} per host. Because the {{agent}} may read data sources that are accessible only by a superuser, it also needs to be executed with superuser permissions.
+* You might need to log in as a root user (or Administrator on Windows) to run the commands described here. After the {{agent}} service is installed and running, make sure you run these commands without prepending them with `./` to avoid invoking the wrong binary.
+* Running {{agent}} commands using the Windows PowerShell ISE is not supported.
+* See also the [resource requirements](#elastic-agent-installation-resource-requirements) described on this page.
+
+::::
+
+
+You have a few options for installing and managing an {{agent}}:
+
+* **Install a {{fleet}}-managed {{agent}} (recommended)**
+
+  With this approach, you install {{agent}} and use {{fleet}} in {{kib}} to define, configure, and manage your agents in a central location.
+
+  We recommend using {{fleet}} management because it makes the management and upgrade of your agents considerably easier.
+
+  Refer to [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).
+
+* **Install {{agent}} in standalone mode (advanced users)**
+
+  With this approach, you install {{agent}} and manually configure the agent locally on the system where it's installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only.
+
+  Refer to [Install standalone {{agent}}s](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md).
+
+* **Install {{agent}} in a containerized environment**
+
+  You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes.
+
+  Refer to:
+
+  * [Run {{agent}} in a container](/reference/ingestion-tools/fleet/elastic-agent-container.md)
+  * [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md)
+
+    * [Advanced {{agent}} configuration managed by {{fleet}}](/reference/ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md)
+    * [Configuring Kubernetes metadata enrichment on {{agent}}](/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md)
+    * [Run {{agent}} on GKE managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md)
+    * [Run {{agent}} on Amazon EKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md)
+    * [Run {{agent}} on Azure AKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md)
+
+  * [Run {{agent}} Standalone on Kubernetes](/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md)
+  * [Scaling {{agent}} on {{k8s}}](/reference/ingestion-tools/fleet/scaling-on-kubernetes.md)
+  * [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users
+
+
+::::{admonition} Restrictions in {{serverless-short}}
+:class: important
+
+If you are using {{agent}} with [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), note these differences from use with {{ess}} and self-managed {{es}}:
+
+* The number of {{agents}} that may be connected to a {{serverless-full}} project is limited to 10,000.
+* The minimum version of {{agent}} supported for use with {{serverless-full}} is 8.11.0.
+
+::::
+
+
+## Resource requirements [elastic-agent-installation-resource-requirements]
+
+{{agent}} resource consumption is influenced by the number of integrations and the environment it's running in.
+
+Using our lab environment as an example, we can observe the following resource consumption:
+
+
+### CPU and RSS memory size [_cpu_and_rss_memory_size]
+
+We tested using an AWS `m7i.large` instance type with 2 vCPUs, 8.0 GB of memory, and up to 12.5 Gbps of bandwidth. The tests ingested a single log file using both the [throughput and scale presets](/reference/ingestion-tools/fleet/elasticsearch-output.md#output-elasticsearch-performance-tuning-settings) with self-monitoring enabled. These tests are representative of use cases that attempt to ingest data as fast as possible. This does not represent the resource overhead when using [{{elastic-defend}}](integration-docs://docs/reference/endpoint.md).
+
+| **Resource** | **Throughput** | **Scale** |
+| --- | --- | --- |
+| **CPU*** | ~67% | ~20% |
+| **RSS memory size*** | ~280 MB | ~220 MB |
+| **Write network throughput** | ~3.5 MB/s | 480 KB/s |
+
+* including all monitoring processes
+
+Adding integrations will increase the memory used by the agent and its processes.
+
+
+### Size on disk [_size_on_disk]
+
+The disk requirements for {{agent}} vary by operating system and {{stack}} version. With version 8.14 we significantly reduced the size of the {{agent}} binary. Further reductions are planned for future releases.
+
+| Operating system | 8.13 | 8.14 | 8.15 |
+| --- | --- | --- | --- |
+| **Linux** | 1800 MB | 1018 MB | 1060 MB |
+| **macOS** | 1100 MB | 619 MB | 680 MB |
+| **Windows** | 891 MB | 504 MB | 500 MB |
+
+During upgrades, double the disk space is required to store the new {{agent}} binary.
+After the upgrade completes, the original {{agent}} is removed from disk to free up space.
diff --git a/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md b/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md
new file mode 100644
index 0000000000..170d957f9e
--- /dev/null
+++ b/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md
@@ -0,0 +1,100 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/install-fleet-managed-elastic-agent.html
+---
+
+# Install Fleet-managed Elastic Agents [install-fleet-managed-elastic-agent]
+
+::::{admonition}
+{{fleet}} is a web-based UI in {{kib}} for [centrally managing {{agent}}s](/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md). To use {{fleet}}, you install {{agent}}, then enroll the agent in a policy defined in {{kib}}. The policy includes integrations that specify how to collect observability data from specific services and protect endpoints. The {{agent}} connects to a trusted [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) instance to retrieve the policy and report agent events.
+::::
+
+
+
+## Where to start [get-started]
+
+To get up and running quickly, read one of our end-to-end guides:
+
+* New to Elastic? Read our solution [Getting started guides](/get-started/index.md).
+* Want to add data to an existing cluster or deployment? Read our [*Quick starts*](/reference/ingestion-tools/fleet/index.md).
+
+Looking for upgrade info? Refer to [Upgrade {{agent}}s](/reference/ingestion-tools/fleet/upgrade-elastic-agent.md).
+
+Just want to learn how to install {{agent}}? Continue reading this page.
+
+
+## Prerequisites [elastic-agent-prereqs]
+
+You will always need:
+
+* **A {{kib}} user with `All` privileges on {{fleet}} and {{integrations}}.** Because many integration assets are shared across spaces, users need the {{kib}} privileges in all spaces.
+* **[{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) running in a location accessible to {{agent}}.** {{agent}} must have a direct network connection to {{fleet-server}} and {{es}}. If you're using our hosted {{ess}} on {{ecloud}}, {{fleet-server}} is already available as part of the {{integrations-server}}. For self-managed deployments, refer to [Deploy on-premises and self-managed](/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md).
+* **Internet connection for {{kib}} to download integration packages from the {{package-registry}}.** Make sure the {{kib}} server can connect to `https://epr.elastic.co` on port `443`. If your environment has network traffic restrictions, there are ways to work around this requirement. See [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md) for more information.
+
+If you are using a {{fleet-server}} that uses your organization's certificate, you will also need:
+
+* **A Certificate Authority (CA) certificate to configure Transport Layer Security (TLS) to encrypt traffic.** If your organization already uses the {{stack}}, you may already have a CA certificate. If you do not have a CA certificate, you can read more about generating one in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md).
+
+If you're running {{agent}} 7.9 or earlier, stop the agent and manually remove it from your host.
+
+
+## Installation steps [elastic-agent-installation-steps]
+
+::::{note}
+You can install only a single {{agent}} per host.
+::::
+
+
+{{agent}} can monitor the host where it's deployed, and it can collect and forward data from remote services and hardware where direct deployment is not possible.
+
+To install an {{agent}} and enroll it in {{fleet}}:
+
+1. In {{fleet}}, open the **Agents** tab and click **Add agent**.
+2. In the **Add agent** flyout, select an existing agent policy or create a new one. If you create a new policy, {{fleet}} generates a new [{{fleet}} enrollment token](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md).
+
+   ::::{note}
+   For on-premises deployments, you can dedicate a policy to all the agents in the network boundary and configure that policy to include a specific {{fleet-server}} (or a cluster of {{fleet-server}}s).
+
+   Read more in [Add a {{fleet-server}} to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-fleet-server-to-policy).
+   ::::
+
+3. Make sure **Enroll in Fleet** is selected.
+4. Download, install, and enroll the {{agent}} on your host by selecting your host operating system and following the **Install {{agent}} on your host** step. Note that the commands shown are for AMD platforms, but ARM packages are also available. Refer to the {{agent}} [downloads page](https://www.elastic.co/downloads/elastic-agent) for the full list of available packages.
+
+   1. If you are enrolling the agent in a {{fleet-server}} that uses your organization's certificate, you *must* add the `--certificate-authorities` option to the command provided in the in-product instructions. If you do not include the certificate, you will see the following error: "x509: certificate signed by unknown authority".
+
+   :::{image} images/kibana-agent-flyout.png
+   :alt: Add agent flyout in {{kib}}
+   :class: screenshot
+   :::
+
+After about a minute, the agent will enroll in {{fleet}}, download the configuration specified in the agent policy, and start collecting data.
+
+**Notes:**
+
+* If you encounter an "x509: certificate signed by unknown authority" error, you might be trying to enroll in a {{fleet-server}} that uses self-signed certs. To fix this problem in a non-production environment, pass the `--insecure` flag. For more information, refer to the [troubleshooting guide](/troubleshoot/ingest/fleet/common-problems.md#agent-enrollment-certs).
+* Optionally, you can use the `--tag` flag to specify a comma-separated list of tags to apply to the enrolled {{agent}}. For more information, refer to [Filter list of Agents by tags](/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md).
+* Refer to [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md) for the location of installed {{agent}} files.
+* Because {{agent}} is installed as an auto-starting service, it will restart automatically if the system is rebooted.
+
+To confirm that {{agent}} is installed and running, open the **Agents** tab in {{fleet}}.
+
+:::{image} images/kibana-fleet-agents.png
+:alt: {{fleet}} showing enrolled agents
+:class: screenshot
+:::
+
+::::{tip}
+If the status hangs at Enrolling, make sure the `elastic-agent` process is running.
+::::
+
+
+If you run into problems:
+
+* Check the {{agent}} logs. If you use the default policy, agent logs and metrics are collected automatically unless you change the default settings. For more information, refer to [Monitor {{agent}} in {{fleet}}](/reference/ingestion-tools/fleet/monitor-elastic-agent.md).
+* Refer to the [troubleshooting guide](/troubleshoot/ingest/fleet/common-problems.md).
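+* As a quick local check (a sketch; run with root or Administrator rights, and expect the exact output format to vary by version), query the agent daemon on the host:
+
+  ```shell
+  # Reports the state of the agent and its components; a HEALTHY state
+  # indicates the agent is enrolled and checking in
+  sudo elastic-agent status
+  ```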
+
+For information about managing {{agent}} in {{fleet}}, refer to [Centrally manage {{agent}}s in {{fleet}}](/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md). diff --git a/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md b/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md new file mode 100644 index 0000000000..5e8acd9f71 --- /dev/null +++ b/reference/ingestion-tools/fleet/install-on-kubernetes-using-helm.md @@ -0,0 +1,35 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/install-on-kubernetes-using-helm.html +---

# Install Elastic Agent on Kubernetes using Helm [install-on-kubernetes-using-helm]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


Starting with {{stack}} version 8.16, a Helm chart is available for installing {{agent}} in a Kubernetes environment. A Helm-based install offers several advantages, including simplified deployment, availability in marketplaces, streamlined upgrades, and quick rollbacks whenever they’re needed.

Features of the Helm-based {{agent}} install include:

* Support for both standalone and {{fleet}}-managed {{agent}}.
* For standalone agents, a built-in Kubernetes policy similar to that available in {{fleet}} for {{fleet}}-managed agents.
* Support for custom integrations.
* Support for {{es}} outputs with authentication through username and password, an API key, or a stored secret.
* Easy switching between privileged (`root`) and unprivileged {{agent}} deployments.
* Support for {{stack}} deployments on {{eck}}.

For detailed install steps, try one of our walk-through examples:

* [Example: Install standalone {{agent}} on Kubernetes using Helm](/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md)
* [Example: Install {{fleet}}-managed {{agent}} on {{k8s}} using Helm](/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md)

::::{note}
The {{agent}} Helm chart is currently available from inside the [elastic/elastic-agent](https://github.com/elastic/elastic-agent) GitHub repo. The chart is planned to be made available from the Elastic Helm repository soon.
::::


You can also find details about the Helm chart, including all available YAML settings and descriptions, in the [{{agent}} Helm Chart Readme](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent). Several [examples](https://github.com/elastic/elastic-agent/tree/main/deploy/helm/elastic-agent/examples) are available if you’d like to explore other use cases.

diff --git a/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md b/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md new file mode 100644 index 0000000000..1d1d0dd26a --- /dev/null +++ b/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md @@ -0,0 +1,151 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/install-standalone-elastic-agent.html +---

# Install standalone Elastic Agents [install-standalone-elastic-agent]

To run an {{agent}} in standalone mode, install the agent and manually configure the agent locally on the system where it’s installed. You are responsible for managing and upgrading the agents. This approach is recommended for advanced users only.
+
+We recommend using [{{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md), when possible, because it makes the management and upgrade of your agents considerably easier.

::::{important}
Standalone agents are unable to upgrade to new integration package versions automatically. When you upgrade the integration in {{kib}}, you’ll need to update the standalone policy manually.
::::


::::{note}
You can install only a single {{agent}} per host.
::::


{{agent}} can monitor the host where it’s deployed, and it can collect and forward data from remote services and hardware where direct deployment is not possible.

To install and run {{agent}} standalone:

1. On your host, download and extract the installation package.

    ::::{tab-set}

    :::{tab-item} macOS
    Version 9.0.0-beta1 of {{agent}} has not yet been released.
    :::

    :::{tab-item} Linux
    Version 9.0.0-beta1 of {{agent}} has not yet been released.
    :::

    :::{tab-item} Windows
    Version 9.0.0-beta1 of {{agent}} has not yet been released.
    :::

    :::{tab-item} DEB
    Version 9.0.0-beta1 of {{agent}} has not yet been released.
    :::

    :::{tab-item} RPM
    Version 9.0.0-beta1 of {{agent}} has not yet been released.
    :::

    ::::

    The commands shown are for AMD platforms, but ARM packages are also available. Refer to the {{agent}} [downloads page](https://www.elastic.co/downloads/elastic-agent) for the full list of available packages.

2. Modify settings in the `elastic-agent.yml` file as required.

    To get started quickly and avoid errors, use {{kib}} to create and download a standalone configuration file rather than trying to build it by hand. For more information, refer to [Create a standalone {{agent}} policy](/reference/ingestion-tools/fleet/create-standalone-agent-policy.md).

    For additional configuration options, refer to [*Configure standalone {{agent}}s*](/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md).

3. In the `elastic-agent.yml` policy file, under `outputs`, specify an API key or user credentials for the {{agent}} to access {{es}}. For example:

    ```yaml
    [...]
    outputs:
      default:
        type: elasticsearch
        hosts:
          - 'https://da4e3a6298c14a6683e6064ebfve9ace.us-central1.gcp.cloud.es.io:443'
        api_key: _Nj4oH0aWZVGqM7MGop8:349p_U1ERHyIc4Nm8_AYkw <1>
    [...]
    ```

    1. For more information about required privileges and creating API keys, see [Grant standalone {{agent}}s access to {{es}}](/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md).

4. Make sure the assets you need, such as dashboards and ingest pipelines, are set up in {{kib}} and {{es}}. If you used {{kib}} to generate the standalone configuration, the assets are set up automatically. Otherwise, you need to install them. For more information, refer to [View integration assets](/reference/ingestion-tools/fleet/view-integration-assets.md) and [Install integration assets](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md#install-integration-assets).
5. From the agent directory, run the following commands to install {{agent}} and start it as a service.

    ::::{note}
    On macOS, Linux (tar package), and Windows, run the `install` command to install {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, so just enable and then start the service.
+ :::: + + ::::{tab-set} + + :::{tab-item} macOS + + ::::{tip} + You must run this command as the root user because some integrations require root privileges to collect sensitive data. + :::: + + ```shell + sudo ./elastic-agent install + ``` + ::: + + :::{tab-item} Linux + + ::::{tip} + You must run this command as the root user because some integrations require root privileges to collect sensitive data. + :::: + + ```shell + sudo ./elastic-agent install + ``` + ::: + + :::{tab-item} Windows + + Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + + From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: + + ```shell + .\elastic-agent.exe install + ``` + ::: + + :::{tab-item} DEB + + ::::{tip} + You must run this command as the root user because some integrations require root privileges to collect sensitive data. + :::: + + ```shell + sudo systemctl enable elastic-agent <1> + sudo systemctl start elastic-agent + ``` + 1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. + ::: + + :::{tab-item} RPM + + ::::{tip} + You must run this command as the root user because some integrations require root privileges to collect sensitive data. + :::: + + + ```shell + sudo systemctl enable elastic-agent <1> + sudo systemctl start elastic-agent + ``` + + 1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. + ::: + + :::: + +Refer to [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md) for the location of installed {{agent}} files. + +Because {{agent}} is installed as an auto-starting service, it will restart automatically if the system is rebooted. + +If you run into problems, refer to [Troubleshoot common problems](/troubleshoot/ingest/fleet/common-problems.md). diff --git a/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md b/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md new file mode 100644 index 0000000000..552885c173 --- /dev/null +++ b/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md @@ -0,0 +1,57 @@ +--- +navigation_title: "Install and uninstall integration assets" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/install-uninstall-integration-assets.html +--- + +# Install and uninstall {{agent}} integration assets [install-uninstall-integration-assets] + + +{{agent}} integrations come with a number of assets, such as dashboards, saved searches, and visualizations for analyzing data. When you add an integration to an agent policy in {{fleet}}, the assets are installed automatically. If you’re building a policy file by hand, you need to install required assets such as index templates. + + +## Install integration assets [install-integration-assets] + +1. In {{kib}}, go to the **Integrations** page and open the **Browse integrations** tab. Search for and select an integration. You can select a category to narrow your search. +2. Click the **Settings** tab. +3. Click **Install assets** to set up the {{kib}} and {{es}} assets. + +Note that it’s currently not possible to have multiple versions of the same integration installed. 
When you upgrade an integration, the previous version’s assets are removed and replaced by the current version.

::::{admonition} Current limitations with integrations and {{kib}} spaces
:class: important

{{agent}} integration assets can be installed only on a single {{kib}} [space](/deploy-manage/manage-spaces.md). If you want to access assets in a different space, you can [copy them](/explore-analyze/find-and-organize/saved-objects.md#managing-saved-objects-copy-to-space). However, many integrations include markdown panels with dynamically generated links to other dashboards. When assets are copied between spaces, these links may not behave as expected and can result in a 404 `Dashboard not found` error. Refer to known issue [#175072](https://github.com/elastic/kibana/issues/175072) for details.

These limitations and future plans for {{fleet}}'s integrations support in multi-space environments are currently being discussed in [#175831](https://github.com/elastic/kibana/issues/175831). Feedback is very welcome. For now, we recommend reviewing the specific integration documentation for any space-related considerations.

::::



## Uninstall integration assets [uninstall-integration-assets]

Uninstall an integration to remove all {{kib}} and {{es}} assets that were installed by the integration.

1. Before uninstalling an integration, [delete the integration policy](/reference/ingestion-tools/fleet/edit-delete-integration-policy.md) from any {{agent}} policies that use it.

    Any {{agent}}s enrolled in the policy will stop using the deleted integration.

2. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select an integration.
3. Click the **Settings** tab.
4. Click **Uninstall** to remove all {{kib}} and {{es}} assets that were installed by this integration.


## Reinstall integration assets [reinstall-integration-assets]

You may need to reinstall an integration package to resolve a specific problem, such as:

* An asset was edited manually, and you want to reset assets to their original state.
* A temporary problem (like a network issue) occurred during package installation or upgrade.
* A package was installed in a prior version that had a bug in the install code.

To reinstall integration assets:

1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select an integration.
2. Click the **Settings** tab.
3. Click **Reinstall** to set up the {{kib}} and {{es}} assets. diff --git a/reference/ingestion-tools/fleet/installation-layout.md b/reference/ingestion-tools/fleet/installation-layout.md new file mode 100644 index 0000000000..6af7b2c249 --- /dev/null +++ b/reference/ingestion-tools/fleet/installation-layout.md @@ -0,0 +1,102 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/installation-layout.html +---

# Installation layout [installation-layout]

{{agent}} files are installed in the following locations.
+
+:::::::{tab-set}

::::::{tab-item} macOS
`/Library/Elastic/Agent/*`
: {{agent}} program files

`/Library/Elastic/Agent/elastic-agent.yml`
: Main {{agent}} configuration

`/Library/Elastic/Agent/fleet.enc`
: Main {{agent}} {{fleet}} encrypted configuration

`/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
: Log files for {{agent}} and {{beats}} shippers ^1^

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH

You can install {{agent}} in a custom base path other than `/Library`. When installing {{agent}} with the `./elastic-agent install` command, use the `--base-path` CLI option to specify the custom base path.
::::::

::::::{tab-item} Linux
`/opt/Elastic/Agent/*`
: {{agent}} program files

`/opt/Elastic/Agent/elastic-agent.yml`
: Main {{agent}} configuration

`/opt/Elastic/Agent/fleet.enc`
: Main {{agent}} {{fleet}} encrypted configuration

`/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
: Log files for {{agent}} and {{beats}} shippers ^1^

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH

You can install {{agent}} in a custom base path other than `/opt`. When installing {{agent}} with the `./elastic-agent install` command, use the `--base-path` CLI option to specify the custom base path.
::::::

::::::{tab-item} Windows
`C:\Program Files\Elastic\Agent*`
: {{agent}} program files

`C:\Program Files\Elastic\Agent\elastic-agent.yml`
: Main {{agent}} configuration

`C:\Program Files\Elastic\Agent\fleet.enc`
: Main {{agent}} {{fleet}} encrypted configuration

`C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson`
: Log files for {{agent}} and {{beats}} shippers ^1^

You can install {{agent}} in a custom base path other than `C:\Program Files`. When installing {{agent}} with the `.\elastic-agent.exe install` command, use the `--base-path` CLI option to specify the custom base path.
::::::

::::::{tab-item} DEB
`/usr/share/elastic-agent/*`
: {{agent}} program files

`/etc/elastic-agent/elastic-agent.yml`
: Main {{agent}} configuration

`/etc/elastic-agent/fleet.enc`
: Main {{agent}} {{fleet}} encrypted configuration

`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
: Log files for {{agent}} and {{beats}} shippers ^1^

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
::::::

::::::{tab-item} RPM
`/usr/share/elastic-agent/*`
: {{agent}} program files

`/etc/elastic-agent/elastic-agent.yml`
: Main {{agent}} configuration

`/etc/elastic-agent/fleet.enc`
: Main {{agent}} {{fleet}} encrypted configuration

`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
: Log files for {{agent}} and {{beats}} shippers ^1^

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
::::::

:::::::

^1^ Log file names end with a date (`YYYYMMDD`) and an optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation.
diff --git a/reference/ingestion-tools/fleet/integration-level-outputs.md b/reference/ingestion-tools/fleet/integration-level-outputs.md new file mode 100644 index 0000000000..10c562d0ce --- /dev/null +++ b/reference/ingestion-tools/fleet/integration-level-outputs.md @@ -0,0 +1,44 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/integration-level-outputs.html +---

# Set integration-level outputs [integration-level-outputs]

If you have an `Enterprise` [{{stack}} subscription](https://www.elastic.co/subscriptions), you can configure {{agent}} data to be sent to different outputs for different integration policies. Note that the output clusters that you send data to must also be on the same subscription level.

Integration-level outputs are very useful for certain scenarios. For example:

* You may want to send security logs monitored by an {{agent}} to one {{ls}} output, while informational logs are sent to another {{ls}} output.
* If you operate multiple {{beats}} on a system and want to migrate these to {{agent}}, integration-level outputs enable you to maintain the distinct outputs that are currently used by each Beat.


## Order of precedence [_order_of_precedence]

For each {{agent}}, the agent policy configures sending data to the following outputs in decreasing order of priority:

1. The output set in the [integration policy](/reference/ingestion-tools/fleet/add-integration-to-policy.md).
2. The output set in the integration’s parent [{{agent}} policy](/reference/ingestion-tools/fleet/agent-policy.md). This includes the case where an integration policy belongs to multiple {{agent}} policies.
3. The global, default data output set in the [{{fleet}} settings](/reference/ingestion-tools/fleet/fleet-settings.md).


## Configure the output for an integration policy [_configure_the_output_for_an_integration_policy]

To configure an integration-level output for {{agent}} data:

1. In {{kib}}, go to **Integrations**.
2. On the **Installed integrations** tab, select the integration that you’d like to update.
3. Open the **Integration policies** tab.
4. From the **Actions** menu next to the integration, select **Edit integration**.
5. In the **integration settings** section, expand **Advanced options**.
6. Use the **Output** dropdown menu to select an output specific to this integration policy.
7. Click **Save and continue** to confirm your changes.


## View the output configured for an integration [_view_the_output_configured_for_an_integration]

To view which {{agent}} output is set for an integration policy:

1. In {{fleet}}, open the **Agent policies** tab.
2. Select an {{agent}} policy.
3. On the **Integrations** tab, the **Output** column indicates the output used for each integration policy. If data is configured to be sent to either the global output defined in {{fleet}} settings or to the integration’s parent {{agent}} policy, this is indicated in a tooltip.
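
For reference, standalone {{agent}}s (which are not managed through the {{fleet}} UI) can achieve a similar per-integration split directly in the policy file by giving each input a `use_output` setting. The following is a minimal sketch only; the host addresses, the `security_logs` output name, the credential placeholder, and the input IDs are hypothetical:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts:
      - 'https://my-cluster.example.com:443'   # hypothetical Elasticsearch host
    api_key: 'example-id:example-key'          # placeholder credential
  security_logs:                               # hypothetical dedicated output
    type: logstash
    hosts:
      - 'logstash-security.example.com:5044'

inputs:
  - id: system-metrics                         # routed to the default output
    type: system/metrics
    use_output: default
  - id: security-events                        # routed to the dedicated output
    type: logfile
    use_output: security_logs
```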
diff --git a/reference/ingestion-tools/fleet/integrations-assets-best-practices.md b/reference/ingestion-tools/fleet/integrations-assets-best-practices.md new file mode 100644 index 0000000000..e16b654835 --- /dev/null +++ b/reference/ingestion-tools/fleet/integrations-assets-best-practices.md @@ -0,0 +1,85 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/integrations-assets-best-practices.html +---

# Best practices for integration assets [integrations-assets-best-practices]

When you use integrations with {{fleet}} and {{agent}}, there are some restrictions to be aware of.

* [Using integration assets with standalone {{agent}}](#assets-restrictions-standalone)
* [Using integration assets without {{agent}}](#assets-restrictions-without-agent)
* [Using {{fleet}} and {{agent}} integration assets in custom integrations](#assets-restrictions-custom-integrations)
* [Copying {{fleet}} and {{agent}} integration assets](#assets-restrictions-copying)
* [Editing assets managed by {{fleet}}](#assets-restrictions-editing-assets)
* [Creating custom component templates](#assets-restrictions-custom-component-templates)
* [Creating a custom ingest pipeline](#assets-restrictions-custom-ingest-pipeline)
* [Cloning the index template of an integration package](#assets-restrictions-cloning-index-template)


## Using integration assets with standalone {{agent}} [assets-restrictions-standalone]

When you use standalone {{agent}} with integrations, the integration assets added to the {{agent}} policy must be installed on the destination {{es}} cluster.

* If {{kib}} is available, the integration assets can be [installed through {{fleet}}](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md).
* If {{kib}} is not available (for instance, if you have a remote cluster without a {{kib}} instance), then the integration assets need to be installed manually.


## Using integration assets without {{agent}} [assets-restrictions-without-agent]

{{fleet}} integration assets are meant to work only with {{agent}}.

The {{fleet}} integration assets are not intended to work when sending arbitrary logs or metrics collected with other products such as {{filebeat}}, {{metricbeat}} or {{ls}}.


## Using {{fleet}} and {{agent}} integration assets in custom integrations [assets-restrictions-custom-integrations]

While it’s possible to include {{fleet}} and {{agent}} integration assets in a custom integration, this is neither recommended nor supported. Assets from another integration should not be referenced directly from a custom integration.

As an example scenario, one may want to ingest Redis logs from Kafka. This can be done using the [Redis integration](integration-docs://docs/reference/redis-intro.md), but only certain files and paths are allowed. It’s technically possible to use the [Custom Kafka Logs integration](integration-docs://docs/reference/kafka_log.md) with a custom ingest pipeline, referencing the ingest pipeline of the Redis integration to ingest logs into the index templates of the Custom Kafka Logs integration data streams.

However, referencing assets of an integration from another custom integration is neither recommended nor supported. A configuration as described above can break when the integration is upgraded, which can happen automatically.
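
To illustrate why such configurations are fragile, the cross-integration reference described above typically amounts to a custom ingest pipeline that delegates to another package’s managed pipeline through a `pipeline` processor. The sketch below uses the YAML form in which integration packages define pipelines; the managed pipeline name `logs-redis.log-1.0.0` is a hypothetical example, and because managed pipeline names embed the package version, the reference can silently break when the Redis package is upgraded:

```yaml
# Hypothetical custom pipeline for a Custom Kafka Logs data stream.
# Not recommended: it hard-codes a pipeline owned by another integration.
description: Delegate Redis log parsing to the Redis integration's pipeline
processors:
  - pipeline:
      # The managed pipeline name embeds the package version (hypothetical
      # here), so this reference breaks when the Redis package is upgraded.
      name: logs-redis.log-1.0.0
```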
+
+## Copying {{fleet}} and {{agent}} integration assets [assets-restrictions-copying]

As an alternative to referencing assets from another integration from within a custom integration, assets such as index templates and ingest pipelines can be copied so that they become standalone.

This way, because the assets are not managed by another integration, there is less risk of a configuration breaking or of an integration asset being deleted when the other integration is upgraded.

Note, however, that creating standalone integration assets based on {{fleet}} and {{agent}} integrations is considered a custom configuration that is neither tested nor supported. Whenever possible, it’s recommended to use standard integrations.


## Editing assets managed by {{fleet}} [assets-restrictions-editing-assets]

{{fleet}}-managed integration assets should not be edited. Examples of these assets include an integration index template, the `@package` component templates, and ingest pipelines that are bundled with integrations. Any changes made to these assets will be overwritten when the integration is upgraded.


## Creating custom component templates [assets-restrictions-custom-component-templates]

While creating a `@custom` component template for a package integration is supported, it involves risks which can prevent data from being ingested correctly. This practice can lead to broken indexing, data loss, and breaking of integration package upgrades.

For example:

* If the `@package` component template of an integration is changed from a "normal" data stream to `TSDB` or `LogsDB`, some of the custom settings or mappings introduced may not be compatible with these indexing modes.
* If the type of an ECS field is overridden from, for example, `keyword` to `text`, aggregations based on that field may be prevented for built-in dashboards.

A similar caution against custom index mappings is noted in [Edit the {{es}} index template](/reference/ingestion-tools/fleet/data-streams.md#data-streams-index-templates-edit).


## Creating a custom ingest pipeline [assets-restrictions-custom-ingest-pipeline]

If you create a custom ingest pipeline (as documented in the [Transform data with custom ingest pipelines](/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md) tutorial), Elastic is not responsible for ensuring that it indexes and behaves as expected. Creating a custom pipeline involves custom processing of the incoming data, which should be done with caution and tested carefully.

Refer to [Ingest pipelines](/reference/ingestion-tools/fleet/data-streams.md#data-streams-pipelines) to learn more.


## Cloning the index template of an integration package [assets-restrictions-cloning-index-template]

When you clone the index template of an integration package, this involves risk, as any changes made to the original index template when it is upgraded will not be propagated to the cloned version. That is, the structure of the new index template is effectively frozen at the moment that it is cloned. Cloning an index template of an integration package can therefore lead to broken indexing, data loss, and breaking of integration package upgrades.

Additionally, cloning index templates to add or inject additional component templates cannot be tested by Elastic, so we cannot guarantee that the template will work in future releases.
+
+If you want to change the ILM policy, the number of shards, or other settings for the data streams of one or more integrations, but the changes do not need to be specific to a given namespace, it’s highly recommended to use the `package@custom` component templates, as described in [Scenario 1](/reference/ingestion-tools/fleet/data-streams-scenario1.md) and [Scenario 2](/reference/ingestion-tools/fleet/data-streams-scenario2.md) of the Customize data retention policies tutorial, so as to avoid the problems mentioned above.

If you want to change these settings for the data streams in one or more integrations and the changes **need to be namespace specific**, then you can do so following the steps in [Scenario 3](/reference/ingestion-tools/fleet/data-streams-scenario3.md) of the Customize data retention policies tutorial, but be aware of the restrictions mentioned above. diff --git a/reference/ingestion-tools/fleet/kafka-output-settings.md b/reference/ingestion-tools/fleet/kafka-output-settings.md new file mode 100644 index 0000000000..41d4d446b4 --- /dev/null +++ b/reference/ingestion-tools/fleet/kafka-output-settings.md @@ -0,0 +1,138 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/kafka-output-settings.html +---

# Kafka output settings [kafka-output-settings]

Specify these settings to send data over a secure connection to Kafka. In the {{fleet}} [Output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings), make sure that the Kafka output type is selected.

::::{note}
If you plan to use {{ls}} to modify {{agent}} output data before it’s sent to Kafka, refer to our [guidance](#kafka-output-settings-ls-warning) for doing so, later on this page.
::::



### General settings [_general_settings]

| | |
| --- | --- |
| $$$kafka-output-version$$$
**Kafka version**
| The Kafka protocol version that {{agent}} will request when connecting. Defaults to `1.0.0`. Currently, Kafka versions from `0.8.2.0` to `2.6.0` are supported; however, the latest Kafka version (`3.x.x`) is expected to be compatible when version `2.6.0` is selected. When using Kafka 4.0 and newer, the version must be set to at least `2.1.0`.
| +| $$$kafka-output-hosts$$$
**Hosts**
| The addresses your {{agent}}s will use to connect to one or more Kafka brokers. Use the format `host:port`, without a protocol prefix such as `http://`. Click **Add row** to specify additional addresses.

**Examples:**

* `localhost:9092`
* `mykafkahost:9092`

Refer to the [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) documentation for default ports and other configuration details.
| + + +### Authentication settings [_authentication_settings] + +Select the mechanism that {{agent}} uses to authenticate with Kafka. + +| | | +| --- | --- | +| $$$kafka-output-authentication-none$$$
**None**
| No authentication is used between {{agent}} and Kafka. This is the default option. In production, it’s recommended to have an authentication method selected.

Plaintext
: Set this option for traffic between {{agent}} and Kafka to be sent as plaintext, without any transport layer security.

This is the default option when no authentication is set.


Encryption
: Set this option for traffic between {{agent}} and Kafka to use transport layer security.

When **Encryption** is selected, the **Server SSL certificate authorities** and **Verification mode** options become available.

| +| $$$kafka-output-authentication-basic$$$
**Username / Password**
| Connect to Kafka with a username and password.

Provide your username and password, and select a SASL (Simple Authentication and Security Layer) mechanism for your login credentials.

When SCRAM is enabled, {{agent}} uses the [SCRAM](https://en.wikipedia.org/wiki/Salted_Challenge_Response_Authentication_Mechanism) mechanism to authenticate the user credential. SCRAM is based on the IETF RFC 5802 standard, which describes a challenge-response mechanism for authenticating users.

* Plain - SCRAM is not used to authenticate
* SCRAM-SHA-256 - uses the SHA-256 hashing function
* SCRAM-SHA-512 - uses the SHA-512 hashing function

To prevent unauthorized access, your Kafka password is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the password as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher.

Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://docs/reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.
| +| $$$kafka-output-authentication-ssl$$$
**SSL**
| Authenticate using the Secure Sockets Layer (SSL) protocol. Provide the following details for your SSL certificate:

Client SSL certificate
: The certificate generated for the client. Copy and paste in the full contents of the certificate. This is the certificate that all the agents will use to connect to Kafka.

In cases where each client has a unique certificate, the local path to that certificate can be placed here. The agents will pick the certificate in that location when establishing a connection to Kafka.


Client SSL certificate key
: The private key generated for the client. This must be a PKCS 8 key. Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to Kafka.

In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. The agents will pick the certificate key in that location when establishing a connection to Kafka.

To prevent unauthorized access, the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher.

Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://docs/reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.

| +| **Server SSL certificate authorities**
| The CA certificate to use to connect to Kafka. This is the CA used to generate the certificate and key for Kafka. Copy and paste in the full contents of the CA certificate.

This setting is optional. It is not available when the `None` and `Plaintext` authentication options are selected.

Click **Add row** to specify additional certificate authorities.
| +| **Verification mode**
| Controls the verification of server certificates. Valid values are:

`Full`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate.

`None`
: Performs *no verification* of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged.

`Strict`
: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error.

`Certificate`
: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.

The default value is `Full`. This setting is not available when the `None` and `Plaintext` authentication options are selected.
| + + +### Partitioning settings [_partitioning_settings] + +The number of partitions created is set automatically by the Kafka broker based on the list of topics. Records are then published to partitions either randomly, in round-robin order, or according to a calculated hash. + +| | | +| --- | --- | +| $$$kafka-output-partitioning-random$$$
**Random**
| Publish records to Kafka output broker event partitions randomly. Specify the number of events to be published to the same partition before the partitioner selects a new partition.
| +| $$$kafka-output-partitioning-roundrobin$$$
**Round robin**
| Publish records to Kafka output broker event partitions in a round-robin fashion. Specify the number of events to be published to the same partition before the partitioner selects a new partition.
| +| $$$kafka-output-partitioning-hash$$$
**Hash**
| Publish records to Kafka output broker event partitions based on a hash computed from the specified list of fields. If a field is not specified, the Kafka event key value is used.
| + + +### Topics settings [_topics_settings] + +Use this option to set the Kafka topic for each {{agent}} event. + +| | | +| --- | --- | +| $$$kafka-output-topics-default$$$
**Default topic**
| Set a default topic to use for events sent by {{agent}} to the Kafka output.

You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://docs/reference/index.md) field. Available fields include:

* `data_stream.type`
* `data_stream.dataset`
* `data_stream.namespace`
* `@timestamp`
* `event.dataset`

You can also set a custom field. This is useful if you’re using the [`add_fields` processor](/reference/ingestion-tools/fleet/add_fields-processor.md) as part of your {{agent}} input. Otherwise, setting a custom field is not recommended.
| + + +### Header settings [_header_settings] + +A header is a key-value pair, and multiple headers can be included with the same key. Only string values are supported. These headers will be included in each produced Kafka message. + +| | | +| --- | --- | +| $$$kafka-output-headers-key$$$
**Key**
| The key to set in the Kafka header.
| +| $$$kafka-output-headers-value$$$
**Value**
| The value to set in the Kafka header.

Click **Add header** to configure additional headers to be included in each Kafka message.
| +| $$$kafka-output-headers-clientid$$$
**Client ID**
| The configurable ClientID used for logging, debugging, and auditing purposes. The default is `Elastic`. The Client ID is part of the protocol to identify where the messages are coming from.
| + + +### Compression settings [_compression_settings] + +You can enable compression to reduce the volume of Kafka output. + +| | | +| --- | --- | +| $$$kafka-output-compression-codec$$$
**Codec**
| Select a compression codec to use. Supported codecs are `snappy`, `lz4` and `gzip`.
| +| $$$kafka-output-compression-level$$$
**Level**
| For the `gzip` codec you can choose a compression level. The level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces the network usage but increases the CPU usage. The default value is 4.
| + + +### Broker settings [_broker_settings] + +Configure timeout and buffer size values for the Kafka brokers. + +| | | +| --- | --- | +| $$$kafka-output-broker-timeout$$$
**Broker timeout**
| The maximum length of time a Kafka broker waits for the required number of ACKs before timing out (see the `ACK reliability` setting described below). The default is 30 seconds.
| +| $$$kafka-output-broker-reachability-timeout$$$
**Broker reachability timeout**
| The maximum length of time that an {{agent}} waits for a response from a Kafka broker before timing out. The default is 30 seconds.
| +| $$$kafka-output-broker-ack-reliability$$$
**ACK reliability**
| The ACK reliability level required from the broker. Options are:

* Wait for local commit
* Wait for all replicas to commit
* Do not wait

The default is `Wait for local commit`.

Note that if ACK reliability is set to `Do not wait`, no ACKs are returned by Kafka. Messages might be lost silently in the event of an error.
| + + +### Other settings [_other_settings] + +| | | +| --- | --- | +| $$$kafka-output-other-key$$$
**Key**
| An optional formatted string specifying the Kafka event key. If configured, the event key can be extracted from the event using a format string.

See the [Kafka documentation](https://kafka.apache.org/intro#intro_topics) for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster.
| +| $$$kafka-output-other-proxy$$$
**Proxy**
| Select a proxy URL for {{agent}} to connect to Kafka. To learn about proxy configuration, refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md).
| +| $$$kafka-output-advanced-yaml-setting$$$
**Advanced YAML configuration**
| YAML settings that will be added to the Kafka output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation.

See [Advanced YAML configuration](#kafka-output-settings-yaml-config) for descriptions of the available settings.
| +| $$$kafka-output-agent-integrations$$$
**Make this output the default for agent integrations**
| When this setting is on, {{agent}}s use this output to send data if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).
| +| $$$kafka-output-agent-monitoring$$$
**Make this output the default for agent monitoring**
| When this setting is on, {{agent}}s use this output to send [agent monitoring data](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).
| + +## Advanced YAML configuration [kafka-output-settings-yaml-config] + +| Setting | Description | +| --- | --- | +| $$$output-kafka-fleet-settings-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to Kafka after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$output-kafka-fleet-settings-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to Kafka after a network error.

**Default:** `60s`
| +| $$$output-kafka-fleet-settings-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single Kafka request.

**Default:** `2048`
| +| $$$output-kafka-fleet-settings-flush_frequency-setting$$$
`bulk_flush_frequency`
| (int) The duration to wait before sending a bulk Kafka request. `0` means no delay.

**Default:** `0`
| +| $$$output-kafka-fleet-settings-channel_buffer_size-setting$$$
`channel_buffer_size`
| (int) The number of messages buffered in the output pipeline, per Kafka broker.

**Default:** `256`
| +| $$$output-kafka-fleet-settings-client_id-setting$$$
`client_id`
| (string) The configurable ClientID used for logging, debugging, and auditing purposes.

**Default:** `Elastic Agent`
| +| $$$output-kafka-fleet-settings-codec-setting$$$
`codec`
| Output codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used.

**`json.pretty`**: If `pretty` is set to true, events will be nicely formatted. The default is false.

**`json.escape_html`**: If `escape_html` is set to true, HTML symbols will be escaped in strings. The default is false.

Example configuration that uses the `json` codec with pretty printing enabled to write events to the console:

```yaml
output.console:
codec.json:
pretty: true
escape_html: false
```

**`format.string`**: Configurable format string used to create a custom formatted message.

Example configuration that uses the `format` codec to print the event’s timestamp and message field to the console:

```yaml
output.console:
codec.format:
string: '%{[@timestamp]} %{[message]}'
```

**Default:** `json`
| +| $$$output-kafka-fleet-settings-keep_alive-setting$$$
`keep_alive`
| (string) The keep-alive period for an active network connection. If `0s`, keep-alives are disabled.

**Default:** `0s`
| +| $$$output-kafka-fleet-settings-max_message_bytes-setting$$$
`max_message_bytes`
| (int) The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. This value should be equal to or less than the broker’s `message.max.bytes`.

**Default:** `1000000` (bytes)
| +| $$$output-kafka-fleet-settings-metadata-setting$$$
`metadata`
| Kafka metadata update settings. The metadata contains information about brokers, topics, partition, and active leaders to use for publishing.

**`refresh_frequency`**
: Metadata refresh interval. Defaults to 10 minutes.

**`full`**
: Strategy to use when fetching metadata. When this option is `true`, the client will maintain a full set of metadata for all the available topics. When set to `false` it will only refresh the metadata for the configured topics. The default is false.

**`retry.max`**
: Total number of metadata update retries. The default is 3.

**`retry.backoff`**
: Waiting time between retries. The default is 250ms.
| +| $$$output-kafka-fleet-settings-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-kafka-fleet-settings-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-kafka-fleet-settings-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.

**Default:** `10s`
|

## Kafka output and using {{ls}} to index data to {{es}} [kafka-output-settings-ls-warning]

If you are considering using {{ls}} to ship the data from `kafka` to {{es}}, please be aware that the structure of the documents sent from {{agent}} to `kafka` must not be modified by {{ls}}. We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec in order to make sure the input doesn’t edit the fields and their contents.

The data streams set up by the integrations expect to receive events having the same structure and field names as they were sent directly from an {{agent}}.

Refer to the [{{ls}} output for {{agent}}](/reference/ingestion-tools/fleet/ls-output-settings.md) documentation for more details.

```yaml
inputs {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
```
 diff --git a/reference/ingestion-tools/fleet/kafka-output.md b/reference/ingestion-tools/fleet/kafka-output.md new file mode 100644 index 0000000000..0b5deb93a6 --- /dev/null +++ b/reference/ingestion-tools/fleet/kafka-output.md @@ -0,0 +1,188 @@ +--- +navigation_title: "Kafka" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/kafka-output.html +---

# Kafka output [kafka-output]


The Kafka output sends events to Apache Kafka.

**Compatibility:** This output can connect to Kafka version 0.8.2.0 and later. Older versions might work as well, but are not supported.

This example configures a Kafka output called `kafka-output` in the {{agent}} `elastic-agent.yml` file, with settings as described below:

```yaml
outputs:
  kafka-output:
    type: kafka
    hosts:
      - 'kafka1:9092'
      - 'kafka2:9092'
      - 'kafka3:9092'
    client_id: Elastic
    version: 1.0.0
    compression: gzip
    compression_level: 4
    username:
    password:
    sasl:
      mechanism: SCRAM-SHA-256
    partition:
      round_robin:
        group_events: 1
    topic: 'elastic-agent'
    headers: []
    timeout: 30
    broker_timeout: 30
    required_acks: 1
    ssl:
      verification_mode: full
```

## Kafka output and using {{ls}} to index data to {{es}} [_kafka_output_and_using_ls_to_index_data_to_es]

If you are considering using {{ls}} to ship the data from `kafka` to {{es}}, please be aware that the structure of the documents sent from {{agent}} to `kafka` must not be modified by {{ls}}. We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec in order to make sure the input doesn’t edit the fields and their contents.

The data streams set up by the integrations expect to receive events having the same structure and field names as they were sent directly from an {{agent}}.

Refer to the [{{ls}} output for {{agent}}](/reference/ingestion-tools/fleet/logstash-output.md) documentation for more details.

```yaml
inputs {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
```


## Kafka output configuration settings [_kafka_output_configuration_settings]

The `kafka` output supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run {{agent}} with minimal configuration.
+
+* [Commonly used settings](#output-kafka-commonly-used-settings)
* [Authentication settings](#output-kafka-authentication-settings)
* [Memory queue settings](#output-kafka-memory-queue-settings)
* [Topics settings](#output-kafka-topics-settings)
* [Partition settings](#output-kafka-partition-settings)
* [Header settings](#output-kafka-header-settings)
* [Other configuration settings](#output-kafka-configuration-settings)

## Commonly used settings [output-kafka-commonly-used-settings]

| Setting | Description |
| --- | --- |
| $$$output-kafka-enabled-setting$$$
`enabled`
| (boolean) Enables or disables the output. If set to `false`, the output is disabled.
| +| $$$kafka-hosts-setting$$$
`hosts`
| The addresses your {{agent}}s will use to connect to one or more Kafka brokers.

Following is an example `hosts` setting with three hosts defined:

```yaml
hosts:
- 'localhost:9092'
- 'mykafkahost01:9092'
- 'mykafkahost02:9092'
```
| +| $$$kafka-version-setting$$$
`version`
| Kafka protocol version that {{agent}} will request when connecting. Defaults to 1.0.0.

The protocol version controls the Kafka client features available to {{agent}}; it does not prevent {{agent}} from connecting to Kafka versions newer than the protocol version.
| + + +## Authentication settings [output-kafka-authentication-settings] + +| Setting | Description | +| --- | --- | +| $$$kafka-username-setting$$$
`username`
| The username for connecting to Kafka. If username is configured, the password must be configured as well.
| +| $$$kafka-password-setting$$$
`password`
| The password for connecting to Kafka.
| +| $$$kafka-sasl.mechanism-setting$$$
`sasl.mechanism`
| The SASL mechanism to use when connecting to Kafka. It can be one of:

* `PLAIN` for SASL/PLAIN.
* `SCRAM-SHA-256` for SCRAM-SHA-256.
* `SCRAM-SHA-512` for SCRAM-SHA-512.

If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` are provided. Otherwise, SASL authentication is disabled.
| +| $$$kafka-ssl-setting$$$
`ssl`
| When sending data to a secured cluster through the `kafka` output, {{agent}} can use SSL/TLS. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options).
|


## Memory queue settings [output-kafka-memory-queue-settings]

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`.

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity.

In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example 5s.

This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested batch:

```yaml
  queue.mem.events: 4096
  queue.mem.flush.min_events: 512
  queue.mem.flush.timeout: 5s
```

| Setting | Description |
| --- | --- |
| $$$output-kafka-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-kafka-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-kafka-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.

**Default:** `10s`
| + + +## Topics settings [output-kafka-topics-settings] + +Use these options to set the Kafka topic for each {{agent}} event. + +| Setting | Description | +| --- | --- | +| $$$kafka-topic-setting$$$
`topic`
| The default Kafka topic used for produced events.
| + + +## Partition settings [output-kafka-partition-settings] + +The number of partitions created is set automatically by the Kafka broker based on the list of topics. Records are then published to partitions either randomly, in round-robin order, or according to a calculated hash. + +In the following example, after each event is published to a partition, the partitioner selects the next partition in round-robin fashion. + +```yaml + partition: + round_robin: + group_events: 1 +``` + +| Setting | Description | +| --- | --- | +| $$$kafka-random.group-events-setting$$$
`random.group_events`
| Sets the number of events to be published to the same partition before the partitioner selects a new partition at random. The default value is `1`, meaning a new partition is picked randomly after each event.
| +| $$$kafka-round_robin.group_events-setting$$$
`round_robin.group_events`
| Sets the number of events to be published to the same partition before the partitioner selects the next partition. The default value is `1`, meaning the next partition is selected after each event.
| +| $$$kafka-hash.hash-setting$$$
`hash.hash`
| The list of fields used to compute the partitioning hash value. If no field is configured, the event’s key value is used.
| +| $$$kafka-hash.random-setting$$$
`hash.random`
| Randomly distribute events if no hash or key value can be computed.
| + + +## Header settings [output-kafka-header-settings] + +A header is a key-value pair, and multiple headers can be included with the same key. Only string values are supported. These headers will be included in each produced Kafka message. + +| Setting | Description | +| --- | --- | +| $$$kafka-key-setting$$$
`key`
| The key to set in the Kafka header.
| +| $$$kafka-value-setting$$$
`value`
| The value to set in the Kafka header.
| +| $$$kafka-client_id-setting$$$
`client_id`
| The configurable ClientID used for logging, debugging, and auditing purposes. The default is `Elastic`. The Client ID is part of the protocol to identify where the messages are coming from.
| + + +## Other configuration settings [output-kafka-configuration-settings] + +You can specify these various other options in the `kafka-output` section of the agent configuration file. + +| Setting | Description | +| --- | --- | +| $$$output-kafka-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to Kafka after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$kafka-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to Kafka after a network error.

**Default:** `60s`
| +| $$$kafka-broker_timeout-setting$$$
`broker_timeout`
| The maximum length of time a Kafka broker waits for the required number of ACKs before timing out (see the `required_acks` setting described below).

**Default:** `30` (seconds)
| +| $$$kafka-bulk_flush_frequency-setting$$$
`bulk_flush_frequency`
| (int) The duration to wait before sending bulk Kafka requests. `0` means no delay.

**Default:** `0`
| +| $$$kafka-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single Kafka request.

**Default:** `2048`
| +| $$$kafka-channel_buffer_size-setting$$$
`channel_buffer_size`
| (int) The number of messages buffered in the output pipeline per Kafka broker.

**Default:** `256`
| +| $$$kafka-codec-setting$$$
`codec`
| Output codec configuration. You can specify either the `json` or `format` codec. By default the `json` codec is used.

**`json.pretty`**: If `pretty` is set to `true`, events will be nicely formatted. The default is `false`.

**`json.escape_html`**: If `escape_html` is set to `true`, HTML symbols will be escaped in strings. The default is `false`.

Example configuration that uses the `json` codec with pretty printing enabled to write events to the console:

```yaml
output.console:
  codec.json:
    pretty: true
    escape_html: false
```

**`format.string`**: Configurable format string used to create a custom formatted message.

Example configuration that uses the `format` codec to print the event’s timestamp and message field to the console:

```yaml
output.console:
  codec.format:
    string: '%{[@timestamp]} %{[message]}'
```
| +| $$$kafka-compression-setting$$$
`compression`
| Select a compression codec to use. Supported codecs are `snappy`, `lz4`, and `gzip`.
| +| $$$kafka-compression_level-setting$$$
`compression_level`
| For the `gzip` codec you can choose a compression level. The level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces the network usage but increases the CPU usage.

**Default:** `4`.
| +| $$$kafka-keep_alive-setting$$$
`keep_alive`
| (string) The keep-alive period for an active network connection. If `0s`, keep-alives are disabled.

**Default:** `0s`
| +| $$$kafka-max_message_bytes-setting$$$
`max_message_bytes`
| (int) The maximum permitted size of JSON-encoded messages. Larger messages are dropped. This value should be equal to or less than the broker’s `message.max.bytes`.

**Default:** `1000000` (bytes)
| +| $$$kafka-metadata-setting$$$
`metadata`
| Kafka metadata update settings. The metadata contains information about brokers, topics, partitions, and active leaders to use for publishing.

**`refresh_frequency`**
: Metadata refresh interval. Defaults to 10 minutes.

**`full`**
: Strategy to use when fetching metadata. When this option is `true`, the client maintains a full set of metadata for all the available topics. When set to `false`, it refreshes the metadata only for the configured topics. The default is `false`.

**`retry.max`**
: Total number of metadata update retries. The default is 3.

**`retry.backoff`**
: Waiting time between retries. The default is 250ms.
| +| $$$kafka-required_acks-setting$$$
`required_acks`
| The ACK reliability level required from the broker. `0` = no response, `1` = wait for local commit, `-1` = wait for all replicas to commit. The default is `1`.

Note: If set to `0`, no ACKs are returned by Kafka. Messages might be lost silently on error.

**Default:** `1` (wait for local commit)
| +| $$$kafka-timeout-setting$$$
`timeout`
| The number of seconds to wait for responses from the Kafka brokers before timing out.

**Default:** `30` (seconds)
| + + diff --git a/reference/ingestion-tools/fleet/kubernetes-provider.md b/reference/ingestion-tools/fleet/kubernetes-provider.md new file mode 100644 index 0000000000..3908062a0f --- /dev/null +++ b/reference/ingestion-tools/fleet/kubernetes-provider.md @@ -0,0 +1,229 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/kubernetes-provider.html +--- + +# Kubernetes Provider [kubernetes-provider] + +Provides inventory information from Kubernetes. + + +## Provider configuration [_provider_configuration_2] + +```yaml +providers.kubernetes: + node: ${NODE_NAME} + scope: node + #kube_config: /Users/elastic-agent/.kube/config + #sync_period: 600s + #cleanup_timeout: 60s + resources: + pod: + enabled: true +``` + +`node` +: (Optional) Specify the node to scope {{agent}} to in case it cannot be accurately detected by the default discovery approach: + + 1. If {{agent}} is deployed in Kubernetes cluster as Pod, use hostname of pod as the pod name to query pod metadata for node name. + 2. If step 1 fails or {{agent}} is deployed outside of the Kubernetes cluster, use machine-id to match against Kubernetes nodes for node name. + 3. If node cannot be discovered with step 1 or 2 fall back to `NODE_NAME` environment variable as default value. In case it is not set return error. + + +`cleanup_timeout` +: (Optional) Specify the time of inactivity before stopping the running configuration for a container. This is `60s` by default. + +`sync_period` +: (Optional) Specify the timeout for listing historical resources. + +`kube_config` +: (Optional) Use the given config file as configuration for Kubernetes client. If `kube_config` is not set, the `KUBECONFIG` environment variable will be checked and will fall back to InCluster if not present. InCluster mode means that if {{agent}} runs as a Pod it will try to initialize the client using the token and certificate that are mounted in the Pod by default: + + * `/var/run/secrets/kubernetes.io/serviceaccount/token` + * `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` + + +as well as using the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` to reach the API Server. `kube_client_options`:: (Optional) Additional options can be configured for Kubernetes client. Currently client QPS and burst are supported, if not set Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) will be used. Example: + +```yaml + kube_client_options: + qps: 5 + burst: 10 +``` + +`scope` +: (Optional) Specify the level for autodiscover. `scope` can either take `node` or `cluster` as values. `node` scope allows discovery of resources in the specified node. `cluster` scope allows cluster wide discovery. Only `pod` and `node` resources can be discovered at node scope. + +`resources` +: (Optional) Specify the resources that want to start the autodiscovery for. One of `pod`, `node`, `service`. By default `node` and `pod` are being enabled. `service` resource requires the `scope` to be set at `cluster`. + +`namespace` +: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default. + +`include_annotations` +: (Optional) If added to the provider config, then the list of annotations present in the config are added to the event. + +`include_labels` +: (Optional) If added to the provider config, then the list of labels present in the config will be added to the event. 
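
Putting several of these options together, the following is a sketch of a tuned `kafka` output; the broker addresses and values are illustrative only:

```yaml
outputs:
  default:
    type: kafka
    hosts: ["kafka1:9092", "kafka2:9092"]   # placeholder broker addresses
    compression: gzip
    compression_level: 4
    required_acks: 1            # wait for the local commit only
    max_message_bytes: 1000000  # keep at or below the broker's message.max.bytes
```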
+ +`exclude_labels` +: (Optional) If added to the provider config, then the list of labels present in the config will be excluded from the event. + +`labels.dedot` +: (Optional) If set to be `true` in the provider config, then `.` in labels will be replaced with `_`. By default it is `true`. + +`annotations.dedot` +: (Optional) If set to be `true` in the provider config, then `.` in annotations will be replaced with `_`. By default it is `true`. + +`add_resource_metadata` +: (Optional) Specify filters and configration for the extra metadata, that will be added to the event. Configuration parameters: + + * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour `include_labels`, `exclude_labels` and `include_annotations` can be defined. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`. + * `deployment`: If resource is `pod` and it is created from a `deployment`, by default the deployment name isn’t added, this can be enabled by setting `deployment: true`. + * `cronjob`: If resource is `pod` and it is created from a `cronjob`, by default the cronjob name isn’t added, this can be enabled by setting `cronjob: true`. Example: + + +```yaml + add_resource_metadata: + namespace: + #use_regex_include: false + include_labels: ["namespacelabel1"] + #use_regex_exclude: false + #exclude_labels: ["namespacelabel2"] + node: + #use_regex_include: false + include_labels: ["nodelabel2"] + include_annotations: ["nodeannotation1"] + #use_regex_exclude: false + #exclude_labels: ["nodelabel3"] + #deployment: false + #cronjob: false +``` + + +## Provider for Pod resources [_provider_for_pod_resources] + +The available keys are: + +| Key | Type | Description | +| --- | --- | --- | +| `kubernetes.namespace` | `string` | Namespace of the Pod | +| `kubernetes.namespace_uid` | `string` | UID of the Namespace of the Pod | +| `kubernetes.namespace_labels.*` | `object` | Labels of the Namespace of the Pod | +| `kubernetes.namespace_annotations.*` | `object` | Annotations of the Namespace of the Pod | +| `kubernetes.pod.name` | `string` | Name of the Pod | +| `kubernetes.pod.uid` | `string` | UID of the Pod | +| `kubernetes.pod.ip` | `string` | IP of the Pod | +| `kubernetes.labels.*` | `object` | Object of labels of the Pod | +| `kubernetes.annotations.*` | `object` | Object of annotations of the Pod | +| `kubernetes.container.name` | `string` | Name of the container | +| `kubernetes.container.runtime` | `string` | Runtime of the container | +| `kubernetes.container.id` | `string` | ID of the container | +| `kubernetes.container.image` | `string` | Image of the container | +| `kubernetes.container.port` | `string` | Port of the container (if defined) | +| `kubernetes.container.port_name` | `string` | Port’s name for the container (if defined) | +| `kubernetes.node.name` | `string` | Name of the Node | +| `kubernetes.node.uid` | `string` | UID of the Node | +| `kubernetes.node.hostname` | `string` | Hostname of the Node | +| `kubernetes.node.labels.*` | `string` | Labels of the 
Node | +| `kubernetes.node.annotations.*` | `string` | Annotations of the Node | +| `kubernetes.deployment.name.*` | `string` | Deployment name of the Pod (if exists) | +| `kubernetes.statefulset.name.*` | `string` | StatefulSet name of the Pod (if exists) | +| `kubernetes.replicaset.name.*` | `string` | ReplicaSet name of the Pod (if exists) | + +These are the fields available within config templating. The `kubernetes.*` fields will be available on each emitted event. + +::::{note} +`kubernetes.labels.*` and `kubernetes.annotations.*` used in config templating are not dedoted and should not be confused with labels and annotations added in the final Elasticsearch document and which are dedoted by default. For examples refer to [Conditions based autodiscover](/reference/ingestion-tools/fleet/conditions-based-autodiscover.md). +:::: + + +Note that not all of these fields are available by default and special configuration options are needed in order to include them. + +For example, if the Kubernetes provider provides the following inventory: + +```json +[ + { + "id": "1", + "mapping:": {"namespace": "kube-system", "pod": {"name": "kube-controllermanger"}}, + "processors": {"add_fields": {"kuberentes.namespace": "kube-system", "kubernetes.pod": {"name": "kube-controllermanger"}} + { + "id": "2", + "mapping:": {"namespace": "kube-system", "pod": {"name": "kube-scheduler"}}, + "processors": {"add_fields": {"kubernetes.namespace": "kube-system", "kubernetes.pod": {"name": "kube-scheduler"}} + } +] +``` + +{{agent}} automatically prefixes the result with `kubernetes`: + +```json +[ + {"kubernetes": {"id": "1", "namespace": {"name": "kube-system"}, "pod": {"name": "kube-controllermanger"}}, + {"kubernetes": {"id": "2", "namespace": {"name": "kube-system"}, "pod": {"name": "kube-scheduler"}}, +] +``` + +In addition, the Kubernetes metadata are being added to each event by default. + + +## Provider for Node resources [_provider_for_node_resources] + +```yaml +providers.kubernetes: + node: ${NODE_NAME} + scope: node + #kube_config: /Users/elastic-agent/.kube/config + #sync_period: 600s + #cleanup_timeout: 60s + resources: + node: + enabled: true +``` + +This resource is enabled by default but in this example we define it explicitly for clarity. + +The available keys are: + +| Key | Type | Description | +| --- | --- | --- | +| `kubernetes.labels.*` | `object` | Object of labels of the Node | +| `kubernetes.annotations.*` | `object` | Object of labels of the Node | +| `kubernetes.node.name` | `string` | Name of the Node | +| `kubernetes.node.uid` | `string` | UID of the Node | +| `kubernetes.node.hostname` | `string` | Hostname of the Node | + + +## Provider for Service resources [_provider_for_service_resources] + +```yaml +providers.kubernetes: + node: ${NODE_NAME} + scope: cluster + #kube_config: /Users/elastic-agent/.kube/config + #sync_period: 600s + #cleanup_timeout: 60s + resources: + service: + enabled: true +``` + +Note that this resource is only available with `scope: cluster` setting and `node` cannot be used as scope. 
+ +The available keys are: + +| Key | Type | Description | +| --- | --- | --- | +| `kubernetes.namespace` | `string` | Namespace of the Service | +| `kubernetes.namespace_uid` | `string` | UID of the Namespace of the Service | +| `kubernetes.namespace_labels.*` | `object` | Labels of the Namespace of the Service | +| `kubernetes.namespace_annotations.*` | `object` | Annotations of the Namespace of the Service | +| `kubernetes.labels.*` | `object` | Object of labels of the Service | +| `kubernetes.annotations.*` | `object` | Object of labels of the Service | +| `kubernetes.service.name` | `string` | Name of the Service | +| `kubernetes.service.uid` | `string` | UID of the Service | +| `kubernetes.selectors.*` | `string` | Kubernetes selectors | + +Refer to [kubernetes autodiscovery with Elastic Agent](/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md) for more information about shaping dynamic inputs for autodiscovery. + diff --git a/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md b/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md new file mode 100644 index 0000000000..6f1963dba9 --- /dev/null +++ b/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md @@ -0,0 +1,88 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/kubernetes_leaderelection-provider.html +--- + +# Kubernetes LeaderElection Provider [kubernetes_leaderelection-provider] + +Provides the option to enable leaderelection between a set of {{agent}}s running on Kubernetes. Only one {{agent}} at a time will be the holder of the leader lock and based on this, configurations can be enabled with the condition that the {{agent}} holds the leadership. This is useful in cases where the {{agent}} between a set of {{agent}}s collects cluster wide metrics for the Kubernetes cluster, such as the `kube-state-metrics` endpoint. + +Provider needs a `kubeconfig` file to establish a connection to Kubernetes API. It can automatically reach the API if it’s running in an InCluster environment ({{agent}} runs as Pod). + +```yaml +providers.kubernetes_leaderelection: + #enabled: true + #kube_config: /Users/elastic-agent/.kube/config + #kube_client_options: + # qps: 5 + # burst: 10 + #leader_lease: agent-k8s-leader-lock + #leader_retryperiod: 2 + #leader_leaseduration: 15 + #leader_renewdeadline: 10 +``` + +`enabled` +: (Optional) Defaults to true. To explicitly disable the LeaderElection provider, set `enabled: false`. + +`kube_config` +: (Optional) Use the given config file as configuration for the Kubernetes client. If `kube_config` is not set, `KUBECONFIG` environment variable will be checked and will fall back to InCluster if it’s not present. + +`kube_client_options` +: (Optional) Configure additional options for the Kubernetes client. Supported options are `qps` and `burst`. If not set, the Kubernetes client’s default QPS and burst settings are used. + +`leader_lease` +: (Optional) Specify the name of the leader lease. This is set to `elastic-agent-cluster-leader` by default. + +`leader_retryperiod` +: (Optional) Default value 2 (in sec). How long before {{agent}}s try to get the `leader` role. + +`leader_leaseduration` +: (Optional) Default value 15 (in sec). How long the leader {{agent}} holds the `leader` state. + +`leader_renewdeadline` +: (Optional) Default value 10 (in sec). How long leaders retry getting the `leader` role. 
+ +The available key is: + +| Key | Type | Description | +| --- | --- | --- | +| `kubernetes_leaderelection.leader` | `bool` | The value of the leadership flag. This is set to `true` when the {{agent}} is the current leader, and is set to `false` otherwise. | + + +## Understanding leader timings [_understanding_leader_timings] + +As described above, the LeaderElection configuration offers the following parameters: Lease duration (`leader_leaseduration`), Renew deadline (`leader_renewdeadline`), and Retry period (`leader_retryperiod`). Based on the config provided, each agent will trigger {{k8s}} API requests and will try to check the status of the lease. + +::::{note} +The number of leader calls to the K8s Control API is proportional to the number of {{agent}}s installed. This means that requests will come from all {{agent}}s per `leader_retryperiod`. Setting `leader_retryperiod` to a greater value than the default (2sec), means that fewer requests will be made towards the {{k8s}} Control API, but will also increase the period where collection of metrics from the leader {{agent}} might be lost. +:::: + + +The library applies [specific checks](https://github.com/kubernetes/client-go/blob/master/tools/leaderelection/leaderelection.go#L76) for the timing parameters and if those are not verified {{agent}} will exit with a `panic` error. + +In general: - Leaseduration must be greater than renewdeadline - Renewdeadline must be greater than retryperiod*JitterFactor. + +::::{note} +Constant JitterFactor=1.2 is defined in [leaderelection lib](https://pkg.go.dev/gopkg.in/kubernetes/client-go.v11/tools/leaderelection). +:::: + + + +## Enabling configurations only when on leadership [_enabling_configurations_only_when_on_leadership] + +Use conditions based on the `kubernetes_leaderelection.leader` key to leverage the leaderelection provider and enable specific inputs only when the {{agent}} holds the leadership lock. The below example enables the `state_container` metricset only when the leadership lock is acquired: + +```yaml +- data_stream: + dataset: kubernetes.state_container + type: metrics + metricsets: + - state_container + add_metadata: true + hosts: + - 'kube-state-metrics:8080' + period: 10s + condition: ${kubernetes_leaderelection.leader} == true +``` + diff --git a/reference/ingestion-tools/fleet/kubernetes_secrets-provider.md b/reference/ingestion-tools/fleet/kubernetes_secrets-provider.md new file mode 100644 index 0000000000..882e1eda9f --- /dev/null +++ b/reference/ingestion-tools/fleet/kubernetes_secrets-provider.md @@ -0,0 +1,57 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/kubernetes_secrets-provider.html +--- + +# Kubernetes Secrets Provider [kubernetes_secrets-provider] + +Provides access to the Kubernetes Secrets API. + +Use the format `${kubernetes_secrets...}` to reference a Kubernetes Secrets variable, where `default` is the namespace of the Secret, `somesecret` is the name of the Secret and `value` is the field of the Secret to access. + +To obtain the values for the secrets, a request to the API Server is made. To avoid multiple requests for the same secret and to not overwhelm the API Server, a cache to store the values is used by default. This configuration can be set by using the variables `cache_*` (see below). + +The provider needs a `kubeconfig` file to establish connection to the Kubernetes API. It can automatically reach the API if it’s run in an InCluster environment ({{agent}} runs as pod). 
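
For example, a sketch of custom timings that respect the constraints explained below (`leader_leaseduration` > `leader_renewdeadline` > `leader_retryperiod` × 1.2); the values are illustrative only:

```yaml
providers.kubernetes_leaderelection:
  leader_lease: agent-k8s-leader-lock   # placeholder lease name
  leader_retryperiod: 5       # fewer API calls than the 2s default
  leader_renewdeadline: 10    # must exceed retryperiod * 1.2
  leader_leaseduration: 20    # must exceed renewdeadline
```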
+ +```yaml +providers.kubernetes_secrets: + #kube_config: /Users/elastic-agent/.kube/config + #kube_client_options: + # qps: 5 + # burst: 10 + #cache_disable: false + #cache_refresh_interval: 60s + #cache_ttl: 1h + #cache_request_timeout: 5s +``` + +`kube_config` +: (Optional) Use the given config file as configuration for the Kubernetes client. If `kube_config` is not set, `KUBECONFIG` environment variable will be checked and will fall back to InCluster if it’s not present. + +`kube_client_options` +: (Optional) Configure additional options for the Kubernetes client. Supported options are `qps` and `burst`. If not set, the Kubernetes client’s default QPS and burst settings are used. + +`cache_disable` +: (Optional) Disables the cache for the secrets. When disabled, thus is set to `true`, code makes a request to the API Server to obtain the value. To continue using the cache, set the variable to `false`. Default is `false`. + +`cache_refresh_interval` +: (Optional) Defines the period to update all secret values kept in the cache. Defaults to `60s`. + +`cache_ttl` +: (Optional) Defines for how long a secret should be kept in the cache if not being requested. The default is `1h`. + +`cache_request_timeout` +: (Optional) Defines how long the API Server can take to provide the value for a given secret. Defaults to `5s`. + +If you run agent on Kubernetes, the proper rule in the `ClusterRole` is required to provide access to the {{agent}} pod in the Secrets API: + +```yaml +- apiGroups: [""] + resources: + - secrets + verbs: ["get"] +``` + +::::{warning} +The above rule will give permission to {{agent}} pod to access Kubernetes Secrets API. Anyone who has access to the {{agent}} pod (`kubectl exec` for example) will also have access to the Kubernetes Secrets API. This allows access to a specific secret, regardless of the namespace that it belongs to. This option should be carefully considered. +:::: diff --git a/reference/ingestion-tools/fleet/local-dynamic-provider.md b/reference/ingestion-tools/fleet/local-dynamic-provider.md new file mode 100644 index 0000000000..857227a6dc --- /dev/null +++ b/reference/ingestion-tools/fleet/local-dynamic-provider.md @@ -0,0 +1,47 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/local-dynamic-provider.html +--- + +# Local dynamic provider [local-dynamic-provider] + +Define multiple key-value pairs to generate multiple configurations. + +For example, the following {{agent}} policy defines a local dynamic provider that defines three values for `item`: + +```yaml +inputs: + - id: logfile-${local_dynamic.my_var} + type: logfile + streams: + - paths: "/var/${local_dynamic.my_var}/app.log" + +providers: + local_dynamic: + items: + - vars: + my_var: key1 + - vars: + my_var: key2 + - vars: + my_var: key3 +``` + +The configuration generated by this policy looks like: + +```yaml +inputs: + - id: logfile-key1 + type: logfile + streams: + - paths: "/var/key1/app.log" + - id: logfile-key2 + type: logfile + streams: + - paths: "/var/key2/app.log" + - id: logfile-key3 + type: logfile + streams: + - paths: "/var/key3/app.log" +``` + diff --git a/reference/ingestion-tools/fleet/local-provider.md b/reference/ingestion-tools/fleet/local-provider.md new file mode 100644 index 0000000000..a15e92d917 --- /dev/null +++ b/reference/ingestion-tools/fleet/local-provider.md @@ -0,0 +1,16 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/local-provider.html +--- + +# Local [local-provider] + +Provides custom keys to use as variables. 
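
As a sketch of the reference format in practice, an input might read a password from a Secret like this — `default`, `somesecret`, and `value` stand for the namespace, Secret name, and field, and the `redis` input is a placeholder:

```yaml
inputs:
  - id: redis
    type: redis
    password: ${kubernetes_secrets.default.somesecret.value}   # placeholder secret reference
```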
For example: + +```yaml +providers: + local: + vars: + foo: bar +``` + diff --git a/reference/ingestion-tools/fleet/logstash-output.md b/reference/ingestion-tools/fleet/logstash-output.md new file mode 100644 index 0000000000..c04d658b75 --- /dev/null +++ b/reference/ingestion-tools/fleet/logstash-output.md @@ -0,0 +1,145 @@ +--- +navigation_title: "{{ls}}" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/logstash-output.html +--- + +# {{ls}} output [logstash-output] + + +The {{ls}} output uses an internal protocol to send events directly to {{ls}} over TCP. {{ls}} provides additional parsing, transformation, and routing of data collected by {{agent}}. + +**Compatibility:** This output works with all compatible versions of {{ls}}. Refer to the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_compatibility). + +This example configures a {{ls}} output called `default` in the `elastic-agent.yml` file: + +```yaml +outputs: + default: + type: logstash + hosts: ["127.0.0.1:5044"] <1> +``` + +1. The {{ls}} server and the port (`5044`) where {{ls}} is configured to listen for incoming {{agent}} connections. + + +To receive the events in {{ls}}, you also need to create a {{ls}} configuration pipeline. The {{ls}} configuration pipeline listens for incoming {{agent}} connections, processes received events, and then sends the events to {{es}}. + +The following {{ls}} pipeline definition example configures a pipeline that listens on port `5044` for incoming {{agent}} connections and routes received events to {{es}}. + +```yaml +input { + elastic_agent { + port => 5044 + enrich => none <1> + ssl => true + ssl_certificate_authorities => [""] + ssl_certificate => "" + ssl_key => "" + ssl_verify_mode => "force_peer" + } +} + +output { + elasticsearch { + hosts => ["http://localhost:9200"] <2> + # cloud_id => "..." + data_stream => "true" + api_key => "" <3> + data_stream => true + ssl => true + # cacert => "" + } +} +``` + +1. Do not modify the events' schema. +2. The {{es}} server and the port (`9200`) where {{es}} is running. +3. The API Key used by {{ls}} to ship data to the destination data streams. + + +For more information about configuring {{ls}}, refer to [Configuring {{ls}}](logstash://docs/reference/creating-logstash-pipeline.md) and [{{agent}} input plugin](logstash://docs/reference/plugins-inputs-elastic_agent.md). + +## {{ls}} output configuration settings [_ls_output_configuration_settings] + +The `logstash` output supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run {{agent}} with minimal configuration. + +* [Commonly used settings](#output-logstash-commonly-used-settings) +* [Authentication settings](#output-logstash-authentication-settings) +* [Memory queue settings](#output-logstash-memory-queue-settings) +* [Performance tuning settings](#output-logstash-performance-tuning-settings) + + +## Commonly used settings [output-logstash-commonly-used-settings] + +| Setting | Description | +| --- | --- | +| $$$output-logstash-enabled-setting$$$
`enabled`
| (boolean) Enables or disables the output. If set to `false`, the output is disabled.
| +| $$$output-logstash-escape_html-setting$$$
`escape_html`
| (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping.

**Default:** `false`
| +| $$$output-logstash-hosts-setting$$$
`hosts`
| (list) The list of known {{ls}} servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly.

All entries in this list can contain a port number. If no port is specified, `5044` is used.
| +| $$$output-logstash-proxy_url-setting$$$
`proxy_url`
| (string) The URL of the SOCKS5 proxy to use when connecting to the {{ls}} servers. The value must be a URL with a scheme of `socks5://`. The protocol used to communicate to {{ls}} is not based on HTTP, so you cannot use a web proxy.

If the SOCKS5 proxy server requires client authentication, embed a username and password in the URL as shown in the example.

When using a proxy, hostnames are resolved on the proxy server instead of on the client. To change this behavior, set `proxy_use_local_resolver`.

```yaml
outputs:
  default:
    type: logstash
    hosts: ["remote-host:5044"]
    proxy_url: socks5://user:password@socks5-proxy:2233
```
| +| $$$output-logstash-proxy_use_local_resolver-setting$$$
`proxy_use_local_resolver`
| (boolean) Determines whether {{ls}} hostnames are resolved locally when using a proxy. If `false` and a proxy is used, name resolution occurs on the proxy server.

**Default:** `false`
| + + +## Authentication settings [output-logstash-authentication-settings] + +When sending data to a secured cluster through the `logstash` output, {{agent}} can use SSL/TLS. For a list of available settings, refer to [SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md), specifically the settings under [Table 7, Common configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#common-ssl-options) and [Table 8, Client configuration options](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md#client-ssl-options). + +::::{note} +To use SSL/TLS, you must also configure the [{{agent}} input plugin for {{ls}}](logstash://docs/reference/plugins-inputs-beats.md) to use SSL/TLS. +:::: + + +For more information, refer to [Configure SSL/TLS for the {{ls}} output](/reference/ingestion-tools/fleet/secure-logstash-connections.md). + + +## Memory queue settings [output-logstash-memory-queue-settings] + +The memory queue keeps all events in memory. + +The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted. + +The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`. + +`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`. + +In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0. + +For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity. + +In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example 5s. + +This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size:f 512 events are available or the oldest available event has been waiting for 5s in the queue: + +```yaml + queue.mem.events: 4096 + queue.mem.flush.min_events: 512 + queue.mem.flush.timeout: 5s +``` + +| Setting | Description | +| --- | --- | +| $$$output-logstash-queue.mem.events-setting$$$
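
For example, a minimal sketch listing multiple {{ls}} hosts; the host names are placeholders, and the second entry falls back to the default port `5044`:

```yaml
outputs:
  default:
    type: logstash
    hosts: ["logstash-a:5044", "logstash-b"]   # placeholder hosts; no port means 5044
```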
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-logstash-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-logstash-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (string) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to `0s`, events are available to the output immediately.

**Default:** `10s`
| + + +## Performance tuning settings [output-logstash-performance-tuning-settings] + +Settings that may affect performance. + +| Setting | Description | +| --- | --- | +| $$$output-logstash-backoff.init-setting$$$
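
As a sketch of the synchronous mode described above, setting `flush.timeout` to `0s` makes events available to the output immediately; the queue size is a placeholder:

```yaml
  queue.mem.events: 4096
  queue.mem.flush.timeout: 0s   # synchronous mode: forward events as soon as they arrive
```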
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to {{ls}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$output-logstash-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to {{ls}} after a network error.

**Default:** `60s`
| +| $$$output-logstash-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single {{ls}} request.

Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Set this value to `0` to turn off the splitting of batches. When splitting is turned off, the queue determines the number of events to be contained in a batch.

**Default:** `2048`
| +| $$$output-logstash-compression_level-setting$$$
`compression_level`
| (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces network usage but increases CPU usage.

**Default:** `3`
| +| $$$output-logstash-loadbalance-setting$$$
`loadbalance`
| If `true` and multiple {{ls}} hosts are configured, the output plugin load balances published events onto all {{ls}} hosts. If `false`, the output plugin sends all events to one host (determined at random) and switches to another host if the selected one becomes unresponsive.

With `loadbalance` enabled:

* {{agent}} reads batches of events and sends each batch to one {{ls}} worker dynamically, based on a work-queue shared between the outputs.
* If a connection drops, {{agent}} takes the disconnected {{ls}} worker out of its pool.
* {{agent}} tries to reconnect. If it succeeds, it re-adds the {{ls}} worker to the pool.
* If one of the {{ls}} nodes is slow but "healthy", it sends a keep-alive signal until the full batch of data is processed. This prevents {{agent}} from sending further data until it receives an acknowledgement signal back from {{ls}}. {{agent}} keeps all events in memory until after that acknowledgement occurs.

Without `loadbalance` enabled:

* {{agent}} picks a random {{ls}} host and sends batches of events to it. Due to the random algorithm, the load on the {{ls}} nodes should be roughly equal.
* In case of any errors, {{agent}} picks another {{ls}} node, also at random. If a connection to a host fails, the host is retried only if there are errors on the new connection.

**Default:** `false`

Example:

```yaml
outputs:
  default:
    type: logstash
    hosts: ["localhost:5044", "localhost:5045"]
    loadbalance: true
```
| +| $$$output-logstash-max_retries-setting$$$
`max_retries`
| (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

**Default:** `3`
| +| $$$output-logstash-pipelining-setting$$$
`pipelining`
| (int) The number of batches to send asynchronously to {{ls}} while waiting for an ACK from {{ls}}. The output becomes blocking after the specified number of batches are written. Specify `0` to turn off pipelining.

**Default:** `2`
| +| $$$output-logstash-slow_start-setting$$$
`slow_start`
| (boolean) If `true`, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. On error, the number of events per transaction is reduced again.

**Default:** `false`
| +| $$$output-logstash-timeout-setting$$$
`timeout`
| (string) The number of seconds to wait for responses from the {{ls}} server before timing out.

**Default:** `30s`
| +| $$$output-logstash-ttl-setting$$$
`ttl`
| (string) Time to live for a connection to {{ls}} after which the connection will be reestablished. This setting is useful when {{ls}} hosts represent load balancers. Because connections to {{ls}} hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to achieve equal connection distribution across instances.

**Default:** `0` (turns off the feature)

::::{note}
The `ttl` option is not yet supported on an asynchronous {{ls}} client (one with the `pipelining` option set).
::::

| +| $$$output-logstash-worker-setting$$$
`worker`
| (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

**Default:** `1`
| + + diff --git a/reference/ingestion-tools/fleet/ls-output-settings.md b/reference/ingestion-tools/fleet/ls-output-settings.md new file mode 100644 index 0000000000..880219f3f6 --- /dev/null +++ b/reference/ingestion-tools/fleet/ls-output-settings.md @@ -0,0 +1,83 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/ls-output-settings.html +--- + +# Logstash output settings [ls-output-settings] + +Specify these settings to send data over a secure connection to {{ls}}. You must also configure a {{ls}} pipeline that reads encrypted data from {{agent}}s and sends the data to {{es}}. Follow the in-product steps to configure the {{ls}} pipeline. + +In the {{fleet}} [Output settings](/reference/ingestion-tools/fleet/fleet-settings.md#output-settings), make sure that the {{ls}} output type is selected. + +Before using the {{ls}} output, you need to make sure that for any integrations that have been [added to your {{agent}} policy](/reference/ingestion-tools/fleet/add-integration-to-policy.md), the integration assets have been installed on the destination cluster. Refer to [Install and uninstall {{agent}} integration assets](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md) for the steps to add integration assets. + +To learn how to generate certificates, refer to [Configure SSL/TLS for the {{ls}} output](/reference/ingestion-tools/fleet/secure-logstash-connections.md). + +To receive the events in {{ls}}, you also need to create a {{ls}} configuration pipeline. The {{ls}} configuration pipeline listens for incoming {{agent}} connections, processes received events, and then sends the events to {{es}}. + +The following example configures a {{ls}} pipeline that listens on port `5044` for incoming {{agent}} connections and routes received events to {{es}}. + +The {{ls}} pipeline definition below is an example. Please refer to the `Additional Logstash configuration required` steps when creating the {{ls}} output in the Fleet outputs page. + +```yaml +input { + elastic_agent { + port => 5044 + enrich => none <1> + ssl => true + ssl_certificate_authorities => [""] + ssl_certificate => "" + ssl_key => "" + ssl_verify_mode => "force_peer" + } +} +output { + elasticsearch { + hosts => ["http://localhost:9200"] <2> + # cloud_id => "..." + data_stream => "true" + api_key => "" <3> + data_stream => true + ssl => true + # cacert => "" + } +} +``` + +1. Do not modify the events' schema. +2. The {{es}} server and the port (`9200`) where {{es}} is running. +2. The API Key obtained from the {{ls}} output creation steps in Fleet. + + +| | | +| --- | --- | +| $$$ls-logstash-hosts$$$
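
As an illustrative sketch only (host names and values are placeholders, not recommendations), several of these tuning settings might be combined like this:

```yaml
outputs:
  default:
    type: logstash
    hosts: ["logstash1:5044", "logstash2:5044"]   # placeholder hosts
    loadbalance: true
    worker: 2        # two workers per host, four in total
    pipelining: 0    # ttl is not supported with pipelining enabled (see the note above)
    ttl: 120s        # placeholder TTL for hosts behind a load balancer
```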
**{{ls}} hosts**
| The addresses your {{agent}}s will use to connect to {{ls}}. Use the format `host:port`. Click **Add row** to specify additional {{ls}} addresses.

**Examples:**

* `192.0.2.0:5044`
* `mylogstashhost:5044`

Refer to the [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) documentation for default ports and other configuration details.
| +| $$$ls-server-ssl-certificate-authorities-setting$$$
**Server SSL certificate authorities**
| The CA certificate to use to connect to {{ls}}. This is the CA used to generate the certificate and key for {{ls}}. Copy and paste in the full contents for the CA certificate.

This setting is optional.
| +| $$$ls-client-ssl-certificate-setting$$$
**Client SSL certificate**
| The certificate generated for the client. Copy and paste in the full contents of the certificate. This is the certificate that all the agents will use to connect to {{ls}}.

In cases where each client has a unique certificate, the local path to that certificate can be placed here. The agents will use the certificate at that location when establishing a connection to {{ls}}.
| +| $$$ls-client-ssl-certificate-key-setting$$$
**Client SSL certificate key**
| The private key generated for the client. This must be a PKCS#8 key. Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to {{ls}}.

In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. The agents will use the certificate key at that location when establishing a connection to {{ls}}.

To prevent unauthorized access, the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher.

Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://docs/reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.
| +| $$$ls-agent-proxy-output$$$
**Proxy**
| Select a proxy URL for {{agent}} to connect to {{ls}}. To learn about proxy configuration, refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md).
| +| $$$ls-output-advanced-yaml-setting$$$
**Advanced YAML configuration**
| YAML settings that will be added to the {{ls}} output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation.

See [Advanced YAML configuration](#ls-output-settings-yaml-config) for descriptions of the available settings.
| +| $$$ls-agent-integrations-output$$$
**Make this output the default for agent integrations**
| When this setting is on, {{agent}}s use this output to send data if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).

Output to {{ls}} is not supported for agent integrations in a policy used by {{fleet-server}} or APM.
| +| $$$ls-agent-monitoring-output$$$
**Make this output the default for agent monitoring**
| When this setting is on, {{agent}}s use this output to send [agent monitoring data](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md).

Output to {{ls}} is not supported for agent monitoring in a policy used by {{fleet-server}} or APM.
| + +## Advanced YAML configuration [ls-output-settings-yaml-config] + +| Setting | Description | +| --- | --- | +| $$$output-logstash-fleet-settings-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to {{ls}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| +| $$$output-logstash-fleet-settings-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to {{ls}} after a network error.

**Default:** `60s`
| +| $$$output-logstash-fleet-settings-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single {{ls}} request.

Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Set this value to `0` to turn off the splitting of batches. When splitting is turned off, the queue determines the number of events to be contained in a batch.

**Default:** `2048`
| +| $$$output-logstash-fleet-settings-compression_level-setting$$$
`compression_level`
| (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces network usage but increases CPU usage.
| +| $$$output-logstash-fleet-settings-escape_html-setting$$$
`escape_html`
| (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping.

**Default:** `false`
| +| $$$output-logstash-fleet-settings-index-setting$$$
`index`
| (string) The index root name to write events to.
| +| $$$output-logstash-fleet-settings-loadbalance-setting$$$
`loadbalance`
| If `true` and multiple {{ls}} hosts are configured, the output plugin load balances published events onto all {{ls}} hosts. If `false`, the output plugin sends all events to one host (determined at random) and switches to another host if the selected one becomes unresponsive.

With `loadbalance` enabled:

* {{agent}} reads batches of events and sends each batch to one {{ls}} worker dynamically, based on a work-queue shared between the outputs.
* If a connection drops, {{agent}} takes the disconnected {{ls}} worker out of its pool.
* {{agent}} tries to reconnect. If it succeeds, it re-adds the {{ls}} worker to the pool.
* If one of the {{ls}} nodes is slow but "healthy", it sends a keep-alive signal until the full batch of data is processed. This prevents {{agent}} from sending further data until it receives an acknowledgement signal back from {{ls}}. {{agent}} keeps all events in memory until after that acknowledgement occurs.

Without `loadbalance` enabled:

* {{agent}} picks a random {{ls}} host and sends batches of events to it. Due to the random algorithm, the load on the {{ls}} nodes should be roughly equal.
* In case of any errors, {{agent}} picks another {{ls}} node, also at random. If a connection to a host fails, the host is retried only if there are errors on the new connection.

**Default:** `false`

Example:

```yaml
outputs:
  default:
    type: logstash
    hosts: ["localhost:5044", "localhost:5045"]
    loadbalance: true
```
| +| $$$output-logstash-fleet-settings-max_retries-setting$$$
`max_retries`
| (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

**Default:** `3`
| +| $$$output-logstash-fleet-settings-pipelining-setting$$$
`pipelining`
| (int) The number of batches to send asynchronously to {{ls}} while waiting for an ACK from {{ls}}. The output becomes blocking after the specified number of batches are written. Specify `0` to turn off pipelining.

**Default:** `2`
| +| $$$output-logstash-fleet-settings-proxy_use_local_resolver-setting$$$
`proxy_use_local_resolver`
| (boolean) Determines whether {{ls}} hostnames are resolved locally when using a proxy. If `false` and a proxy is used, name resolution occurs on the proxy server.

**Default:** `false`
| +| $$$output-logstash-fleet-settings-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| +| $$$output-logstash-fleet-settings-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

**Default:** `1600 events`
| +| $$$output-logstash-fleet-settings-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (string) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to `0s`, events are available to the output immediately.

**Default:** `10s`
| +| $$$output-logstash-fleet-settings-slow_start-setting$$$
`slow_start`
| (boolean) If `true`, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. On error, the number of events per transaction is reduced again.

**Default:** `false`
| +| $$$output-logstash-fleet-settings-timeout-setting$$$
`timeout`
| (string) The number of seconds to wait for responses from the {{ls}} server before timing out.

**Default:** `30s`
| +| $$$output-logstash-fleet-settings-ttl-setting$$$
`ttl`
| (string) Time to live for a connection to {{ls}} after which the connection will be reestablished. This setting is useful when {{ls}} hosts represent load balancers. Because connections to {{ls}} hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to achieve equal connection distribution across instances.

**Default:** `0` (turns off the feature)

::::{note}
The `ttl` option is not yet supported on an asynchronous {{ls}} client (one with the `pipelining` option set).
::::

| +| $$$output-logstash-fleet-settings-worker-setting$$$
`worker`
| (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

**Default:** `1`
| diff --git a/reference/ingestion-tools/fleet/manage-agents.md b/reference/ingestion-tools/fleet/manage-agents.md new file mode 100644 index 0000000000..bc7ffac653 --- /dev/null +++ b/reference/ingestion-tools/fleet/manage-agents.md @@ -0,0 +1,32 @@ +--- +navigation_title: "{{agent}}s" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/manage-agents.html +--- + +# {{agent}}s [manage-agents] + + +::::{tip} +To learn how to add {{agent}}s to {{fleet}}, see [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md). +:::: + + +To manage your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}}. On the **Agents** tab, you can perform the following actions: + +| User action | Result | +| --- | --- | +| [Unenroll {{agent}}s](/reference/ingestion-tools/fleet/unenroll-elastic-agent.md) | Unenroll {{agent}}s from {{fleet}}. | +| [Set inactivity timeout](/reference/ingestion-tools/fleet/set-inactivity-timeout.md) | Set inactivity timeout to move {{agent}}s to inactive status after being offline for the set amount of time. | +| [Upgrade {{agent}}s](/reference/ingestion-tools/fleet/upgrade-elastic-agent.md) | Upgrade {{agent}}s to the latest version. | +| [Migrate {{agent}}s](/reference/ingestion-tools/fleet/migrate-elastic-agent.md) | Migrate {{agent}}s from one cluster to another. | +| [Monitor {{agent}}s](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) | Monitor {{fleet}}-managed {{agent}}s by viewing agent status, logs, and metrics. | +| [Add tags to filter the Agents list](/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md) | Add tags to {{agent}}, then use the tags to filter the Agents list in {{fleet}}. | + + + + + + + + diff --git a/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md b/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md new file mode 100644 index 0000000000..9cae9976be --- /dev/null +++ b/reference/ingestion-tools/fleet/manage-elastic-agents-in-fleet.md @@ -0,0 +1,45 @@ +--- +navigation_title: "Manage {{agent}}s in {{fleet}}" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/manage-agents-in-fleet.html +--- + +# Centrally manage {{agent}}s in {{fleet}} [manage-agents-in-fleet] + + +::::{admonition} +The {{fleet}} app in {{kib}} supports both {{agent}} infrastructure management and agent policy management. You can use {{fleet}} to: + +* Manage {{agent}} binaries and specify settings installed on the host that determine whether the {{agent}} is enrolled in {{fleet}}, what version of the agent is running, and which agent policy is used. +* Manage agent policies that specify agent configuration settings, which integrations are running, whether agent monitoring is turned on, input settings, and so on. + +Advanced users who don’t want to use {{fleet}} for central management can use an external infrastructure management solution and [install {{agent}} in standalone mode](/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md) instead. + +:::: + + +::::{important} +{{fleet}} currently requires a {{kib}} user with `All` privileges on {{fleet}} and {{integrations}}. Since many Integrations assets are shared across spaces, users need the {{kib}} privileges in all spaces. Refer to [Required roles and privileges](/reference/ingestion-tools/fleet/fleet-roles-privileges.md) to learn how to create a user role with the required privileges to access {{fleet}} and {{integrations}}. 
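
As a sketch of what might be pasted into the **Advanced YAML configuration** box described above, using settings from this table; the values are illustrative only:

```yaml
bulk_max_size: 2048
queue.mem.events: 6400
queue.mem.flush.timeout: 5s
max_retries: 3
```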
+ +:::: + + +To learn how to add {{agent}}s to {{fleet}}, refer to [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md). + +To use {{fleet}} go to **Management > {{fleet}}** in {{kib}}. The following table describes the main management actions you can perform in {{fleet}}: + +| Component | Management actions | +| --- | --- | +| [{{fleet}} settings](/reference/ingestion-tools/fleet/fleet-settings.md) | Configure global settings available to all {{agent}}s managed by {{fleet}},including {{fleet-server}} hosts and output settings. | +| [{{agent}}s](/reference/ingestion-tools/fleet/manage-agents.md) | Enroll, unenroll, upgrade, add tags, and view {{agent}} status and logs. | +| [Policies](/reference/ingestion-tools/fleet/agent-policy.md) | Create and edit agent policies and add integrations to them. | +| [{{fleet}} enrollment tokens](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md) | Create and revoke enrollment tokens. | +| [Uninstall tokens](/reference/security/elastic-defend/agent-tamper-protection.md) | ({{elastic-defend}} integration only) Access tokens to allow uninstalling {{agent}} from endpoints with Agent tamper protection enabled. | +| [Data streams](/reference/ingestion-tools/fleet/data-streams.md) | View data streams and navigate to dashboards to analyze your data. | + + + + + + + diff --git a/reference/ingestion-tools/fleet/manage-integrations.md b/reference/ingestion-tools/fleet/manage-integrations.md new file mode 100644 index 0000000000..e14773c73f --- /dev/null +++ b/reference/ingestion-tools/fleet/manage-integrations.md @@ -0,0 +1,58 @@ +--- +navigation_title: "Manage integrations" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/integrations.html +--- + +# Manage {{agent}} integrations [integrations] + + +::::{admonition} +Integrations are available for a wide array of popular services and platforms. To see the full list of available integrations, go to the **Integrations** page in {{kib}}, or visit [Elastic Integrations](integration-docs://docs/reference/index.md). + +{{agent}} integrations provide a simple, unified way to collect data from popular apps and services, and protect systems from security threats. + +Each integration comes prepackaged with assets that support all of your observability needs: + +* Data ingestion, storage, and transformation rules +* Configuration options +* Pre-built, custom dashboards and visualizations +* Documentation + +:::: + + +::::{note} +Please be aware that some integrations may function differently across different spaces. Also, some might only work in the default space. We recommend reviewing the specific integration documentation for any space-related considerations. + +:::: + + +The following table shows the main actions you can perform in the **Integrations** app in {{kib}}. You can perform some of these actions from other places in {{kib}}, too. + +| User action | Result | +| --- | --- | +| [Add an integration to an {{agent}} policy](/reference/ingestion-tools/fleet/add-integration-to-policy.md) | Configure an integration for a specific use case and add it to an {{agent}} policy. | +| [View integration policies](/reference/ingestion-tools/fleet/view-integration-policies.md) | View the integration policies created for a specific integration. | +| [Edit or delete an integration policy](/reference/ingestion-tools/fleet/edit-delete-integration-policy.md) | Change settings or delete the integration policy. 
|
+| [Install and uninstall integration assets](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md) | Install, uninstall, and reinstall integration assets in {{kib}}. |
+| [View integration assets](/reference/ingestion-tools/fleet/view-integration-assets.md) | View the {{kib}} assets installed for a specific integration. |
+| [Upgrade an integration](/reference/ingestion-tools/fleet/upgrade-integration.md) | Upgrade an integration to the latest version. |
+
+::::{note}
+The **Integrations** app in {{kib}} needs access to the public {{package-registry}} to discover integrations. If your deployment has network restrictions, you can [deploy your own self-managed {{package-registry}}](/reference/ingestion-tools/fleet/air-gapped.md#air-gapped-diy-epr).
+
+::::
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/reference/ingestion-tools/fleet/managed-integrations-content.md b/reference/ingestion-tools/fleet/managed-integrations-content.md
new file mode 100644
index 0000000000..e9c81ba2df
--- /dev/null
+++ b/reference/ingestion-tools/fleet/managed-integrations-content.md
@@ -0,0 +1,35 @@
+---
+navigation_title: "Managed integrations content"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/managed-integrations-content.html
+---
+
+# Managed integrations content [managed-integrations-content]
+
+
+Most integration content installed by {{fleet}} isn't editable. This content is tagged with a **Managed** badge in the {{kib}} UI. Managed content itself cannot be edited or deleted; however, managed visualizations, dashboards, and saved searches can be cloned.
+
+:::{image} images/system-managed.png
+:alt: An image of the new managed badge.
+:class: screenshot
+:::
+
+When a managed dashboard is cloned, any linked or referenced panels become part of the clone without relying on external sources. The panels are integrated into the cloned dashboard as standalone components: they become entirely self-contained copies with no dependencies on the original configuration. Clones can be customized and modified without accidentally affecting the original.
+
+::::{note}
+The cloned managed content retains the managed badge, but is independent from the original.
+
+::::
+
+You can make a complete clone of a whole managed dashboard. If you clone a panel within a managed dashboard, you're prompted to save the changes as a new dashboard, which is unlinked from the original managed content.
+
+% The following details are copied from https://www.elastic.co/guide/en/kibana/8.17/fleet.html
+To clone a dashboard:
+
+1. Go to **Dashboards**.
+2. Click on the name of the managed dashboard to view the dashboard.
+3. Click **Clone** in the toolbar.
+4. Click **Save and return** after editing the dashboard.
+5. Click **Save**.
+
+For managed content tied to a specific visualization editor, such as Lens, TSVB, or Maps, the clones retain the original reference configurations. To clone the visualization, view it in the editor and begin making edits. When you finish editing, you're prompted to save the edits as a new visualization. The same applies to editing any saved searches in a managed visualization.
diff --git a/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md b/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md
new file mode 100644
index 0000000000..d6cc82d3ac
--- /dev/null
+++ b/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md
@@ -0,0 +1,40 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/migrate-auditbeat-to-agent.html
+---
+
+# Migrate from Auditbeat to Elastic Agent [migrate-auditbeat-to-agent]
+
+Before you begin, read [*Migrate from {{beats}} to {{agent}}*](/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md) to learn how to deploy {{agent}} and install integrations.
+
+Then come back to this page to learn about the integrations available to replace functionality provided by {{auditbeat}}.
+
+
+## Compatibility [compatibility]
+
+The integrations that provide replacements for the `auditd` and `file_integrity` modules are only available in {{stack}} version 8.3 and later.
+
+
+## Replace {{auditbeat}} modules with {{agent}} integrations [use-integrations]
+
+The following table describes the integrations you can use instead of {{auditbeat}} modules and datasets.
+
+| If you use… | You can use this instead… | Notes |
+| --- | --- | --- |
+| [Auditd](beats://docs/reference/auditbeat/auditbeat-module-auditd.md) module | [Auditd Manager](integration-docs://docs/reference/auditd_manager.md) integration | This integration is a direct replacement of the module. You can port rules and configuration to this integration. Starting in {{stack}} 8.4, you can also set the `immutable` flag in the audit configuration. |
+| [Auditd Logs](integration-docs://docs/reference/auditd.md) integration | Use this integration if you don’t need to manage rules. It only parses logs from the audit daemon `auditd`. Please note that the events created by this integration are different from the ones created by [Auditd Manager](integration-docs://docs/reference/auditd_manager.md), since the latter merges all related messages in a single event while [Auditd Logs](integration-docs://docs/reference/auditd.md) creates one event per message. |
+| [File Integrity](beats://docs/reference/auditbeat/auditbeat-module-file_integrity.md) module | [File Integrity Monitoring](integration-docs://docs/reference/fim.md) integration | This integration is a direct replacement of the module. It reports real-time events, but cannot report who made the changes. If you need to track this information, use [{{elastic-defend}}](/reference/security/elastic-defend/install-endpoint.md) instead. |
+| [System](beats://docs/reference/auditbeat/auditbeat-module-system.md) module | It depends… | There is not a single integration that collects all this information. |
+| [System.host](beats://docs/reference/auditbeat/auditbeat-dataset-system-host.md) dataset | [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Schedule collection of information like:

* [system_info](https://www.osquery.io/schema/5.1.0/#system_info) for hostname, unique ID, and architecture
* [os_version](https://www.osquery.io/schema/5.1.0/#os_version)
* [interface_addresses](https://www.osquery.io/schema/5.1.0/#interface_addresses) for IPs and MACs
| +| [System.login](beats://docs/reference/auditbeat/auditbeat-dataset-system-login.md) dataset | [Endpoint](/reference/security/elastic-defend/install-endpoint.md) | Report login events. | +| [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Use the [last](https://www.osquery.io/schema/5.1.0/#last) table for Linux and macOS. | +| {{fleet}} [system](integration-docs://docs/reference/system.md) integration | Collect login events for Windows through the [Security event log](integration-docs://docs/reference/system.md#system-security). | +| [System.package](beats://docs/reference/auditbeat/auditbeat-dataset-system-package.md) dataset | [System Audit](integration-docs://docs/reference/system_audit.md) integration | This integration is a direct replacement of the System Package dataset. Starting in {{stack}} 8.7, you can port rules and configuration settings to this integration. This integration currently schedules collection of information such as:

* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
| +| [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Schedule collection of information like:

* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
* [apps](https://www.osquery.io/schema/5.1.0/#apps) (macOS)
* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)
* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)
* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)
* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)
* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)
* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)
|
+| [System.process](beats://docs/reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/reference/security/elastic-defend/install-endpoint.md) | Best replacement because out of the box it reports events for every process in [ECS](ecs://docs/reference/index.md) format and has excellent integration in [Kibana](/get-started/the-stack.md). |
+| [Custom Windows event log](integration-docs://docs/reference/winlog.md) and [Sysmon]({{integrations-docs}}/windows#sysmonoperational) integrations | Provide process data. |
+| [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSes without polling. |
+| [System.socket](beats://docs/reference/auditbeat/auditbeat-dataset-system-socket.md) dataset | [Endpoint](/reference/security/elastic-defend/install-endpoint.md) | Best replacement because it supports monitoring network connections on Linux, Windows, and macOS. Includes process and user metadata. Currently does not do flow accounting (byte and packet counts) or domain name enrichment (but does collect DNS queries separately). |
+| [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Monitor socket events via the [socket_events](https://www.osquery.io/schema/5.1.0/#socket_events) table for Linux and macOS. |
+| [System.user](beats://docs/reference/auditbeat/auditbeat-dataset-system-user.md) dataset | [Osquery](integration-docs://docs/reference/osquery.md) or [Osquery Manager](integration-docs://docs/reference/osquery_manager.md) integration | Monitor local users via the [user](https://www.osquery.io/schema/5.1.0/#user) table for Linux, Windows, and macOS. |
+
diff --git a/reference/ingestion-tools/fleet/migrate-elastic-agent.md b/reference/ingestion-tools/fleet/migrate-elastic-agent.md
new file mode 100644
index 0000000000..c06f749c7c
--- /dev/null
+++ b/reference/ingestion-tools/fleet/migrate-elastic-agent.md
@@ -0,0 +1,247 @@
+---
+navigation_title: "Migrate {{agent}}s"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/migrate-elastic-agent.html
+---
+
+# Migrate {{fleet}}-managed {{agent}}s from one cluster to another [migrate-elastic-agent]
+
+
+You may sometimes need to move your installed {{agent}}s from being managed in one cluster to being managed in another cluster.
+
+For a seamless migration, we advise that you create an identical agent policy in the new cluster, configured in the same manner as in the original cluster. There are a few ways to do this.
+
+This guide takes you through the steps to migrate your {{agent}}s by snapshotting a source cluster and restoring it on a target cluster. These instructions assume that you have an {{ecloud}} deployment, but they can be applied to on-premise clusters as well.
+
+
+## Take a snapshot of the source cluster [migrate-elastic-agent-take-snapshot]
+
+Refer to the [Snapshot and restore](/deploy-manage/tools/snapshot-and-restore.md) documentation for full details. In short, to create a new snapshot in an {{ecloud}} deployment:
+
+1. In {{kib}}, open the main menu, then click **Manage this deployment**.
+2. In the deployment menu, select **Snapshots**.
+3. Click **Take snapshot now**.
+
+   :::{image} images/migrate-agent-take-snapshot.png
+   :alt: Deployments Snapshots page
+   :class: screenshot
+   :::
+
+
+
+## Create a new target cluster from the snapshot [migrate-elastic-agent-create-target]
+
+You can create a new cluster based on the snapshot taken in the previous step, and then migrate your {{agent}}s and {{fleet}} to the new cluster. For best results, it’s recommended that the new target cluster be at the same version as the cluster that the agents are migrating from.
+
+1. Open the {{ecloud}} console and select **Create deployment**.
+2. Select **Restore snapshot data**.
+3. In the **Restore from** field, select your source deployment.
+4. Choose your deployment settings, and, ideally, choose the same {{stack}} version as the source cluster.
+5. Click **Create deployment**.
+
+   :::{image} images/migrate-agent-new-deployment.png
+   :alt: Create a deployment page
+   :class: screenshot
+   :::
+
+
+
+## Update settings in the target cluster [migrate-elastic-agent-target-settings]
+
+When the target cluster is available, you’ll need to adjust a few settings. Take some time to examine the {{fleet}} setup in the new cluster.
+
+1. Open the {{kib}} menu and select **Fleet**.
+2. On the **Agents** tab, your agents should be visible; however, they’ll appear as `Offline`. This is because these agents have not yet enrolled in the new, target cluster, and are still enrolled in the original, source cluster.
+
+   :::{image} images/migrate-agent-agents-offline.png
+   :alt: Agents tab in Fleet showing offline agents
+   :class: screenshot
+   :::
+
+3. Open the {{fleet}} **Settings** tab.
+4. Examine the configurations captured there for {{fleet}}. Note that these settings are copied from the snapshot of the source cluster and may not be meaningful in the target cluster, so they need to be modified accordingly.
+
+   In the following example, both the **Fleet Server hosts** and the **Outputs** settings are copied over from the source cluster:
+
+   :::{image} images/migrate-agent-host-output-settings.png
+   :alt: Settings tab in Fleet showing source deployment host and output settings
+   :class: screenshot
+   :::
+
+   The next steps explain how to obtain the relevant {{fleet-server}} host and {{es}} output details applicable to the new target cluster in {{ecloud}}.
+
+
+
+### Modify the {{es}} output [migrate-elastic-agent-elasticsearch-output]
+
+1. In the new target cluster on {{ecloud}}, in the **Outputs** section, on the {{fleet}} **Settings** tab, you will find an internal output named `Elastic Cloud internal output`. The host address is in the form:
+
+   `https://<cluster-id>.containerhost:9244`
+
+   Record this `<cluster-id>` from the target cluster. In the example shown, the ID is `fcccb85b651e452aa28703a59aea9b00`.
+
+2. Also in the **Outputs** section, notice that the default {{es}} output (that was copied over from the source cluster) is also in the form:
+
+   `https://<cluster-id>.<region-and-domain>:443`.
+
+   Modify the {{es}} output so that the cluster ID is the same as that for `Elastic Cloud internal output`. In this example we also rename the output to `New Elasticsearch`.
+
+   :::{image} images/migrate-agent-elasticsearch-output.png
+   :alt: Outputs section showing the new Elasticsearch host setting
+   :class: screenshot
+   :::
+
+   In this example, the `New Elasticsearch` output and the `Elastic Cloud internal output` now have the same cluster ID, namely `fcccb85b651e452aa28703a59aea9b00`.
+
+
+You have now created an {{es}} output that agents can use to write data to the new target cluster. If you prefer to make this change programmatically, see the sketch below.
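+The {{kib}} {{fleet}} API can also list and update outputs. The following is a sketch only: the `<output-id>` placeholder is hypothetical, and the exact endpoint and payload fields can vary by {{kib}} version, so verify them against the {{fleet}} API documentation before use.
+
+```shell
+# List the configured outputs to find the ID of the default {{es}} output
+curl --request GET \
+  --url https://{KIBANA_HOST:PORT}/api/fleet/outputs \
+  -u username:password \
+  --header 'kbn-xsrf: true'
+
+# Point the output at the new target cluster
+# (this host reuses the example cluster ID fcccb85b651e452aa28703a59aea9b00)
+curl --request PUT \
+  --url https://{KIBANA_HOST:PORT}/api/fleet/outputs/<output-id> \
+  -u username:password \
+  --header 'Content-Type: application/json' \
+  --header 'kbn-xsrf: true' \
+  --data '{
+    "name": "New Elasticsearch",
+    "type": "elasticsearch",
+    "hosts": ["https://fcccb85b651e452aa28703a59aea9b00.us-central1.gcp.cloud.es.io:443"]
+  }'
+```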
For on-premise environments not using {{ecloud}}, you should similarly be able to use the host address of the new cluster.
+
+
+### Modify the {{fleet-server}} host [migrate-elastic-agent-fleet-host]
+
+Like the {{es}} host, the {{fleet-server}} host has also changed with the new target cluster. Note that if you’re deploying {{fleet-server}} on premise, the host has probably not changed address and this setting does not need to be modified. We still recommend that you ensure the agents are able to reach the on-premise {{fleet-server}} host (which they should be, since they were able to connect to it prior to the migration).
+
+The {{ecloud}} {{fleet-server}} host has a similar format to the {{es}} output:
+
+`https://<deployment-id>.fleet.<region-and-domain>`
+
+To configure the correct {{ecloud}} {{fleet-server}} host you will need to find the target cluster’s full `deployment-id`, and use it to replace the original `deployment-id` that was copied over from the source cluster.
+
+The easiest way to find the `deployment-id` is from the deployment URL:
+
+1. From the {{kib}} menu select **Manage this deployment**.
+2. Copy the deployment ID from the URL in your browser’s address bar.
+
+   :::{image} images/migrate-agent-deployment-id.png
+   :alt: Deployment management page
+   :class: screenshot
+   :::
+
+   In this example, the new deployment ID is `eed4ae8e2b604fae8f8d515479a16b7b`.
+
+   Using that value for `deployment-id`, the new {{fleet-server}} host URL is:
+
+   `https://eed4ae8e2b604fae8f8d515479a16b7b.fleet.us-central1.gcp.cloud.es.io:443`
+
+3. In the target cluster, under **Fleet server hosts**, replace the original host URL with the new value.
+
+   :::{image} images/migrate-agent-fleet-server-host.png
+   :alt: Fleet server hosts showing the new host URL
+   :class: screenshot
+   :::
+
+
+
+### Reset the {{ecloud}} policy [migrate-elastic-agent-reset-policy]
+
+On your target cluster, certain settings from the original {{ecloud}} {{agent}} policy may still be retained, and need to be updated to reference the new cluster. For example, in the APM policy installed to the {{ecloud}} {{agent}} policy, the original and outdated APM URL is preserved. This can be fixed by running the `reset_preconfigured_agent_policies` API request. Note that when you reset the policy, all APM Integration settings are reset, including the secret key and any tail-based sampling settings.
+
+To reset the {{ecloud}} {{agent}} policy:
+
+1. Choose one of the API requests below and submit it through a terminal window.
+
+   * If you’re using {{kib}} version 8.11 or higher, run:
+
+     ```shell
+     curl --request POST \
+       --url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \
+       -u username:password \
+       --header 'Content-Type: application/json' \
+       --header 'kbn-xsrf: as' \
+       --header 'elastic-api-version: 1'
+     ```
+
+   * If you’re using a {{kib}} version below 8.11, run:
+
+     ```shell
+     curl --request POST \
+       --url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \
+       -u username:password \
+       --header 'Content-Type: application/json' \
+       --header 'kbn-xsrf: as'
+     ```
+
+   After running the command, your {{ecloud}} agent policy settings should all be updated appropriately.
+
+
+::::{note}
+After running the command, a warning message may appear in {{fleet}} indicating that {{fleet-server}} is not healthy. Additionally, the {{agent}} associated with the {{ecloud}} agent policy may disappear from the list of agents.
To remedy this, you can restart {{integrations-server}}:
+
+1. From the {{kib}} menu, select **Manage this deployment**.
+2. In the deployment menu, select **Integrations Server**.
+3. On the **Integrations Server** page, select **Force Restart**.
+
+After the restart, {{integrations-server}} will enroll a new {{agent}} for the {{ecloud}} agent policy and {{fleet-server}} should return to a healthy state.
+
+::::
+
+
+
+### Confirm your policy settings [migrate-elastic-agent-confirm-policy]
+
+Now that the {{fleet}} settings are correctly set up, make sure that the {{agent}} policy also points to the correct entities.
+
+1. In the target cluster, go to **Fleet → Agent policies**.
+2. Select a policy to verify.
+3. Open the **Settings** tab.
+4. Ensure that **Fleet Server**, **Output for integrations**, and **Output for agent monitoring** are all set to the newly created entities.
+
+   :::{image} images/migrate-agent-policy-settings.png
+   :alt: An agent policy's settings showing the newly created entities
+   :class: screenshot
+   :::
+
+
+::::{note}
+If you modified the {{fleet-server}} host and the output in place, these will have been updated accordingly. However, if new entities were created, ensure that the correct ones are referenced here.
+::::
+
+
+
+## Agent policies in the new target cluster [migrate-elastic-agent-migrated-policies]
+
+Because you created the new target cluster from a snapshot, all of your policies should have been created along with all of the agents. These agents will appear offline because they are not yet checking in with the new target cluster and are still communicating with the source cluster.
+
+The agents can now be re-enrolled into these policies and migrated over to the new target cluster.
+
+
+## Migrate {{agent}}s to the new target cluster [migrate-elastic-agent-migrated-agents]
+
+To ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new target cluster.
+
+This is best performed one policy at a time. For a given policy, you need to capture the enrollment token and the URL for the agent to connect to. You can find these by running the in-product steps to add a new agent.
+
+1. On the target cluster, open **Fleet** and select **Add agent**.
+2. Select your newly created policy.
+3. In the section **Install {{agent}} on your host**, find the sample install command. This contains the details you’ll need to enroll the agents, namely the enrollment token and the {{fleet-server}} URL.
+4. Copy the portion of the install command containing these values. That is, `--url=<fleet-server-url> --enrollment-token=<enrollment-token>`.
+
+   :::{image} images/migrate-agent-install-command.png
+   :alt: Install command from the Add Agent UI
+   :class: screenshot
+   :::
+
+5. On the host machines where the current agents are installed, enroll the agents again using this copied URL and the enrollment token:
+
+   ```shell
+   sudo elastic-agent enroll --url=<fleet-server-url> --enrollment-token=<enrollment-token>
+   ```
+
+   The command output should look like the following:
+
+   :::{image} images/migrate-agent-install-command-output.png
+   :alt: Install command output
+   :class: screenshot
+   :::
+
+6. The agent on each host will now check into the new {{fleet-server}} and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins.
+
+   :::{image} images/migrate-agent-newly-enrolled-agents.png
+   :alt: Newly enrolled agents in the target cluster
+   :class: screenshot
+   :::
+
+7. 
Repeat this procedure for each {{agent}} policy. + +If all has gone well, you’ve successfully migrated your {{fleet}}-managed {{agent}}s to a new cluster. + diff --git a/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md b/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md new file mode 100644 index 0000000000..4208718f2a --- /dev/null +++ b/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md @@ -0,0 +1,344 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/migrate-beats-to-agent.html +--- + +# Migrate from Beats to Elastic Agent [migrate-beats-to-agent] + +Learn how to replace your existing {{filebeat}} and {{metricbeat}} deployments with {{agent}}, our single agent for logs, metrics, security, and threat prevention. + + +## Why migrate to {{agent}}? [why-migrate-to-elastic-agent] + +{{agent}} and {{beats}} provide similar functionality for log collection and host monitoring, but {{agent}} has some distinct advantages over {{beats}}. + +* **Easier to deploy and manage.** Instead of deploying multiple {{beats}}, you deploy a single {{agent}}. The {{agent}} downloads, configures, and manages any underlying programs required to collect and parse your data. +* **Easier to configure.** You no longer have to define and manage separate configuration files for each Beat running on a host. Instead you define a single agent policy that specifies which integration settings to use, and the {{agent}} generates the configuration required by underlying programs, like {{beats}}. +* **Central management.** Unlike {{beats}}, which require you to set up your own automation strategy for upgrades and configuration management, {{agent}}s can be managed from a central location in {{kib}} called {{fleet}}. In {{fleet}}, you can view the status of running {{agent}}s, update agent policies and push them to your hosts, and even trigger binary upgrades. +* **Endpoint protection.** Probably the biggest advantage of using {{agent}} is that it enables you to protect your endpoints from security threats. + + +## Limitations and requirements [_limitations_and_requirements] + +There are currently some limitations and requirements to be aware of before migrating to {{agent}}: + +* **No support for configuring the {{beats}} internal queue.** Each Beat has an internal queue that stores events before batching and publishing them to the output. To improve data throughput, {{beats}} users can set [configuration options](beats://docs/reference/filebeat/configuring-internal-queue.md) to tune the performance of the internal queue. However, the endless fine tuning required to configure the queue is cumbersome and not always fruitful. Instead of expecting users to configure the internal queue, {{agent}} uses sensible defaults. This means you won’t be able to migrate internal queue configurations to {{agent}}. + +For more information about {{agent}} limitations, see [*{{beats}} and {{agent}} capabilities*](/reference/ingestion-tools/fleet/index.md). + + +## Prepare for the migration [prepare-for-migration] + +Before you begin: + +1. Review your existing {{beats}} configurations and make a list of the integrations that are required. For example, if your existing implementation collects logs and metrics from Nginx, add Nginx to your list. +2. Make a note of any processors or custom configurations you want to migrate. Some of these customizations may no longer be needed or possible in {{agent}}. +3. Decide if it’s the right time to migrate to {{agent}}. 
Review the information under [*{{beats}} and {{agent}} capabilities*](/reference/ingestion-tools/fleet/index.md). Make sure the integrations you need are supported and Generally Available, and that the output and features you require are also supported.
+
+If everything checks out, proceed to the next step. Otherwise, you might want to continue using {{beats}} and possibly deploy {{agent}} alongside {{beats}} to use features like endpoint protection.
+
+
+## Set up {{fleet-server}} (self-managed deployments only) [_set_up_fleet_server_self_managed_deployments_only]
+
+To use {{fleet}} for central management, a [{{fleet-server}}](/reference/ingestion-tools/fleet/fleet-server.md) must be running and accessible to your hosts.
+
+If you’re using {{ecloud}}, you can skip this step because {{ecloud}} runs a hosted version of {{fleet-server}}.
+
+Otherwise, follow the steps for self-managed deployments described in [Deploy {{fleet-server}} on-premises and {{es}} on Cloud](/reference/ingestion-tools/fleet/add-fleet-server-mixed.md) or [Deploy on-premises and self-managed](/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md), depending on your deployment model, and then return to this page when you are done.
+
+
+## Deploy {{agent}} to your hosts to collect logs and metrics [_deploy_agent_to_your_hosts_to_collect_logs_and_metrics]
+
+To begin the migration, deploy an {{agent}} to a host where {{beats}} shippers are running. It’s recommended that you set up and test the migration in a development environment before deploying across your infrastructure.
+
+You can continue to run {{beats}} alongside {{agent}} until you’re satisfied with the data it’s sending to {{es}}.
+
+Read [*Install {{agent}}s*](/reference/ingestion-tools/fleet/install-elastic-agents.md) to learn how to deploy an {{agent}}. To save time, return to this page when the {{agent}} is deployed, healthy, and successfully sending data.
+
+Here’s a high-level overview to help you understand the deployment process.
+
+::::{admonition} Overview of the {{agent}} deployment process
+During the deployment process you:
+
+* **Create an agent policy.** An agent policy is similar to a configuration file, but unlike a regular configuration file, which needs to be maintained on many different host systems, you can configure and maintain the agent policy in a central location in {{fleet}} and apply it to multiple {{agent}}s.
+* **Add integrations to the agent policy.** {{agent}} integrations provide a simple, unified way to collect data from popular apps and services, and protect systems from security threats. To define the agent policy, you add integrations for each service or system you want to monitor. For example, to collect logs and metrics from a system running Nginx, you might add the Nginx integration and the System integration.
+
+  **What happens when you add an integration to an agent policy?** The assets for the integration, such as dashboards and ingest pipelines, get installed if they aren’t already. In addition, the settings you specify for the integration, such as how to connect to the source system or locate logs, are added as an integration policy.
+
+  For the example described earlier, you would have an agent policy that contains two integration policies: one for collecting Nginx logs and metrics, and another for collecting system logs and metrics.
+
+* **Install {{agent}} on the host and enroll it in the agent policy.** When you enroll the {{agent}} in an agent policy, the agent gets added to {{fleet}}, where you can monitor and manage the agent.
+
+::::
+
+
+::::{tip}
+It’s best to add one integration at a time and test it before adding more integrations to your agent policy. The System integration is a good way to get started if it’s supported on your OS.
+::::
+
+
+
+## View agent details and inspect the data streams [_view_agent_details_and_inspect_the_data_streams]
+
+After deploying an {{agent}} to a host, view details about the agent and inspect the data streams it creates. To learn more about the benefits of using data streams, refer to [Data streams](/reference/ingestion-tools/fleet/data-streams.md).
+
+1. On the **Agents** tab in **{{fleet}}**, confirm that the {{agent}} status is `Healthy`.
+
+   :::{image} images/migration-agent-status-healthy01.png
+   :alt: Screen showing that agent status is Healthy
+   :class: screenshot
+   :::
+
+2. Click the host name to examine the {{agent}} details. This page shows the integrations that are currently installed, the policy the agent is enrolled in, and information about the host machine:
+
+   :::{image} images/migration-agent-details01.png
+   :alt: Screen showing agent details
+   :class: screenshot
+   :::
+
+3. Go back to the main {{fleet}} page and click the **Data streams** tab. You should be able to see the data streams for various logs and metrics from the host. This works out of the box, without any extra configuration or dashboard creation:
+
+   :::{image} images/migration-agent-data-streams01.png
+   :alt: Screen showing data streams created by the {agent}
+   :class: screenshot
+   :::
+
+4. Go to **Analytics > Discover** and examine the data streams. Note that documents indexed by {{agent}} match these patterns:
+
+   * `logs-*`
+   * `metrics-*`
+
+   If {{beats}} are installed on the host machine, the data in {{es}} will be duplicated, with one set coming from {{beats}} and another from {{agent}} for the *same* data source.
+
+   For example, filter on `filebeat-*` to see the data ingested by {{filebeat}}.
+
+   :::{image} images/migration-event-from-filebeat.png
+   :alt: Screen showing event from {filebeat}
+   :class: screenshot
+   :::
+
+   Next, filter on `logs-*`. Notice that the document contains `data_stream.*` fields that come from logs ingested by the {{agent}}.
+
+   :::{image} images/migration-event-from-agent.png
+   :alt: Screen showing event from {agent}
+   :class: screenshot
+   :::
+
+   ::::{note}
+   This duplication is superfluous and will consume extra storage space on your {{es}} deployment. After you’ve finished migrating all your configuration settings to {{agent}}, you’ll remove {{beats}} to prevent redundant messages.
+   ::::
+
+
+
+## Add integrations to the agent policy [_add_integrations_to_the_agent_policy]
+
+Now that you’ve deployed an {{agent}} to your host and it’s successfully sending data to {{es}}, add another integration. For guidance on which integrations you need, look at the list you created earlier when you [prepared for the migration](#prepare-for-migration).
+
+For example, if the agent policy you created earlier includes the System integration, but you also want to monitor Nginx:
+
+1. From the main menu in {{kib}}, click **Add integrations** and add the Nginx integration.
+
+   :::{image} images/migration-add-nginx-integration.png
+   :alt: Screen showing the Nginx integration
+   :class: screenshot
+   :::
+
+2. 
Configure the integration, then apply it to the agent policy you used earlier. Make sure you expand collapsed sections to see all the settings like log paths. + + :::{image} images/migration-add-integration-policy.png + :alt: Screen showing Nginx configuration + :class: screenshot + ::: + + When you save and deploy your changes, the agent policy is updated to include a new integration policy for Nginx. All {{agent}}s enrolled in the agent policy get the updated policy, and the {{agent}} running on your host will begin collecting Nginx data. + + ::::{note} + Integration policy names must be globally unique across all agent policies. + :::: + +3. Go back to **{{fleet}} > Agents** and verify that the agent status is still healthy. Click the host name to drill down into the agent details. From there, you can see the agent policy and integration policies that are applied. + + If the agent status is not Healthy, click **Logs** to view the agent log and troubleshoot the problem. + +4. Go back to the main **{{fleet}}** page, and click **Data streams** to inspect the data streams and navigate to the pre-built dashboards installed with the integration. + +Notice again that the data is duplicated because you still have {{beats}} running and sending data. + + +## Migrate processor configurations [_migrate_processor_configurations] + +Processors enable you to filter and enhance the data before it’s sent to the output. Each processor receives an event, applies a defined action to the event, and returns the event. If you define a list of processors, they are executed in the order they are defined. Elastic provides a [rich set of processors](beats://docs/reference/filebeat/defining-processors.md) that are supported by all {{beats}} and by {{agent}}. + +Prior to migrating from {{beats}}, you defined processors in the configuration file for each Beat. After migrating to {{agent}}, however, the {{beats}} configuration files are redundant. All configuration is policy-driven from {{fleet}} (or for advanced use cases, specified in a standalone agent policy). Any processors you defined previously in the {{beats}} configuration need to be added to an integration policy; they cannot be defined in the {{beats}} configuration. + +::::{important} +Globally-defined processors are not currently supported by {{agent}}. You must define processors in each integration policy where they are required. +:::: + + +To add processors to an integration policy: + +1. In {{fleet}}, open the **Agent policies** tab and click the policy name to view its integration policies. +2. Click the name of the integration policy to edit it. +3. Click the down arrow next to enabled streams, and under **Advanced options**, add the processor definition. The processor will be applied to the data set where it’s defined. + + :::{image} images/migration-add-processor.png + :alt: Screen showing how to add a processor to an integration policy + :class: screenshot + ::: + + For example, the following processor adds geographically specific metadata to host events: + + ```yaml + - add_host_metadata: + cache.ttl: 5m + geo: + name: nyc-dc1-rack1 + location: 40.7128, -74.0060 + continent_name: North America + country_iso_code: US + region_name: New York + region_iso_code: NY + city_name: New York + ``` + + +In {{kib}}, look at the data again to confirm it contains the fields you expect. 
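+
+For example, you can spot-check that the processor is enriching events by running a quick query from **Management > Dev Tools**. This is a sketch that assumes the default `logs-*` data streams and the `add_host_metadata` example above, which writes its `geo` settings under the `host.geo.*` fields:
+
+```json
+GET logs-*/_search
+{
+  "size": 1,
+  "query": {
+    "exists": { "field": "host.geo.name" }
+  }
+}
+```
+
+If the processor is working, the response includes a recent document with `host.geo.name` set to `nyc-dc1-rack1`.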
+
+
+## Preserve raw events [_preserve_raw_events]
+
+In some cases, {{beats}} modules preserve the original, raw event, which consumes more storage space, but may be a requirement for compliance or forensic use cases.
+
+In {{agent}}, this behavior is optional and disabled by default.
+
+If you must preserve the raw event, edit the integration policy, and for each enabled data stream, click the **Preserve original event** toggle.
+
+:::{image} images/migration-preserve-raw-event.png
+:alt: Screen showing the Preserve original event toggle
+:class: screenshot
+:::
+
+Do this for every data stream with a raw event you want to preserve.
+
+
+## Migrate custom dashboards [_migrate_custom_dashboards]
+
+Elastic integration packages provide many assets, such as pre-built dashboards, to make it easier for you to visualize your data. In some cases, however, you might have custom dashboards you want to migrate.
+
+Because {{agent}} uses different data streams, the fields exported by an {{agent}} are slightly different from those exported by {{beats}}. Any custom dashboards that you created for {{beats}} need to be modified or recreated to use the new fields.
+
+You have a couple of options for migrating custom dashboards:
+
+* (Recommended) Recreate the custom dashboards based on the new data streams.
+* [Create index aliases to point to the new data streams](#create-index-aliases) and continue using custom dashboards.
+
+
+### Create index aliases to point to data streams [create-index-aliases]
+
+You may want to continue using your custom dashboards if the dashboards installed with an integration are not adequate. To do this, use index aliases to feed data streams into your existing custom dashboards.
+
+For example, custom dashboards that point to `filebeat-` or `metricbeat-` can be aliased to use the equivalent data streams, `logs-` and `metrics-`.
+
+To use aliases:
+
+1. Add a `filebeat` alias to the `logs-` data stream. For example:
+
+   ```json
+   POST _aliases
+   {
+     "actions": [
+       {
+         "add": {
+           "index": "logs-*",
+           "alias": "filebeat-"
+         }
+       }
+     ]
+   }
+   ```
+
+2. Add a `metricbeat` alias to the `metrics-` data stream.
+
+   ```json
+   POST _aliases
+   {
+     "actions": [
+       {
+         "add": {
+           "index": "metrics-*",
+           "alias": "metricbeat-"
+         }
+       }
+     ]
+   }
+   ```
+
+
+::::{important}
+These aliases must be added to both the index template and existing indices.
+::::
+
+
+Note that custom dashboards will show duplicated data until you remove {{beats}} from your hosts.
+
+For more information, see the [Aliases documentation](/manage-data/data-store/aliases.md).
+
+
+## Migrate index lifecycle policies [_migrate_index_lifecycle_policies]
+
+{{ilm-cap}} ({{ilm}}) policies in {{es}} enable you to manage indices according to your performance, resiliency, and retention requirements. To learn more about {{ilm}}, refer to the [{{ilm}} documentation](/manage-data/lifecycle/index-lifecycle-management.md).
+
+{{ilm}} is configured by default in {{beats}} (version 7.0 and later) and in {{agent}} (all versions). To view the index lifecycle policies defined in {{es}}, go to **Management > Index Lifecycle Policies**.
+
+:::{image} images/migration-index-lifecycle-policies.png
+:alt: Screen showing index lifecycle policies
+:class: screenshot
+:::
+
+If you used {{ilm}} with {{beats}}, you’ll see index lifecycle policies like **filebeat** and **metricbeat** in the list.
After migrating to {{agent}}, you’ll see policies named **logs** and **metrics**, which encapsulate the {{ilm}} policies for all `logs-*` and `metrics-*` index templates.
+
+When you migrate from {{beats}} to {{agent}}, you have a couple of options for migrating index policy settings:
+
+* **Modify the newly created index lifecycle policies (recommended).** As mentioned earlier, {{ilm}} is enabled by default when the {{agent}} is installed. Index lifecycle policies are created and added to the index templates for data streams created by integrations.
+
+  If you have existing index lifecycle policies for {{beats}}, it’s highly recommended that you modify the lifecycle policies for {{agent}} to match your previous policy. To do this:
+
+  1. In {{kib}}, go to **{{stack-manage-app}} > Index Lifecycle Policies** and search for a {{beats}} policy, for example, **filebeat**. Under **Linked indices**, notice you can view indices linked to the policy. Click the policy name to see the settings.
+  2. Click the **logs** policy and, if necessary, edit the settings to match the old policy.
+  3. Under **Index Lifecycle Policies**, search for another {{beats}} policy, for example, **metricbeat**.
+  4. Click the **metrics** policy and edit the settings to match the old policy.
+
+  Optionally delete the {{beats}} index lifecycle policies when they are no longer used by an index.
+
+* **Keep the {{beats}} policy and apply it to the index templates created for data streams.** To preserve an existing policy, modify it as needed, and apply it to all the index templates created for data streams:
+
+  1. Under **Index Lifecycle Policies**, find the {{beats}} policy, for example, **filebeat**.
+  2. In the **Actions** column, click the **Add policy to index template** icon.
+  3. Under **Index template**, choose a data stream index template, then add the policy.
+  4. Repeat this procedure, as required, to apply the policy to other data stream index templates.
+
+
+::::{admonition} What if you’re not using {{ilm}} with {{beats}}?
+You can begin to use {{ilm}} now with {{agent}}. Under **Index lifecycle policies**, click the policy name and edit the settings to meet your requirements.
+
+::::
+
+
+
+## Remove {{beats}} from your host [_remove_beats_from_your_host]
+
+Any installed {{beats}} shippers will continue to work until you remove them. This allows you to roll out the migration in stages. You will continue to see duplicated data until you remove {{beats}}.
+
+When you’re satisfied with your {{agent}} deployment, remove {{beats}} from your hosts. All the data collected before installing the {{agent}} will still be available in {{es}} until you delete the data or it’s removed according to the data retention policies you’ve specified for {{ilm}}.
+
+To remove {{beats}} from your host:
+
+1. Stop the service by using the correct command for your system (see the example after this list).
+2. (Optional) Back up your {{beats}} configuration files in case you need to refer to them in the future.
+3. Delete the {{beats}} installation directory. If necessary, stop any orphan processes that are running after you stopped the service.
+4. If you added firewall rules to allow {{beats}} to communicate on your network, remove them.
+5. After you’ve removed all {{beats}}, revoke any API keys or remove privileges for any {{beats}} users created to send data to {{es}}.
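+
+As a sketch, on systemd-based Linux hosts where {{beats}} were installed from DEB or RPM packages, the first steps might look like the following. Adjust the service names, configuration paths, and package manager for your installation method:
+
+```shell
+# Stop and disable the Beats services so they don't restart on reboot
+sudo systemctl stop filebeat metricbeat
+sudo systemctl disable filebeat metricbeat
+
+# Optionally back up the configuration files for future reference
+mkdir -p ~/beats-config-backup
+sudo cp /etc/filebeat/filebeat.yml /etc/metricbeat/metricbeat.yml ~/beats-config-backup/
+
+# Remove the packages (DEB-based example)
+sudo apt-get remove filebeat metricbeat
+```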
+
+
diff --git a/reference/ingestion-tools/fleet/monitor-elastic-agent.md b/reference/ingestion-tools/fleet/monitor-elastic-agent.md
new file mode 100644
index 0000000000..9089946fee
--- /dev/null
+++ b/reference/ingestion-tools/fleet/monitor-elastic-agent.md
@@ -0,0 +1,319 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/monitor-elastic-agent.html
+---
+
+# Monitor Elastic Agents [monitor-elastic-agent]
+
+{{fleet}} provides built-in capabilities for monitoring your fleet of {{agent}}s. In {{fleet}}, you can:
+
+* [View agent status overview](#view-agent-status)
+* [View details for an agent](#view-agent-details)
+* [View agent activity](#view-agent-activity)
+* [View agent logs](#view-agent-logs)
+* [Collect {{agent}} diagnostics](#collect-agent-diagnostics)
+* [View the {{agent}} metrics dashboard](#view-agent-metrics)
+* [Change {{agent}} monitoring settings](#change-agent-monitoring)
+* [Send {{agent}} monitoring data to a remote {{es}} cluster](#external-elasticsearch-monitoring)
+* [Enable alerts and ML jobs based on {{fleet}} and {{agent}} status](#fleet-alerting)
+
+Agent monitoring is turned on by default in the agent policy. Want to turn off agent monitoring to stop collecting logs and metrics? See [Change {{agent}} monitoring settings](#change-agent-monitoring).
+
+Want to receive an alert when your {{agent}} health status changes? Refer to [Enable alerts and ML jobs based on {{fleet}} and {{agent}} status](#fleet-alerting) and our [alerting example](#fleet-alerting-example).
+
+For more detail about how agents communicate their status to {{fleet}}, refer to [{{agent}} health status](/reference/ingestion-tools/fleet/agent-health-status.md).
+
+
+## View agent status overview [view-agent-status]
+
+To view the overall status of your {{fleet}}-managed agents, in {{kib}}, go to **Management → {{fleet}} → Agents**.
+
+:::{image} images/kibana-fleet-agents.png
+:alt: Agents tab showing status of each {agent}
+:class: screenshot
+:::
+
+::::{important}
+The **Agents** tab in **{{fleet}}** displays a maximum of 10,000 agents, shown on 500 pages with 20 rows per page. If you have more than 10,000 agents, we recommend using the filtering and sorting options described in this section to narrow the table to fewer than 10,000 rows.
+::::
+
+
+{{agent}}s can have the following statuses:
+
+| | |
+| --- | --- |
+| **Healthy** | {{agent}}s are enrolled and checked in. There are no agent policy updates or automatic agent binary updates in progress, but the agent binary may still be out of date. {{agent}}s continuously check in to the {{fleet-server}} for required updates. |
+| **Unhealthy** | {{agent}}s have errors or are running in a degraded state. An agent can be reported as `unhealthy` as a result of a configuration problem on the host system. For example, an {{agent}} may not have the correct permissions required to run an integration that has been added to the {{agent}} policy. In this case, you may need to investigate and address the situation. |
+| **Updating** | {{agent}}s are updating the agent policy, updating the binary, or enrolling or unenrolling from {{fleet}}. |
+| **Offline** | {{agent}}s have stayed in an unhealthy status for a period of time. Offline agents’ API keys remain valid. You can still see these {{agent}}s in the {{fleet}} UI and investigate them for further diagnosis if required.
| +| **Inactive** | {{agent}}s have been offline for longer than the time set in your [inactivity timeout](/reference/ingestion-tools/fleet/set-inactivity-timeout.md). These {{agent}}s are valid, but have been removed from the main {{fleet}} UI. | +| **Unenrolled** | {{agent}}s have been manually unenrolled and their API keys have been removed from the system. You can [unenroll](/reference/ingestion-tools/fleet/unenroll-elastic-agent.md) an offline {{agent}} using {{agent}} actions if you determine it’s offline and no longer valid.
These agents need to re-enroll in {{fleet}} to be operational again. | + +The following diagram shows the flow of {{agent}} statuses: + +:::{image} images/agent-status-diagram.png +:alt: Diagram showing the flow of Fleet Agent statuses +::: + +To filter the list of agents by status, click the **Status** dropdown and select one or more statuses. + +:::{image} images/agent-status-filter.png +:alt: Agent Status dropdown with multiple statuses selected +:class: screenshot +::: + +For advanced filtering, use the search bar to create structured queries using [{{kib}} Query Language](elasticsearch://docs/reference/query-languages/kql.md). For example, enter `local_metadata.os.family : "darwin"` to see only agents running on macOS. + +You can also sort the list of agents by host, last activity time, or version, by clicking on the table headings for those fields. + +To perform a bulk action on more than 10,000 agents, you can select the **Select everything on all pages** button. + + +## View details for an agent [view-agent-details] + +In {{fleet}}, you can access the detailed status of an individual agent and the integrations that are associated with it through the agent policy. + +1. In {{fleet}}, open the **Agents** tab. +2. In the **Host** column, click the agent’s name. + +On the **Agent details** tab, the **Overview** pane shows details about the agent and its performance, including its memory and CPU usage, last activity time, and last checkin message. To access metrics visualizations, you can also [View the {{agent}} metrics dashboard](#view-agent-metrics). + +:::{image} images/agent-detail-overview.png +:alt: Agent details overview pane with various metrics +::: + +The **Integrations** pane shows the status of the integrations that have been added to the agent policy. Expand any integration to view its health status. Any errors or warnings are displayed as alerts. + +:::{image} images/agent-detail-integrations-health.png +:alt: Agent details integrations pane with health status +::: + +To gather more detail about a particular error or warning, from the **Actions** menu select **View agent JSON**. The JSON contains all of the raw agent data tracked by Fleet. + +::::{note} +Currently, the **Integrations** pane shows the health status only for agent inputs. Health status is not yet available for agent outputs. +:::: + + + +## View agent activity [view-agent-activity] + +You can view a chronological list of all operations performed by your {{agent}}s. + +On the **Agents** tab, click **Agent activity**. All agent operations are shown, beginning from the most recent, including any in progress operations. + +:::{image} images/agent-activity.png +:alt: Agent activity panel +:class: screenshot +::: + + +## View agent logs [view-agent-logs] + +When {{fleet}} reports an agent status like `Offline` or `Unhealthy`, you might want to view the agent logs to diagnose potential causes. If agent monitoring is configured to collect logs (the default), you can view agent logs in {{fleet}}. + +1. In {{fleet}}, open the **Agents** tab. +2. In the **Host** column, click the agent’s name. +3. On the **Agent details** tab, verify that **Monitor logs** is enabled. If it’s not, refer to [Change {{agent}} monitoring settings](#change-agent-monitoring). +4. Click the **Logs** tab. 
+ + :::{image} images/view-agent-logs.png + :alt: View agent logs under agent details + :class: screenshot + ::: + + +On the **Logs** tab you can filter, search, and explore the agent logs: + +* Use the search bar to create structured queries using [{{kib}} Query Language](elasticsearch://docs/reference/query-languages/kql.md). +* Choose one or more datasets to show logs for specific programs, such as {{filebeat}} or {{fleet-server}}. + + :::{image} images/kibana-fleet-datasets.png + :alt: {{fleet}} showing datasets for logging + :class: screenshot + ::: + +* Change the log level to filter the view by log levels. Want to see debugging logs? Refer to [Change the logging level](#change-logging-level). +* Change the time range to view historical logs. +* Click **Open in Logs** to tail agent log files in real time. For more information about logging, refer to [Tail log files](/solutions/observability/logs/logs-stream.md). + + +## Change the logging level [change-logging-level] + +The logging level for monitored agents is set to `info` by default. You can change the agent logging level, for example, to turn on debug logging remotely: + +1. After navigating to the **Logs** tab as described in [View agent logs](#view-agent-logs), scroll down to find the **Agent logging level** setting. + + :::{image} images/agent-set-logging-level.png + :alt: Logs tab showing the agent logging level setting + :class: screenshot + ::: + +2. Select an **Agent logging level**: + + | | | + | --- | --- | + | `error`
| Logs errors and critical errors. | + | `warning`
| Logs warnings, errors, and critical errors. | + | `info`
| Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors. |
+    | `debug`
| Logs debug messages, including a detailed printout of all events flushed. Also logs informational messages, warnings, errors, and critical errors. |
+
+3. Click **Apply changes** to apply the updated logging level to the agent.
+
+
+## Collect {{agent}} diagnostics [collect-agent-diagnostics]
+
+{{fleet}} provides the ability to remotely generate and gather an {{agent}}'s diagnostics bundle. An agent can gather and upload diagnostics if it is online in a `Healthy` or `Unhealthy` state. To download the diagnostics bundle for local viewing:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. In the **Host** column, click the agent’s name.
+3. Select the **Diagnostics** tab and click the **Request diagnostics .zip** button.
+
+   :::{image} images/collect-agent-diagnostics1.png
+   :alt: Collect agent diagnostics under agent details
+   :class: screenshot
+   :::
+
+4. In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you’d like detailed CPU data.
+
+   :::{image} images/collect-agent-diagnostics2.png
+   :alt: Collect agent diagnostics confirmation pop-up
+   :class: screenshot
+   :::
+
+5. Click the **Request diagnostics** button.
+
+When available, the new diagnostic bundle will be listed on this page, as well as any in-progress or previously collected bundles for the {{agent}}.
+
+Note that the bundles are stored in {{es}} and are removed automatically after 7 days. You can also delete any previously created bundle by clicking the trash icon.
+
+
+## View the {{agent}} metrics dashboard [view-agent-metrics]
+
+When agent monitoring is configured to collect metrics (the default), you can use the **[Elastic Agent] Agent metrics** dashboard in {{kib}} to view details about {{agent}} resource usage, event throughput, and errors. This information can help you identify problems and make decisions about scaling your deployment.
+
+To view agent metrics:
+
+1. In {{fleet}}, open the **Agents** tab.
+2. In the **Host** column, click the agent’s name.
+3. On the **Agent details** tab, verify that **Monitor metrics** is enabled. If it’s not, refer to [Change {{agent}} monitoring settings](#change-agent-monitoring).
+4. Click **View more agent metrics** to navigate to the **[Elastic Agent] Agent metrics** dashboard.
+
+   :::{image} images/selected-agent-metrics-dashboard.png
+   :alt: Screen capture showing {{agent}} metrics
+   :class: screenshot
+   :::
+
+
+The dashboard uses standard {{kib}} visualizations that you can extend to meet your needs.
+
+
+## Change {{agent}} monitoring settings [change-agent-monitoring]
+
+Agent monitoring is turned on by default in the agent policy. To change agent monitoring settings for all agents enrolled in a specific agent policy:
+
+1. In {{fleet}}, open the **Agent policies** tab.
+2. Click the agent policy to edit it, then click **Settings**.
+3. Under **Agent monitoring**, deselect (or select) one or both of these settings: **Collect agent logs** and **Collect agent metrics**.
+4. Under **Advanced monitoring options** you can configure additional settings including an HTTP monitoring endpoint, diagnostics rate limiting, and diagnostics file upload limits. Refer to [configure agent monitoring](/reference/ingestion-tools/fleet/agent-policy.md#change-policy-enable-agent-monitoring) for details.
+5. Save your changes.
+
+To turn off agent monitoring when creating a new agent policy:
+
+1. In the **Create agent policy** flyout, expand **Advanced options**.
+2. 
Under **Agent monitoring**, deselect **Collect agent logs** and **Collect agent metrics**. +3. When you’re done configuring the agent policy, click **Create agent policy**. + + +## Send {{agent}} monitoring data to a remote {{es}} cluster [external-elasticsearch-monitoring] + +You may want to store all of the health and status data about your {{agents}} in a remote {{es}} cluster, so that it’s separate and independent from the deployment where you use {{fleet}} to manage the agents. + +To do so, follow the steps in [Remote {{es}} output](/reference/ingestion-tools/fleet/remote-elasticsearch-output.md). After the new output is configured, follow the steps to update the {{agent}} policy and make sure that the **Output for agent monitoring** setting is enabled. {{agent}} monitoring data will use the remote {{es}} output that you configured. + + +## Enable alerts and ML jobs based on {{fleet}} and {{agent}} status [fleet-alerting] + +You can access the health status of {{fleet}}-managed {{agents}} and other {{fleet}} settings through internal {{fleet}} indices. This enables you to leverage various applications within the {{stack}} that can be triggered by the provided information. For instance, you can now create alerts and machine learning (ML) jobs based on these specific fields. Refer to the [Alerting documentation](/explore-analyze/alerts-cases.md) or see the [example](#fleet-alerting-example) on this page to learn how to define rules that can trigger actions when certain conditions are met. + +This functionality allows you to effectively track an agent’s status, and identify scenarios where it has gone offline, is experiencing health issues, or is facing challenges related to input or output. + +The following datastreams and fields are available. + +Datastream +: `metrics-fleet_server.agent_status-default` + + This data stream publishes the number of {{agents}} in various states. + + **Fields** + + * `@timestamp` + * `fleet.agents.total` - A count of all agents + * `fleet.agents.enrolled` - A count of all agents currently enrolled + * `fleet.agents.unenrolled` - A count of agents currently unenrolled + * `fleet.agents.healthy` - A count of agents currently healthy + * `fleet.agents.offline` - A count of agents currently offline + * `fleet.agents.updating` - A count of agents currently in the process of updating + * `fleet.agents.unhealthy` - A count of agents currently unhealthy + * `fleet.agents.inactive` - A count of agents currently inactive + + ::::{note} + Other fields regarding agent status, based on input and output health, are currently under consideration for future development. + :::: + + +Datastream +: `metrics-fleet_server.agent_versions-default` + + This index publishes a separate document for each version number and a count of enrolled agents only. + + **Fields** + + * `@timestamp` + * `fleet.agent.version` - A keyword field containing the version number + * `fleet.agent.count` - A count of agents on the specified version + + + +### Example: Enable an alert for offline {{agent}}s [fleet-alerting-example] + +You can set up an alert to notify you when one or more {{agent}}s goes offline: + +1. In {{kib}}, navigate to **Management > Stack Management > Rules**. +2. Click **Create rule**. +3. Select **Elasticsearch query** as the rule type. +4. Choose a name for the rule, for example `Elastic Agent status`. +5. Select **KQL or Lucene** as the query type. +6. Select `DATA VIEW metrics-*` as the data view. +7. Define your query, for example: `fleet.agents.offline >= 1`. +8. 
8. Set the alert group, threshold, and time window. For example:

    * WHEN: `count()`
    * OVER: `all documents`
    * IS ABOVE: `0`
    * FOR THE LAST `5 minutes`

    This generates an alert when the `fleet.agents.offline` field reports one or more offline agents within the last five minutes.

9. Set the number of documents to send, for example:

    * SIZE: 100

10. Set **Check every** to the frequency at which the rule condition should be evaluated. The default setting is one minute.
11. Select an action to occur when the rule conditions are met. For example, to set the alert to send an email when an alert occurs, select the Email connector type and specify:

    * Email connector: `Elastic-Cloud-SMTP`
    * Action frequency: `For each alert` and `On check intervals`
    * Run when: `Query matched`
    * To:
    * Subject:

12. Click **Save**.

The new rule will be enabled and an email will be sent to the specified recipient when the alert conditions are met.

From the **Rules** page you can select the rule you created to enable or disable it, and to view the rule details, including a list of active alerts and an alert history.

:::{image} images/elastic-agent-status-rule.png
:alt: A screen capture showing the details for the new Elastic Agent status rule
:::

diff --git a/reference/ingestion-tools/fleet/move_fields-processor.md b/reference/ingestion-tools/fleet/move_fields-processor.md
new file mode 100644
index 0000000000..16339b8dcb
--- /dev/null
+++ b/reference/ingestion-tools/fleet/move_fields-processor.md
@@ -0,0 +1,91 @@
---
navigation_title: "move_fields"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/move_fields-processor.html
---

# Move fields [move_fields-processor]


The `move_fields` processor moves event fields from one object into another. It can also rearrange fields or add a prefix to fields.

The processor extracts fields from `from`, then uses `fields` and `exclude` as filters to choose which fields to move into the `to` field.


## Example [_example_28]

For example, given the following event:

```json
{
  "app": {
    "method": "a",
    "elapsed_time": 100,
    "user_id": 100,
    "message": "i'm a message"
  }
}
```

To move `method` and `elapsed_time` into another object, use this configuration:

```yaml
processors:
  - move_fields:
      from: "app"
      fields: ["method", "elapsed_time"]
      to: "rpc."
```

Your final event will be:

```json
{
  "app": {
    "user_id": 100,
    "message": "i'm a message",
    "rpc": {
      "method": "a",
      "elapsed_time": 100
    }
  }
}
```

To add a prefix to the whole event:

```json
{
  "app": { "method": "a"},
  "cost": 100
}
```

Use this configuration:

```yaml
processors:
  - move_fields:
      to: "my_prefix_"
```

Your final event will be:

```json
{
  "my_prefix_app": { "method": "a"},
  "my_prefix_cost": 100
}
```


## Configuration settings [_configuration_settings_32]

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `from` | no | | The field to extract from. This field and any nested fields will be moved into `to` unless they are filtered out. If empty, indicates the event root. |
| `fields` | no | | Which fields to extract from `from` and move to `to`. An empty list indicates all fields. |
| `ignore_missing` | no | false | Ignore "not found" errors when extracting fields. |
| `exclude` | no | | A list of fields to exclude and not move. |
| `to` | yes | | The destination prefix for the extracted fields, applied under the `from` root (or the event root if `from` is empty). For example, `rpc.` nests the moved fields under `rpc`, while a value such as `my_prefix_` prefixes the field names directly. |

diff --git a/reference/ingestion-tools/fleet/mutual-tls.md b/reference/ingestion-tools/fleet/mutual-tls.md
new file mode 100644
index 0000000000..7b337f75dd
--- /dev/null
+++ b/reference/ingestion-tools/fleet/mutual-tls.md
@@ -0,0 +1,203 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/mutual-tls.html
---

# Elastic Agent deployment models with mutual TLS [mutual-tls]

Mutual Transport Layer Security (mTLS) provides a higher level of security and trust compared to one-way TLS, where only the server presents a certificate. It ensures not only that the server is who it claims to be, but that the client is also authenticated. This is particularly valuable in scenarios where both parties need to establish trust and validate each other’s identities, such as in secure API communication, web services, or remote authentication.

For a summary of the flow by which TLS is established between components using either one-way or mutual TLS, refer to [One-way and mutual TLS certifications flow](/reference/ingestion-tools/fleet/tls-overview.md).

* [Overview](#mutual-tls-overview)
* [On-premise deployments](#mutual-tls-on-premise)
* [{{fleet-server}} on {{ecloud}}](#mutual-tls-cloud)
* [{{fleet-server}} on {{ecloud}} using a proxy](#mutual-tls-cloud-proxy)
* [{{fleet-server}} on-premise and Hosted Elasticsearch Service](#mutual-tls-on-premise-hosted-es)


## Overview [mutual-tls-overview]

With mutual TLS, the following authentication and certificate verification occurs:

* **Client Authentication**: The client presents its digital certificate to the server during the TLS handshake. This certificate is issued by a trusted Certificate Authority (CA) and contains the client’s public key.
* **Server Authentication**: The server also presents its digital certificate to the client, proving its identity and sharing its public key. The server’s certificate is also issued by a trusted CA.
* **Certificate Verification**: Both the client and server verify each other’s certificates by checking the digital signatures against the CAs' public keys (note that the client and server need not use the same CA).

{{fleet}}-managed {{agent}} has two main connections to ensure correct operations:

* Connectivity to {{fleet-server}} (the control plane, to check in, download policies, and similar).
* Connectivity to an output (the data plane, such as {{es}} or {{ls}}).

In order to bootstrap, {{agent}} initially must establish a secure connection to the {{fleet-server}}, which can reside on-premises or in {{ecloud}}. This connectivity verification process ensures the agent’s authenticity. Once verified, the agent receives the policy configuration. This policy download equips the agent with the knowledge of the other components it needs to engage with. For instance, it gains insights into the output destinations it should write data to.

When mTLS is required, the secure setup between {{agent}}, {{fleet}}, and {{fleet-server}} is configured through the following steps:

1. mTLS is enabled.
2. The initial mTLS connection between {{agent}} and {{fleet-server}} is configured when {{agent}} is enrolled, using the parameters passed through the `elastic-agent install` or `elastic-agent enroll` command.
3. Once enrollment has completed, {{agent}} downloads the initial {{agent}} policy from {{fleet-server}}.

    1. If the {{agent}} policy contains mTLS configuration settings, those settings take precedence over those used during enrollment. This includes both the mTLS settings used for connectivity between {{agent}} and {{fleet-server}} (and the {{fleet}} application in {{kib}}, for {{fleet}}-managed {{agent}}), and the settings used between {{agent}} and its specified output.
    2. If the {{agent}} policy does not contain any TLS, mTLS, or proxy configuration settings, these settings remain as they were specified when {{agent}} enrolled. Note that the initial TLS, mTLS, or proxy configuration settings cannot be removed through the {{agent}} policy; they can only be updated.


::::{important}
When you run {{agent}} with the {{elastic-defend}} integration, the [TLS certificates](https://en.wikipedia.org/wiki/X.509) used to connect to {{fleet-server}} and {{es}} need to be generated using [RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)). For a full list of available algorithms to use when configuring TLS or mTLS, see [Configure SSL/TLS for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md). These settings are available for both standalone and {{fleet}}-managed {{agent}}.
::::



## On-premise deployments [mutual-tls-on-premise]

:::{image} images/mutual-tls-on-prem.png
:alt: Diagram of mutual TLS on premise deployment model
:::

Refer to the steps in [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md). To configure mutual TLS, include the following additional parameters when you install {{agent}} and {{fleet-server}}.


### {{agent}} settings [_agent_settings]

During on-premises {{agent}} installation, use the following options:

| | |
| --- | --- |
| `--certificate-authorities` | List of CA certificates that are trusted when {{fleet-server}} connects to {{agent}} |
| `--elastic-agent-cert` | {{agent}} certificate to present to {{fleet-server}} during authentication |
| `--elastic-agent-cert-key` | {{agent}} certificate key to present to {{fleet-server}} |
| `--elastic-agent-cert-key-passphrase` | The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}} |


### {{fleet-server}} settings [_fleet_server_settings]

During on-premises {{fleet-server}} installation, {{fleet-server}} authenticates with both {{es}} and {{agents}}. You can use the following CLI options to facilitate these secure connections:

| | |
| --- | --- |
| `--fleet-server-es-ca` | CA to use for the {{es}} connection |
| `--fleet-server-es-cert` | {{fleet-server}} certificate to present to {{es}} |
| `--fleet-server-es-cert-key` | {{fleet-server}} certificate key to present to {{es}} |
| `--certificate-authorities` | List of CA certificates that are trusted when {{agent}} connects to {{fleet-server}} and when {{fleet-server}} validates the {{agent}} identity. |
| `--fleet-server-cert` | {{fleet-server}} certificate to present to {{agents}} during authentication |
| `--fleet-server-cert-key` | {{fleet-server}}'s private certificate key used to decrypt the certificate |


### {{fleet}} settings [_fleet_settings]

In {{kib}}, navigate to {{fleet}}, open the **Settings** tab, and choose the **Output** that you’d like to configure.
In the **Advanced YAML configuration**, add the following settings: + +| | | +| --- | --- | +| `ssl.certificate_authorities` | List of CA certificates that are trusted when {{fleet-server}} connects to {{agent}} | +| `ssl.certificate` | This certificate will be passed down to all the agents that have this output configured in their policy. This certificate is used by the agent when establishing mTLS to the output.
You can either include the full certificate, in which case all the agents get the same certificate, or point to a local path on the agent where the certificate resides, if the certificates are unique per agent. |
| `ssl.key` | This certificate key will be passed down to all the agents that have this output configured in their policy. The certificate key is used to decrypt the SSL certificate. |

::::{important}
Note the following when you specify these SSL settings:

* The certificate authority, certificate, and certificate key need to be specified as a path to a local file. You cannot specify a directory.
* You can define multiple CAs or paths to CAs.
* Only one certificate and certificate key can be defined.

::::


In the **Advanced YAML configuration**, these settings should be added in the following format:

```shell
ssl.certificate_authorities:
  - /path/to/ca
ssl.certificate: /path/to/cert
ssl.key: /path/to/cert_key
```

:::{image} images/mutual-tls-onprem-advanced-yaml.png
:alt: Screen capture of output advanced yaml settings
:::


## {{fleet-server}} on {{ecloud}} [mutual-tls-cloud]

In this deployment model, all traffic ingress into {{ecloud}} has its TLS connection terminated at the {{ecloud}} boundary. Since this termination is not handled on a per-tenant basis, a client-specific certificate can NOT be used at this point.

:::{image} images/mutual-tls-cloud.png
:alt: Diagram of mutual TLS on cloud deployment model
:::

We currently don’t support mTLS in this deployment model. An alternate deployment model is shown below, in which you deploy your own secure proxy that terminates the TLS connections.


## {{fleet-server}} on {{ecloud}} using a proxy [mutual-tls-cloud-proxy]

In this scenario, where you have access to the proxy, you can configure mTLS between the agent and your proxy.

:::{image} images/mutual-tls-cloud-proxy.png
:alt: Diagram of mutual TLS on cloud deployment model with a proxy
:::


### {{agent}} settings [_agent_settings_2]

During on-premises {{agent}} installation, use the following options:

| | |
| --- | --- |
| `--certificate-authorities` | List of CA certificates that are trusted when {{agent}} connects to {{fleet-server}} or to the proxy between {{agent}} and {{fleet-server}} |
| `--elastic-agent-cert` | {{agent}} certificate to present during authentication to {{fleet-server}} or to the proxy between {{agent}} and {{fleet-server}} |
| `--elastic-agent-cert-key` | {{agent}}'s private certificate key used to decrypt the certificate |
| `--elastic-agent-cert-key-passphrase` | The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}} |


## {{fleet-server}} on-premise and Hosted Elasticsearch Service [mutual-tls-on-premise-hosted-es]

In some scenarios you may want to deploy {{fleet-server}} on your own premises. In this case, you’re able to provide your own certificates and certificate authority to enable mTLS between {{fleet-server}} and {{agent}}.

However, as with the [{{fleet-server}} on {{ecloud}}](#mutual-tls-cloud) use case, the data plane TLS connections terminate at the {{ecloud}} boundary. {{ecloud}} is a multi-tenanted service and therefore can’t provide per-user certificates.

:::{image} images/mutual-tls-fs-onprem.png
:alt: Diagram of mutual TLS with Fleet Server on premise and hosted Elasticsearch Service deployment model
:::

Similar to the {{fleet-server}} on {{ecloud}} use case, a secure proxy can be placed in such an environment to terminate the TLS connections and satisfy the mTLS requirements.

:::{image} images/mutual-tls-fs-onprem-proxy.png
:alt: Diagram of mutual TLS with Fleet Server on premise and hosted Elasticsearch Service deployment model with a proxy
:::


### {{agent}} settings [_agent_settings_3]

During on-premises {{agent}} installation, use the following options, similar to [{{agent}} deployment on premises](#mutual-tls-on-premise):

| | |
| --- | --- |
| `--certificate-authorities` | List of CA certificates that are trusted when {{agent}} connects to {{fleet-server}} |
| `--elastic-agent-cert` | {{agent}} certificate to present to {{fleet-server}} during authentication |
| `--elastic-agent-cert-key` | {{agent}}'s private certificate key used to decrypt the certificate |
| `--elastic-agent-cert-key-passphrase` | The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}} |


### {{fleet-server}} settings [_fleet_server_settings_2]

During on-premises {{fleet-server}} installation, use the following options so that {{fleet-server}} can authenticate itself to the agents and to the secure proxy server:

| | |
| --- | --- |
| `--fleet-server-es-ca` | CA to use for the {{es}} connection, via secure proxy. This CA is used to authenticate the TLS connection from a secure proxy |
| `--certificate-authorities` | List of CA certificates that are trusted when {{agent}} connects to {{fleet-server}} |
| `--fleet-server-cert` | {{fleet-server}} certificate to present to {{agents}} during authentication |
| `--fleet-server-cert-key` | {{fleet-server}}'s private certificate key used to decrypt the certificate |


### {{fleet}} settings [_fleet_settings_2]

This is the same as what’s described for [on premise deployments](#mutual-tls-on-premise). The main difference is that you need to use certificates that are accepted by the secure proxy, as the mTLS is set up between the agent and the secure proxy.

diff --git a/reference/ingestion-tools/fleet/otel-agent.md b/reference/ingestion-tools/fleet/otel-agent.md
new file mode 100644
index 0000000000..c1f4fca5e1
--- /dev/null
+++ b/reference/ingestion-tools/fleet/otel-agent.md
@@ -0,0 +1,19 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/otel-agent.html
---

# Run Elastic Agent as an OTel Collector [otel-agent]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) is a vendor-neutral way to receive, process, and export telemetry data. {{agent}} includes an embedded OTel Collector, enabling you to instrument your applications and infrastructure once, and send data to multiple vendors and backends.

When you run {{agent}} in `otel` mode, it supports the standard OTel Collector configuration format that defines a set of receivers, processors, exporters, and connectors. Logs, metrics, and traces can be ingested using OpenTelemetry data formats.
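
For illustration only, a configuration in this standard format might look like the following minimal sketch. The OTLP receiver, the {{es}} exporter, and the endpoint and API key values are assumptions made for the example, not defaults; adapt them to your environment.

```yaml
# Hypothetical minimal collector configuration for `otel` mode.
# The endpoint and api_key values are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  elasticsearch:
    endpoints: ["https://my-deployment.es.example.com:443"]
    api_key: "<your-api-key>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [elasticsearch]
```
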
For a full overview and steps to configure {{agent}} in `otel` mode, including a guided onboarding, refer to the [Elastic Distributions for OpenTelemetry](https://github.com/elastic/opentelemetry/tree/main) repository in GitHub. You can also check the [`elastic-agent otel` command](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-otel-command) in the {{fleet}} and {{agent}} Command reference.

If you have a currently running {{agent}}, you can [transform it to run as an OTel Collector](/reference/ingestion-tools/fleet/otel-agent.md).

diff --git a/reference/ingestion-tools/fleet/package-signatures.md b/reference/ingestion-tools/fleet/package-signatures.md
new file mode 100644
index 0000000000..120b30d767
--- /dev/null
+++ b/reference/ingestion-tools/fleet/package-signatures.md
@@ -0,0 +1,46 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/package-signatures.html
---

# Package signatures [package-signatures]

All integration packages published by Elastic have package signatures that prevent malicious attackers from tampering with package content. When you install an Elastic integration, {{kib}} downloads the package and verifies the package signature against a public key. If the package is unverified, you can choose to force install it. However, it’s strongly recommended that you avoid installing unverified packages.

::::{important}
By installing an unverified package, you acknowledge that you assume any risk involved.
::::


To force installation of an unverified package:

* When using the {{integrations}} UI, you’ll be prompted to confirm that you want to install the unverified integration. Click **Install anyway** to force installation.
* When using the {{fleet}} API, if you attempt to install an unverified package, you’ll see a 400 response code with a verification failed message. To force installation, set the URL parameter `ignoreUnverified=true`. For more information, refer to [{{kib}} {{fleet}} APIs](/reference/ingestion-tools/fleet/fleet-api-docs.md).

After installation, unverified {{integrations}} are flagged on the **Installed integrations** tab of the {{integrations}} UI.


## Why is package verification necessary? [why-verify-packages]

Integration packages contain instructions, such as ILM policies, transforms, and mappings, that can significantly modify the structure of your {{es}} indices. Relying solely on HTTPS DNS name validation to prove provenance of the package is not a safe practice. A determined attacker could forge a certificate and serve up packages intended to disrupt the target.

Installing verified packages ensures that your integration software has not been corrupted or otherwise tampered with.


## What does it mean for a package to be unverified? [what-does-unverified-mean]

Here are some situations where an integration package will fail verification during installation:

* The package zip file on the Elastic server has been tampered with.
* The user has been maliciously redirected to a fake Elastic package registry.
* The public Elastic key has been compromised, and Elastic has signed packages with an updated key.

Here are some reasons why an integration might be flagged as unverified after installation:

* The integration package failed verification, but was force installed.
* The integration package was installed before {{fleet}} added support for package signature verification.


## What if the Elastic key changes in the future? 
[what-if-key-changes] + +In the unlikely event that the Elastic signing key changes in the future, any verified integration packages will continue to show as verified until new packages are installed or existing ones are upgraded. If this happens, you can set the `xpack.fleet.packageVerification.gpgKeyPath` setting in the `kibana.yml` configuration file to use the new key. diff --git a/reference/ingestion-tools/fleet/processor-parse-aws-vpc-flow-log.md b/reference/ingestion-tools/fleet/processor-parse-aws-vpc-flow-log.md new file mode 100644 index 0000000000..e215eed36e --- /dev/null +++ b/reference/ingestion-tools/fleet/processor-parse-aws-vpc-flow-log.md @@ -0,0 +1,216 @@ +--- +navigation_title: "parse_aws_vpc_flow_log" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/processor-parse-aws-vpc-flow-log.html +--- + +# Parse AWS VPC Flow Log [processor-parse-aws-vpc-flow-log] + + +The `parse_aws_vpc_flow_log` processor decodes AWS VPC Flow log messages. + + +## Example [_example_29] + +The following example configuration decodes the `message` field using the default version 2 VPC flow log format. + +```yaml +processors: + - parse_aws_vpc_flow_log: + format: version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status + field: message +``` + + +## Configuration settings [_configuration_settings_33] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `field` | No | `message` | Source field containing the VPC flow log message. | +| `target_field` | No | `aws.vpcflow` | Target field for the VPC flow log object. This applies only to the original VPC flow log fields. ECS fields are written to the standard location. | +| `format` | Yes | | VPC flow log format. This supports VPC flow log fields from versions 2 through 5. It will accept a string or a list of strings. Each format must have a unique number of fields to enable matching it to a flow log message. | +| `mode` | No | `ecs` | Controls which fields are generated. The available options are:

* `original`: generates the fields specified in the format string.
* `ecs`: maps the original fields to ECS and removes the original fields that are mapped to ECS.
* `ecs_and_original`: maps the original fields to ECS and retains all the original fields.

To learn more, refer to [Modes](#modes).
|
| `ignore_missing` | No | false | Whether to ignore a missing source field. |
| `ignore_failure` | No | false | Whether to ignore failures while parsing and transforming the flow log message. |
| `id` | No | | Instance ID for debugging purposes. |


## Modes [modes]

This section provides more information about available modes.


### Original [_original]

This mode returns the same fields found in the `format` string. It will drop any fields whose value is a dash (`-`). It converts the strings into the appropriate data types. These are the known field names and their data types.

::::{note}
The AWS VPC flow field names use underscores instead of dashes within {{agent}}. You may configure the `format` using field names that contain either.
::::


| VPC Flow Log Field | Data Type |
| --- | --- |
| account_id | string |
| action | string |
| az_id | string |
| bytes | long |
| dstaddr | ip |
| dstport | integer |
| end | timestamp |
| flow_direction | string |
| instance_id | string |
| interface_id | string |
| log_status | string |
| packets | long |
| pkt_dst_aws_service | string |
| pkt_dstaddr | ip |
| pkt_src_aws_service | string |
| pkt_srcaddr | ip |
| protocol | integer |
| region | string |
| srcaddr | ip |
| srcport | integer |
| start | timestamp |
| sublocation_id | string |
| sublocation_type | string |
| subnet_id | string |
| tcp_flags | integer |
| tcp_flags_array* | integer |
| traffic_path | integer |
| type | string |
| version | integer |
| vpc_id | string |


### ECS [_ecs]

This mode maps the original VPC flow log fields into their associated Elastic Common Schema (ECS) fields. It removes the original fields that were mapped to ECS to reduce duplication. These are the field associations. There may be some transformations applied to derive the ECS field.

| VPC Flow Log Field | ECS Field |
| --- | --- |
| account_id | cloud.account.id |
| action | event.outcome |
| action | event.action |
| action | event.type |
| az_id | cloud.availability_zone |
| bytes | network.bytes |
| bytes | source.bytes |
| dstaddr | destination.address |
| dstaddr | destination.ip |
| dstport | destination.port |
| end | @timestamp |
| end | event.end |
| flow_direction | network.direction |
| instance_id | cloud.instance.id |
| packets | network.packets |
| packets | source.packets |
| protocol | network.iana_number |
| protocol | network.transport |
| region | cloud.region |
| srcaddr | network.type |
| srcaddr | source.address |
| srcaddr | source.ip |
| srcport | source.port |
| start | event.start |


### ECS and Original [_ecs_and_original]

This mode maps the fields into ECS and retains all the original fields. Below is an example document produced using `ecs_and_original` mode.

```json
{
  "@timestamp": "2021-03-26T03:29:09Z",
  "aws": {
    "vpcflow": {
      "account_id": "64111117617",
      "action": "REJECT",
      "az_id": "use1-az5",
      "bytes": 1,
      "dstaddr": "10.200.0.0",
      "dstport": 33004,
      "end": "2021-03-26T03:29:09Z",
      "flow_direction": "ingress",
      "instance_id": "i-0axxxxxx1ad77",
      "interface_id": "eni-069xxxxxb7a490",
      "log_status": "OK",
      "packets": 52,
      "pkt_dst_aws_service": "CLOUDFRONT",
      "pkt_dstaddr": "10.200.0.80",
      "pkt_src_aws_service": "AMAZON",
      "pkt_srcaddr": "89.160.20.156",
      "protocol": 17,
      "region": "us-east-1",
      "srcaddr": "89.160.20.156",
      "srcport": 50041,
      "start": "2021-03-26T03:28:12Z",
      "sublocation_id": "fake-id",
      "sublocation_type": "wavelength",
      "subnet_id": "subnet-02d645xxxxxxxdbc0",
      "tcp_flags": 1,
      "tcp_flags_array": [
        "fin"
      ],
      "traffic_path": 1,
      "type": "IPv4",
      "version": 5,
      "vpc_id": "vpc-09676f97xxxxxb8a7"
    }
  },
  "cloud": {
    "account": {
      "id": "64111117617"
    },
    "availability_zone": "use1-az5",
    "instance": {
      "id": "i-0axxxxxx1ad77"
    },
    "region": "us-east-1"
  },
  "destination": {
    "address": "10.200.0.0",
    "ip": "10.200.0.0",
    "port": 33004
  },
  "event": {
    "action": "reject",
    "end": "2021-03-26T03:29:09Z",
    "outcome": "failure",
    "start": "2021-03-26T03:28:12Z",
    "type": [
      "connection",
      "denied"
    ]
  },
  "message": "5 64111117617 eni-069xxxxxb7a490 89.160.20.156 10.200.0.0 50041 33004 17 52 1 1616729292 1616729349 REJECT OK vpc-09676f97xxxxxb8a7 subnet-02d645xxxxxxxdbc0 i-0axxxxxx1ad77 1 IPv4 89.160.20.156 10.200.0.80 us-east-1 use1-az5 wavelength fake-id AMAZON CLOUDFRONT ingress 1",
  "network": {
    "bytes": 1,
    "direction": "ingress",
    "iana_number": "17",
    "packets": 52,
    "transport": "udp",
    "type": "ipv4"
  },
  "related": {
    "ip": [
      "89.160.20.156",
      "10.200.0.0",
      "10.200.0.80"
    ]
  },
  "source": {
    "address": "89.160.20.156",
    "bytes": 1,
    "ip": "89.160.20.156",
    "packets": 52,
    "port": 50041
  }
}
```

diff --git a/reference/ingestion-tools/fleet/processor-syntax.md b/reference/ingestion-tools/fleet/processor-syntax.md
new file mode 100644
index 0000000000..688699bed2
--- /dev/null
+++ b/reference/ingestion-tools/fleet/processor-syntax.md
@@ -0,0 +1,255 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/processor-syntax.html
---

# Processor syntax [processor-syntax]

Specify a list of one or more processors:

* When configuring processors in the standalone {{agent}} configuration file, put this list under the `processors` setting.
* When using the Integrations UI in {{kib}}, put this list in the **Processors** field.

Each processor begins with a dash (-) and includes the processor name, an optional [condition](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions), and configuration settings to pass to the processor:

```yaml
- <processor_name>:
    when:
      <condition>
    <settings>

- <processor_name>:
    when:
      <condition>
    <settings>
```

If a [condition](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) is specified, it must be met in order for the processor to run. If no condition is specified, the processor always runs.

To accomplish complex conditional processing, use the if-then-else processor configuration. This configuration allows you to run multiple processors based on a single condition. For example:

```yaml
- if:
    <condition>
  then: <1>
    - <processor_name>:
        <settings>
    - <processor_name>:
        <settings>
    ...
  else: <2>
    - <processor_name>:
        <settings>
    - <processor_name>:
        <settings>
```

1. `then` must contain a single processor or a list of processors that will execute when the condition is `true`.
+2. `else` is optional. It can contain a single processor or a list of processors that will execute when the condition is `false`. + + + +## Conditions [processor-conditions] + +Each condition receives a field to compare. You can specify multiple fields under the same condition by using `AND` between the fields (for example, `field1 AND field2`). + +For each field, you can specify a simple field name or a nested map, for example `dns.question.name`. + +Refer to the [integrations documentation](integration-docs://docs/reference/index.md) for a list of all fields created by a specific integration. + +The supported conditions are: + +* [`equals`](#processor-condition-equals) +* [`contains`](#processor-condition-contains) +* [`regexp`](#processor-condition-regexp) +* [`range`](#processor-condition-range) +* [`network`](#processor-condition-network) +* [`has_fields`](#processor-condition-has_fields) +* [`or`](#processor-condition-or) +* [`and`](#processor-condition-and) +* [`not`](#processor-condition-not) + + +### `equals` [processor-condition-equals] + +With the `equals` condition, you can check if a field has a certain value. The condition accepts only an integer or string value. + +For example, the following condition checks if the response code of the HTTP transaction is 200: + +```yaml +equals: + http.response.code: 200 +``` + + +### `contains` [processor-condition-contains] + +The `contains` condition checks if a value is part of a field. The field can be a string or an array of strings. The condition accepts only a string value. + +For example, the following condition checks if an error is part of the transaction status: + +```yaml +contains: + status: "Specific error" +``` + + +### `regexp` [processor-condition-regexp] + +The `regexp` condition checks the field against a regular expression. The condition accepts only strings. + +For example, the following condition checks if the process name starts with `foo`: + +```yaml +regexp: + system.process.name: "^foo.*" +``` + + +### `range` [processor-condition-range] + +The `range` condition checks if the field is in a certain range of values. The condition supports `lt`, `lte`, `gt` and `gte`. The condition accepts only integer or float values. + +For example, the following condition checks for failed HTTP transactions by comparing the `http.response.code` field with 400. + +```yaml +range: + http.response.code: + gte: 400 +``` + +This can also be written as: + +```yaml +range: + http.response.code.gte: 400 +``` + +The following condition checks if the CPU usage in percentage has a value between 0.5 and 0.8. + +```yaml +range: + system.cpu.user.pct.gte: 0.5 + system.cpu.user.pct.lt: 0.8 +``` + + +### `network` [processor-condition-network] + +The `network` condition checks if the field is in a certain IP network range. Both IPv4 and IPv6 addresses are supported. The network range may be specified using CIDR notation, like "192.0.2.0/24" or "2001:db8::/32", or by using one of these named ranges: + +* `loopback` - Matches loopback addresses in the range of `127.0.0.0/8` or `::1/128`. +* `unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, and RFC 4291 with the exception of the IPv4 broadcast address (`255.255.255.255`). This includes private address ranges. +* `multicast` - Matches multicast addresses. +* `interface_local_multicast` - Matches IPv6 interface-local multicast addresses. +* `link_local_unicast` - Matches link-local unicast addresses. +* `link_local_multicast` - Matches link-local multicast addresses. 
* `private` - Matches private address ranges defined in RFC 1918 (IPv4) and RFC 4193 (IPv6).
* `public` - Matches addresses that are not loopback, unspecified, IPv4 broadcast, link-local unicast, link-local multicast, interface-local multicast, or private.
* `unspecified` - Matches unspecified addresses (either the IPv4 address "0.0.0.0" or the IPv6 address "::").

The following condition returns true if the `source.ip` value is within the private address space.

```yaml
network:
  source.ip: private
```

This condition returns true if the `destination.ip` value is within the IPv4 range of `192.168.1.0` - `192.168.1.255`.

```yaml
network:
  destination.ip: '192.168.1.0/24'
```

And this condition returns true when `destination.ip` is within any of the given subnets.

```yaml
network:
  destination.ip: ['192.168.1.0/24', '10.0.0.0/8', loopback]
```


### `has_fields` [processor-condition-has_fields]

The `has_fields` condition checks if all the given fields exist in the event. The condition accepts a list of string values denoting the field names.

For example, the following condition checks if the `http.response.code` field is present in the event.

```yaml
has_fields: ['http.response.code']
```


### `or` [processor-condition-or]

The `or` operator receives a list of conditions.

```yaml
or:
  - <condition1>
  - <condition2>
  - <condition3>
  ...
```

For example, to configure the condition `http.response.code = 304 OR http.response.code = 404`:

```yaml
or:
  - equals:
      http.response.code: 304
  - equals:
      http.response.code: 404
```


### `and` [processor-condition-and]

The `and` operator receives a list of conditions.

```yaml
and:
  - <condition1>
  - <condition2>
  - <condition3>
  ...
```

For example, to configure the condition `http.response.code = 200 AND status = OK`:

```yaml
and:
  - equals:
      http.response.code: 200
  - equals:
      status: OK
```

To configure a condition like `<condition1> OR <condition2> AND <condition3>`:

```yaml
or:
  - <condition1>
  - and:
      - <condition2>
      - <condition3>
```


### `not` [processor-condition-not]

The `not` operator receives the condition to negate.

```yaml
not:
  <condition>
```

For example, to configure the condition `NOT status = OK`:

```yaml
not:
  equals:
    status: OK
```

diff --git a/reference/ingestion-tools/fleet/providers.md b/reference/ingestion-tools/fleet/providers.md
new file mode 100644
index 0000000000..88dd181eea
--- /dev/null
+++ b/reference/ingestion-tools/fleet/providers.md
@@ -0,0 +1,95 @@
---
navigation_title: "Providers"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/providers.html
---

# Configure providers for standalone {{agent}}s [providers]


Providers supply the key-value pairs that are used for variable substitution and conditionals. Each provider’s keys are automatically prefixed with the name of the provider in the context of the {{agent}}.

For example, if a provider named `foo` provides `{"key1": "value1", "key2": "value2"}`, the key-value pairs are placed under `{"foo" : {"key1": "value1", "key2": "value2"}}`. To reference the keys, use `{{foo.key1}}` and `{{foo.key2}}`.


## Provider configuration [_provider_configuration]

The provider configuration is specified under the top-level `providers` key in the `elastic-agent.yml` configuration. All registered providers are enabled by default. If a provider cannot connect, no mappings are produced.

The following example shows two providers (`local` and `local_dynamic`) that supply custom keys:

```yaml
providers:
  local:
    vars:
      foo: bar
  local_dynamic:
    vars:
      - item: key1
      - item: key2
```

Providers are enabled automatically if a provider is referenced in an {{agent}} policy. All providers are prefixed without name collisions; the name of the provider is the key in the configuration.

```yaml
providers:
  docker:
    enabled: false
```

{{agent}} supports two broad types of providers: [context](#context-providers) and [dynamic](#dynamic-providers).


### Context providers [context-providers]

Context providers give the current context of the running {{agent}}, for example, agent information (ID, version), host information (hostname, IP addresses), and environment information (environment variables).

They can only provide a single key-value mapping. Think of them as singletons; an update of a key-value mapping results in a re-evaluation of the entire configuration. These providers are normally very static, but not required. A value can change, which results in re-evaluation.

Context providers use the Elastic Common Schema (ECS) naming to ensure consistency and understanding throughout documentation and projects.

{{agent}} supports the following context providers:

* [Local](/reference/ingestion-tools/fleet/local-provider.md)
* [Agent Provider](/reference/ingestion-tools/fleet/agent-provider.md)
* [Host Provider](/reference/ingestion-tools/fleet/host-provider.md)
* [Env Provider](/reference/ingestion-tools/fleet/env-provider.md)
* [Kubernetes Secrets Provider](/reference/ingestion-tools/fleet/kubernetes_secrets-provider.md)
* [Kubernetes Leader Election Provider](/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md)


### Dynamic Providers [dynamic-providers]

Dynamic providers give an array of multiple key-value mappings. Each key-value mapping is combined with the previous context provider’s key and value mapping, which provides a new unique mapping that is used to generate a configuration.

{{agent}} supports the following dynamic providers:

* [Local Dynamic Provider](/reference/ingestion-tools/fleet/local-dynamic-provider.md)
* [Docker Provider](/reference/ingestion-tools/fleet/docker-provider.md)
* [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md)


### Disabling Providers By Default [disable-providers-by-default]

All registered providers are disabled by default until they are referenced in a policy.

You can disable all providers even if they are referenced in a policy by setting `agent.providers.initial_default: false`.

The following configuration disables all providers from running except for the docker provider, if it becomes referenced in the policy:

```yaml
agent.providers.initial_default: false
providers:
  docker:
    enabled: true
```

diff --git a/reference/ingestion-tools/fleet/rate_limit-processor.md b/reference/ingestion-tools/fleet/rate_limit-processor.md
new file mode 100644
index 0000000000..023652f8cc
--- /dev/null
+++ b/reference/ingestion-tools/fleet/rate_limit-processor.md
@@ -0,0 +1,53 @@
---
navigation_title: "rate_limit"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/rate_limit-processor.html
---

# Rate limit the flow of events [rate_limit-processor]

::::{warning}
This functionality is in beta and is subject to change.
The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::



The `rate_limit` processor limits the throughput of events based on the specified configuration.

In the current implementation, rate-limited events are dropped. Future implementations may allow rate-limited events to be handled differently.


## Examples [_examples_9]

```yaml
- rate_limit:
    limit: "10000/m"
```

```yaml
- rate_limit:
    fields:
      - "cloudfoundry.org.name"
    limit: "400/s"
```

```yaml
- if.equals.cloudfoundry.org.name: "acme"
  then:
    - rate_limit:
        limit: "500/s"
```


## Configuration settings [_configuration_settings_34]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `limit` | Yes | | The rate limit. Supported time units for the rate are `s` (per second), `m` (per minute), and `h` (per hour). |
| `fields` | No | | List of fields. The rate limit will be applied to each distinct value derived by combining the values of these fields. |

diff --git a/reference/ingestion-tools/fleet/registered_domain-processor.md b/reference/ingestion-tools/fleet/registered_domain-processor.md
new file mode 100644
index 0000000000..20f4267fa9
--- /dev/null
+++ b/reference/ingestion-tools/fleet/registered_domain-processor.md
@@ -0,0 +1,44 @@
---
navigation_title: "registered_domain"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/registered_domain-processor.html
---

# Registered Domain [registered_domain-processor]


The `registered_domain` processor reads a field containing a hostname and then writes the "registered domain" contained in the hostname to the target field. For example, given `www.google.co.uk`, the processor would output `google.co.uk`. In other words, the "registered domain" is the effective top-level domain (`co.uk`) plus one level (`google`). Optionally, the processor can store the rest of the domain, the `subdomain`, into another target field.

This processor uses the Mozilla Public Suffix list to determine the value.


## Example [_example_30]

```yaml
  - registered_domain:
      field: dns.question.name
      target_field: dns.question.registered_domain
      target_etld_field: dns.question.top_level_domain
      target_subdomain_field: dns.question.subdomain
      ignore_missing: true
      ignore_failure: true
```


## Configuration settings [_configuration_settings_35]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | Yes | | Source field containing a fully qualified domain name (FQDN). |
| `target_field` | Yes | | Target field for the registered domain value. |
| `target_etld_field` | No | | Target field for the effective top-level domain value. |
| `target_subdomain_field` | No | | Target field for the subdomain value. |
| `ignore_missing` | No | `false` | Whether to ignore errors when the source field is missing. |
| `ignore_failure` | No | `false` | Whether to ignore all errors produced by the processor. |
| `id` | No | | Identifier for this processor instance. Useful for debugging. |

diff --git a/reference/ingestion-tools/fleet/remote-elasticsearch-output.md b/reference/ingestion-tools/fleet/remote-elasticsearch-output.md
new file mode 100644
index 0000000000..9d92935d06
--- /dev/null
+++ b/reference/ingestion-tools/fleet/remote-elasticsearch-output.md
@@ -0,0 +1,60 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/remote-elasticsearch-output.html
---

# Remote Elasticsearch output [remote-elasticsearch-output]

Beginning in version 8.12.0, you can send {{agent}} data to a remote {{es}} cluster. This is especially useful for data that you want to keep separate and independent from the deployment where you use {{fleet}} to manage the agents.

A remote {{es}} cluster supports the same [output settings](/reference/ingestion-tools/fleet/es-output-settings.md) as your main {{es}} cluster.

::::{warning}
A bug has been found that causes {{elastic-defend}} response actions to stop working when a remote {{es}} output is configured for an agent. This bug is currently being investigated and is expected to be resolved in an upcoming release.
::::


::::{note}
Using a remote {{es}} output with a target cluster that has [traffic filters](/deploy-manage/security/traffic-filtering.md) enabled is not currently supported.
::::


To configure a remote {{es}} cluster for your {{agent}} data:

1. In {{fleet}}, open the **Settings** tab.
2. In the **Outputs** section, select **Add output**.
3. In the **Add new output** flyout, provide a name for the output and select **Remote Elasticsearch** as the output type.
4. In the **Hosts** field, add the URL that agents should use to access the remote {{es}} cluster.

    1. To find the remote host address, in the remote cluster open {{kib}} and go to **Management → {{fleet}} → Settings**.
    2. Copy the **Hosts** value for the default output.
    3. Back in your main cluster, paste the value you copied into the output **Hosts** field.

5. Create a service token to access the remote cluster.

    1. Below the **Service Token** field, copy the API request.
    2. In the remote cluster, open the {{kib}} menu and go to **Management → Dev Tools**.
    3. Run the API request.
    4. Copy the value for the generated token.
    5. Back in your main cluster, paste the value you copied into the output **Service Token** field.

    ::::{note}
    To prevent unauthorized access, the {{es}} Service Token is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the password as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher. This setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://docs/reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.
    ::::

6. Choose whether or not the remote output should be the default for agent integrations or for agent monitoring data. 
When set, {{agent}}s use this output to send data if no other output is set in the [agent policy](/reference/ingestion-tools/fleet/agent-policy.md). +7. Select which [performance tuning settings](/reference/ingestion-tools/fleet/es-output-settings.md#es-output-settings-performance-tuning-settings) you’d prefer in order to optimize {{agent}} for throughput, scale, or latency, or leave the default `balanced` setting. +8. Add any [advanced YAML configuration settings](/reference/ingestion-tools/fleet/es-output-settings.md#es-output-settings-yaml-config) that you’d like for the output. +9. Click **Save and apply settings**. + +After the output is created, you can update an {{agent}} policy to use the new remote {{es}} cluster: + +1. In {{fleet}}, open the **Agent policies** tab. +2. Click the agent policy to edit it, then click **Settings**. +3. To send integrations data, set the **Output for integrations** option to use the output that you configured in the previous steps. +4. To send {{agent}} monitoring data, set the **Output for agent monitoring** option to use the output that you configured in the previous steps. +5. Click **Save changes**. + +The remote {{es}} cluster is now configured. + +As a final step before using the remote {{es}} output, you need to make sure that for any integrations that have been [added to your {{agent}} policy](/reference/ingestion-tools/fleet/add-integration-to-policy.md), the integration assets have been installed on the remote {{es}} cluster. Refer to [Install and uninstall {{agent}} integration assets](/reference/ingestion-tools/fleet/install-uninstall-integration-assets.md) for the steps. diff --git a/reference/ingestion-tools/fleet/rename-processor.md b/reference/ingestion-tools/fleet/rename-processor.md new file mode 100644 index 0000000000..7d9508491b --- /dev/null +++ b/reference/ingestion-tools/fleet/rename-processor.md @@ -0,0 +1,46 @@ +--- +navigation_title: "rename" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/rename-processor.html +--- + +# Rename fields from events [rename-processor] + + +The `rename` processor specifies a list of fields to rename. This processor cannot be used to overwrite fields. To overwrite fields, either first rename the target field, or use the `drop_fields` processor to drop the field, and then rename the field. + +::::{tip} +You can rename fields to resolve field name conflicts. For example, if an event has two fields, `c` and `c.b` (where `b` is a subfield of `c`), assigning scalar values results in an {{es}} error at ingest time. The assignment `{"c": 1,"c.b": 2}` would result in an error because `c` is an object and cannot be assigned a scalar value. To prevent this conflict, rename `c` to `c.value` before assigning values. +:::: + + + +## Example [_example_31] + +```yaml + - rename: + fields: + - from: "a.g" + to: "e.d" + ignore_missing: false + fail_on_error: true +``` + + +## Configuration settings [_configuration_settings_36] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `fields` | Yes | | Contains:

* `from: "old-key"`, where `from` is the original field name. You can use the `@metadata.` prefix in this field to rename keys in the event metadata instead of event fields.
* `to: "new-key"`, where `to` is the target field name.
| +| `ignore_missing` | No | `false` | Whether to ignore missing keys. If `true`, no error is logged when a key that should be renamed is missing. | +| `fail_on_error` | No | `true` | Whether to fail renaming if an error occurs. If `true` and an error occurs, the renaming of fields is stopped, and the original event is returned. If `false`, renaming continues even if an error occurs during renaming. | + +See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions. + +You can specify multiple `rename` processors under the `processors` section. + diff --git a/reference/ingestion-tools/fleet/replace-fields.md b/reference/ingestion-tools/fleet/replace-fields.md new file mode 100644 index 0000000000..e384e38cd0 --- /dev/null +++ b/reference/ingestion-tools/fleet/replace-fields.md @@ -0,0 +1,49 @@ +--- +navigation_title: "replace" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/replace-fields.html +--- + +# Replace fields from events [replace-fields] + + +The `replace` processor takes a list of fields to search for a matching value and replaces the matching value with a specified string. + +The `replace` processor cannot be used to create a completely new value. + +::::{tip} +You can use this processor to truncate a field value or replace it with a new string value. You can also use this processor to mask PII information. +:::: + + + +## Example [_example_32] + +The following example changes the path from `/usr/bin` to `/usr/local/bin`: + +```yaml + - replace: + fields: + - field: "file.path" + pattern: "/usr/" + replacement: "/usr/local/" + ignore_missing: false + fail_on_error: true +``` + + +## Configuration settings [_configuration_settings_37] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `fields` | Yes | | List of one or more items. Each item contains a `field: field-name`, `pattern: regex-pattern`, and `replacement: replacement-string`, where:

* `field` is the original field name. You can use the `@metadata.` prefix in this field to replace values in the event metadata instead of event fields.
* `pattern` is the regex pattern to match the field’s value.
* `replacement` is the replacement string to use to update the field’s value.
|
| `ignore_missing` | No | `false` | Whether to ignore missing fields. If `true`, no error is logged if the specified field is missing. |
| `fail_on_error` | No | `true` | Whether to fail replacement of field values if an error occurs. If `true` and there’s an error, the replacement of field values is stopped, and the original event is returned. If `false`, replacement continues even if an error occurs during replacement. |

See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions.

diff --git a/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md
new file mode 100644
index 0000000000..a889d7cb77
--- /dev/null
+++ b/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md
@@ -0,0 +1,27 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/running-on-aks-managed-by-fleet.html
---

# Run Elastic Agent on Azure AKS managed by Fleet [running-on-aks-managed-by-fleet]

Please follow the steps on the [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) page.


## Important notes: [_important_notes_4]

On managed Kubernetes solutions like AKS, {{agent}} has no access to several data sources. Below is the list of data that is not available:

1. Metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components are not available. Consequently, metrics are not available for the `kube-scheduler` and `kube-controller-manager` components, and the respective **dashboards** will not be populated with data.
2. **Audit logs** are available only on Kubernetes control plane nodes, and hence cannot be collected by {{agent}}.
3. Fields `orchestrator.cluster.name` and `orchestrator.cluster.url` are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for default Kubernetes dashboards, shipped with the [Kubernetes integration](integration-docs://docs/reference/kubernetes.md).

    To work around this, you can use the [`add_fields` processor](beats://docs/reference/filebeat/add-fields.md) to add the `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each [Kubernetes integration](integration-docs://docs/reference/kubernetes.md) component:

    ```yaml
    - add_fields:
        target: orchestrator.cluster
        fields:
          name: clusterName
          url: clusterURL
    ```

diff --git a/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md
new file mode 100644
index 0000000000..900cd0e65a
--- /dev/null
+++ b/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md
@@ -0,0 +1,27 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/running-on-eks-managed-by-fleet.html
---

# Run Elastic Agent on Amazon EKS managed by Fleet [running-on-eks-managed-by-fleet]

Please follow the steps on the [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) page.


## Important notes: [_important_notes_3]

On managed Kubernetes solutions like EKS, {{agent}} has no access to several data sources. Below is the list of data that is not available:

+2. **Audit logs**, because they are available only on Kubernetes control plane nodes and therefore cannot be collected by {{agent}}.
+3. The fields `orchestrator.cluster.name` and `orchestrator.cluster.url`, which are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for the default Kubernetes dashboards shipped with the [Kubernetes integration](integration-docs://docs/reference/kubernetes.md).
+
+    To work around this, you can use the [`add_fields` processor](beats://docs/reference/filebeat/add-fields.md) to add the `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each [Kubernetes integration](integration-docs://docs/reference/kubernetes.md) component:
+
+    ```yaml
+    - add_fields:
+        target: orchestrator.cluster
+        fields:
+          name: clusterName
+          url: clusterURL
+    ```
diff --git a/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md
new file mode 100644
index 0000000000..29e4cd8eac
--- /dev/null
+++ b/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md
@@ -0,0 +1,29 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/running-on-gke-managed-by-fleet.html
+---
+
+# Run Elastic Agent on GKE managed by Fleet [running-on-gke-managed-by-fleet]
+
+To run {{agent}} on GKE, follow the steps on the [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md) page.
+
+
+### Important notes [_important_notes_2]
+
+On managed Kubernetes solutions like GKE, {{agent}} has no access to several data sources. The following data is not available:
+
+1. Metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components. Consequently, metrics are not available for the `kube-scheduler` and `kube-controller-manager` components, and the respective **dashboards** are not populated with data.
+2. **Audit logs**, because they are available only on Kubernetes control plane nodes and therefore cannot be collected by {{agent}}.
+
+## Autopilot GKE [_autopilot_gke]
+
+Although Autopilot removes many administration challenges (such as workload management and deployment automation for Kubernetes clusters), it also restricts access to specific namespaces (for example, `kube-system`) and host paths, which is why the default {{agent}} manifests do not work there.
+
+Specific manifests are provided to cover [Autopilot environments](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-gke-autopilot.md).
+
+`kube-state-metrics` must also be installed in a namespace other than its default (`kube-system`), because access to `kube-system` is not allowed.
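+
+As a sketch of one way to do this (assuming you use Helm and the community `prometheus-community/kube-state-metrics` chart; the namespace name is arbitrary):
+
+```sh
+# Create a dedicated namespace for kube-state-metrics (any name other than kube-system works)
+kubectl create namespace ksm
+
+# Install the kube-state-metrics chart into that namespace
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm install kube-state-metrics prometheus-community/kube-state-metrics --namespace ksm
+```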
+
+## Additional resources [_additonal_resources]
+
+* Blog: [Using Elastic to observe GKE Autopilot clusters](https://www.elastic.co/blog/elastic-observe-gke-autopilot-clusters)
+* Elastic speakers webinar: ["Get full Kubernetes visibility into GKE Autopilot with Elastic Observability"](https://www.elastic.co/virtual-events/get-full-kubernetes-visibility-into-gke-autopilot-with-elastic-observability)
diff --git a/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md
new file mode 100644
index 0000000000..d1325ca9f6
--- /dev/null
+++ b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md
@@ -0,0 +1,214 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-managed-by-fleet.html
+---
+
+# Run Elastic Agent on Kubernetes managed by Fleet [running-on-kubernetes-managed-by-fleet]
+
+## What you need [_what_you_need_2]
+
+* [kubectl installed](https://kubernetes.io/docs/tasks/tools/).
+* {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it.
+
+    ::::{tab-set}
+
+    :::{tab-item} Elasticsearch Service
+
+    To get started quickly, spin up a deployment of our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service). The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
+    :::
+
+    :::{tab-item} Self-managed
+
+    To install and run {{es}} and {{kib}}, see [Installing the {{stack}}](/deploy-manage/deploy/self-managed/deploy-cluster.md).
+    :::
+
+    ::::
+
+* `kube-state-metrics`.
+
+    You need to deploy `kube-state-metrics` to get the metrics about the state of the objects on the cluster (see the [Kubernetes deployment](https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment) docs). You can do that by first downloading the project:
+
+    ```sh
+    gh repo clone kubernetes/kube-state-metrics
+    ```
+
+    And then deploying it:
+
+    ```sh
+    kubectl apply -k kube-state-metrics
+    ```
+
+    ::::{warning}
+    On managed Kubernetes solutions, such as AKS, GKE, or EKS, {{agent}} does not have the required permissions to collect metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components, like `kube-scheduler` and `kube-controller-manager`. Audit logs are also available only on Kubernetes control plane nodes, and hence cannot be collected by {{agent}}. Refer to the [Kubernetes integration documentation](integration-docs://docs/reference/kubernetes.md#kubernetes-scheduler-and-controllermanager) for more information. For details about specific cloud providers, refer to [Run {{agent}} on Azure AKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md), [Run {{agent}} on GKE managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md), and [Run {{agent}} on Amazon EKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md).
+    ::::
+
+
+
+### Step 1: Download the {{agent}} manifest [_step_1_download_the_agent_manifest]
+
+::::{note}
+You can find {{agent}} Docker images [here](https://www.docker.elastic.co/r/elastic-agent/elastic-agent).
+::::
+
+
+Download the manifest file:
+
+```sh
+curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/master/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
+```
+
+::::{note}
+You might need to adjust the [resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) of the {{agent}} container in the manifest. Container resource usage depends on the number of data streams and the environment size.
+::::
+
+
+This manifest includes the Kubernetes integration to collect Kubernetes metrics and the System integration to collect system-level metrics and logs from nodes.
+
+The {{agent}} is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to ensure that there is a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes. These metrics are accessed through the deployed `kube-state-metrics`. Notice that everything is deployed under the `kube-system` namespace by default. To change the namespace, modify the manifest file.
+
+Moreover, one of the Pods in the DaemonSet will constantly hold a *leader lock* which makes it responsible for handling cluster-wide monitoring. You can find more information about leader election configuration options at [leader election provider](/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md). The leader pod will retrieve metrics that are unique for the whole cluster, such as Kubernetes events or [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics).
+
+For Kubernetes Security Posture Management (KSPM) purposes, the {{agent}} requires read access to various types of Kubernetes resources, node processes, and files. To achieve this, read permissions are granted to the {{agent}} for the necessary resources, and volumes from the hosting node's file system are mounted to allow accessibility to the {{agent}} pods.
+
+::::{tip}
+In a large Kubernetes cluster, the Pod that collects cluster-level metrics might require more runtime resources than you would like to dedicate to every pod in the DaemonSet, and the leader that collects the cluster-wide metrics may face performance issues if it is under-resourced. In this case, consider avoiding a single DaemonSet with the leader election strategy and instead run a dedicated standalone {{agent}} instance for collecting cluster-wide metrics, using a Deployment in addition to the DaemonSet that collects metrics for each node. Both the Deployment and the DaemonSet can then be resourced independently and appropriately. For more information, check the [Scaling {{agent}} on {{k8s}}](/reference/ingestion-tools/fleet/scaling-on-kubernetes.md) page.
+::::
+
+
+
+### Step 2: Configure {{agent}} policy [_step_2_configure_agent_policy]
+
+The {{agent}} needs to be assigned to a policy to enable the proper inputs. To achieve Kubernetes observability, the policy needs to include the Kubernetes integration. Refer to [Create a policy](/reference/ingestion-tools/fleet/agent-policy.md#create-a-policy) and [Add an integration to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-integration) to learn how to configure the [Kubernetes integration](integration-docs://docs/reference/kubernetes.md).
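+
+If you prefer to automate this step, agent policies can also be created through the {{fleet}} API in {{kib}}. A minimal sketch, in which the host, credentials, and policy name are placeholder assumptions:
+
+```sh
+# Create an agent policy named "kubernetes-policy" (hypothetical host and credentials)
+curl -X POST "https://your-kibana-host:5601/api/fleet/agent_policies" \
+  -H "kbn-xsrf: true" \
+  -H "Content-Type: application/json" \
+  -u elastic:your-password \
+  -d '{"name": "kubernetes-policy", "namespace": "default", "monitoring_enabled": ["logs", "metrics"]}'
+```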
+
+
+### Step 3: Enroll {{agent}} to the policy [_step_3_enroll_agent_to_the_policy]
+
+Enrollment is the action of registering a specific {{agent}} with a running {{fleet-server}}.
+
+{{agent}} is enrolled to a running {{fleet-server}} by using the `FLEET_URL` parameter. Additionally, the `FLEET_ENROLLMENT_TOKEN` parameter is used to connect {{agent}} to a specific {{agent}} policy.
+
+A new `FLEET_ENROLLMENT_TOKEN` is created upon new policy creation and is inserted into the {{agent}} manifest during the guided installation.
+
+For more information, refer to [Enrollment tokens](/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md).
+
+To specify a different destination or credentials, change the following parameters in the manifest file:
+
+```yaml
+- name: FLEET_URL
+  value: "https://fleet-server_url:port" <1>
+- name: FLEET_ENROLLMENT_TOKEN
+  value: "token" <2>
+- name: FLEET_SERVER_POLICY_ID
+  value: "fleet-server-policy" <3>
+- name: KIBANA_HOST
+  value: "" <4>
+- name: KIBANA_FLEET_USERNAME
+  value: "" <5>
+- name: KIBANA_FLEET_PASSWORD
+  value: "" <6>
+```
+
+1. URL to enroll the {{fleet-server}} into. You can find it in {{kib}}. Select **Management → {{fleet}} → Fleet Settings**, and copy the {{fleet-server}} host URL.
+2. The token to use for enrollment. Close the flyout panel and select **Enrollment tokens**. Find the agent policy you created before to enroll {{agent}} into, and display and copy the secret token.
+3. The policy ID for {{fleet-server}} to use on itself.
+4. The {{kib}} host.
+5. The basic authentication username used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}.
+6. The basic authentication password used to connect to {{kib}} and retrieve a `service_token` to enable {{fleet}}.
+
+
+If you need to run {{fleet-server}} as well, add these environment variables to the manifest file:
+
+```yaml
+- name: FLEET_SERVER_ENABLE
+  value: "true" <1>
+- name: FLEET_SERVER_ELASTICSEARCH_HOST
+  value: "" <2>
+- name: FLEET_SERVER_SERVICE_TOKEN
+  value: "" <3>
+```
+
+1. Set to `true` to bootstrap {{fleet-server}} on this {{agent}}. This automatically forces {{fleet}} enrollment as well.
+2. The {{es}} host for {{fleet-server}} to communicate with, for example `http://elasticsearch:9200`.
+3. Service token to use for communication with {{es}} and {{kib}}.
+
+
+Refer to [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md) for all available options.
+
+
+### Step 4: Configure tolerations [_step_4_configure_tolerations]
+
+Kubernetes control plane nodes can use [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to limit the workloads that can run on them. The manifest for {{agent}} defines tolerations to run on these nodes. Agents running on control plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes. To prevent {{agent}} from running on control plane nodes, remove the following part of the DaemonSet spec:
+
+```yaml
+spec:
+  # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
+  # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes.
+  tolerations:
+  - key: node-role.kubernetes.io/control-plane
+    effect: NoSchedule
+  - key: node-role.kubernetes.io/master
+    effect: NoSchedule
+```
+
+Both tolerations have the same effect; `node-role.kubernetes.io/master` is [deprecated as of Kubernetes version v1.25](https://kubernetes.io/docs/reference/labels-annotations-taints/#node-role-kubernetes-io-master-taint).
+
+
+### Step 5: Deploy the {{agent}} [_step_5_deploy_the_agent]
+
+To deploy {{agent}} to Kubernetes, run:
+
+```sh
+kubectl create -f elastic-agent-managed-kubernetes.yaml
+```
+
+To check the status, run:
+
+```sh
+$ kubectl -n kube-system get pods -l app=elastic-agent
+NAME READY STATUS RESTARTS AGE
+elastic-agent-4665d 1/1 Running 0 81m
+elastic-agent-9f466c4b5-l8cm8 1/1 Running 0 81m
+elastic-agent-fj2z9 1/1 Running 0 81m
+elastic-agent-hs4pb 1/1 Running 0 81m
+```
+
+::::{admonition} Running {{agent}} on a read-only file system
+:class: tip
+
+If you'd like to run {{agent}} on Kubernetes on a read-only file system, you can do so by specifying the `readOnlyRootFilesystem` option.
+
+::::
+
+
+
+### Step 6: View your data in {{kib}} [_step_6_view_your_data_in_kib]
+
+1. Launch {{kib}}:
+
+    ::::{tab-set}
+
+    :::{tab-item} Elasticsearch Service
+
+    1. [Log in](https://cloud.elastic.co/) to your {{ecloud}} account.
+    2. Navigate to the {{kib}} endpoint in your deployment.
+    :::
+
+    :::{tab-item} Self-managed
+
+    Point your browser to [http://localhost:5601](http://localhost:5601), replacing `localhost` with the name of the {{kib}} host.
+
+    :::
+
+    ::::
+
+2. To check if your {{agent}} is enrolled in {{fleet}}, go to **Management → {{fleet}} → Agents**.
+
+    :::{image} images/kibana-fleet-agents.png
+    :alt: {{agent}}s {{fleet}} page
+    :class: screenshot
+    :::
+
+3. To view data flowing in, go to **Analytics → Discover** and select the index `metrics-*`, or, more specifically, `metrics-kubernetes.*`. If you can't see these indexes, [create a data view](/explore-analyze/find-and-organize/data-views.md) for them.
+4. To view predefined dashboards, either select **Analytics → Dashboard** or [install assets through an integration](/reference/ingestion-tools/fleet/view-integration-assets.md).
+
+
diff --git a/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md b/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md
new file mode 100644
index 0000000000..2300d6632d
--- /dev/null
+++ b/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md
@@ -0,0 +1,266 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-standalone.html
+---
+
+# Run Elastic Agent Standalone on Kubernetes [running-on-kubernetes-standalone]
+
+## What you need [_what_you_need_3]
+
+* [kubectl installed](https://kubernetes.io/docs/tasks/tools/).
+* {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it.
+
+    ::::{tab-set}
+
+    :::{tab-item} Elasticsearch Service
+    To get started quickly, spin up a deployment of our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service). The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).
+    :::
+
+    :::{tab-item} Self-managed
+    To install and run {{es}} and {{kib}}, see [Installing the {{stack}}](/deploy-manage/deploy/self-managed/deploy-cluster.md).
+    :::
+
+    ::::
+
+* `kube-state-metrics`.
+
+    You need to deploy `kube-state-metrics` to get the metrics about the state of the objects on the cluster (see the [Kubernetes deployment](https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment) docs). You can do that by first downloading the project:
+
+    ```sh
+    gh repo clone kubernetes/kube-state-metrics
+    ```
+
+    And then deploying it:
+
+    ```sh
+    kubectl apply -k kube-state-metrics
+    ```
+
+    ::::{warning}
+    On managed Kubernetes solutions, such as AKS, GKE, or EKS, {{agent}} does not have the required permissions to collect metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components, like `kube-scheduler` and `kube-controller-manager`. Audit logs are also available only on Kubernetes control plane nodes, and hence cannot be collected by {{agent}}. Refer to the [Kubernetes integration documentation](integration-docs://docs/reference/kubernetes.md#kubernetes-scheduler-and-controllermanager) for more information. For details about specific cloud providers, refer to [Run {{agent}} on Azure AKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md), [Run {{agent}} on GKE managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md), and [Run {{agent}} on Amazon EKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md).
+    ::::
+
+
+
+### Step 1: Download the {{agent}} manifest [_step_1_download_the_agent_manifest_3]
+
+::::{note}
+You can find {{agent}} Docker images [here](https://www.docker.elastic.co/r/elastic-agent/elastic-agent).
+::::
+
+
+Download the manifest file:
+
+```sh
+curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/v9.0.0/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml
+```
+
+::::{note}
+You might need to adjust the [resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) of the {{agent}} container in the manifest. Container resource usage depends on the number of data streams and the environment size.
+::::
+
+
+This manifest includes the Kubernetes integration to collect Kubernetes metrics and the System integration to collect system-level metrics and logs from nodes.
+
+The {{agent}} is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to ensure that there is a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes. These metrics are accessed through the deployed `kube-state-metrics`. Notice that everything is deployed under the `kube-system` namespace by default. To change the namespace, modify the manifest file.
+
+Moreover, one of the Pods in the DaemonSet will constantly hold a *leader lock* which makes it responsible for handling cluster-wide monitoring. You can find more information about leader election configuration options at [leader election provider](/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md). The leader pod will retrieve metrics that are unique for the whole cluster, such as Kubernetes events or [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics).
+We make sure that these metrics are retrieved from the leader pod by applying the following [condition](/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md) in the manifest, before declaring the data streams with these metricsets:
+
+```yaml
+...
+inputs:
+  - id: kubernetes-cluster-metrics
+    condition: ${kubernetes_leaderelection.leader} == true
+    type: kubernetes/metrics
+    # metricsets with the state_ prefix and the metricset event
+...
+```
+
+For Kubernetes Security Posture Management (KSPM) purposes, the {{agent}} requires read access to various types of Kubernetes resources, node processes, and files. To achieve this, read permissions are granted to the {{agent}} for the necessary resources, and volumes from the hosting node's file system are mounted to allow accessibility to the {{agent}} pods.
+
+::::{tip}
+In a large Kubernetes cluster, the Pod that collects cluster-level metrics might require more runtime resources than you would like to dedicate to every pod in the DaemonSet, and the leader that collects the cluster-wide metrics may face performance issues if it is under-resourced. In this case, consider avoiding a single DaemonSet with the leader election strategy and instead run a dedicated standalone {{agent}} instance for collecting cluster-wide metrics, using a Deployment in addition to the DaemonSet that collects metrics for each node. Both the Deployment and the DaemonSet can then be resourced independently and appropriately. For more information, check the [Scaling {{agent}} on {{k8s}}](/reference/ingestion-tools/fleet/scaling-on-kubernetes.md) page.
+::::
+
+
+
+### Step 2: Connect to the {{stack}} [_step_2_connect_to_the_stack]
+
+Set the {{es}} settings before deploying the manifest:
+
+```yaml
+- name: ES_USERNAME
+  value: "elastic" <1>
+- name: ES_PASSWORD
+  value: "passpassMyStr0ngP@ss" <2>
+- name: ES_HOST
+  value: "https://somesuperhostiduuid.europe-west1.gcp.cloud.es.io:9243" <3>
+```
+
+1. The basic authentication username used to connect to {{es}}.
+2. The basic authentication password used to connect to {{es}}.
+3. The {{es}} host to communicate with.
+
+
+Refer to [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md) for all available options.
+
+
+### Step 3: Configure tolerations [_step_3_configure_tolerations]
+
+Kubernetes control plane nodes can use [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to limit the workloads that can run on them. The manifest for standalone {{agent}} defines tolerations to run on these nodes. Agents running on control plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes. To prevent {{agent}} from running on control plane nodes, remove the following part of the DaemonSet spec:
+
+```yaml
+spec:
+  # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
+  # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes.
+  tolerations:
+  - key: node-role.kubernetes.io/control-plane
+    effect: NoSchedule
+  - key: node-role.kubernetes.io/master
+    effect: NoSchedule
+```
+
+Both tolerations have the same effect; `node-role.kubernetes.io/master` is [deprecated as of Kubernetes version v1.25](https://kubernetes.io/docs/reference/labels-annotations-taints/#node-role-kubernetes-io-master-taint).
+
+
+### Step 4: Deploy the {{agent}} [_step_4_deploy_the_agent]
+
+To deploy {{agent}} to Kubernetes, run:
+
+```sh
+kubectl create -f elastic-agent-standalone-kubernetes.yaml
+```
+
+To check the status, run:
+
+```sh
+$ kubectl -n kube-system get pods -l app=elastic-agent
+NAME READY STATUS RESTARTS AGE
+elastic-agent-4665d 1/1 Running 0 81m
+elastic-agent-9f466c4b5-l8cm8 1/1 Running 0 81m
+elastic-agent-fj2z9 1/1 Running 0 81m
+elastic-agent-hs4pb 1/1 Running 0 81m
+```
+
+::::{admonition} Running {{agent}} on a read-only file system
+:class: tip
+
+If you'd like to run {{agent}} on Kubernetes on a read-only file system, you can do so by specifying the `readOnlyRootFilesystem` option.
+
+::::
+
+
+
+### Step 5: View your data in {{kib}} [_step_5_view_your_data_in_kib_2]
+
+1. Launch {{kib}}:
+
+    ::::{tab-set}
+
+    :::{tab-item} Elasticsearch Service
+
+    1. [Log in](https://cloud.elastic.co/) to your {{ecloud}} account.
+    2. Navigate to the {{kib}} endpoint in your deployment.
+    :::
+
+    :::{tab-item} Self-managed
+    Point your browser to [http://localhost:5601](http://localhost:5601), replacing `localhost` with the name of the {{kib}} host.
+    :::
+
+    ::::
+
+2. You can see data flowing in by going to **Analytics → Discover** and selecting the index `metrics-*`, or, more specifically, `metrics-kubernetes.*`. If you can't see these indexes, [create a data view](/explore-analyze/find-and-organize/data-views.md) for them.
+3. You can see predefined dashboards by selecting **Analytics → Dashboard**, or by [installing assets through an integration](/reference/ingestion-tools/fleet/view-integration-assets.md).
+
+
+## Red Hat OpenShift configuration [_red_hat_openshift_configuration]
+
+If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged.
+
+1. 
In the manifest file, modify the `agent-node-datastreams` ConfigMap and adjust inputs: + + * `kubernetes-cluster-metrics` input: + + * If `https` is used to access `kube-state-metrics`, add the following settings to all `kubernetes.state_*` datasets: + + ```yaml + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + ssl.certificate_authorities: + - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt + ``` + + * `kubernetes-node-metrics` input: + + * Change the `kubernetes.controllermanager` data stream condition to: + + ```yaml + condition: ${kubernetes.labels.app} == 'kube-controller-manager' + ``` + + * Change the `kubernetes.scheduler` data stream condition to: + + ```yaml + condition: ${kubernetes.labels.app} == 'openshift-kube-scheduler' + ``` + + * The `kubernetes.proxy` data stream configuration should look like: + + ```yaml + - data_stream: + dataset: kubernetes.proxy + type: metrics + metricsets: + - proxy + hosts: + - 'localhost:29101' + period: 10s + ``` + + * Add the following settings to all data streams that connect to `https://${env.NODE_NAME}:10250`: + + ```yaml + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + ssl.certificate_authorities: + - /path/to/ca-bundle.crt + ``` + + ::::{note} + `ca-bundle.crt` can be any CA bundle that contains the issuer of the certificate used in the Kubelet API. According to each specific installation of OpenShift this can be found either in `secrets` or in `configmaps`. In some installations it can be available as part of the service account secret, in `/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt`. When using the [OpenShift installer](https://github.com/openshift/installer/blob/master/docs/user/gcp/install.md) for GCP, mount the following `configmap` in the elastic-agent pod and use `ca-bundle.crt` in `ssl.certificate_authorities`: + :::: + + + ```shell + Name: kubelet-serving-ca + Namespace: openshift-kube-apiserver + Labels: + Annotations: + + Data + ==== + ca-bundle.crt: + ``` + +2. Grant the `elastic-agent` service account access to the privileged SCC: + + ```shell + oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:elastic-agent + ``` + + This command enables the container to be privileged as an administrator for OpenShift. + +3. If the namespace where elastic-agent is running has the `"openshift.io/node-selector"` annotation set, elastic-agent might not run on all nodes. In this case consider overriding the node selector for the namespace to allow scheduling on any node: + + ```shell + oc patch namespace kube-system -p \ + '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' + ``` + + This command sets the node selector for the project to an empty string. + + + +### Autodiscover targeted Pods [_autodiscover_targeted_pods] + +Refer to [Kubernetes autodiscovery with {{agent}}](/reference/ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md) for more information. 
+ + diff --git a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md new file mode 100644 index 0000000000..f48057ad9c --- /dev/null +++ b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md @@ -0,0 +1,295 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/scaling-on-kubernetes.html +--- + +# Scaling Elastic Agent on Kubernetes [scaling-on-kubernetes] + +For more information on how to deploy {{agent}} on {{k8s}}, please review these pages: + +* [Run {{agent}} on Kubernetes managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md). +* [Run {{agent}} Standalone on Kubernetes](/reference/ingestion-tools/fleet/running-on-kubernetes-standalone.md). + + +### Observability at scale [_observability_at_scale] + +This document summarizes some key factors and best practices for using [Elastic {{observability}}](/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) to monitor {{k8s}} infrastructure at scale. Users need to consider different parameters and adjust {{stack}} accordingly. These elements are affected as the size of {{k8s}} cluster increases: + +* The amount of metrics being collected from several {{k8s}} endpoints +* The {{agent}}'s resources to cope with the high CPU and Memory needs for the internal processing +* The {{es}} resources needed due to the higher rate of metric ingestion +* The Dashboard’s visualizations response times as more data are requested on a given time window + +The document is divided in two main sections: + +* [Configuration Best Practices](#configuration-practices) +* [Validation and Troubleshooting practices](#validation-and-troubleshooting-practices) + + +### Configuration best practices [configuration-practices] + + +#### Configure agent resources [_configure_agent_resources] + +The {{k8s}} {{observability}} is based on [Elastic {{k8s}} integration](integration-docs://docs/reference/kubernetes.md), which collects metrics from several components: + +* **Per node:** + + * kubelet + * controller-manager + * scheduler + * proxy + +* **Cluster wide (such as unique metrics for the whole cluster):** + + * kube-state-metrics + * apiserver + + +Controller manager and Scheduler datastreams are being enabled only on the specific node that actually runs based on autodiscovery rules + +The default manifest provided deploys {{agent}} as DaemonSet which results in an {{agent}} being deployed on every node of the {{k8s}} cluster. + +Additionally, by default one agent is elected as **leader** (for more information visit [Kubernetes LeaderElection Provider](/reference/ingestion-tools/fleet/kubernetes_leaderelection-provider.md)). The {{agent}} Pod which holds the leadership lock is responsible for collecting the cluster-wide metrics in addition to its node’s metrics. + +:::{image} images/k8sscaling.png +:alt: {{agent}} as daemonset +:class: screenshot +::: + +The above schema explains how {{agent}} collects and sends metrics to {{es}}. Because of Leader Agent being responsible to also collecting cluster-lever metrics, this means that it requires additional resources. + +The DaemonSet deployment approach with leader election simplifies the installation of the {{agent}} because we define less {{k8s}} Resources in our manifest and we only need one single Agent policy for our Agents. 
Hence, it is the default supported method for [Managed {{agent}} installation](/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md).
+
+
+#### Specifying resources and limits in agent manifests [_specifying_resources_and_limits_in_agent_manifests]
+
+Pod resourcing and scheduling priority (check the [Agent scheduling](#agent-scheduling) section) are two topics that might be affected as the {{k8s}} cluster size increases. The increasing demand for resources might result in the Elastic Agents of your cluster becoming under-resourced.
+
+Based on our tests, we advise configuring only the `limits` part of the `resources` section in the manifest. In this way, the `requests` settings fall back to the `limits` specified. The `limits` value is the upper bound for your microservice process, meaning that the process can operate with fewer resources, while {{k8s}} is protected from assigning excessive usage and from possible resource exhaustion.
+
+```yaml
+resources:
+  limits:
+    cpu: "1500m"
+    memory: "800Mi"
+```
+
+Based on our [{{agent}} Scaling tests](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-scaling-tests.md), the following table provides guidelines to adjust {{agent}} limits for different {{k8s}} sizes:
+
+Sample Elastic Agent configurations:
+
+| No of Pods in K8s Cluster | Leader Agent Resources | Rest of Agents |
+| --- | --- | --- |
+| 1000 | cpu: "1500m", memory: "800Mi" | cpu: "300m", memory: "600Mi" |
+| 3000 | cpu: "2000m", memory: "1500Mi" | cpu: "400m", memory: "800Mi" |
+| 5000 | cpu: "3000m", memory: "2500Mi" | cpu: "500m", memory: "900Mi" |
+| 10000 | cpu: "3000m", memory: "3600Mi" | cpu: "700m", memory: "1000Mi" |
+
+The above tests were performed with {{agent}} version 8.7 and a scraping period of `10sec` (the period setting for the Kubernetes integration). These numbers are indicative only and should be validated for each different Kubernetes environment and amount of workloads.
+
+#### Proposed agent installations for large scale [_proposed_agent_installations_for_large_scale]
+
+Although the DaemonSet installation is simple, it cannot accommodate the varying agent resource requirements that depend on the metrics being collected. The need for appropriate resource assignment at large scale requires more granular installation methods.
+
+{{agent}} deployment is broken into groups as follows:
+
+* A dedicated {{agent}} deployment of a single agent for collecting cluster-wide metrics from the apiserver
+* Node-level {{agent}}s (no leader agent) in a DaemonSet
+* kube-state-metrics shards and {{agent}}s in the StatefulSet defined in the kube-state-metrics autosharding manifest
+
+Each of these groups of {{agent}}s has its own policy specific to its function and can be resourced independently in the appropriate manifest to accommodate its specific resource requirements.
+
+These resource-assignment considerations lead to the alternative installation methods described below.
+
+::::{important}
+The main suggestion for large-scale clusters **is to install {{agent}} as a side container along with the `kube-state-metrics` shard**. The installation is explained in detail in [{{agent}} with Kustomize in Autosharding](https://github.com/elastic/elastic-agent/tree/main/deploy/kubernetes#kube-state-metrics-ksm-in-autosharding-configuration).
+::::
+
+
+The following **alternative configuration methods** have been verified:
+
+1. With `hostNetwork:false`
+
+    * {{agent}} as a side container within the KSM shard pod
+    * For non-leader {{agent}} deployments that collect from the KSM shards
+
+2. With taints and tolerations to isolate the {{agent}} DaemonSet pods from the rest of the deployments, as sketched after this list
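+
+As an illustration of the second method (the node name, taint key, and value here are hypothetical), you would taint the dedicated nodes and give the {{agent}} DaemonSet a matching toleration:
+
+```sh
+# Taint the nodes that should run only Elastic Agent
+kubectl taint nodes worker-1 dedicated=elastic-agent:NoSchedule
+```
+
+```yaml
+# Matching toleration in the Elastic Agent DaemonSet pod spec
+tolerations:
+- key: dedicated
+  operator: "Equal"
+  value: elastic-agent
+  effect: NoSchedule
+```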
+
+You can find more information in [{{agent}} Manifests in order to support Kube-State-Metrics Sharding](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md).
+
+Based on our [{{agent}} scaling tests](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-scaling-tests.md), the following table aims to help users configure their KSM sharding as the {{k8s}} cluster scales:
+
+| No of Pods in K8s Cluster | No of KSM Shards | Agent Resources |
+| --- | --- | --- |
+| 1000 | No sharding; can be handled with the default KSM config | limits: memory: 700Mi, cpu: 500m |
+| 3000 | 4 shards | limits: memory: 1400Mi, cpu: 1500m |
+| 5000 | 6 shards | limits: memory: 1400Mi, cpu: 1500m |
+| 10000 | 8 shards | limits: memory: 1400Mi, cpu: 1500m |
+
+The tests above were performed with {{agent}} version 8.8 with TSDB enabled and a scraping period of `10sec` (for the Kubernetes integration). These numbers are indicative only and should be validated for each different Kubernetes policy configuration, along with the applications that the Kubernetes cluster might include.
+
+::::{note}
+Tests were run with up to 10K pods per cluster. Scaling to a bigger number of pods might require additional configuration on the {{k8s}} side and from cloud providers, but the basic idea of installing {{agent}} while horizontally scaling KSM remains the same.
+::::
+
+
+
+#### Agent scheduling [agent-scheduling]
+
+Setting a lower priority for {{agent}} compared to other pods might result in {{agent}} pods being left in a Pending state. The scheduler tries to preempt (evict) lower-priority pods to make scheduling of the higher-priority pending pods possible.
+
+To prioritize the agent installation ahead of the rest of your application microservices, use the [suggested PriorityClasses](https://github.com/elastic/elastic-agent/blob/main/docs/manifests/elastic-agent-managed-gke-autopilot.yaml#L8-L16).
+
+
+#### {{k8s}} Package configuration [_k8s_package_configuration]
+
+The policy configuration of the {{k8s}} package can heavily affect the amount of metrics collected and ultimately ingested. Consider the following factors to make your collection and ingestion lighter:
+
+* The scraping period of the {{k8s}} endpoints
+* Disabling log collection
+* Keeping audit logs disabled
+* Disabling the events dataset
+* Disabling {{k8s}} control plane datasets in cloud-managed {{k8s}} instances (for more information, see the [Run {{agent}} on GKE managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-gke-managed-by-fleet.md), [Run {{agent}} on Amazon EKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md), and [Run {{agent}} on Azure AKS managed by {{fleet}}](/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md) pages)
+
+
+#### Dashboards and visualisations [_dashboards_and_visualisations]
+
+The [Dashboard Guidelines](https://github.com/elastic/integrations/blob/main/docs/dashboard_guidelines.md) document provides guidance on how to implement your dashboards and is constantly updated to track the needs of {{observability}} at scale.
+
+User experience with dashboard response times is also affected by the size of the data being requested. Because dashboards can contain multiple visualisations, the general consideration is to split visualisations and group them according to the frequency of access. A smaller number of visualisations tends to improve the user experience.
+
+#### Disabling indexing host.ip and host.mac fields [_disabling_indexing_host_ip_and_host_mac_fields]
+
+A new environment variable, `ELASTIC_NETINFO: false`, has been introduced to globally disable the indexing of the `host.ip` and `host.mac` fields in your {{k8s}} integration. For more information see [Environment variables](/reference/ingestion-tools/fleet/agent-environment-variables.md).
+
+Setting this to `false` is recommended for large-scale setups where the index size of the `host.ip` and `host.mac` fields increases. The number of IPs and MAC addresses reported increases significantly as a Kubernetes cluster grows. This leads to considerably increased indexing time, as well as the need for extra storage and additional overhead for visualization rendering.
+
+
+#### Elastic Stack configuration [_elastic_stack_configuration]
+
+The configuration of the {{stack}} needs to be taken into consideration in large-scale deployments. For {{ecloud}} deployments, the choice of the [{{ecloud}} hardware profile](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md) is important.
+
+For heavy processing and high ingestion rates, the `CPU-optimised` profile is recommended.
+
+
+### Validation and troubleshooting practices [validation-and-troubleshooting-practices]
+
+
+#### Define if agents are collecting as expected [_define_if_agents_are_collecting_as_expected]
+
+After deploying {{agent}}, verify that the agent services are healthy and not restarting (stability), and that the collection of metrics continues at the expected rate (latency).
+
+**For stability:**
+
+If {{agent}} is configured as {{fleet}}-managed, you can observe the agent status in {{kib}} under **Fleet > Agents**.
+
+:::{image} images/kibana-fleet-agents.png
+:alt: {{agent}} Status
+:class: screenshot
+:::
+
+Additionally, you can verify the process status with the following commands:
+
+```bash
+kubectl get pods -A | grep elastic
+kube-system elastic-agent-ltzkf 1/1 Running 0 25h
+kube-system elastic-agent-qw6f4 1/1 Running 0 25h
+kube-system elastic-agent-wvmpj 1/1 Running 0 25h
+```
+
+Find the leader agent:
+
+```bash
+❯ kubectl get leases -n kube-system | grep elastic
+NAME HOLDER AGE
+elastic-agent-cluster-leader elastic-agent-leader-elastic-agent-qw6f4 25h
+```
+
+Exec into the leader agent and verify the process status:
+
+```bash
+❯ kubectl exec -ti -n kube-system elastic-agent-qw6f4 -- bash
+root@gke-gke-scaling-gizas-te-default-pool-6689889a-sz02:/usr/share/elastic-agent# ./elastic-agent status
+State: HEALTHY
+Message: Running
+Fleet State: HEALTHY
+Fleet Message: (no message)
+Components:
+  * kubernetes/metrics (HEALTHY)
+    Healthy: communicating with pid '42423'
+  * filestream (HEALTHY)
+    Healthy: communicating with pid '42431'
+  * filestream (HEALTHY)
+    Healthy: communicating with pid '42443'
+  * beat/metrics (HEALTHY)
+    Healthy: communicating with pid '42453'
+  * http/metrics (HEALTHY)
+    Healthy: communicating with pid '42462'
+```
+
+A common problem as the {{k8s}} cluster size grows is a lack of CPU/memory resources, which causes the agent processes to restart. When this happens, you will see messages like the following in the agent logs:
+
+```json
+kubectl logs -n kube-system elastic-agent-qw6f4 | grep "kubernetes/metrics"
+[output truncated ...]
+
+(HEALTHY->STOPPED): Suppressing FAILED state due to restart for '46554' exited with code '-1'","log":{"source":"elastic-agent"},"component":{"id":"kubernetes/metrics-default","state":"STOPPED"},"unit":{"id":"kubernetes/metrics-default-kubernetes/metrics-kube-state-metrics-c6180794-70ce-4c0d-b775-b251571b6d78","type":"input","state":"STOPPED","old_state":"HEALTHY"},"ecs.version":"1.6.0"}
+{"log.level":"info","@timestamp":"2023-04-03T09:33:38.919Z","log.origin":{"file.name":"coordinator/coordinator.go","file.line":861},"message":"Unit state changed kubernetes/metrics-default-kubernetes/metrics-kube-apiserver-c6180794-70ce-4c0d-b775-b251571b6d78 (HEALTHY->STOPPED): Suppressing FAILED state due to restart for '46554' exited with code '-1'","log":{"source":"elastic-agent"}
+```
+
+You can verify instantaneous resource consumption by running the `kubectl top pod` command and identify whether agents are close to the limits you have specified in your manifest.
+
+```bash
+kubectl top pod -n kube-system | grep elastic
+NAME CPU(cores) MEMORY(bytes)
+elastic-agent-ltzkf 30m 354Mi
+elastic-agent-qw6f4 67m 467Mi
+elastic-agent-wvmpj 27m 357Mi
+```
+
+
+#### Verify ingestion latency [_verify_ingestion_latency]
+
+{{kib}} Discover can be used to identify the frequency at which your metrics are ingested.
+
+Filter for the Pod dataset:
+
+:::{image} images/pod-latency.png
+:alt: {{k8s}} Pod Metricset
+:class: screenshot
+:::
+
+Filter for the State_Pod dataset:
+
+:::{image} images/state-pod.png
+:alt: {{k8s}} State Pod Metricset
+:class: screenshot
+:::
+
+Identify how many events have been sent to {{es}}:
+
+```bash
+kubectl logs -n kube-system elastic-agent-h24hh -f | grep -i state_pod
+[output truncated ...]
+
+"state_pod":{"events":2936,"success":2936}
+```
+
+The number of events denotes the number of documents that should be shown on the {{kib}} Discover page.
+
+For example, in a cluster with 798 pods, 798 documents should appear for each ingestion block in {{kib}}.
+
+#### Define if {{es}} is the bottleneck of ingestion [_define_if_es_is_the_bottleneck_of_ingestion]
+
+In some cases {{es}} cannot cope with the rate at which data is being ingested. To verify resource utilisation, installing a [{{stack}} monitoring cluster](/deploy-manage/monitor/stack-monitoring.md) is advised.
+
+Additionally, in {{ecloud}} deployments you can navigate to **Manage Deployment > Deployments > Monitoring > Performance**. The corresponding dashboards for `CPU Usage`, `Index Response Times`, and `Memory Pressure` can reveal possible problems and suggest vertical scaling of {{stack}} resources.
+
+## Relevant links [_relevant_links]
+
+* [Monitor {{k8s}} Infrastructure](/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md)
+* [Blog: Managing your {{k8s}} cluster with Elastic {{observability}}](https://www.elastic.co/blog/kubernetes-cluster-metrics-logs-monitoring)
diff --git a/reference/ingestion-tools/fleet/script-processor.md b/reference/ingestion-tools/fleet/script-processor.md
new file mode 100644
index 0000000000..836e04f9b7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/script-processor.md
@@ -0,0 +1,106 @@
+---
+navigation_title: "script"
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/script-processor.html
+---
+
+# Script Processor [script-processor]
+
+
+The `script` processor executes Javascript code to process an event. The processor uses a pure Go implementation of ECMAScript 5.1 and has no external dependencies.
This can be useful in situations where one of the other processors doesn’t provide the functionality you need to filter events. + +The processor can be configured by embedding Javascript in your configuration file or by pointing the processor at external files. + + +## Examples [_examples_10] + +```yaml + - script: + lang: javascript + source: > + function process(event) { + event.Tag("js"); + } +``` + +This example loads `filter.js` from disk: + +```yaml + - script: + lang: javascript + file: ${path.config}/filter.js +``` + +Parameters can be passed to the script by adding `params` to the config. This allows for a script to be made reusable. When using `params` the code must define a `register(params)` function to receive the parameters. + +```yaml + - script: + lang: javascript + tag: my_filter + params: + threshold: 15 + source: > + var params = {threshold: 42}; + function register(scriptParams) { + params = scriptParams; + } + function process(event) { + if (event.Get("severity") < params.threshold) { + event.Cancel(); + } + } +``` + +If the script defines a `test()` function, it will be invoked when the processor is loaded. Any exceptions thrown will cause the processor to fail to load. This can be used to make assertions about the behavior of the script. + +```javascript +function process(event) { + if (event.Get("event.code") === 1102) { + event.Put("event.action", "cleared"); + } + return event; +} + +function test() { + var event = process(new Event({event: {code: 1102}})); + if (event.Get("event.action") !== "cleared") { + throw "expected event.action === cleared"; + } +} +``` + + +## Configuration settings [_configuration_settings_38] + +::::{note} +{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations) +:::: + + +| Name | Required | Default | Description | +| --- | --- | --- | --- | +| `lang` | Yes | | The value of this field must be `javascript`. | +| `tag` | No | | Optional identifier added to log messages. If defined, this tag enables metrics logging for this instance of the processor. The metrics include the number of exceptions and a histogram of the execution times for the `process` function. | +| `source` | | | Inline Javascript source code. | +| `file` | | | Path to a script file to load. Relative paths are interpreted as relative to the `path.config` directory. Globs are expanded. | +| `files` | | | List of script files to load. The scripts are concatenated together. Relative paths are interpreted as relative to the `path.config` directory. Globs are expanded. | +| `params` | | | A dictionary of parameters that are passed to the `register` of the script. | +| `tag_on_exception` | | `_js_exception` | Tag to add to events in case the Javascript code causes an exception while processing an event. | +| `timeout` | | no timeout | An execution timeout for the `process` function. When the `process` function takes longer than the `timeout` period, the function is interrupted. You can set this option to prevent a script from running for too long (like preventing an infinite `while` loop). | +| `max_cached_sessions` | | `4` | The maximum number of Javascript VM sessions that will be cached to avoid reallocation. | + + +## Event API [_event_api] + +The `Event` object passed to the `process` method has the following API. 
+
+| Method | Description |
+| --- | --- |
+| `Get(string)` | Get a value from the event (either a scalar or an object). If the key does not exist, `null` is returned. If no key is provided, an object containing all fields is returned.<br>**Example**: `var value = event.Get(key);` |
+| `Put(string, value)` | Put a value into the event. If the key was already set, the previous value is returned. It throws an exception if the key cannot be set because one of the intermediate values is not an object.<br>**Example**: `var old = event.Put(key, value);` |
+| `Rename(string, string)` | Rename a key in the event. The target key must not exist. It returns true if the source key was successfully renamed to the target key.<br>**Example**: `var success = event.Rename("source", "target");` |
+| `Delete(string)` | Delete a field from the event. It returns true on success.<br>**Example**: `var deleted = event.Delete("user.email");` |
+| `Cancel()` | Flag the event as cancelled, which causes the processor to drop the event.<br>**Example**: `event.Cancel(); return;` |
+| `Tag(string)` | Append a tag to the `tags` field if the tag does not already exist. Throws an exception if `tags` exists and is not a string or a list of strings.<br>**Example**: `event.Tag("user_event");` |
+| `AppendTo(string, string)` | `AppendTo` is a specialized `Put` method that converts the existing value to an array and appends the value if it does not already exist. If there is an existing value that's not a string or array of strings, an exception is thrown.<br>**Example**: `event.AppendTo("error.message", "invalid file hash");` |
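+
+As an illustration (not part of the original API reference), a `process` function might combine several of these methods; the field names and values used here are hypothetical:
+
+```javascript
+function process(event) {
+    // Drop health-check events entirely (hypothetical field and value).
+    if (event.Get("url.path") === "/healthz") {
+        event.Cancel();
+        return;
+    }
+    // Move a legacy field to its ECS-style equivalent (hypothetical names).
+    var legacy = event.Get("src_ip");
+    if (legacy !== null) {
+        event.Put("source.ip", legacy);
+        event.Delete("src_ip");
+    }
+    event.Tag("normalized");
+    return event;
+}
+```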
+
diff --git a/reference/ingestion-tools/fleet/secret-files-guide.md b/reference/ingestion-tools/fleet/secret-files-guide.md
new file mode 100644
index 0000000000..47ae5dbbee
--- /dev/null
+++ b/reference/ingestion-tools/fleet/secret-files-guide.md
@@ -0,0 +1,139 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/secret-files-guide.html
+---
+
+# Secret files guide [secret-files-guide]
+
+This guide provides step-by-step examples with best practices on how to deploy secret files directly on a host or through the Kubernetes secrets engine.
+
+## Secrets on filesystem [secret-filesystem]
+
+Secret files can be provisioned as plain text files directly on filesystems and referenced or passed through {{agent}}.
+
+We recommend these steps to improve security.
+
+### File permissions [_file_permissions]
+
+File permissions should not allow global read access.
+
+On MacOS and Linux, you can set file ownership and file permissions with the `chown` and `chmod` commands, respectively. {{fleet-server}} runs as the `root` user on MacOS and Linux, so given a file named `mySecret`, you can alter it with:
+
+```sh
+sudo chown root:root mySecret # set the user:group to root
+sudo chmod 0600 mySecret # set only the read/write permission flags for the user, clear group and global permissions.
+```
+
+On Windows, you can use `icacls` to alter the ACL list associated with the file:
+
+```powershell
+Write-Output -NoNewline SECRET > mySecret # Create the file mySecret with the contents SECRET
+icacls .\mySecret /inheritance:d # Remove inherited permissions from file
+icacls .\mySecret /remove:g BUILTIN\Administrators # Remove Administrators group permissions
+icacls .\mySecret /remove:g $env:UserName # Remove current user's permissions
+```
+
+
+### Temporary filesystem [_temporary_filesystem]
+
+You can use a temporary filesystem (in RAM) to hold secret files in order to improve security. These types of filesystems are normally not included in backups and are cleared if the host is reset. If used, the filesystem and secret files need to be reprovisioned with every reset.
+
+On Linux you can use `mount` with the `tmpfs` filesystem to create a temporary filesystem in RAM:
+
+```sh
+mount -o size=1G -t tmpfs none /mnt/fleet-server-secrets
+```
+
+On MacOS you can use a combination of `diskutil` and `hdiutil` to create a RAM disk:
+
+```sh
+diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://2097152`
+```
+
+Windows systems do not offer built-in options to create a RAM disk, but several third-party programs are available.
+
+
+### Example [_example]
+
+Here is a step-by-step guide for provisioning a service token on a Linux system:
+
+```sh
+sudo mkdir -p /mnt/fleet-server-secrets
+sudo mount -o size=1G -t tmpfs none /mnt/fleet-server-secrets
+echo -n MY-SERVICE-TOKEN > /mnt/fleet-server-secrets/service-token
+sudo chown root:root /mnt/fleet-server-secrets/service-token
+sudo chmod 0600 /mnt/fleet-server-secrets/service-token
+```
+
+::::{note}
+The `-n` flag is used with `echo` to prevent a newline character from being appended at the end of the secret. Be sure that the secret file does not contain the trailing newline character.
+::::
+
+
+
+
+## Secrets in containers [_secrets_in_containers]
+
+When you are using secret files directly in containers without using Kubernetes or another secrets management solution, you can pass the files into containers by mounting the file or directory.
Provision the file in the same manner as described in [Secrets on filesystem](#secret-filesystem) and mount it in read-only mode. For example, when using Docker with the {{agent}} image:
+
+```sh
+docker run \
+  -v /path/to/creds:/creds:ro \
+  -e FLEET_SERVER_CERT_KEY_PASSPHRASE=/creds/passphrase \
+  -e FLEET_SERVER_SERVICE_TOKEN_PATH=/creds/service-token \
+  --rm docker.elastic.co/elastic-agent/elastic-agent
+```
+
+
+## Secrets in Kubernetes [_secrets_in_kubernetes]
+
+Kubernetes has a [secrets management engine](https://kubernetes.io/docs/concepts/configuration/secret/) that can be used to provision secret files to pods.
+
+For example, you can create the passphrase secret with:
+
+```sh
+kubectl create secret generic fleet-server-key-passphrase \
+  --from-literal=value=PASSPHRASE
+```
+
+And create the service token secret with:
+
+```sh
+kubectl create secret generic fleet-server-service-token \
+  --from-literal=value=SERVICE-TOKEN
+```
+
+Then include it in the pod specification, for example, when you are running {{fleet-server}} under {{agent}}:
+
+```yaml
+spec:
+  volumes:
+  - name: key-passphrase
+    secret:
+      secretName: fleet-server-key-passphrase
+  - name: service-token
+    secret:
+      secretName: fleet-server-service-token
+  containers:
+  - name: fleet-server
+    image: docker.elastic.co/elastic-agent/elastic-agent
+    volumeMounts:
+    - name: key-passphrase
+      mountPath: /var/secrets/passphrase
+    - name: service-token
+      mountPath: /var/secrets/service-token
+    env:
+    - name: FLEET_SERVER_CERT_KEY_PASSPHRASE
+      value: /var/secrets/passphrase/value
+    - name: FLEET_SERVER_SERVICE_TOKEN_PATH
+      value: /var/secrets/service-token/value
+```
+
+### {{agent}} Kubernetes secrets provider [_agent_kubernetes_secrets_provider]
+
+When you are running {{fleet-server}} under {{agent}} in {{k8s}}, you can use {{agent}}'s [Kubernetes Secrets Provider](/reference/ingestion-tools/fleet/kubernetes_secrets-provider.md) to insert a {{k8s}} secret directly into {{fleet-server}}'s configuration. Note that, due to how {{fleet-server}} is bootstrapped, only the APM secrets (API key or secret token) can be specified with this provider.
+
+
+
diff --git a/reference/ingestion-tools/fleet/secure-connections.md b/reference/ingestion-tools/fleet/secure-connections.md
new file mode 100644
index 0000000000..38fcc75eb7
--- /dev/null
+++ b/reference/ingestion-tools/fleet/secure-connections.md
@@ -0,0 +1,270 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/fleet/current/secure-connections.html
+---
+
+# Configure SSL/TLS for self-managed Fleet Servers [secure-connections]
+
+If you're running a self-managed cluster, configure Transport Layer Security (TLS) to encrypt traffic between {{agent}}s, {{fleet-server}}, and other components in the {{stack}}.
+
+For the install settings specific to mutual TLS, as opposed to one-way TLS, refer to [{{agent}} deployment models with mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md).
+
+For a summary of the flow by which TLS is established between components using either one-way or mutual TLS, refer to [One-way and mutual TLS certifications flow](/reference/ingestion-tools/fleet/tls-overview.md).
+
+::::{tip}
+Our [hosted {{ess}}](https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body) on {{ecloud}} provides secure, encrypted connections out of the box!
+::::
+
+
+
+## Prerequisites [prereqs]
+
+Configure security and generate certificates for the {{stack}}. For more information about securing the {{stack}}, refer to [Configure security for the {{stack}}](/deploy-manage/deploy/self-managed/installing-elasticsearch.md).
For more information about securing the {{stack}}, refer to [Configure security for the {{stack}}](/deploy-manage/deploy/self-managed/installing-elasticsearch.md).

::::{important}
{{agent}}s require a PEM-formatted CA certificate to send encrypted data to {{es}}. If you followed the steps in [Configure security for the {{stack}}](/deploy-manage/deploy/self-managed/installing-elasticsearch.md), your certificate will be in a p12 file. To convert it, use OpenSSL:

```shell
openssl pkcs12 -in path.p12 -out cert.crt -clcerts -nokeys
openssl pkcs12 -in path.p12 -out private.key -nocerts -nodes
```

Key passwords are not currently supported.
::::

::::{important}
When you run {{agent}} with the {{elastic-defend}} integration, the [TLS certificates](https://en.wikipedia.org/wiki/X.509) used to connect to {{fleet-server}} and {{es}} need to be generated using [RSA](https://en.wikipedia.org/wiki/RSA_(cryptosystem)). For a full list of available algorithms to use when configuring TLS or mTLS, see [Configure SSL/TLS for standalone {{agents}}](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md). These settings are available for both standalone and {{fleet}}-managed {{agent}}.
::::

## Generate a custom certificate and private key for {{fleet-server}} [generate-fleet-server-certs]

This section describes how to use the `certutil` tool provided by {{es}}, but you can use whatever process you typically use to generate PEM-formatted certificates.

1. Generate a certificate authority (CA). Skip this step if you want to use an existing CA.

    ```shell
    ./bin/elasticsearch-certutil ca --pem
    ```

    This command creates a zip file that contains the CA certificate and key you'll use to sign the {{fleet-server}} certificate. Extract the zip file:

    :::{image} images/ca.png
    :alt: Screen capture of a folder called ca that contains two files: ca.crt and ca.key
    :::

    Store the files in a secure location.

2. Use the certificate authority to generate certificates for {{fleet-server}}. For example:

    ```shell
    ./bin/elasticsearch-certutil cert \
      --name fleet-server \
      --ca-cert /path/to/ca/ca.crt \
      --ca-key /path/to/ca/ca.key \
      --dns your.host.name.here \
      --ip 192.0.2.1 \
      --pem
    ```

    Where `dns` and `ip` specify the name and IP address of the {{fleet-server}}. Run this command for each {{fleet-server}} you plan to deploy.

    This command creates a zip file that includes a `.crt` and `.key` file. Extract the zip file:

    :::{image} images/fleet-server-certs.png
    :alt: Screen capture of a folder called fleet-server that contains two files: fleet-server.crt and fleet-server.key
    :::

    Store the files in a secure location. You'll need these files later to encrypt traffic between {{agent}}s and {{fleet-server}}.

## Encrypt traffic between {{agent}}s, {{fleet-server}}, and {{es}} [_encrypt_traffic_between_agents_fleet_server_and_es]

{{fleet-server}} needs a CA certificate or the CA fingerprint to connect securely to {{es}}. It also needs to expose a {{fleet-server}} certificate so other {{agent}}s can connect to it securely.

For the steps in this section, imagine you have the following files:

| File | Description |
| --- | --- |
| `ca.crt` | The CA certificate to use to connect to {{fleet-server}}. This is the CA used to [generate a certificate and key](#generate-fleet-server-certs) for {{fleet-server}}. |
| `fleet-server.crt` | The certificate you generated for {{fleet-server}}. |
| `fleet-server.key` | The private key you generated for {{fleet-server}}. If the `fleet-server.key` file is encrypted with a passphrase, the passphrase will need to be specified through a file. |
| `elasticsearch-ca.crt` | The CA certificate to use to connect to {{es}}. This is the CA used to generate certs for {{es}} (see [Prerequisites](#prereqs)). Note that the CA certificate's SHA-256 fingerprint (hash) may be used instead of the `elasticsearch-ca.crt` file for securing connections to {{es}}. |

To encrypt traffic between {{agent}}s, {{fleet-server}}, and {{es}}:

1. Configure {{fleet}} settings. These settings are applied to all {{fleet}}-managed {{agent}}s. In {{kib}}, open the main menu, then click **Management > {{fleet}} > Settings**.

    1. Under **{{fleet-server}} hosts**, specify the URLs {{agent}}s will use to connect to {{fleet-server}}. For example, `https://192.0.2.1:8220`, where 192.0.2.1 is the host IP where you will install {{fleet-server}}.

        ::::{tip}
        For host settings, use the `https` protocol. DNS-based names are also allowed.
        ::::

    2. Under **Outputs**, search for the default output, then click the **Edit** icon in the **Action** column.
    3. In the **Hosts** field, specify the {{es}} URLs where {{agent}}s will send data. For example, `https://192.0.2.0:9200`.
    4. Specify either a CA certificate or a CA fingerprint to connect securely to {{es}}:

        * If you have a valid HEX-encoded SHA-256 CA trusted fingerprint from the root CA, specify it in the **Elasticsearch CA trusted fingerprint** field. To learn more, refer to the [{{es}} security documentation](/deploy-manage/deploy/self-managed/installing-elasticsearch.md).
        * Otherwise, under **Advanced YAML configuration**, set `ssl.certificate_authorities` and specify the CA certificate to use to connect to {{es}}. You can specify a list of file paths (if the files are available), or embed a certificate directly in the YAML configuration. If you specify file paths, the certificates must be available on the hosts running the {{agent}}s.

        File path example:

        ```yaml
        ssl.certificate_authorities: ["/path/to/your/elasticsearch-ca.crt"] <1>
        ```

        1. The path to the CA certificate on the {{agent}} host.

        Pasted certificate example:

        ```yaml
        ssl:
          certificate_authorities:
          - |
            -----BEGIN CERTIFICATE-----
            MIIDSjCCAjKgAwIBAgIVAKlphSqJclcni3P83gVsirxzuDuwMA0GCSqGSIb3DQEB
            CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu
            ZXJhdGVkIENBMB4XDTIxMDYxNzAxMzIyOVoXDTI0MDYxNjAxMzIyOVowNDEyMDAG
            A1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5lcmF0ZWQgQ0Ew
            ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDOFgtVri7Msy2iR33nLrVO
            /M/6IyF72kFXup1E67TzetI22avOxNlq+HZTpZoWGV1I4RgxiQeN12FLuxxhd9nm
            rxfZEqpuIjvo6fvU9ifC03WjXg1opgdEb6JqH93RHKw0PYimxhQfFcwrKxFseHUx
            DeUNQgHkMQhDZgIfNgr9H/1X6qSU4h4LemyobKY3HDKY6pGsuBzsF4iOCtIitE9p
            sagiWR21l1gW/lNaEW2ICKhJXbaqbE/pis45/yyPI4Q1Jd1VqZv744ejnZJnpAx9
            mYSE5RqssMeV6Wlmu1xWljOPeerOVIKUfHY38y8GZwk7TNYAMajratG2dj+v9eAV
            AgMBAAGjUzBRMB0GA1UdDgQWBBSCNCjkb66eVsIaa+AouwUsxU4b6zAfBgNVHSME
            GDAWgBSCNCjkb66eVsIaa+AouwUsxU4b6zAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
            SIb3DQEBCwUAA4IBAQBVSbRObxPwYFk0nqF+THQDG/JfpAP/R6g+tagFIBkATLTu
            zeZ6oJggWNSfgcBviTpXc6i1AT3V3iqzq9KZ5rfm9ckeJmjBd9gAcyqaeF/YpWEb
            ZAtbxfgPLI3jK+Sn8S9fI/4djEUl6F/kARpq5ljYHt9BKlBDyL2sHymQcrDC3pTZ
            hEOM4cDbyKHgt/rjcNhPRn/q8g3dDhBdzjlNzaCNH/kmqWpot9AwmhhfPTcf1VRc
            gxdg0CTQvQvuceEvIYYYVGh/cIsIhV2AyiNBzV5jJw5ztQoVyWvdqn3B1YpMP8oK
            +nadUcactH4gbsX+oXRULNC7Cdd9bp2G7sQc+aZm
            -----END CERTIFICATE-----
        ```

2. Install an {{agent}} as a {{fleet-server}} on the host and configure it to use TLS:

    1. If you don't already have a {{fleet-server}} service token, click the **Agents** tab in {{fleet}} and follow the instructions to generate the service token now.

        ::::{tip}
        The in-product installation steps are incomplete. Before running the `install` command, add the settings shown in the next step.
        ::::

    2. From the directory where you extracted {{fleet-server}}, run the `install` command and specify the certificates to use.

        The following command installs {{agent}} as a service, enrolls it in the {{fleet-server}} policy, and starts the service.

        ::::{note}
        If you're using DEB or RPM, or already have the {{agent}} installed, use the `enroll` command along with the following options, then start the service as described in [Start {{agent}}](/reference/ingestion-tools/fleet/start-stop-elastic-agent.md#start-elastic-agent-service).
        ::::

        ```shell
        sudo ./elastic-agent install \
          --url=https://192.0.2.1:8220 \
          --fleet-server-es=https://192.0.2.0:9200 \
          --fleet-server-service-token=AAEBAWVsYXm0aWMvZmxlZXQtc2XydmVyL3Rva2VuLTE2MjM4OTAztDU1OTQ6dllfVW1mYnFTVjJwTC2ZQ0EtVnVZQQ \
          --fleet-server-policy=fleet-server-policy \
          --fleet-server-es-ca=/path/to/elasticsearch-ca.crt \
          --certificate-authorities=/path/to/ca.crt \
          --fleet-server-cert=/path/to/fleet-server.crt \
          --fleet-server-cert-key=/path/to/fleet-server.key \
          --fleet-server-port=8220 \
          --elastic-agent-cert=/tmp/fleet-server.crt \
          --elastic-agent-cert-key=/tmp/fleet-server.key \
          --elastic-agent-cert-key-passphrase=/tmp/fleet-server/passphrase-file \
          --fleet-server-es-cert=/tmp/fleet-server.crt \
          --fleet-server-es-cert-key=/tmp/fleet-server.key \
          --fleet-server-client-auth=required
        ```

        Where:

        `url`
        : {{fleet-server}} URL.

        `fleet-server-es`
        : {{es}} URL.

        `fleet-server-service-token`
        : Service token to use to communicate with {{es}}.

        `fleet-server-policy`
        : The specific policy that {{fleet-server}} will use.

        `fleet-server-es-ca`
        : CA certificate that the current {{fleet-server}} uses to connect to {{es}}.

        `certificate-authorities`
        : List of paths to PEM-encoded CA certificate files that should be trusted for the other {{agents}} to connect to this {{fleet-server}}.

        `fleet-server-cert`
        : The path to the PEM-encoded certificate (or certificate chain) associated with `fleet-server-cert-key`, used to expose this {{fleet-server}} HTTPS endpoint to the other {{agents}}.

        `fleet-server-cert-key`
        : Private key used to expose this {{fleet-server}} HTTPS endpoint to the other {{agents}}.

        `elastic-agent-cert`
        : The certificate to use as the client certificate for {{agent}}'s connections to {{fleet-server}}.

        `elastic-agent-cert-key`
        : The path to the private key to use for {{agent}}'s connections to {{fleet-server}}.

        `elastic-agent-cert-key-passphrase`
        : The path to the file that contains the passphrase for the mutual TLS private key that {{agent}} will use to connect to {{fleet-server}}. The file must contain only the characters of the passphrase, with no newline or extra non-printing characters. This option is only used if `elastic-agent-cert-key` is encrypted and requires a passphrase to use.

        `fleet-server-es-cert`
        : The path to the client certificate that {{fleet-server}} will use when connecting to {{es}}.

        `fleet-server-es-cert-key`
        : The path to the private key that {{fleet-server}} will use when connecting to {{es}}.

        `fleet-server-client-auth`
        : One of `none`, `optional`, or `required`. Defaults to `none`. {{fleet-server}}'s `client_authentication` option for client mTLS connections.
If `optional` or `required` is specified, client certificates are verified using the CAs specified in the `--certificate-authorities` flag.

        Additionally, an optional passphrase for the private key may be specified with:

        `fleet-server-cert-key-passphrase`
        : Passphrase file used to decrypt {{fleet-server}}'s private key.

        What happens if you enroll {{fleet-server}} without specifying certificates? If the certificates are managed by your organization and installed at the system level, they will be used to encrypt traffic between {{agent}}s, {{fleet-server}}, and {{es}}. If system-level certificates don't exist, {{fleet-server}} automatically generates self-signed certificates. Traffic between {{fleet-server}} and {{agent}}s over HTTPS is encrypted, but the certificate chain cannot be verified. Any {{agent}}s enrolling in {{fleet-server}} will need to pass the `--insecure` flag to acknowledge that the certificate chain is not verified. Allowing {{fleet-server}} to generate self-signed certificates is useful to get things running for development, but is not recommended in a production environment.

3. Install your {{agent}}s and enroll them in {{fleet}}.

    {{agent}}s connecting to a secured {{fleet-server}} need to pass in the CA certificate used by the {{fleet-server}}. The CA certificate used by {{es}} is already specified in the agent policy because it's set under {{fleet}} settings in {{kib}}. You do not need to pass it on the command line.

    The following command installs {{agent}} as a service, enrolls it in the agent policy associated with the specified token, and starts the service.

    ```shell
    sudo elastic-agent install --url=https://192.0.2.1:8220 \
      --enrollment-token= \
      --certificate-authorities=/path/to/ca.crt
    ```

    Where:

    `url`
    : {{fleet-server}} URL to use to enroll the {{agent}} into {{fleet}}.

    `enrollment-token`
    : The enrollment token for the policy that will be applied to the {{agent}}.

    `certificate-authorities`
    : CA certificate to use to connect to {{fleet-server}}. This is the CA used to [generate a certificate and key](#generate-fleet-server-certs) for {{fleet-server}}.

    Don't have an enrollment token? On the **Agents** tab in {{fleet}}, click **Add agent**. Under **Enroll and start the Elastic Agent**, follow the in-product installation steps, making sure that you add the `--certificate-authorities` option before you run the command.

diff --git a/reference/ingestion-tools/fleet/secure-logstash-connections.md b/reference/ingestion-tools/fleet/secure-logstash-connections.md new file mode 100644 index 0000000000..f5ec4e7090 --- /dev/null +++ b/reference/ingestion-tools/fleet/secure-logstash-connections.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/secure-logstash-connections.html
---

# Configure SSL/TLS for the Logstash output [secure-logstash-connections]

To send data from {{agent}} to {{ls}} securely, you need to configure Transport Layer Security (TLS). Using TLS ensures that your {{agent}}s send encrypted data to trusted {{ls}} servers, and that your {{ls}} servers receive data from trusted {{agent}} clients.

## Prerequisites [secure-logstash-prereqs]

* Make sure your [subscription level](https://www.elastic.co/subscriptions) supports output to {{ls}}.
* On Windows, add port 8220 for {{fleet-server}} and 5044 for {{ls}} to the inbound port rules in Windows Advanced Firewall.
* If you are connecting to a self-managed {{es}} cluster, you need the CA certificate that was used to sign the certificates for the HTTP layer of the {{es}} cluster. For more information, refer to the [{{es}} security docs](/deploy-manage/deploy/self-managed/installing-elasticsearch.md).

## Generate custom certificates and private keys [generate-logstash-certs]

You can use whatever process you typically use to generate PEM-formatted certificates. The examples shown here use the `certutil` tool provided by {{es}}.

::::{tip}
The `certutil` tool is not available on {{ecloud}}, but you can still use it to generate certificates for {{agent}} to {{ls}} connections. Just [download an {{es}} package](https://www.elastic.co/downloads/elasticsearch), extract it to a local directory, and run the `elasticsearch-certutil` command. There's no need to start {{es}}!
::::

1. Generate a certificate authority (CA). Skip this step if you want to use an existing CA.

    ```shell
    ./bin/elasticsearch-certutil ca --pem
    ```

    This command creates a zip file that contains the CA certificate and key you'll use to sign the certificates. Extract the zip file:

    :::{image} images/ca-certs.png
    :alt: Screen capture of a folder called ca that contains two files: ca.crt and ca.key
    :::

2. Generate a client SSL certificate signed by your CA. For example:

    ```shell
    ./bin/elasticsearch-certutil cert \
      --name client \
      --ca-cert /path/to/ca/ca.crt \
      --ca-key /path/to/ca/ca.key \
      --pem
    ```

    Extract the zip file:

    :::{image} images/client-certs.png
    :alt: Screen capture of a folder called client that contains two files: client.crt and client.key
    :::

3. Generate a {{ls}} SSL certificate signed by your CA. For example:

    ```shell
    ./bin/elasticsearch-certutil cert \
      --name logstash \
      --ca-cert /path/to/ca/ca.crt \
      --ca-key /path/to/ca/ca.key \
      --dns your.host.name.here \
      --ip 192.0.2.1 \
      --pem
    ```

    Extract the zip file:

    :::{image} images/logstash-certs.png
    :alt: Screen capture of a folder called logstash that contains two files: logstash.crt and logstash.key
    :::

4. Convert the {{ls}} key to pkcs8. For example, on Linux run:

    ```shell
    openssl pkcs8 -inform PEM -in logstash.key -topk8 -nocrypt -outform PEM -out logstash.pkcs8.key
    ```

Store these files in a secure location.

## Configure the {{ls}} pipeline [configure-ls-ssl]

::::{tip}
If you've already created the {{ls}} `elastic-agent-pipeline.conf` pipeline and added it to `pipelines.yml`, skip to the example configurations and modify your pipeline configuration as needed.
::::

In your {{ls}} configuration directory, open the `pipelines.yml` file and add the following configuration, replacing the path with the location of your pipeline file:

```yaml
- pipeline.id: elastic-agent-pipeline
  path.config: "/etc/path/to/elastic-agent-pipeline.conf"
```

In the `elastic-agent-pipeline.conf` file, add the pipeline configuration. Note that the configuration needed for {{ess}} on {{ecloud}} is different from the configuration for self-managed {{es}} clusters. If you copied the configuration shown in {{fleet}}, adjust it as needed.

{{ess}} example:

```text
input {
  elastic_agent {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/path/to/ca.crt"]
    ssl_certificate => "/path/to/logstash.crt"
    ssl_key => "/path/to/logstash.pkcs8.key"
    ssl_client_authentication => "required"
  }
}

output {
  elasticsearch {
    cloud_id => "xxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx=" <1>
    api_key => "xxxx:xxxx" <2>
    data_stream => true
    ssl => true <3>
  }
}
```

1. Use the `cloud_id` shown on your deployment page in {{ecloud}}.
2. In {{fleet}}, you can generate this API key when you add a {{ls}} output.
3. {{ess}} uses standard publicly trusted certificates, so there's no need to specify other SSL settings here.

Self-managed {{es}} cluster example:

```text
input {
  elastic_agent {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/path/to/ca.crt"]
    ssl_certificate => "/path/to/logstash.crt"
    ssl_key => "/path/to/logstash.pkcs8.key"
    ssl_client_authentication => "required"
  }
}

output {
  elasticsearch {
    hosts => "https://xxxx:9200"
    api_key => "xxxx:xxxx"
    data_stream => true
    ssl => true
    cacert => "/path/to/http_ca.crt" <1>
  }
}
```

1. Use the certificate that was generated for {{es}}.

To learn more about the {{ls}} configuration, refer to:

* [{{agent}} input plugin](logstash://docs/reference/plugins-inputs-elastic_agent.md)
* [{{es}} output plugin](logstash://docs/reference/plugins-outputs-elasticsearch.md)
* [Secure your connection to {{es}}](logstash://docs/reference/secure-connection.md)

When you're done configuring the pipeline, restart {{ls}}:

```shell
bin/logstash
```

## Add a {{ls}} output to {{fleet}} [add-ls-output]

This section describes how to add a {{ls}} output and configure SSL settings in {{fleet}}. If you're running {{agent}} standalone, refer to the [{{ls}} output](/reference/ingestion-tools/fleet/logstash-output.md) configuration docs.

1. In {{kib}}, go to **{{fleet}} > Settings**.
2. Under **Outputs**, click **Add output**. If you've been following the {{ls}} steps in {{fleet}}, you might already be on this page.
3. Specify a name for the output.
4. For **Type**, select **Logstash**.
5. Under **Logstash hosts**, specify the host and port your agents will use to connect to {{ls}}. Use the format `host:port`.
6. In the **Server SSL certificate authorities** field, paste in the entire contents of the `ca.crt` file you [generated earlier](#generate-logstash-certs).
7. In the **Client SSL certificate** field, paste in the entire contents of the `client.crt` file you generated earlier.
8. In the **Client SSL certificate key** field, paste in the entire contents of the `client.key` file you generated earlier.

:::{image} images/add-logstash-output.png
:alt: Screen capture of the {{ls}} output settings in {{fleet}}
:class: screenshot
:::

When you're done, save and apply the settings.

## Select the {{ls}} output in an agent policy [use-ls-output]

{{ls}} is now listening for events from {{agent}}, but events are not streaming into {{es}} yet. You need to select the {{ls}} output in an agent policy. You can edit an existing policy or create a new one:

1. In {{kib}}, go to **{{fleet}} > Agent policies** and either create a new agent policy or click an existing policy to edit it:

    * To change the output settings in a new policy, click **Create agent policy** and expand **Advanced options**.
    * To change the output settings in an existing policy, click the policy to edit it, then click **Settings**.

2. Set **Output for integrations** and (optionally) **Output for agent monitoring** to use the {{ls}} output you created earlier. You might need to scroll down to see these options.

    :::{image} images/agent-output-settings.png
    :alt: Screen capture showing the {{ls}} output policy selected in an agent policy
    :class: screenshot
    :::

3. Save your changes.

Any {{agent}}s enrolled in the agent policy will begin sending data to {{es}} via {{ls}}. If you don't have any {{agent}}s enrolled in the agent policy yet, install and enroll them now.

There might be a slight delay while the {{agent}}s update to the new policy and connect to {{ls}} over a secure connection.

## Test the connection [test-ls-connection]

To make sure {{ls}} is sending data, run the following command from the host where {{ls}} is running:

```shell
curl -XGET localhost:9600/_node/stats/events
```

The request should return stats on the number of events in and out. If these values are 0, check the {{agent}} logs for problems.

When data is streaming to {{es}}, go to **{{observability}}** and click **Metrics** to view metrics about your system.

diff --git a/reference/ingestion-tools/fleet/secure.md b/reference/ingestion-tools/fleet/secure.md new file mode 100644 index 0000000000..64ad7700a1 --- /dev/null +++ b/reference/ingestion-tools/fleet/secure.md

---
navigation_title: "Secure connections"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/secure.html
---

# Secure {{agent}} connections [secure]

Some connections may require you to generate certificates and configure SSL/TLS. Refer to:

* [Configure SSL/TLS for self-managed {{fleet-server}}s](/reference/ingestion-tools/fleet/secure-connections.md)
* [{{agent}} deployment models with mutual TLS](/reference/ingestion-tools/fleet/mutual-tls.md)
* [Configure SSL/TLS for the {{ls}} output](/reference/ingestion-tools/fleet/secure-logstash-connections.md)

diff --git a/reference/ingestion-tools/fleet/set-inactivity-timeout.md b/reference/ingestion-tools/fleet/set-inactivity-timeout.md new file mode 100644 index 0000000000..b0eacaac46 --- /dev/null +++ b/reference/ingestion-tools/fleet/set-inactivity-timeout.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/set-inactivity-timeout.html
---

# Set inactivity timeout [set-inactivity-timeout]

The inactivity timeout moves {{agent}}s to inactive status after they have been offline for a set amount of time. Inactive {{agent}}s are still valid {{agent}}s, but they are removed from the main {{fleet}} UI, allowing you to better manage {{agent}}s and declutter the {{fleet}} UI.

When {{fleet-server}} receives a check-in from an inactive {{agent}}, the agent returns to healthy status.

For example, if an employee is on holiday with their laptop off, the {{agent}} will transition to offline, and then to inactive once the inactivity timeout limit is reached. This prevents the inactive {{agent}} from cluttering the {{fleet}} UI. When the employee returns, the {{agent}} checks in and returns to healthy status with valid API keys.

If an {{agent}} is no longer valid, you can manually [unenroll](/reference/ingestion-tools/fleet/unenroll-elastic-agent.md) inactive {{agent}}s to revoke the API keys. Unenrolled agents need to be re-enrolled to be operational again.

For more on {{agent}} statuses, see [view agent status](/reference/ingestion-tools/fleet/monitor-elastic-agent.md#view-agent-status).

## Set the inactivity timeout [setting-inactivity-timeout]

Set the inactivity timeout in the {{agent}} policy to the amount of time after which you want an offline {{agent}} to become inactive.

To set the inactivity timeout:

1. In **{{fleet}}**, select **Agent policies**.
2. Click the policy name, then click **Settings**.
3. In the **Inactivity timeout** field, enter a value in seconds. The default value is 1209600 seconds (two weeks).

diff --git a/reference/ingestion-tools/fleet/start-stop-elastic-agent.md b/reference/ingestion-tools/fleet/start-stop-elastic-agent.md new file mode 100644 index 0000000000..f679856f4a --- /dev/null +++ b/reference/ingestion-tools/fleet/start-stop-elastic-agent.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/start-stop-elastic-agent.html
---

# Start and stop Elastic Agents on edge hosts [start-stop-elastic-agent]

You can start and stop the {{agent}} service on the host where it's running. While the service is stopped, the agent no longer sends data to {{es}}.

## Start {{agent}} [start-elastic-agent-service]

If you've stopped the {{agent}} service and want to restart it, use the commands that work with your system:

:::::::{tab-set}

::::::{tab-item} macOS
```shell
sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist
```
::::::

::::::{tab-item} Linux
```shell
sudo service elastic-agent start
```
::::::

::::::{tab-item} Windows
```shell
Start-Service "Elastic Agent"
```
::::::

::::::{tab-item} DEB
The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands.

Use `systemctl` to start the agent:

```shell
sudo systemctl start elastic-agent
```

Otherwise, use:

```shell
sudo service elastic-agent start
```
::::::

::::::{tab-item} RPM
The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands.

Use `systemctl` to start the agent:

```shell
sudo systemctl start elastic-agent
```

Otherwise, use:

```shell
sudo service elastic-agent start
```
::::::

:::::::

## Stop {{agent}} [stop-elastic-agent-service]

To stop {{agent}} and its related executables, stop the {{agent}} service. Use the commands that work with your system:

:::::::{tab-set}

::::::{tab-item} macOS
```shell
sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist
```

::::{note}
{{agent}} will restart automatically if the system is rebooted.
::::
::::::

::::::{tab-item} Linux
```shell
sudo service elastic-agent stop
```

::::{note}
{{agent}} will restart automatically if the system is rebooted.
::::
::::::

::::::{tab-item} Windows
```shell
Stop-Service "Elastic Agent"
```

If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}).

::::{note}
{{agent}} will restart automatically if the system is rebooted.
::::
::::::

::::::{tab-item} DEB
The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands.

Use `systemctl` to stop the agent:

```shell
sudo systemctl stop elastic-agent
```

Otherwise, use:

```shell
sudo service elastic-agent stop
```

::::{note}
{{agent}} will restart automatically if the system is rebooted.
::::
::::::

::::::{tab-item} RPM
The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands.

Use `systemctl` to stop the agent:

```shell
sudo systemctl stop elastic-agent
```

Otherwise, use:

```shell
sudo service elastic-agent stop
```

::::{note}
{{agent}} will restart automatically if the system is rebooted.
::::
::::::

:::::::

diff --git a/reference/ingestion-tools/fleet/structure-config-file.md b/reference/ingestion-tools/fleet/structure-config-file.md new file mode 100644 index 0000000000..5ae91e7f9e --- /dev/null +++ b/reference/ingestion-tools/fleet/structure-config-file.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/structure-config-file.html
---

# Structure of a config file [structure-config-file]

The `elastic-agent.yml` policy file contains all of the settings that determine how {{agent}} runs. The most important and commonly used settings are described here, including input and output options, providers used for variables and conditional output, security settings, logging options, enabling of special features, and specifications for {{agent}} upgrades.

An `elastic-agent.yml` file is modular: you can combine input, output, and all other settings to enable the [{{integrations}}](integration-docs://docs/reference/index.md) you want to use with {{agent}}. Refer to [Create a standalone {{agent}} policy](/reference/ingestion-tools/fleet/create-standalone-agent-policy.md) for the steps to download the settings to use as a starting point, and then refer to the following examples to learn about the available settings:

* [Config file examples](/reference/ingestion-tools/fleet/config-file-examples.md)
* [Use standalone {{agent}} to monitor nginx](/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md)

## Config file components [structure-config-file-components]

The following categories include the most common settings used to configure standalone {{agent}}. Follow each link for more detail and examples; a minimal sketch combining these components follows this list.

[Inputs](/reference/ingestion-tools/fleet/elastic-agent-input-configuration.md)
: Specify how {{agent}} locates and processes input data.

[Providers](/reference/ingestion-tools/fleet/providers.md)
: Specify the key-value pairs used for variable substitution and conditionals in {{agent}} output.

[Outputs](/reference/ingestion-tools/fleet/elastic-agent-output-configuration.md)
: Specify where {{agent}} sends data.

[SSL/TLS](/reference/ingestion-tools/fleet/elastic-agent-ssl-configuration.md)
: Configure SSL, including SSL protocols and settings for certificates and keys.

[Logging](/reference/ingestion-tools/fleet/elastic-agent-standalone-logging-config.md)
: Configure the {{agent}} logging output.

[Feature flags](/reference/ingestion-tools/fleet/elastic-agent-standalone-feature-flags.md)
: Configure any experimental features in {{agent}}. These are disabled by default.

[Agent download](/reference/ingestion-tools/fleet/elastic-agent-standalone-download.md)
: Specify the location of required artifacts and other settings used for {{agent}} upgrades.
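To give a sense of how these components combine, here is a minimal sketch of a standalone `elastic-agent.yml` policy. The values are placeholders rather than a definitive configuration; refer to the pages linked above for the full set of options.

```yaml
# Minimal standalone policy sketch: one metrics input routed to one Elasticsearch output.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://localhost:9200"]    # placeholder Elasticsearch endpoint
    api_key: "API_KEY_ID:API_KEY_SECRET" # placeholder credentials

inputs:
  - id: system-metrics                   # unique ID for this input
    type: system/metrics
    use_output: default                  # route events to the output defined above
    streams:
      - metricsets: [cpu]
        data_stream.dataset: system.cpu

agent.logging.level: info                # see the Logging page for more options
```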
diff --git a/reference/ingestion-tools/fleet/syslog-processor.md b/reference/ingestion-tools/fleet/syslog-processor.md new file mode 100644 index 0000000000..48047a7e9b --- /dev/null +++ b/reference/ingestion-tools/fleet/syslog-processor.md

---
navigation_title: "syslog"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/syslog-processor.html
---

# Syslog [syslog-processor]

The syslog processor parses RFC 3164 or RFC 5424 formatted syslog messages that are stored in a field. The processor itself does not handle receiving syslog messages from external sources; this is done through an input, such as the TCP input. Certain integrations, when enabled through configuration, will embed the syslog processor to process syslog messages, such as Custom TCP Logs and Custom UDP Logs. A hedged configuration sketch follows the settings table below.

## Example [_example_33]

```yaml
  - syslog:
      field: message
```

```json
{
  "message": "<165>1 2022-01-11T22:14:15.003Z mymachine.example.com eventslog 1024 ID47 [exampleSDID@32473 iut=\"3\" eventSource=\"Application\" eventID=\"1011\"][examplePriority@32473 class=\"high\"] this is the message"
}
```

Will produce the following output:

```json
{
  "@timestamp": "2022-01-11T22:14:15.003Z",
  "log": {
    "syslog": {
      "priority": 165,
      "facility": {
        "code": 20,
        "name": "local4"
      },
      "severity": {
        "code": 5,
        "name": "Notice"
      },
      "hostname": "mymachine.example.com",
      "appname": "eventslog",
      "procid": "1024",
      "msgid": "ID47",
      "version": 1,
      "structured_data": {
        "exampleSDID@32473": {
          "iut": "3",
          "eventSource": "Application",
          "eventID": "1011"
        },
        "examplePriority@32473": {
          "class": "high"
        }
      }
    }
  },
  "message": "this is the message"
}
```

## Configuration settings [_configuration_settings_39]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | Yes | `message` | Source field containing the syslog message. |
| `format` | No | `auto` | Syslog format to use: `rfc3164` or `rfc5424`. To automatically detect the format from the log entries, set this option to `auto`. |
| `timezone` | No | `Local` | IANA time zone name (for example, `America/New_York`) or a fixed time offset (for example, `+0200`) to use when parsing syslog timestamps that do not contain a time zone. Specify `Local` to use the machine's local time zone. |
| `overwrite_keys` | No | `true` | Whether keys that already exist in the event are overwritten by keys from the syslog message. |
| `ignore_missing` | No | `false` | Whether to ignore missing fields. If `true`, the processor does not return an error when a specified field does not exist. |
| `ignore_failure` | No | `false` | Whether to ignore all errors produced by the processor. |
| `tag` | No | | An identifier for this processor. Useful for debugging. |
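As an illustration of the embedding mentioned above, here is a hedged sketch of a standalone {{agent}} input that receives raw syslog lines over TCP and parses them with this processor. The input ID, dataset name, and listen address are made-up placeholders; check the Custom TCP Logs integration documentation for the exact settings your version supports.

```yaml
inputs:
  - id: custom-tcp-syslog        # hypothetical input ID
    type: tcp                    # raw TCP input; it receives lines but does not parse them
    use_output: default
    streams:
      - data_stream.dataset: custom_tcp.syslog   # placeholder dataset name
        host: "0.0.0.0:5140"     # placeholder listen address for incoming syslog
        processors:
          - syslog:
              field: message     # parse the raw line stored in `message`
              format: auto       # detect RFC 3164 vs. RFC 5424 per entry
```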
## Timestamps [_timestamps]

The RFC 3164 format accepts the following forms of timestamps:

* Local timestamp (`Mmm dd hh:mm:ss`):

    * `Jan 23 14:09:01`

* RFC-3339*:

    * `2003-10-11T22:14:15Z`
    * `2003-10-11T22:14:15.123456Z`
    * `2003-10-11T22:14:15-06:00`
    * `2003-10-11T22:14:15.123456-06:00`

::::{note}
The local timestamp (for example, `Jan 23 14:09:01`) that accompanies an RFC 3164 message lacks year and time zone information. The time zone will be enriched using the `timezone` configuration option, and the year will be enriched using the system's local time (accounting for time zones). Because of this, it is possible for messages to appear in the future. For example, this might happen if logs generated on December 31 2021 are ingested on January 1 2022. The logs would be enriched with the year 2022 instead of 2021.
::::

The RFC 5424 format accepts the following forms of timestamps:

* RFC-3339:

    * `2003-10-11T22:14:15Z`
    * `2003-10-11T22:14:15.123456Z`
    * `2003-10-11T22:14:15-06:00`
    * `2003-10-11T22:14:15.123456-06:00`

Formats with an asterisk (*) are a non-standard allowance.

## Structured Data [_structured_data]

For RFC 5424-formatted logs, if the structured data cannot be parsed according to RFC standards, the original structured data text will be prepended to the message field, separated by a space.

diff --git a/reference/ingestion-tools/fleet/timestamp-processor.md b/reference/ingestion-tools/fleet/timestamp-processor.md new file mode 100644 index 0000000000..1ac586367f --- /dev/null +++ b/reference/ingestion-tools/fleet/timestamp-processor.md

---
navigation_title: "timestamp"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/timestamp-processor.html
---

# Timestamp [timestamp-processor]

::::{warning}
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::

The `timestamp` processor parses a timestamp from a field. By default, the timestamp processor writes the parsed result to the `@timestamp` field. You can specify a different field by setting the `target_field` parameter. The timestamp value is parsed according to the `layouts` parameter. Multiple layouts can be specified, and they will be used sequentially to attempt parsing the timestamp field.

::::{note}
The timestamp layouts used by this processor are different from the formats supported by date processors in Logstash and Elasticsearch Ingest Node.
::::

The `layouts` are described using a reference time that is based on this specific time:

```
Mon Jan 2 15:04:05 MST 2006
```

Since MST is GMT-0700, the reference time is:

```
01/02 03:04:05PM '06 -0700
```

To define your own layout, rewrite the reference time in a format that matches the timestamps you expect to parse. For more layout examples and details, see the [Go time package documentation](https://godoc.org/time#pkg-constants).

If a layout does not contain a year, the current year in the specified `timezone` is added to the time value.

## Example [_example_34]

Here is an example that parses the `start_time` field, writes the result to the `@timestamp` field, and then deletes the `start_time` field. When the processor is loaded, it will immediately validate that the three `test` timestamps parse with this configuration.

```yaml
  - timestamp:
      field: start_time
      layouts:
        - '2006-01-02T15:04:05Z'
        - '2006-01-02T15:04:05.999Z'
        - '2006-01-02T15:04:05.999-07:00'
      test:
        - '2019-06-22T16:33:51Z'
        - '2019-11-18T04:59:51.123Z'
        - '2020-08-03T07:10:20.123456+02:00'
  - drop_fields:
      fields: [start_time]
```

## Configuration settings [_configuration_settings_40]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | Yes | | Source field containing the time to be parsed. |
| `target_field` | No | `@timestamp` | Target field for the parsed time value. The target value is always written as UTC. |
| `layouts` | Yes | | Timestamp layouts that define the expected time value format. In addition to layouts, `UNIX` and `UNIX_MS` are accepted. |
| `timezone` | No | `UTC` | IANA time zone name (for example, `America/New_York`) or fixed time offset (for example, `+0200`) to use when parsing times that do not contain a time zone. Specify `Local` to use the machine's local time zone. |
| `ignore_missing` | No | `false` | Whether to ignore errors when the source field is missing. |
| `ignore_failure` | No | `false` | Whether to ignore all errors produced by the processor. |
| `test` | No | | List of timestamps that must parse successfully when loading the processor. |
| `id` | No | | Identifier for this processor instance. Useful for debugging. |

diff --git a/reference/ingestion-tools/fleet/tls-overview.md b/reference/ingestion-tools/fleet/tls-overview.md new file mode 100644 index 0000000000..099425b9b2 --- /dev/null +++ b/reference/ingestion-tools/fleet/tls-overview.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/tls-overview.html
---

# One-way and mutual TLS certifications flow [tls-overview]

This page provides an overview of the relationship between the various certificates and certificate authorities (CAs) that you configure for {{fleet-server}} and {{agent}}, using the `elastic-agent install` TLS command options.

* [Simple one-way TLS connection](#one-way-tls-connection)
* [Mutual TLS connection](#mutual-tls-connection)

## Simple one-way TLS connection [one-way-tls-connection]

The following `elastic-agent install` command configures a {{fleet-server}} with the required certificates and certificate authorities to enable one-way TLS connections between the components involved:

```shell
elastic-agent install --url=https://your-fleet-server.elastic.co:443 \
--certificate-authorities=/path/to/fleet-ca \
--fleet-server-es=https://es.elastic.com:443 \
--fleet-server-es-ca=/path/to/es-ca \
--fleet-server-cert=/path/to/fleet-cert \
--fleet-server-cert-key=/path/to/fleet-cert-key \
--fleet-server-service-token=FLEET-SERVER-SERVICE-TOKEN \
--fleet-server-policy=FLEET-SERVER-POLICY-ID \
--fleet-server-port=8220
```

{{agent}} is configured with `fleet-ca` as the certificate authority that it needs to validate certificates from {{fleet-server}}.

During the TLS connection setup, {{fleet-server}} presents its certificate `fleet-cert` to the agent, and the agent (as a client) uses `fleet-ca` to validate the presented certificate.

:::{image} images/tls-overview-oneway-fs-agent.png
:alt: Diagram of one-way TLS connection between Fleet Server and Elastic Agent
:::

{{fleet-server}} also establishes a secure connection to an {{es}} cluster. In this case, {{fleet-server}} is configured with the certificate authority from {{es}}, `es-ca`. {{es}} presents its certificate, `es-cert`, and {{fleet-server}} validates the presented certificate using the certificate authority `es-ca`.

:::{image} images/tls-overview-oneway-fs-es.png
:alt: Diagram of one-way TLS connection between Fleet Server and Elasticsearch
:::

### Relationship between components in a one-way TLS connection [_relationship_between_components_in_a_one_way_tls_connection]

:::{image} images/tls-overview-oneway-all.jpg
:alt: Diagram of one-way TLS connection between components
:::

## Mutual TLS connection [mutual-tls-connection]

The following `elastic-agent install` command configures a {{fleet-server}} with the required certificates and certificate authorities to enable mutual TLS connections between the components involved:

```shell
elastic-agent install --url=https://your-fleet-server.elastic.co:443 \
--certificate-authorities=/path/to/fleet-ca,/path/to/agent-ca \
--elastic-agent-cert=/path/to/agent-cert \
--elastic-agent-cert-key=/path/to/agent-cert-key \
--elastic-agent-cert-key-passphrase=/path/to/agent-cert-key-passphrase \
--fleet-server-es=https://es.elastic.com:443 \
--fleet-server-es-ca=/path/to/es-ca \
--fleet-server-es-cert=/path/to/fleet-es-cert \
--fleet-server-es-cert-key=/path/to/fleet-es-cert-key \
--fleet-server-cert=/path/to/fleet-cert \
--fleet-server-cert-key=/path/to/fleet-cert-key \
--fleet-server-client-auth=required \
--fleet-server-service-token=FLEET-SERVER-SERVICE-TOKEN \
--fleet-server-policy=FLEET-SERVER-POLICY-ID \
--fleet-server-port=8220
```

As with the [one-way TLS example](#one-way-tls-connection), {{agent}} is configured with `fleet-ca` as the certificate authority that it needs to validate certificates from the {{fleet-server}}. {{fleet-server}} presents its certificate `fleet-cert` to the agent, and the agent (as a client) uses `fleet-ca` to validate the presented certificate.

To establish a mutual TLS connection, the agent presents its certificate, `agent-cert`, and {{fleet-server}} validates this certificate using the `agent-ca` that it has stored in memory.

:::{image} images/tls-overview-mutual-fs-agent.png
:alt: Diagram of mutual TLS connection between Fleet Server and Elastic Agent
:::

{{fleet-server}} can also establish a mutual TLS connection to the {{es}} cluster. In this case, {{fleet-server}} is configured with the certificate authority from {{es}}, `es-ca`, and uses it to validate the certificate `es-cert` presented to it by {{es}}.

:::{image} images/tls-overview-mutual-fs-es.png
:alt: Diagram of mutual TLS connection between Fleet Server and Elasticsearch
:::

Note that you can also configure mutual TLS for {{fleet-server}} and {{agent}} [using a proxy](/reference/ingestion-tools/fleet/mutual-tls.md#mutual-tls-cloud-proxy).
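If you want to sanity-check the certificate relationships before installing, OpenSSL can confirm that each certificate chains to the CA the other side trusts. This is an illustrative sketch, assuming the PEM files are named as in the commands above:

```sh
# Confirm the Fleet Server certificate validates against the CA the agents trust
openssl verify -CAfile fleet-ca fleet-cert

# Confirm the agent's client certificate validates against the CA Fleet Server trusts
openssl verify -CAfile agent-ca agent-cert
```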
### Relationship between components in a mutual TLS connection [_relationship_between_components_in_a_mutual_tls_connection]

:::{image} images/tls-overview-mutual-all.jpg
:alt: Diagram of mutual TLS connection between components
:::

diff --git a/reference/ingestion-tools/fleet/translate_sid-processor.md b/reference/ingestion-tools/fleet/translate_sid-processor.md new file mode 100644 index 0000000000..f6b665de1f --- /dev/null +++ b/reference/ingestion-tools/fleet/translate_sid-processor.md

---
navigation_title: "translate_sid"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/translate_sid-processor.html
---

# Translate SID [translate_sid-processor]

The `translate_sid` processor translates a Windows security identifier (SID) into an account name. It retrieves the name of the account associated with the SID, the first domain on which the SID is found, and the type of account. This is only available on Windows.

Every account on a network is issued a unique SID when the account is first created. Internal processes in Windows refer to an account's SID rather than the account's user or group name, and these values sometimes appear in logs.

If the SID is invalid (malformed) or does not map to any account on the local system or domain, the processor will return an error unless `ignore_failure` is set.

## Example [_example_35]

```yaml
  - translate_sid:
      field: winlog.event_data.MemberSid
      account_name_target: user.name
      domain_target: user.domain
      ignore_missing: true
      ignore_failure: true
```

## Configuration settings [_configuration_settings_41]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `field` | Yes | | Source field containing a Windows security identifier (SID). |
| `account_name_target` | Yes* | | Target field for the account name value. |
| `account_type_target` | Yes* | | Target field for the account type value. |
| `domain_target` | Yes* | | Target field for the domain value. |
| `ignore_missing` | No | `false` | Ignore errors when the source field is missing. |
| `ignore_failure` | No | `false` | Ignore all errors produced by the processor. |

* At least one of `account_name_target`, `account_type_target`, and `domain_target` must be configured.

diff --git a/reference/ingestion-tools/fleet/truncate_fields-processor.md b/reference/ingestion-tools/fleet/truncate_fields-processor.md new file mode 100644 index 0000000000..9d9670a54d --- /dev/null +++ b/reference/ingestion-tools/fleet/truncate_fields-processor.md

---
navigation_title: "truncate_fields"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/truncate_fields-processor.html
---

# Truncate fields [truncate_fields-processor]

The `truncate_fields` processor truncates a field to a given size. If the size of the field is smaller than the limit, the field is left as is.
## Example [_example_36]

This configuration truncates the field named `message` to five characters:

```yaml
  - truncate_fields:
      fields:
        - message
      max_characters: 5
      fail_on_error: false
      ignore_missing: true
```

## Configuration settings [_configuration_settings_42]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | List of fields to truncate. You can use the `@metadata.` prefix to truncate values in the event metadata instead of event fields. |
| `max_bytes` | Yes* | | Maximum number of bytes in a field. Mutually exclusive with `max_characters`. |
| `max_characters` | Yes* | | Maximum number of characters in a field. Mutually exclusive with `max_bytes`. |
| `fail_on_error` | No | `true` | If `true` and an error occurs, any changes to the event are reverted, and the original event is returned. If `false`, processing continues even if an error occurs. |
| `ignore_missing` | No | `false` | Whether to ignore events that lack the source field. If `false`, processing of the event fails if a field is missing. |

* Exactly one of `max_bytes` and `max_characters` must be configured; they are mutually exclusive.

diff --git a/reference/ingestion-tools/fleet/unenroll-elastic-agent.md b/reference/ingestion-tools/fleet/unenroll-elastic-agent.md new file mode 100644 index 0000000000..af23b47d2e --- /dev/null +++ b/reference/ingestion-tools/fleet/unenroll-elastic-agent.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/unenroll-elastic-agent.html
---

# Unenroll Elastic Agents [unenroll-elastic-agent]

You can unenroll {{agent}}s to invalidate the API key used to connect to {{es}}.

1. In {{fleet}}, select **Agents**.
2. To unenroll a single agent, choose **Unenroll agent** from the **Actions** menu next to the agent you want to unenroll.
3. To unenroll multiple agents, bulk select the agents and click **Unenroll agents**.

    Unable to select multiple agents? Confirm that your subscription level supports selective agent unenrollment in {{fleet}}. For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions).

Unenrolled agents will continue to run, but will not be able to send data. They will show this error instead: `invalid api key to authenticate with fleet`.

::::{tip}
If unenrollment hangs, select **Force unenroll** to invalidate all API keys related to the agent and change the status to `inactive` so that the agent no longer appears in {{fleet}}.
::::

diff --git a/reference/ingestion-tools/fleet/uninstall-elastic-agent.md b/reference/ingestion-tools/fleet/uninstall-elastic-agent.md new file mode 100644 index 0000000000..f147cb2411 --- /dev/null +++ b/reference/ingestion-tools/fleet/uninstall-elastic-agent.md

---
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/uninstall-elastic-agent.html
---

# Uninstall Elastic Agents from edge hosts [uninstall-elastic-agent]

## Uninstall on macOS, Linux, and Windows [_uninstall_on_macos_linux_and_windows]

To uninstall {{agent}}, run the `uninstall` command on the host where {{agent}} is running.
::::{important}
Be sure to run the `uninstall` command from a directory outside of where {{agent}} is installed.

For example, on a Windows system the install location is `C:\Program Files\Elastic\Agent`. Run the uninstall command from `C:\Program Files\Elastic` or `\tmp`, or even your default home directory:

```shell
C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall
```
::::

:::::::{tab-set}

::::::{tab-item} macOS
::::{tip}
You must run this command as the root user.
::::

```shell
sudo /Library/Elastic/Agent/elastic-agent uninstall
```
::::::

::::::{tab-item} Linux
::::{tip}
You must run this command as the root user.
::::

```shell
sudo /opt/Elastic/Agent/elastic-agent uninstall
```
::::::

::::::{tab-item} Windows
Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).

From the PowerShell prompt, run:

```shell
C:\"Program Files"\Elastic\Agent\elastic-agent.exe uninstall
```
::::::

:::::::

Follow the prompts to confirm that you want to uninstall {{agent}}. The command stops and uninstalls any managed programs, such as {{beats}} and {{elastic-endpoint}}, before it stops and uninstalls {{agent}}.

If you run into problems, refer to [Troubleshoot common problems](/troubleshoot/ingest/fleet/common-problems.md).

If you are using DEB or RPM, you can use the package manager to remove the installed package.

::::{note}
For hosts enrolled in the {{elastic-defend}} integration with Agent tamper protection enabled, you'll need to include the uninstall token in the command, using the `--uninstall-token` flag. Refer to the [Agent tamper protection docs](/reference/security/elastic-defend/agent-tamper-protection.md) for more information.
::::

## Remove {{agent}} files manually [_remove_agent_files_manually]

You might need to remove {{agent}} files manually if there's a failure during installation.

To remove {{agent}} manually from your system:

1. [Unenroll the agent](/reference/ingestion-tools/fleet/unenroll-elastic-agent.md) if it's managed by {{fleet}}.
2. For standalone agents, back up any configuration files you want to preserve.
3. On your host, [stop the agent](/reference/ingestion-tools/fleet/start-stop-elastic-agent.md#stop-elastic-agent-service). If any {{agent}}-related processes are still running, stop them too.

    ::::{tip}
    Search for these processes and stop them if they're still running: `filebeat`, `metricbeat`, `fleet-server`, and `elastic-endpoint`.
    ::::

4. Manually remove the {{agent}} files from your system. For example, if you're running {{agent}} on macOS, delete `/Library/Elastic/Agent/*`. Not sure where the files are installed? Refer to [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md).
5. If you've configured the {{elastic-defend}} integration, also remove the files installed for endpoint protection. The directory structure is similar to {{agent}}, for example, `/Library/Elastic/Endpoint/*`.

    ::::{note}
    When you remove the {{elastic-defend}} integration from a macOS host (10.13, 10.14, or 10.15), the Endpoint System Extension is left on disk intentionally. If you want to remove the extension, refer to the documentation for your operating system.
+ :::: diff --git a/reference/ingestion-tools/fleet/upgrade-elastic-agent.md b/reference/ingestion-tools/fleet/upgrade-elastic-agent.md new file mode 100644 index 0000000000..518a9e1d6a --- /dev/null +++ b/reference/ingestion-tools/fleet/upgrade-elastic-agent.md @@ -0,0 +1,255 @@ +--- +navigation_title: "Upgrade {{agent}}s" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/upgrade-elastic-agent.html +--- + +# Upgrade {{fleet}}-managed {{agent}}s [upgrade-elastic-agent] + + +::::{tip} +Want to upgrade a standalone agent instead? See [Upgrade standalone {{agent}}s](/reference/ingestion-tools/fleet/upgrade-standalone.md). +:::: + + +With {{fleet}} upgrade capabilities, you can view and select agents that are out of date, and trigger selected agents to download, install, and run the new version. You can trigger upgrades to happen immediately, or specify a maintenance window during which the upgrades will occur. If your {{stack}} subscription level supports it, you can schedule upgrades to occur at a specific date and time. + +In most failure cases the {{agent}} may retry an upgrade after a short wait. The wait durations between retries are: 1m, 5m, 10m, 15m, 30m, and 1h. During this time, the {{agent}} may show up as "retrying" in the {{fleet}} UI. As well, if agent upgrades have been detected to have stalled, you can restart the upgrade process for a [single agent](#restart-upgrade-single) or in bulk for [multiple agents](#restart-upgrade-multiple). + +This approach simplifies the process of keeping your agents up to date. It also saves you time because you don’t need third-party tools or processes to manage upgrades. + +By default, {{agent}}s require internet access to perform binary upgrades from {{fleet}}. However, you can host your own artifacts repository and configure {{agent}}s to download binaries from it. For more information, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md). + +::::{note} +The upgrade feature is not supported for upgrading DEB/RPM packages or Docker images. Refer to [Upgrade RPM and DEB system packages](#upgrade-system-packages) to upgrade a DEB or RPM package manually. +:::: + + +For a detailed view of the {{agent}} upgrade process and the interactions between {{fleet}}, {{agent}}, and {{es}}, refer to the [Communications amongst components](https://github.com/elastic/elastic-agent/blob/main/docs/upgrades.md) diagram in the `elastic-agent` GitHub repository. + + +## Restrictions [upgrade-agent-restrictions] + +Note the following restrictions with upgrading an {{agent}}: + +* {{agent}} cannot be upgraded to a version higher than the highest currently installed version of {{fleet-server}}. When you upgrade a set of {{agents}} that are currently at the same version, you should first upgrade any agents that are acting as {{fleet-server}} (any agents that have a {{fleet-server}} policy associated with them). +* To be upgradeable, {{agent}} must not be running inside a container. +* To be upgradeable in a Linux environment, {{agent}} must be running as a service. The Linux Tar install instructions for {{agent}} provided in {{fleet}} include the commands to run it as a service. {{agent}} RPM and DEB system packages cannot be upgraded through {{fleet}}. + +These restrictions apply whether you are upgrading {{agents}} individually or in bulk. In the event that an upgrade isn’t eligible, {{fleet}} generates a warning message when you attempt the upgrade. 
## Upgrading {{agent}} [upgrade-agent]

To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}}. You can perform the following upgrade-related actions:

| User action | Result |
| --- | --- |
| [Upgrade a single {{agent}}](#upgrade-an-agent) | Upgrade a single agent to a specific version. |
| [Do a rolling upgrade of multiple {{agent}}s](#rolling-agent-upgrade) | Do a rolling upgrade of multiple agents over a specific time period. |
| [Schedule an upgrade](#schedule-agent-upgrade) | Schedule an upgrade of one or more agents to begin at a specific time. |
| [View upgrade status](#view-upgrade-status) | View the detailed status of an agent upgrade, including upgrade metrics and agent logs. |
| [Restart an upgrade for a single agent](#restart-upgrade-single) | Restart an upgrade process that has stalled for a single agent. |
| [Restart an upgrade for multiple agents](#restart-upgrade-multiple) | Do a bulk restart of the upgrade process for a set of agents. |


## Upgrade a single {{agent}} [upgrade-an-agent]

1. On the **Agents** tab, agents that can be upgraded are identified with an **Upgrade available** indicator.

    :::{image} images/upgrade-available-indicator.png
    :alt: Indicator on the UI showing that the agent can be upgraded
    :class: screenshot
    :::

    You can also click the **Upgrade available** button to filter the list of agents to only those that can currently be upgraded.

2. From the **Actions** menu next to the agent, choose **Upgrade agent**.

    :::{image} images/upgrade-single-agent.png
    :alt: Menu for upgrading a single {agent}
    :class: screenshot
    :::

3. In the Upgrade agent window, select or specify an upgrade version and click **Upgrade agent**.

    In certain cases the latest available {{agent}} version may not be recognized by {{kib}}. For instance, this occurs when the {{kib}} version is lower than the {{agent}} version. You can specify a custom version for {{agent}} to upgrade to by entering the version into the **Upgrade version** text field.

    :::{image} images/upgrade-agent-custom.png
    :alt: Menu for upgrading a single {agent}
    :class: screenshot
    :::



## Do a rolling upgrade of multiple {{agent}}s [rolling-agent-upgrade]

You can do rolling upgrades to avoid exhausting network resources when updating a large number of {{agent}}s.

1. On the **Agents** tab, select multiple agents, and click **Actions**.
2. From the **Actions** menu, choose to upgrade the agents.
3. In the Upgrade agents window, select an upgrade version.
4. Select the amount of time available for the maintenance window. The upgrades are spread out uniformly across this maintenance window to avoid exhausting network resources.

    To force selected agents to upgrade immediately when the upgrade is triggered, select **Immediately**. Avoid using this setting for batches of more than 10 agents.

5. Upgrade the agents.


## Schedule an upgrade [schedule-agent-upgrade]

1. On the **Agents** tab, select one or more agents, and click **Actions**.
2. From the **Actions** menu, choose to schedule an upgrade.

    :::{image} images/schedule-upgrade.png
    :alt: Menu for scheduling {{agent}} upgrades
    :class: screenshot
    :::

    If the schedule option is grayed out, it may not be available at your subscription level. For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions).

3. In the Upgrade window, select an upgrade version.
4. Select a maintenance window.
    For more information, refer to [Do a rolling upgrade of multiple {{agent}}s](#rolling-agent-upgrade).

5. Set the date and time when you want the upgrade to begin.
6. Click **Schedule**.


## View upgrade status [view-upgrade-status]

On the **Agents** tab, when you trigger an upgrade, agents that are upgrading have the status `Updating` until the upgrade is complete, and then the status changes back to `Healthy`.

Agents on version 8.12 and higher that are currently upgrading additionally show a detailed upgrade status indicator.

:::{image} images/upgrade-states.png
:alt: Detailed state of an upgrading agent
:class: screenshot
:::

The following table explains the upgrade states in the order that they can occur.

| State | Description |
| --- | --- |
| Upgrade requested | {{agent}} has received the upgrade notice from {{fleet}}. |
| Upgrade scheduled | {{agent}} has received the upgrade notice from {{fleet}} and the upgrade will start at the indicated time. |
| Upgrade downloading | {{agent}} is downloading the archive containing the new version artifact. |
| Upgrade extracting | {{agent}} is extracting the new version artifact from the downloaded archive. |
| Upgrade replacing | {{agent}} is currently replacing the former, pre-upgrade agent artifact with the new one. |
| Upgrade restarting | {{agent}} has been replaced with a new version and is now restarting in order to apply the update. |
| Upgrade monitoring | The newly upgraded {{agent}} has started and is being monitored for errors. |
| Upgrade rolled back | The upgrade was unsuccessful. {{agent}} is being rolled back to the former, pre-upgrade version. |
| Upgrade failed | An error has been detected in the newly upgraded {{agent}} and the attempt to roll the upgrade back to the previous version has failed. |

Under routine circumstances, an {{agent}} upgrade happens quickly. You typically will not see the agent transition through each of the upgrade states. The detailed upgrade status can be a very useful tool, especially if you need to diagnose an agent that has become stuck, or just appears to be stuck, during the upgrade process.

Beside the upgrade status indicator, you can hover your cursor over the information icon to get more detail about the upgrade.

:::{image} images/upgrade-detailed-state01.png
:alt: Granular upgrade details shown as hover text (agent has requested an upgrade)
:class: screenshot
:::

:::{image} images/upgrade-detailed-state02.png
:alt: Granular upgrade details shown as hover text (agent is restarting to apply the update)
:class: screenshot
:::

Note that when you upgrade agents from versions below 8.12, the upgrade details are not provided.

:::{image} images/upgrade-non-detailed.png
:alt: An earlier release agent showing only the updating state without additional details
:class: screenshot
:::

When upgrading many agents, you can fine-tune the maintenance window by viewing stats and metrics about the upgrade:

1. On the **Agents** tab, click the host name to view agent details. If you don’t see the host name, try refreshing the page.
2. Click **View more agent metrics** to open the **[{{agent}}] Agent metrics** dashboard.

If an upgrade appears to have stalled, you can [restart it](#restart-upgrade-single).

If an upgrade fails, you can view the agent logs to find the reason:

1. In {{fleet}}, in the Host column, click the agent’s name.
2. Open the **Logs** tab.
3. Search for failures.
    :::{image} images/upgrade-failure.png
    :alt: Agent logs showing upgrade failure
    :class: screenshot
    :::



## Restart an upgrade for a single agent [restart-upgrade-single]

An {{agent}} upgrade process may sometimes stall. This can happen for various reasons, including, for example, network connectivity issues or a delayed shutdown.

When an {{agent}} upgrade has been detected to be stuck, a warning indicator appears on the UI. When this occurs, you can restart the upgrade from either the **Agents** tab on the main {{fleet}} page or from the details page for any individual agent.

Note that there is a required 10 minute cooldown period in between restart attempts. After launching a restart action you need to wait for the cooldown to complete before initiating another restart.

Restart from main {{fleet}} page:

1. From the **Actions** menu next to an agent that is stuck in an `Updating` state, choose **Restart upgrade**.
2. In the **Restart upgrade** window, select an upgrade version and click **Upgrade agent**.

Restart from an agent details page:

1. In {{fleet}}, in the **Host** column, click the agent’s name. On the **Agent details** tab, a warning notice appears if the agent is detected to have stalled during an upgrade.
2. Click **Restart upgrade**.
3. In the **Restart upgrade** window, select an upgrade version and click **Upgrade agent**.


## Restart an upgrade for multiple agents [restart-upgrade-multiple]

When the upgrade process for multiple agents has been detected to have stalled, you can restart the upgrade process in bulk. As with [restarting an upgrade for a single agent](#restart-upgrade-single), a 10 minute cooldown period is enforced between restarts.

1. On the **Agents** tab, select any set of the agents that are indicated to be stuck, and click **Actions**.
2. From the **Actions** menu, select **Restart upgrade agents**.
3. In the **Restart upgrade…​** window, select an upgrade version.
4. Select the amount of time available for the maintenance window. The upgrades are spread out uniformly across this maintenance window to avoid exhausting network resources.

    To force selected agents to upgrade immediately when the upgrade is triggered, select **Immediately**. Avoid using this setting for batches of more than 10 agents.

5. Restart the upgrades.


## Upgrade RPM and DEB system packages [upgrade-system-packages]

If you have installed and enrolled {{agent}} using either a DEB (for a Debian-based Linux distribution) or RPM (for a RedHat-based Linux distribution) install package, the upgrade cannot be managed by {{fleet}}. Instead, you can perform the upgrade using these steps.

For installation steps refer to [Install {{fleet}}-managed {{agent}}s](/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md).


### Upgrade a DEB {{agent}} installation: [_upgrade_a_deb_agent_installation]

1. Download the {{agent}} Debian install package for the release that you want to upgrade to:

    ```bash
    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-amd64.deb
    ```

2. Upgrade {{agent}} to the target release:

    ```bash
    sudo dpkg -i elastic-agent-9.0.0-beta1-amd64.deb
    ```

3. Confirm in {{fleet}} that the agent has been upgraded to the target version. Note that the **Upgrade agent** option in the **Actions** menu next to the agent will be disabled since {{fleet}}-managed upgrades are not supported for this package type.
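After upgrading the package (DEB or RPM), you can optionally confirm the new version locally on the host before checking {{fleet}}. This check isn’t part of the documented procedure; it assumes the `elastic-agent` binary installed by the package is available on the system `PATH`:

```shell
# Print the version of the installed elastic-agent binary
sudo elastic-agent version
```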
### Upgrade an RPM {{agent}} installation: [_upgrade_an_rpm_agent_installation]

1. Download the {{agent}} RPM install package for the release that you want to upgrade to:

    ```bash
    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-x86_64.rpm
    ```

2. Upgrade {{agent}} to the target release:

    ```bash
    sudo rpm -U elastic-agent-9.0.0-beta1-x86_64.rpm
    ```

3. Confirm in {{fleet}} that the agent has been upgraded to the target version. Note that the **Upgrade agent** option in the **Actions** menu next to the agent will be disabled since {{fleet}}-managed upgrades are not supported for this package type.

diff --git a/reference/ingestion-tools/fleet/upgrade-integration.md b/reference/ingestion-tools/fleet/upgrade-integration.md new file mode 100644 index 0000000000..1e93adbf36 --- /dev/null +++ b/reference/ingestion-tools/fleet/upgrade-integration.md @@ -0,0 +1,131 @@
---
navigation_title: "Upgrade an integration"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/upgrade-integration.html
---

# Upgrade an {{agent}} integration [upgrade-integration]


::::{important}
By default, {{kib}} requires an internet connection to download integration packages from the {{package-registry}}. Make sure the {{kib}} server can connect to `https://epr.elastic.co` on port `443`. If network restrictions prevent {{kib}} from reaching the public {{package-registry}}, you can use a proxy server or host your own {{package-registry}}. To learn more, refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md).
::::


Elastic releases {{agent}} integration updates periodically. To use new features and capabilities, upgrade the installed integration to the latest version and optionally upgrade integration policies to use the new version.

::::{tip}
In larger deployments, you should test integration upgrades on a sample {{agent}} before rolling out a larger upgrade initiative.
::::



## Upgrade an integration to the latest version [upgrade-integration-to-latest-version]

1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select the integration you want to upgrade. Notice there is a warning icon next to the version number to indicate a new version is available.
2. Click the **Settings** tab and notice the message about the new version.

    :::{image} images/upgrade-integration.png
    :alt: Settings tab under Integrations shows how to upgrade the integration
    :class: screenshot
    :::

3. Before upgrading the integration, decide whether to upgrade integration policies to the latest version, too. To use new features and capabilities, you’ll need to upgrade existing integration policies. However, the upgrade may introduce changes, such as field changes, that require you to resolve conflicts.

    * Select **Upgrade integration policies** to upgrade any eligible integration policies when the integration is upgraded.
    * To continue using the older package version, deselect **Upgrade integration policies**. You can still choose to [upgrade integration policies manually](#upgrade-integration-policies-manually) later on.

4. Click **Upgrade to latest version**.

    If you selected **Upgrade integration policies** and there are conflicts, [upgrade integration policies manually](#upgrade-integration-policies-manually) and resolve the conflicts in the policy editor.

5. After the upgrade is complete, verify that the installed version and latest version match.

::::{note}
You must upgrade standalone agents separately. If you used {{kib}} to create and download your standalone agent policy, see [Upgrade standalone agent policies after upgrading an integration](/reference/ingestion-tools/fleet/create-standalone-agent-policy.md#update-standalone-policies).
::::



## Keep integration policies up to date automatically [upgrade-integration-policies-automatically]

Some integration packages, like System, are installed by default during {{fleet}} setup. These integrations are upgraded automatically when {{fleet}} detects that a new version is available.

The following integrations are installed automatically when you select certain options in the {{fleet}} UI. All of them have an option to upgrade integration policies automatically, too:

* [Elastic Agent](integration-docs://docs/reference/elastic_agent.md) - installed automatically when the default **Collect agent logs** or **Collect agent metrics** option is enabled in an {{agent}} policy.
* [Fleet Server](integration-docs://docs/reference/fleet_server.md) - installed automatically when {{fleet-server}} is set up through the {{fleet}} UI.
* [System](integration-docs://docs/reference/system.md) - installed automatically when the default **Collect system logs and metrics** option is enabled in an {{agent}} policy.

The [Elastic Defend](integration-docs://docs/reference/endpoint.md) integration also has an option to upgrade integration policies automatically.

Note that for the following integrations, when the integration is updated automatically, the integration policy is upgraded automatically as well. This behavior cannot be disabled.

* [Elastic APM](integration-docs://docs/reference/apm.md)
* [Cloud Security Posture Management](integration-docs://docs/reference/cloud_security_posture.md#cloud_security_posture-cloud-security-posture-management-cspm)
* [Elastic Synthetics](/solutions/observability/apps/synthetic-monitoring.md)

For integrations that support the option to auto-upgrade the integration policy, when this option is selected (the default), {{fleet}} automatically upgrades your policies when a new version of the integration is available. If there are conflicts during the upgrade, your integration policies will not be upgraded, and you’ll need to [upgrade integration policies manually](#upgrade-integration-policies-manually).

To keep integration policies up to date automatically:

1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select the integration you want to configure.
2. Click **Settings** and make sure **Keep integration policies up to date automatically** is selected.

    :::{image} images/upgrade-integration-policies-automatically.png
    :alt: Settings tab under Integrations shows how to keep integration policies up to date automatically
    :class: screenshot
    :::

    If this option isn’t available on the **Settings** tab, this feature is not available for the integration you’re viewing.



## Upgrade integration policies manually [upgrade-integration-policies-manually]

If you can’t upgrade integration policies when you upgrade the integration, upgrade them manually.

1. Click the **Policies** tab and find the integration policies you want to upgrade.
+ + :::{image} images/upgrade-package-policy.png + :alt: Policies tab under Integrations shows how to upgrade the package policy + :class: screenshot + ::: + +2. Click **Upgrade** to begin the upgrade process. + + The upgrade will open in the policy editor. + + :::{image} images/upgrade-policy-editor.png + :alt: Upgrade integration example in the policy editor + :class: screenshot + ::: + +3. Make any required configuration changes and, if necessary, resolve conflicts. For more information, refer to [Resolve conflicts](#resolve-conflicts). +4. Repeat this process for each policy with an out-of-date integration. + +Too many conflicts to resolve? Refer to the [troubleshooting docs](/troubleshoot/ingest/fleet/common-problems.md#upgrading-integration-too-many-conflicts) for manual steps. + + +## Resolve conflicts [resolve-conflicts] + +When attempting to upgrade an integration policy, it’s possible that breaking changes or conflicts exist between versions of an integration. For example, if a new version of an integration has a required field and doesn’t specify a default value, {{fleet}} cannot perform the upgrade without user input. Conflicts are also likely if an experimental package greatly restructures its fields and configuration settings between releases. + +If {{fleet}} detects a conflict while automatically upgrading an integration policy, it will not attempt to upgrade it. You’ll need to: + +1. [Upgrade the integration policy manually](#upgrade-integration-policies-manually). +2. Use the policy editor to fix any conflicts or errors. + + :::{image} images/upgrade-resolve-conflicts.png + :alt: Resolve field conflicts in the policy editor + :class: screenshot + ::: + + 1. Under **Review field conflicts**, notice that you can click **previous configuration** to view the raw JSON representation of the old integration policy and compare values. This feature is useful when fields have been deprecated or removed between releases. + + :::{image} images/upgrade-view-previous-config.png + :alt: View previous configuration to resolve conflicts + :class: screenshot + ::: + + 2. In the policy editor, fix any errors and click **Upgrade integration**. diff --git a/reference/ingestion-tools/fleet/upgrade-standalone.md b/reference/ingestion-tools/fleet/upgrade-standalone.md new file mode 100644 index 0000000000..8b81270014 --- /dev/null +++ b/reference/ingestion-tools/fleet/upgrade-standalone.md @@ -0,0 +1,79 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/upgrade-standalone.html +--- + +# Upgrade standalone Elastic Agents [upgrade-standalone] + +To upgrade a standalone agent running on an edge node: + +1. Make sure the `elastic-agent` service is running. +2. From the directory where {{agent}} is installed, run the `upgrade` command to upgrade to a new version. Not sure where the agent is installed? Refer to [Installation layout](/reference/ingestion-tools/fleet/installation-layout.md). + + For example, on macOS, to upgrade the agent from version 8.8.0 to 8.8.1, you would run: + + ```shell + cd /Library/Elastic/Agent/ + sudo elastic-agent upgrade 8.8.1 + ``` + + +This command upgrades the binary. Your agent policy should continue to work, but you might need to upgrade it to use new features and capabilities. + +For more command-line options, see the help for the [`upgrade`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-upgrade-command) command. 
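For instance, a comparable sketch of the same flow on Linux — assuming the default install location `/opt/Elastic/Agent` described in the installation layout, with `8.8.1` used purely as an example target version — would be:

```shell
# Run from the Elastic Agent install directory on Linux
cd /opt/Elastic/Agent/
sudo elastic-agent upgrade 8.8.1
```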
## Upgrading standalone {{agent}} in an air-gapped environment [upgrade-standalone-air-gapped]

The basic upgrade scenario should work for most use cases. However, in an air-gapped environment {{agent}} is not able to access the {{artifact-registry}} at `artifacts.elastic.co` directly.

As an alternative, you can do one of the following:

* [Configure a proxy server](/reference/ingestion-tools/fleet/fleet-agent-proxy-support.md) for standalone {{agent}} to access the {{artifact-registry}}.
* [Host your own artifact registry](/reference/ingestion-tools/fleet/air-gapped.md#host-artifact-registry) for standalone {{agent}} to access binary downloads.

Additionally, starting from version 8.9.0, during the upgrade process {{agent}} needs to download a PGP/GPG key. Refer to [Configure {{agents}} to download a PGP/GPG key from {{fleet-server}}](/reference/ingestion-tools/fleet/air-gapped.md#air-gapped-pgp-fleet) for the steps to configure the key download location in an air-gapped environment.

Refer to [Air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md) for more details.


## Verifying {{agent}} package signatures [upgrade-standalone-verify-package]

Standalone {{agent}} verifies each package that it downloads using publicly available SHA hash and .asc PGP signature files. The SHA file is used to verify that the package has not been modified, and the .asc file is used to verify that the package was released by Elastic. For this purpose, the Elastic public GPG key is embedded in {{agent}} itself.

At times, the Elastic private GPG key may need to be rotated, either because the key has expired or because the private key has been exposed. In this case, standalone {{agent}} upgrades can fail because the embedded public key no longer works.

In the event of a private GPG key rotation, you can use the following options with the [`upgrade`](/reference/ingestion-tools/fleet/agent-command-reference.md#elastic-agent-upgrade-command) command to either skip the verification process (not recommended) or force {{agent}} to use a new public key for verification:

`--skip-verify`
:   Skip the package verification process. This option is not recommended as it is insecure.

    Example:

    ```shell
    ./elastic-agent upgrade 8.8.0 --skip-verify
    ```


`--pgp-path `
:   Use a locally stored copy of the PGP key to verify the upgrade package.

    Example:

    ```shell
    ./elastic-agent upgrade 8.8.0 --pgp-path /home/elastic-agent/GPG-KEY-elasticsearch
    ```


`--pgp-uri `
:   Use the specified online PGP key to verify the upgrade package.

    Example:

    ```shell
    ./elastic-agent upgrade 8.7.0-SNAPSHOT --pgp-uri "https://artifacts.elastic.co/GPG-KEY-elasticsearch"
    ```


Under the basic upgrade scenario, standalone {{agent}} automatically fetches the most current public key. However, in an air-gapped environment, or if the {{artifact-registry}} is otherwise inaccessible, you can use these options instead.

diff --git a/reference/ingestion-tools/fleet/urldecode-processor.md b/reference/ingestion-tools/fleet/urldecode-processor.md new file mode 100644 index 0000000000..0cba6dbdd3 --- /dev/null +++ b/reference/ingestion-tools/fleet/urldecode-processor.md @@ -0,0 +1,41 @@
---
navigation_title: "urldecode"
mapped_pages:
  - https://www.elastic.co/guide/en/fleet/current/urldecode-processor.html
---

# URL Decode [urldecode-processor]


The `urldecode` processor specifies a list of fields to decode from URL-encoded format.
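As a quick illustration of the decoding itself (the values below are hypothetical, chosen only to show the transformation):

```text
field1: "https%3A%2F%2Fwww.elastic.co%2Fguide"   # URL-encoded source value
field2: "https://www.elastic.co/guide"           # decoded result written by the processor
```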
## Example [_example_37]

In this example, `field1` is decoded into `field2`.

```yaml
  - urldecode:
      fields:
        - from: "field1"
          to: "field2"
      ignore_missing: false
      fail_on_error: true
```


## Configuration settings [_configuration_settings_43]

::::{note}
{{agent}} processors execute *before* ingest pipelines, which means that your processor configurations cannot refer to fields that are created by ingest pipelines or {{ls}}. For more limitations, refer to [What are some limitations of using processors?](/reference/ingestion-tools/fleet/agent-processors.md#limitations)
::::


| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `fields` | Yes | | Contains:

* `from: "source-field"`, where `from` is the source field name
* `to: "target-field"`, where `to` is the target field name (defaults to the `from` value)
| +| `ignore_missing` | No | `false` | Whether to ignore missing keys. If `true`, no error is logged if a key that should be URL-decoded is missing. | +| `fail_on_error` | No | `true` | Whether to fail if an error occurs. If `true` and an error occurs, the URL-decoding of fields is stopped, and the original event is returned. If `false`, decoding continues even if an error occurs during decoding. | + +See [Conditions](/reference/ingestion-tools/fleet/dynamic-input-configuration.md#conditions) for a list of supported conditions. + diff --git a/reference/ingestion-tools/fleet/view-integration-assets.md b/reference/ingestion-tools/fleet/view-integration-assets.md new file mode 100644 index 0000000000..a0ef9315ba --- /dev/null +++ b/reference/ingestion-tools/fleet/view-integration-assets.md @@ -0,0 +1,17 @@ +--- +navigation_title: "View integration assets" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/view-integration-assets.html +--- + +# View {{agent}} integration assets [view-integration-assets] + + +{{agent}} integrations come with a number of assets, such as dashboards, saved searches, and visualizations for analyzing data. + +To view integration assets: + +1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select the integration you want to view. +2. Click the **Assets** tab to see a list of assets. If this tab does not exist, the assets are not installed. + +If {{agent}} is already streaming data to {{es}}, you can follow the links to view the assets in {{kib}}. diff --git a/reference/ingestion-tools/fleet/view-integration-policies.md b/reference/ingestion-tools/fleet/view-integration-policies.md new file mode 100644 index 0000000000..ec44139153 --- /dev/null +++ b/reference/ingestion-tools/fleet/view-integration-policies.md @@ -0,0 +1,17 @@ +--- +navigation_title: "View integration policies" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/view-integration-policies.html +--- + +# View {{agent}} integration policies [view-integration-policies] + + +An integration policy is created when you add an [integration](integration-docs://docs/reference/index.md) to an {{agent}} policy. + +To view details about all the integration policies for a specific integration: + +1. In {{kib}}, go to the **Integrations** page and open the **Installed integrations** tab. Search for and select the integration you want to view. You can select a category to narrow your search. +2. Click the **Policies** tab to see the list of integration policies. + +Open the **Actions** menu to see other actions you can perform from this view. diff --git a/reference/ingestion-tools/observability/apm-settings.md b/reference/ingestion-tools/observability/apm-settings.md new file mode 100644 index 0000000000..ff5c417bfa --- /dev/null +++ b/reference/ingestion-tools/observability/apm-settings.md @@ -0,0 +1,28 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/observability/current/apm-configuring-howto-apm-server.html +--- + +# APM settings [apm-configuring-howto-apm-server] + +How you configure the APM Server depends on your deployment method. + +* **APM Server binary** users need to edit the `apm-server.yml` configuration file. The location of the file varies by platform. To locate the file, see [Installation layout](/solutions/observability/apps/installation-layout.md). +* **Fleet-managed** users configure the APM Server directly in {{kib}}. Each configuration page describes the specific location. 
* **Elastic Cloud** users should see [Add APM user settings](/reference/ingestion-tools/cloud/apm-settings.md) for information on how to configure Elastic APM.

The following topics describe how to configure APM Server:

* [General configuration options](/solutions/observability/apps/general-configuration-options.md)
* [Anonymous authentication](/solutions/observability/apps/configure-anonymous-authentication.md)
* [APM agent authorization](/solutions/observability/apps/apm-agent-authorization.md)
* [APM agent central configuration](/solutions/observability/apps/configure-apm-agent-central-configuration.md)
* [Instrumentation](/solutions/observability/apps/configure-apm-instrumentation.md)
* [{{kib}} endpoint](/solutions/observability/apps/configure-kibana-endpoint.md)
* [Logging](/solutions/observability/apps/configure-logging.md)
* [Output](/solutions/observability/apps/configure-output.md)
* [Project paths](/solutions/observability/apps/configure-project-paths.md)
* [Real User Monitoring (RUM)](/solutions/observability/apps/configure-real-user-monitoring-rum.md)
* [SSL/TLS settings](/solutions/observability/apps/ssltls-settings.md)
* [Tail-based sampling](/solutions/observability/apps/tail-based-sampling.md)
* [Use environment variables in the configuration](/solutions/observability/apps/use-environment-variables-in-configuration.md)

diff --git a/reference/ingestion-tools/observability/apm.md b/reference/ingestion-tools/observability/apm.md new file mode 100644 index 0000000000..8353c62907 --- /dev/null +++ b/reference/ingestion-tools/observability/apm.md @@ -0,0 +1,37 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/observability/current/apm.html
---

# APM [apm]

Elastic APM is an application performance monitoring system built on the {{stack}}. It allows you to monitor software services and applications in real time by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. This makes it easy to pinpoint and fix performance problems quickly.

:::{image} ../../../images/observability-apm-app-landing.png
:alt: Applications UI in {kib}
:class: screenshot
:::

Elastic APM also automatically collects unhandled errors and exceptions. Errors are grouped based primarily on the stack trace, so you can identify new errors as they appear and keep an eye on how many times specific errors happen.

Metrics are another vital source of information when debugging production systems. Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics, like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent.


## Give Elastic APM a try [_give_elastic_apm_a_try]

Use [Get started with application traces and APM](/solutions/observability/apps/fleet-managed-apm-server.md) to quickly spin up an APM deployment. Want to host everything yourself instead? See [Get started](/solutions/observability/apps/get-started-with-apm.md).
+ + + + + + + + + + + + + + + diff --git a/reference/observability/elastic-entity-model.md b/reference/observability/elastic-entity-model.md new file mode 100644 index 0000000000..4e8f6b90e6 --- /dev/null +++ b/reference/observability/elastic-entity-model.md @@ -0,0 +1,62 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/observability/current/elastic-entity-model.html +--- + +# Elastic Entity Model [elastic-entity-model] + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +The Elastic Entity Model consists of: + +* a data model and related entity indices +* an Entity Discovery Framework, which consists of [transforms](/explore-analyze/transforms.md) and [Ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md) that read from signal indices and write data to entity indices +* a set of management APIs that empower entity-centric Elastic solution features and workflows + +In the context of Elastic Observability, an *entity* is an object of interest that can be associated with produced telemetry and identified as unique. Note that this definition is intentionally closely aligned to the work of the [OpenTelemetry Entities SIG](https://github.com/open-telemetry/oteps/blob/main/text/entities/0256-entities-data-model.md#data-model). Examples of entities include (but are not limited to) services, hosts, and containers. + +The concept of an entity is important as a means to unify observability signals based on the underlying entity that the signals describe. + +::::{note} +* The Elastic Entity Model currently supports the [new Inventory experience](/solutions/observability/apps/inventory.md) limited to service, host, and container entities. +* During Technical Preview, Entity Discovery Framework components are not enabled by default. + +:::: + + + +## Enable the Elastic Entity Model [_enable_the_elastic_entity_model] + +You can enable the Elastic Entity Model from the new [Inventory](/solutions/observability/apps/inventory.md). If already enabled, you will not be prompted to enable the Elastic Entity Model. + +The following {{es}} privileges are required: + +| | | +| --- | --- | +| **Index privileges** | names: [`.entities*`], privileges: [`create_index`, `index`, `create_doc`, `auto_configure`, `read`]
names: [`logs-*`, `filebeat*`, `metrics-*`, `metricbeat*`, `traces-*`, `.entities*`], privileges: [`read`, `view_index_metadata`] |
| **Cluster privileges** | `manage_transform`, `manage_ingest_pipelines`, `manage_index_templates` |
| **Application privileges** | application: `kibana-.kibana`, privileges: [`saved_object:entity-definition/*`, `saved_object:entity-discovery-api-key/*`], resources: [*] |

For more information, refer to [Security privileges](elasticsearch://docs/reference/elasticsearch/security-privileges.md) in the {{es}} documentation.


## Disable the Elastic Entity Model [_disable_the_elastic_entity_model]

From the Dev Console, run the command: `DELETE kbn:/internal/entities/managed/enablement`

The following {{es}} privileges are required to delete {{es}} resources:

| | |
| --- | --- |
| **Index privileges** | names: [`.entities*`], privileges: [`delete_index`] |
| **Cluster privileges** | `manage_transform`, `manage_ingest_pipelines`, `manage_index_templates` |
| **Application privileges** | application: `kibana-.kibana`, privileges: [`saved_object:entity-definition/delete`, `saved_object:entity-discovery-api-key/delete`], resources: [*] |


## Limitations [elastic-entity-model-limitations]

* [Cross-cluster search (CCS)](/solutions/search/cross-cluster-search.md) is not supported. EEM cannot leverage data stored on a remote cluster.
* Services are only detected from documents where `service.name` is detected in index patterns that match either `logs-*` or `apm-*`.

diff --git a/reference/observability/fields-and-object-schemas.md b/reference/observability/fields-and-object-schemas.md new file mode 100644 index 0000000000..cb14ddbbb9 --- /dev/null +++ b/reference/observability/fields-and-object-schemas.md @@ -0,0 +1,20 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/observability/current/fields-reference.html
---

# Fields and object schemas [fields-reference]

This section lists the Elastic Common Schema (ECS) fields that the Logs and Infrastructure apps use to display data.

ECS is an open source specification that defines a standard set of fields to use when storing event data in {{es}}, such as logs and metrics.

Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](ecs://docs/reference/ecs-converting.md). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool.

This reference covers:

* [Logs Explorer fields](/reference/observability/fields-and-object-schemas/logs-app-fields.md)
* [{{infrastructure-app}} fields](/reference/observability/fields-and-object-schemas/metrics-app-fields.md)



diff --git a/reference/observability/fields-and-object-schemas/logs-app-fields.md b/reference/observability/fields-and-object-schemas/logs-app-fields.md new file mode 100644 index 0000000000..cb0d361bfd --- /dev/null +++ b/reference/observability/fields-and-object-schemas/logs-app-fields.md @@ -0,0 +1,127 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/observability/current/logs-app-fields.html
---

# Logs Explorer fields [logs-app-fields]

This section lists the required fields the **Logs Explorer** uses to display data.
Please note that some of the fields listed are not [ECS fields](ecs://docs/reference/index.md#_what_is_ecs). + +`@timestamp` +: Date/time when the event originated. + + This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. + + type: date + + required: True + + ECS field: True + + example: `May 27, 2020 @ 15:22:27.982` + + +`_doc` +: This field is used to break ties between two entries with the same timestamp. + + required: True + + ECS field: False + + +`container.id` +: Unique container id. + + type: keyword + + required: True + + ECS field: True + + example: `data` + + +`event.dataset` +: Name of the dataset. + + If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. + + It’s recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. + + type: keyword + + required: True, if you want to use the {{ml-features}}. + + ECS field: True + + example: `apache.access` + + +`host.hostname` +: Name of the host. + + It normally contains what the `hostname` command returns on the host machine. + + type: keyword + + required: True, if you want to enable and use the **View in Context** feature. + + ECS field: True + + example: `Elastic.local` + + +`host.name` +: Name of the host. + + It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. + + type: keyword + + required: True + + ECS field: True + + example: `MacBook-Elastic.local` + + +`kubernetes.pod.uid` +: Kubernetes Pod UID. + + type: keyword + + required: True + + ECS field: False + + example: `8454328b-673d-11ea-7d80-21010a840123` + + +`log.file.path` +: Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. + + If the event wasn’t read from a log file, do not populate this field. + + type: keyword + + required: True, if you want to use the **View in Context** feature. + + ECS field: True + + example: `/var/log/demo.log` + + +`message` +: For log events the message field contains the log message, optimized for viewing in a log viewer. + + For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. + + If multiple messages exist, they can be combined into one message. + + type: text + + required: True + + ECS field: True + + example: `Hello World` diff --git a/reference/observability/fields-and-object-schemas/metrics-app-fields.md b/reference/observability/fields-and-object-schemas/metrics-app-fields.md new file mode 100644 index 0000000000..c04a3a2ed8 --- /dev/null +++ b/reference/observability/fields-and-object-schemas/metrics-app-fields.md @@ -0,0 +1,391 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/observability/current/metrics-app-fields.html +--- + +# Infrastructure app fields [metrics-app-fields] + +This section lists the required fields the {{infrastructure-app}} uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://docs/reference/index.md#_what_is_ecs). 
+ + +## Additional field details [_additional_field_details] + +The `event.dataset` field is required to display data properly in some views. This field is a combination of `metricset.module`, which is the {{metricbeat}} module name, and `metricset.name`, which is the metricset name. + +To determine each metric’s optimal time interval, all charts use `metricset.period`. If `metricset.period` is not available, then it falls back to 1 minute intervals. + + +## Base fields [base-fields] + +The `base` field set contains all fields which are on the top level. These fields are common across all types of events. + +`@timestamp` +: Date/time when the event originated. + + This is the date/time extracted from the event, typically representing when the source generated the event. If the event source has no original timestamp, this value is typically populated by the first time the pipeline received the event. Required field for all events. + + type: date + + required: True + + ECS field: True + + example: `May 27, 2020 @ 15:22:27.982` + + +`message` +: For log events the message field contains the log message, optimized for viewing in a log viewer. + + For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. + + If multiple messages exist, they can be combined into one message. + + type: text + + required: True + + ECS field: True + + example: `Hello World` + + + +## Hosts fields [host-fields] + +These fields must be mapped to display host data in the {{infrastructure-app}}. + +`host.name` +: Name of the host. + + It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. + + type: keyword + + required: True + + ECS field: True + + example: `MacBook-Elastic.local` + + +`host.ip` +: IP of the host that records the event. + + type: `ip` + + required: True + + ECS field: True + + + +## Docker container fields [docker-fields] + +These fields must be mapped to display Docker container data in the {{infrastructure-app}}. + +`container.id` +: Unique container id. + + type: keyword + + required: True + + ECS field: True + + example: `data` + + +`container.name` +: Container name. + + type: keyword + + required: True + + ECS field: True + + +`container.ip_address` +: IP of the container. + + type: `ip` + + required: True + + ECS field: False + + + +## Kubernetes pod fields [kubernetes-fields] + +These fields must be mapped to display Kubernetes pod data in the {{infrastructure-app}}. + +`kubernetes.pod.uid` +: Kubernetes Pod UID. + + type: keyword + + required: True + + ECS field: False + + example: `8454328b-673d-11ea-7d80-21010a840123` + + +`kubernetes.pod.name` +: Kubernetes pod name. + + type: keyword + + required: True + + ECS field: False + + example: `nginx-demo` + + +`kubernetes.pod.ip` +: IP of the Kubernetes pod. + + type: keyword + + required: True + + ECS field: False + + + +## AWS EC2 instance fields [aws-ec2-fields] + +These fields must be mapped to display EC2 instance data in the {{infrastructure-app}}. + +`cloud.instance.id` +: Instance ID of the host machine. + + type: keyword + + required: True + + ECS field: True + + example: `i-1234567890abcdef0` + + +`cloud.instance.name` +: Instance name of the host machine. + + type: keyword + + required: True + + ECS field: True + + +`aws.ec2.instance.public.ip` +: Instance public IP of the host machine. 
+ + type: keyword + + required: True + + ECS field: False + + + +## AWS S3 bucket fields [aws-s3-fields] + +These fields must be mapped to display S3 bucket data in the {{infrastructure-app}}. + +`aws.s3.bucket.name` +: The name or ID of the AWS S3 bucket. + + type: keyword + + required: True + + ECS field: False + + + +## AWS SQS queue fields [aws-sqs-fields] + +These fields must be mapped to display SQS queue data in the {{infrastructure-app}}. + +`aws.sqs.queue.name` +: The name or ID of the AWS SQS queue. + + type: keyword + + required: True + + ECS field: False + + + +## AWS RDS database fields [aws-rds-fields] + +These fields must be mapped to display RDS database data in the {{infrastructure-app}}. + +`aws.rds.db_instance.arn` +: Amazon Resource Name (ARN) for each RDS. + + type: keyword + + required: True + + ECS field: False + + +`aws.rds.db_instance.identifier` +: Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance. + + type: keyword + + required: True + + ECS field: False + + + +## Additional grouping fields [group-inventory-fields] + +Depending on which entity you select in the **Infrastructure inventory** view, these additional fields can be mapped to group entities by. + +`cloud.availability_zone` +: Availability zone in which this host is running. + + type: keyword + + required: True + + ECS field: True + + example: `us-east-1c` + + +`cloud.machine.type` +: Machine type of the host machine. + + type: keyword + + required: True + + ECS field: True + + example: `t2.medium` + + +`cloud.region` +: Region in which this host is running. + + type: keyword + + required: True + + ECS field: True + + example: `us-east-1` + + +`cloud.instance.id` +: Instance ID of the host machine. + + type: keyword + + required: True + + ECS field: True + + example: `i-1234567890abcdef0` + + +`cloud.provider` +: Name of the cloud provider. Example values are `aws`, `azure`, `gcp`, or `digitalocean`. + + type: keyword + + required: True + + ECS field: True + + example: `aws` + + +`cloud.instance.name` +: Instance name of the host machine. + + type: keyword + + required: True + + ECS field: True + + +`cloud.project.id` +: Name of the project in Google Cloud. + + type: keyword + + required: True + + ECS field: False + + +`service.type` +: The type of the service data is collected from. + + The type can be used to group and correlate logs and metrics from one service type. + + Example: If metrics are collected from {{es}}, service.type would be `elasticsearch`. + + type: keyword + + required: True + + ECS field: False + + example: `elasticsearch` + + +`host.hostname` +: Name of the host. + + It normally contains what the `hostname` command returns on the host machine. + + type: keyword + + required: True, if you want to use the {{ml-features}}. + + ECS field: True + + example: `Elastic.local` + + +`host.os.name` +: Operating system name, without the version. + + Multi-fields: + + * os.name.text (type: text) + + type: keyword + + required: True + + ECS field: True + + example: `Mac OS X` + + +`host.os.kernel` +: Operating system kernel version as a raw string. 
+ + type: keyword + + required: True + + ECS field: True + + example: `4.4.0-112-generic` + + diff --git a/reference/observability/index.md b/reference/observability/index.md new file mode 100644 index 0000000000..abb5329c4a --- /dev/null +++ b/reference/observability/index.md @@ -0,0 +1,20 @@ +--- +applies_to: + stack: all + serverless: all +--- +# Observability + +% TO-DO: Add links to "What is Elastic Observability?"% + +This section contains reference information for Elastic Observability features, including: + +* Fields reference + * Logs Explorer fields + * Infrastructure app fields +* Elastic Entity Model + +You can use these APIs to interface with Elastic Observability features: + +* [Observability Intake Serverless APIs](https://www.elastic.co/docs/api/doc/observability-serverless) +* [Service level objectives](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-slo) diff --git a/reference/observability/serverless/infrastructure-app-fields.md b/reference/observability/serverless/infrastructure-app-fields.md new file mode 100644 index 0000000000..3156a8de31 --- /dev/null +++ b/reference/observability/serverless/infrastructure-app-fields.md @@ -0,0 +1,115 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/serverless/current/observability-infrastructure-monitoring-required-fields.html +--- + +# Infrastructure app fields [observability-infrastructure-monitoring-required-fields] + +This section lists the fields the Infrastructure UI uses to display data. Please note that some of the fields listed here are not [ECS fields](ecs://docs/reference/index.md#_what_is_ecs). + + +## Additional field details [observability-infrastructure-monitoring-required-fields-additional-field-details] + +The `event.dataset` field is required to display data properly in some views. This field is a combination of `metricset.module`, which is the {{metricbeat}} module name, and `metricset.name`, which is the metricset name. + +To determine each metric’s optimal time interval, all charts use `metricset.period`. If `metricset.period` is not available, then it falls back to 1 minute intervals. + + +## Base fields [base-fields] + +The `base` field set contains all fields which are on the top level. These fields are common across all types of events. + +| Field | Description | Type | +| --- | --- | --- | +| `@timestamp` | Date/time when the event originated.

This is the date/time extracted from the event, typically representing when the source generated the event. If the event source has no original timestamp, this value is typically populated by the first time the pipeline received the event. Required field for all events.

Example: `May 27, 2020 @ 15:22:27.982`
| date | +| `message` | For log events the message field contains the log message, optimized for viewing in a log viewer.

For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event.

If multiple messages exist, they can be combined into one message.

Example: `Hello World`
| text | + + +## Hosts fields [host-fields] + +These fields must be mapped to display host data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `host.name` | Name of the host.

It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use.

Example: `MacBook-Elastic.local`
| keyword | +| `host.ip` | IP of the host that records the event. | ip | + + +## Docker container fields [docker-fields] + +These fields must be mapped to display Docker container data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `container.id` | Unique container id.

Example: `data`
| keyword | +| `container.name` | Container name. | keyword | +| `container.ip_address` | IP of the container.

*Not an ECS field*
| ip | + + +## Kubernetes pod fields [kubernetes-fields] + +These fields must be mapped to display Kubernetes pod data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `kubernetes.pod.uid` | Kubernetes Pod UID.

Example: `8454328b-673d-11ea-7d80-21010a840123`

*Not an ECS field*
| keyword | +| `kubernetes.pod.name` | Kubernetes pod name.

Example: `nginx-demo`

*Not an ECS field*
| keyword | +| `kubernetes.pod.ip` | IP of the Kubernetes pod.

*Not an ECS field*
| keyword | + + +## AWS EC2 instance fields [aws-ec2-fields] + +These fields must be mapped to display EC2 instance data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `cloud.instance.id` | Instance ID of the host machine.

Example: `i-1234567890abcdef0`
| keyword | +| `cloud.instance.name` | Instance name of the host machine. | keyword | +| `aws.ec2.instance.public.ip` | Instance public IP of the host machine.

*Not an ECS field*
| keyword | + + +## AWS S3 bucket fields [aws-s3-fields] + +These fields must be mapped to display S3 bucket data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `aws.s3.bucket.name` | The name or ID of the AWS S3 bucket.

*Not an ECS field*
| keyword | + + +## AWS SQS queue fields [aws-sqs-fields] + +These fields must be mapped to display SQS queue data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `aws.sqs.queue.name` | The name or ID of the AWS SQS queue.

*Not an ECS field*
| keyword | + + +## AWS RDS database fields [aws-rds-fields] + +These fields must be mapped to display RDS database data in the {{infrastructure-app}}. + +| Field | Description | Type | +| --- | --- | --- | +| `aws.rds.db_instance.arn` | Amazon Resource Name (ARN) for each RDS.

*Not an ECS field*
| keyword | +| `aws.rds.db_instance.identifier` | Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.

*Not an ECS field*
| keyword | + + +## Additional grouping fields [group-inventory-fields] + +Depending on which entity you select in the **Infrastructure inventory** view, these additional fields can be mapped to group entities by. + +| Field | Description | Type | +| --- | --- | --- | +| `cloud.availability_zone` | Availability zone in which this host is running.

Example: `us-east-1c`
| keyword | +| `cloud.machine.type` | Machine type of the host machine.

Example: `t2.medium`
| keyword | +| `cloud.region` | Region in which this host is running.

Example: `us-east-1`
| keyword | +| `cloud.instance.id` | Instance ID of the host machine.

Example: `i-1234567890abcdef0`
| keyword | +| `cloud.provider` | Name of the cloud provider. Example values are `aws`, `azure`, `gcp`, or `digitalocean`.

Example: `aws`
| keyword | +| `cloud.instance.name` | Instance name of the host machine. | keyword | +| `cloud.project.id` | Name of the project in Google Cloud.

*Not an ECS field*
| keyword | +| `service.type` | The type of service data is collected from.

The type can be used to group and correlate logs and metrics from one service type.

For example, the service type for metrics collected from {{es}} is `elasticsearch`.

Example: `elasticsearch`

*Not an ECS field*
| keyword | +| `host.hostname` | Name of the host. This field is required if you want to use {{ml-features}}

It normally contains what the `hostname` command returns on the host machine.

Example: `Elastic.local`
| keyword | +| `host.os.name` | Operating system name, without the version.

Multi-fields:

os.name.text (type: text)

Example: `Mac OS X`
| keyword | +| `host.os.kernel` | Operating system kernel version as a raw string.

Example: `4.4.0-112-generic`
| keyword | diff --git a/reference/overview/index.md b/reference/overview/index.md new file mode 100644 index 0000000000..ea561e4127 --- /dev/null +++ b/reference/overview/index.md @@ -0,0 +1,17 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/api-reference.html +--- + +# Reference [api-reference] + +Explore the reference documentation for Elastic APIs. + +| | | +| --- | --- | +| {{es}} | * [{{es}}](elasticsearch://docs/reference/elasticsearch/rest-apis/index.md)
* [{{es}} Serverless](https://www.elastic.co/docs/api/doc/elasticsearch-serverless)
| +| {{kib}} | * [{{kib}}](https://www.elastic.co/docs/api/doc/kibana)
* [{{kib}} Serverless](https://www.elastic.co/docs/api/doc/serverless)
* [{{fleet}}](/reference/ingestion-tools/fleet/fleet-api-docs.md)
* [{{observability}} Serverless SLOs](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-slo)
* [{{elastic-sec}}](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-ai-assistant-api)
* [{{elastic-sec}} Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-security-ai-assistant-api)
| +| {{ls}} | * [Monitoring {{ls}}](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html)
| +| APM | * [APM](/solutions/observability/apps/apm-server-api.md)
* [APM Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-apm-agent-configuration)
* [Observability intake Serverless](https://www.elastic.co/docs/api/doc/observability-serverless)
| +| {{ecloud}} | * [{{ess}}](https://www.elastic.co/docs/api/doc/cloud)
* [{{ecloud}} Serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless)
* [{{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise)
* [{{eck}}](cloud-on-k8s://docs/reference/k8s-api-reference.md)
| + diff --git a/reference/search/search.md b/reference/search/search.md new file mode 100644 index 0000000000..98cbdea4d3 --- /dev/null +++ b/reference/search/search.md @@ -0,0 +1,20 @@ +--- +navigation_title: "Search" +--- +# Search reference + +% Derived from https://www.elastic.co/platform +% Build powerful AI search experiences with the best vector database and platform for RAG. + +This section contains reference information for Elastic Search features, in particular the [Search UI](search-ui://docs/index.md). + +You can also use [Elasticsearch](https://www.elastic.co/docs/api/doc/elasticsearch) or [Elasticsearch Serverless](https://www.elastic.co/docs/api/doc/elasticsearch-serverless) APIs to interface with search features. +For example: + +* [Inference APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference): Manage inference models and perform inference. +* [Query rule APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-query_rules): Manage query rulesets. +* [Search APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-search): Search and aggregate data stored in Elasticsearch indices and data streams. +* [Search application APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-search_application): Manage tasks and resources related to search applications. + +% Link to connectors reference information in ingest section. +% TO-DO: Link to the search overview in the solutions section \ No newline at end of file diff --git a/reference/security/elastic-defend/agent-tamper-protection.md b/reference/security/elastic-defend/agent-tamper-protection.md new file mode 100644 index 0000000000..3d669607df --- /dev/null +++ b/reference/security/elastic-defend/agent-tamper-protection.md @@ -0,0 +1,60 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/agent-tamper-protection.html +--- + +# Prevent Elastic Agent uninstallation [agent-tamper-protection] + +For hosts enrolled in {{elastic-defend}}, you can prevent unauthorized attempts to uninstall {{agent}} and {{elastic-endpoint}} by enabling **Agent tamper protection** on the Agent policy. This offers an additional layer of security by preventing users from bypassing or disabling {{elastic-defend}}'s endpoint protections. + +When enabled, {{agent}} and {{elastic-endpoint}} can only be uninstalled on the host by including an uninstall token in the uninstall CLI command. One unique uninstall token is generated per Agent policy, and you can retrieve uninstall tokens in an Agent policy’s settings or in the {{fleet}} UI. + +::::{admonition} Requirements +* Agent tamper protection requires a [Platinum or higher subscription](https://www.elastic.co/pricing). +* Hosts must be enrolled in the {{elastic-defend}} integration. +* {{agent}}s must be version 8.11.0 or later. +* This feature is supported for all operating systems. + +:::: + + +:::{image} ../../../images/security-agent-tamper-protection.png +:alt: Agent tamper protection setting highlighted on Agent policy settings page +:class: screenshot +::: + + +## Enable Agent tamper protection [enable-agent-tamper-protection] + +You can enable Agent tamper protection by configuring the {{agent}} policy. + +1. Find **{{fleet}}** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search). +2. Select **Agent policies**, then select the Agent policy you want to configure. +3. Select the **Settings** tab on the policy details page. +4. 
In the **Agent tamper protection** section, turn on the **Prevent agent tampering** setting. + + This makes the **Get uninstall command** link available, which you can follow to get the uninstall token and CLI command if you need to [uninstall an Agent](/reference/security/elastic-defend/uninstall-agent.md) on this policy. + + ::::{tip} + You can also access an Agent policy’s uninstall tokens on the **Uninstall tokens** tab on the **{{fleet}}** page. Refer to [Access uninstall tokens](#fleet-uninstall-tokens) for more information. + :::: + +5. Select **Save changes**. + + +## Access uninstall tokens [fleet-uninstall-tokens] + +If you need the uninstall token to remove {{agent}} from an endpoint, you can find it in several ways: + +* **On the Agent policy** — Go to the Agent policy’s **Settings** tab, then click the **Get uninstall command** link. The **Uninstall agent** flyout opens, containing the full uninstall command with the token. +* **On the {{fleet}} page** — Select **Uninstall tokens** for a list of the uninstall tokens generated for your Agent policies. You can: + + * Click the **Show token** icon in the **Token** column to reveal a specific token. + * Click the **View uninstall command** icon in the **Actions** column to open the **Uninstall agent** flyout, containing the full uninstall command with the token. + + +::::{tip} +If you have many tamper-protected {{agent}} policies, you may want to [Provide multiple uninstall tokens](/reference/security/elastic-defend/uninstall-agent.md#multiple-uninstall-tokens) in a single command. +:::: + + diff --git a/reference/security/elastic-defend/artifact-control.md b/reference/security/elastic-defend/artifact-control.md new file mode 100644 index 0000000000..e1ea775116 --- /dev/null +++ b/reference/security/elastic-defend/artifact-control.md @@ -0,0 +1,26 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/artifact-control.html +--- + +# Configure updates for protection artifacts [artifact-control] + +On the **Protection updates** tab of the {{elastic-defend}} integration policy, you can configure how {{elastic-defend}} receives updates from Elastic with the latest threat detections, global exceptions, malware models, rule packages, and other protection artifacts. By default, these artifacts are automatically updated regularly, ensuring your environment is up to date with the latest protections. + +You can disable automatic updates and freeze your protection artifacts to a specific date, allowing you to control when to receive and install the updates. For example, you might want to temporarily disable updates to ensure resource availability during a high-volume period, test updates in a controlled staging environment before rolling out to production, or roll back to a previous version of protections. + +Protection artifacts will expire after 18 months, and you’ll no longer be able to select them as a deployed version. If you’re already using a specific version when it expires, you’ll keep using it until you either select a later non-expired version or re-enable automatic updates. + +::::{warning} +It is strongly advised to keep automatic updates enabled to ensure the highest level of security for your environment. Proceed with caution if you decide to disable automatic updates. +:::: + + +To configure the protection artifacts version deployed in your environment: + +1. Find **Policies** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search). +2. 
Select an {{elastic-defend}} integration policy, then select the **Protection updates** tab.
+3. Turn off the **Enable automatic updates** toggle.
+4. Use the **Version to deploy** date picker to select the date of the protection artifacts you want to use in your environment.
+5. (Optional) Enter a **Note** to explain the reason for selecting a particular version of protection artifacts.
+6. Select **Save**.
diff --git a/reference/security/elastic-defend/configure-endpoint-integration-policy.md b/reference/security/elastic-defend/configure-endpoint-integration-policy.md
new file mode 100644
index 0000000000..2abfab8376
--- /dev/null
+++ b/reference/security/elastic-defend/configure-endpoint-integration-policy.md
@@ -0,0 +1,244 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html
+---
+
+# Configure an integration policy for Elastic Defend [configure-endpoint-integration-policy]
+
+After the {{agent}} is installed with the {{elastic-defend}} integration, several protection features — including prevention of malware, ransomware, memory threats, and malicious behavior — are automatically enabled on protected hosts (some features require a Platinum or Enterprise license). If needed, you can update the integration policy to configure protection settings, event collection, antivirus settings, trusted applications, event filters, host isolation exceptions, and blocked applications to meet your organization’s security needs.
+
+You can also create multiple {{elastic-defend}} integration policies to maintain unique configuration profiles. To create an additional {{elastic-defend}} integration policy, find **Integrations** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), then follow the steps for [adding the {{elastic-defend}} integration](/reference/security/elastic-defend/install-endpoint.md#add-security-integration).
+
+::::{admonition} Requirements
+You must have the **{{elastic-defend}} Policy Management: All** [privilege](/reference/security/elastic-defend/endpoint-management-req.md) to configure an integration policy.
+
+::::
+
+
+::::{tip}
+In addition to configuring an {{elastic-defend}} policy through the {{elastic-sec}} UI, you can create and customize an {{elastic-defend}} policy [through the API](/reference/security/elastic-defend/create-defend-policy-api.md).
+::::
+
+
+To configure an integration policy:
+
+1. Find **Policies** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search).
+2. Select the integration policy you want to configure. The integration policy configuration page appears.
+3. On the **Policy settings** tab, review and configure the following settings as appropriate:
+
+    * [Malware protection](#malware-protection)
+    * [Ransomware protection](#ransomware-protection)
+    * [Memory threat protection](#memory-protection)
+    * [Malicious behavior protection](#behavior-protection)
+    * [Attack surface reduction](#attack-surface-reduction)
+    * [Event collection](#event-collection)
+    * [Register {{elastic-sec}} as antivirus (optional)](#register-as-antivirus)
+    * [Advanced policy settings (optional)](#adv-policy-settings)
+    * [Save the general policy settings](#save-policy)
+
+4. Click the **Trusted applications**, **Event filters**, **Host isolation exceptions**, and **Blocklist** tabs to review the endpoint policy artifacts assigned to this integration policy (for more information, refer to [*Trusted applications*](/solutions/security/manage-elastic-defend/trusted-applications.md), [*Event filters*](/solutions/security/manage-elastic-defend/event-filters.md), [*Host isolation exceptions*](/solutions/security/manage-elastic-defend/host-isolation-exceptions.md), and [*Blocklist*](/solutions/security/manage-elastic-defend/blocklist.md)). On these tabs, you can:
+
+    * Expand and view an artifact — Click the arrow next to its name.
+    * View an artifact’s details — Click the actions menu (**…**), then select **View full details**.
+    * Unassign an artifact (Platinum or Enterprise subscription) — Click the actions menu (**…**), then select **Remove from policy**. This does not delete the artifact; it just unassigns it from the current policy.
+    * Assign an existing artifact (Platinum or Enterprise subscription) — Click **Assign *x* to policy**, then select an item from the flyout. This view lists any existing artifacts that aren’t already assigned to the current policy.
+
+    ::::{note}
+    You can’t create a new endpoint policy artifact while configuring an integration policy. To create a new artifact, go to its main page in the {{security-app}} (for example, to create a new trusted application, find **Trusted applications** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search)).
+    ::::
+
+5. Click the **Protection updates** tab to configure how {{elastic-defend}} receives updates from Elastic with the latest threat detections, malware models, and other protection artifacts. Refer to [Configure updates for protection artifacts](/reference/security/elastic-defend/artifact-control.md) for more information.
+
+
+## Malware protection [malware-protection]
+
+{{elastic-defend}} malware prevention detects and stops malicious attacks by using a [machine learning model](/solutions/security/detect-and-alert.md#machine-learning-model) that looks for static attributes to determine if a file is malicious or benign.
+
+By default, malware protection is enabled on Windows, macOS, and Linux hosts. To disable malware protection, turn off the **Malware protections** toggle.
+
+Malware protection levels are:
+
+* **Detect**: Detects malware on the host and generates an alert. The agent will **not** block malware. You must pay attention to and analyze any malware alerts that are generated.
+* **Prevent** (Default): Detects malware on the host, blocks it from executing, and generates an alert.
+
+These additional options are available for malware protection:
+
+* **Blocklist**: Enable or disable the [blocklist](/solutions/security/manage-elastic-defend/blocklist.md) for all hosts associated with this {{elastic-defend}} policy. The blocklist allows you to prevent specified applications from running on hosts, extending the list of processes that {{elastic-defend}} considers malicious.
+* **Scan files upon modification**: By default, {{elastic-defend}} scans files every time they’re modified, which can be resource-intensive on hosts where files are frequently modified, such as servers and developer machines. Turn off this option to only scan files when they’re executed. {{elastic-defend}} will continue to identify malware as it attempts to run, providing a robust level of protection while improving endpoint performance.
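+
+If you manage {{elastic-defend}} [through the API](/reference/security/elastic-defend/create-defend-policy-api.md) instead, these options map to a per-OS `malware` section in the policy JSON. Below is a minimal sketch using the default values from the example response on that page; based on the protection levels above, `"detect"` is presumably the value that corresponds to the **Detect** level:
+
+```json
+"malware": {
+  "mode": "prevent",
+  "blocklist": true
+}
+```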
+ +Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. + +::::{tip} +Platinum and Enterprise customers can customize these notifications using the `Elastic Security {{action}} {{filename}}` syntax. +:::: + + +:::{image} ../../../images/security-malware-protection.png +:alt: Detail of malware protection section. +:class: screenshot +::: + + +### Manage quarantined files [manage-quarantined-files] + +When **Prevent** is enabled for malware protection, {{elastic-defend}} will quarantine any malicious file it finds (this includes files defined in the [*Blocklist*](/solutions/security/manage-elastic-defend/blocklist.md)). Specifically {{elastic-defend}} will remove the file from its current location, encrypt it with the encryption key `ELASTIC`, move it to a different folder, and rename it as a GUID string, such as `318e70c2-af9b-4c3a-939d-11410b9a112c`. + +The quarantine folder location varies by operating system: + +* macOS: `/System/Volumes/Data/.equarantine` +* Linux: `.equarantine` at the root of the mount point of the file being quarantined +* Windows - {{elastic-defend}} versions 8.5 and later: `[DriveLetter:]\.equarantine`, unless the files are from the `C:` drive. These files are moved to `C:\Program Files\Elastic\Endpoint\state\.equarantine`. +* Windows - {{elastic-defend}} versions 8.4 and earlier: `[DriveLetter:]\.equarantine`, for any drive + +To restore a quarantined file to its original state and location, [add an exception](/solutions/security/detect-and-alert/add-manage-exceptions.md) to the rule that identified the file as malicious. If the exception would’ve stopped the rule from identifying the file as malicious, {{elastic-defend}} restores the file. + +You can access a quarantined file by using the `get-file` [response action command](/solutions/security/endpoint-response-actions.md#response-action-commands) in the response console. To do this, copy the path from the alert’s **Quarantined file path** field (`file.Ext.quarantine_path`), which appears under **Highlighted fields** in the alert details flyout. Then paste the value into the `--path` parameter. This action doesn’t restore the file to its original location, so you will need to do this manually. + +::::{note} +Response actions and the response console UI are [Enterprise subscription](https://www.elastic.co/pricing) features. +:::: + + + +## Ransomware protection [ransomware-protection] + +Behavioral ransomware prevention detects and stops ransomware attacks on Windows systems by analyzing data from low-level system processes. It is effective across an array of widespread ransomware families — including those targeting the system’s master boot record. + +Ransomware protection is a paid feature and is enabled by default if you have a [Platinum or Enterprise license](https://www.elastic.co/pricing). If you upgrade to a Platinum or Enterprise license from Basic or Gold, ransomware protection will be disabled by default. + +Ransomware protection levels are: + +* **Detect**: Detects ransomware on the host and generates an alert. {{elastic-defend}} will **not** block ransomware. You must pay attention to and analyze any ransomware alerts that are generated. +* **Prevent** (Default): Detects ransomware on the host, blocks it from executing, and generates an alert. 
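+
+In the policy JSON used by [the API](/reference/security/elastic-defend/create-defend-policy-api.md), the ransomware level lives in the `ransomware` section of the `windows` policy (ransomware protection applies to Windows hosts), and the **Notify user** option described below maps to `popup.ransomware`. A minimal sketch, copied from the defaults in that page’s example response; a custom notification message would replace the empty `message` string:
+
+```json
+"ransomware": {
+  "mode": "prevent",
+  "supported": true
+},
+"popup": {
+  "ransomware": {
+    "message": "",
+    "enabled": true
+  }
+}
+```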
+
+When ransomware protection is enabled, canary files placed in targeted locations on your hosts provide an early warning system for potential ransomware activity. When a canary file is modified, {{elastic-defend}} immediately generates a ransomware alert. If the **Prevent** level is active, {{elastic-defend}} terminates the process that modified the file.
+
+Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option.
+
+::::{tip}
+Platinum and Enterprise customers can customize these notifications using the `Elastic Security {{action}} {{filename}}` syntax.
+::::
+
+
+:::{image} ../../../images/security-ransomware-protection.png
+:alt: Detail of ransomware protection section.
+:class: screenshot
+:::
+
+
+## Memory threat protection [memory-protection]
+
+Memory threat protection detects and stops in-memory threats, such as shellcode injection, which are used to evade traditional file-based detection techniques.
+
+Memory threat protection is a paid feature and is enabled by default if you have a [Platinum or Enterprise license](https://www.elastic.co/pricing). If you upgrade to a Platinum or Enterprise license from Basic or Gold, memory threat protection will be disabled by default.
+
+Memory threat protection levels are:
+
+* **Detect**: Detects memory threat activity on the host and generates an alert. {{elastic-defend}} will **not** block the in-memory activity. You must pay attention to and analyze any alerts that are generated.
+* **Prevent** (Default): Detects memory threat activity on the host, forces the process or thread to stop, and generates an alert.
+
+Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option.
+
+::::{tip}
+Platinum and Enterprise customers can customize these notifications using the `Elastic Security {{action}} {{rule}}` syntax.
+::::
+
+
+:::{image} ../../../images/security-memory-protection.png
+:alt: Detail of memory protection section.
+:class: screenshot
+:::
+
+
+## Malicious behavior protection [behavior-protection]
+
+Malicious behavior protection detects and stops threats by monitoring the behavior of system processes for suspicious activity. Behavioral signals are much more difficult for adversaries to evade than traditional file-based detection techniques.
+
+Malicious behavior protection is a paid feature and is enabled by default if you have a [Platinum or Enterprise license](https://www.elastic.co/pricing). If you upgrade to a Platinum or Enterprise license from Basic or Gold, malicious behavior protection will be disabled by default.
+
+Malicious behavior protection levels are:
+
+* **Detect**: Detects malicious behavior on the host and generates an alert. {{elastic-defend}} will **not** block the malicious behavior. You must pay attention to and analyze any alerts that are generated.
+* **Prevent** (Default): Detects malicious behavior on the host, forces the process to stop, and generates an alert.
+
+Select whether you want to use **Reputation service** for additional protection. Elastic’s reputation service leverages our extensive threat intelligence knowledge to make high-confidence, real-time prevention decisions. For example, the reputation service can detect suspicious downloads of binaries with low or malicious reputation.
Endpoints communicate with the reputation service directly at [https://cloud.security.elastic.co](https://cloud.security.elastic.co). + +::::{note} +Reputation service requires an active [Platinum or Enterprise subscription](https://www.elastic.co/pricing) and is available on cloud deployments only. +:::: + + +Select **Notify user** to send a push notification in the host operating system when activity is detected or prevented. Notifications are enabled by default for the **Prevent** option. + +::::{tip} +Platinum and Enterprise customers can customize these notifications using the `Elastic Security {{action}} {{rule}}` syntax. +:::: + + +:::{image} ../../../images/security-behavior-protection.png +:alt: Detail of behavior protection section. +:class: screenshot +::: + + +## Attack surface reduction [attack-surface-reduction] + +This section helps you reduce vulnerabilities that attackers can target on Windows endpoints. + +* **Credential hardening**: Prevents attackers from stealing credentials stored in Windows system process memory. Turn on the toggle to remove any overly permissive access rights that aren’t required for standard interaction with the Local Security Authority Subsystem Service (LSASS). This feature enforces the principle of least privilege without interfering with benign system activity that is related to LSASS. + +:::{image} ../../../images/security-attack-surface-reduction.png +:alt: Detail of attack surface reduction section. +:class: screenshot +::: + + +## Event collection [event-collection] + +In the **Settings** section, select which categories of events to collect on each operating system. Most categories are collected by default, as seen below. + +:::{image} ../../../images/security-event-collection.png +:alt: Detail of event collection section. +:class: screenshot +::: + + +## Register {{elastic-sec}} as antivirus (optional) [register-as-antivirus] + +You can register {{elastic-sec}} as your hosts' antivirus software by enabling **Register as antivirus**. + +::::{note} +Windows Server versions are not supported. Antivirus registration requires Windows Security Center, which is not included in Windows Server operating systems. +:::: + + +By default, the **Sync with malware protection level** is selected to automatically set antivirus registration to match how you’ve configured {{elastic-defend}}'s [malware protection](#malware-protection). If malware protection is turned on *and* set to **Prevent**, antivirus registration will also be enabled; in any other case, antivirus registration will be disabled. + +If you don’t want to sync antivirus registration, you can set it manually with **Enabled** or **Disabled**. + +:::{image} ../../../images/security-register-as-antivirus.png +:alt: Detail of Register as antivirus option. +:class: screenshot +::: + + +## Advanced policy settings (optional) [adv-policy-settings] + +Users with unique configuration and security requirements can select **Show advanced settings** while configuring an {{elastic-defend}} integration policy to support advanced use cases. Hover over each setting to view its description. + +::::{note} +Advanced settings are not recommended for most users. 
+:::: + + +This section includes: + +* [Turn off diagnostic data for {{elastic-defend}}](/reference/security/elastic-defend/endpoint-diagnostic-data.md) +* [Configure self-healing rollback for Windows endpoints](/reference/security/elastic-defend/self-healing-rollback.md) +* [Configure Linux file system monitoring](/reference/security/elastic-defend/linux-file-monitoring.md) +* [Configure data volume](/reference/security/elastic-defend/endpoint-data-volume.md) + + +## Save the general policy settings [save-policy] + +After you have configured the general settings on the **Policy settings** tab, click **Save**. A confirmation message appears. diff --git a/reference/security/elastic-defend/create-defend-policy-api.md b/reference/security/elastic-defend/create-defend-policy-api.md new file mode 100644 index 0000000000..0891e71e42 --- /dev/null +++ b/reference/security/elastic-defend/create-defend-policy-api.md @@ -0,0 +1,817 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/create-defend-policy-api.html +--- + +# Create an {{elastic-defend}} policy using API [create-defend-policy-api] + +In addition to [configuring an {{elastic-defend}} policy](configure-endpoint-integration-policy.md) through the {{elastic-sec}} UI, you can create and customize an {{elastic-defend}} policy through the API. This is a three-step process involving the [{{fleet}} API](/reference/ingestion-tools/fleet/fleet-api-docs.md). You can repeat steps 2 and 3 to make more modifications to the {{elastic-defend}} policy. + +::::{admonition} Requirements +You must have the **{{elastic-defend}} Policy Management: All** [privilege](endpoint-management-req.md) to configure an integration policy. + +:::: + + + +## Step 1: Create an agent policy [create-agent-policy] + +Make the following API call to create a new agent policy where you will add your {{elastic-defend}} integration. Replace `` with your version of {{kib}}. + +```console +curl --user : --request POST \ + --url 'https://:5601/api/fleet/agent_policies' \ + -H 'Accept: */*' \ + -H 'Accept-Language: en-US,en;q=0.9' \ + -H 'Connection: keep-alive' \ + -H 'Content-Type: application/json' \ + -H 'Sec-Fetch-Dest: empty' \ + -H 'Sec-Fetch-Mode: cors' \ + -H 'Sec-Fetch-Site: same-origin' \ + -H 'kbn-version: ' \ <1> + -d \ +' +{ + "name": "My Policy Name", + "description": "", + "namespace": "default", + "inactivity_timeout": 1209600 +}' +``` + +1. `` to be replaced + + +Make a note of the `` you receive in the response. You will use this in step 2 to add {{elastic-defend}}. + +::::{dropdown} Click to display example response +```json +{ + "item": { + "id": "", <1> + "name": "My Policy Name", + "description": "", + "namespace": "default", + "inactivity_timeout": 1209600, + "is_protected": false, + "status": "active", + "is_managed": false, + "revision": 1, + "updated_at": "2023-07-24T18:35:00.233Z", + "updated_by": "elastic", + "schema_version": "1.1.1" + } +} +``` + +1. `` needed in step 2 + + +:::: + + + +## Step 2: Add the {{elastic-defend}} integration [add-defend-integration] + +Next, make the following call to add the {{elastic-defend}} integration to the policy that you created in step 1. + +Replace these values: + +1. `` with your version of {{kib}}. +2. `` with the agent policy ID you received in step 1. +3. `` with the latest {{elastic-defend}} package version (for example, `8.9.1`). 
To find it, navigate to **Integrations** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), and select **{{elastic-defend}}**. + +This adds the {{elastic-defend}} integration to your agent policy with the default settings. + +```console +curl --user : --request POST \ + --url 'https://:5601/api/fleet/package_policies' \ + -H 'Accept: */*' \ + -H 'Accept-Language: en-US,en;q=0.9' \ + -H 'Connection: keep-alive' \ + -H 'Content-Type: application/json' \ + -H 'Sec-Fetch-Dest: empty' \ + -H 'Sec-Fetch-Mode: cors' \ + -H 'Sec-Fetch-Site: same-origin' \ + -H 'kbn-version: ' \ <1> + -d \ +' +{ + "name": "Protect", + "description": "", + "namespace": "default", + "policy_id": "", <2> + "enabled": true, + "inputs": [ + { + "enabled": true, + "streams": [], + "type": "ENDPOINT_INTEGRATION_CONFIG", + "config": { + "_config": { + "value": { + "type": "endpoint", + "endpointConfig": { + "preset": "EDRComplete" + } + } + } + } + } + ], + "package": { + "name": "endpoint", + "title": "Elastic Defend", + "version": "" <3> + } +}' +``` + +1. `` to be replaced +2. `` to be replaced +3. `` to be replaced + + +Make a note of the `` you receive in the response. This refers to the {{elastic-defend}} policy and you will use it in step 3. + +::::{dropdown} Click to display example response +```json +{ + "item": { + "id": "", <1> + "version": "WzMwOTcsMV0=", + "name": "Protect", + "namespace": "default", + "description": "", + "package": { + "name": "endpoint", + "title": "Elastic Defend", + "version": "8.5.0" + }, + "enabled": true, + "policy_id": "b4be0860-d492-11ed-a59c-3ffbbd16325a", + "inputs": [ + { + "type": "endpoint", + "enabled": true, + "streams": [], + "config": { + "integration_config": { + "value": { + "type": "endpoint", + "endpointConfig": { + "preset": "EDRComplete" + } + } + }, + "artifact_manifest": { + "value": { + "manifest_version": "1.0.2", + "schema_version": "v1", + "artifacts": { + "endpoint-exceptionlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-exceptionlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-exceptionlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + 
"decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-hostisolationexceptionlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": 
"/api/fleet/artifacts/endpoint-hostisolationexceptionlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-hostisolationexceptionlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-blocklist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-blocklist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-blocklist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + } + } + } + }, + "policy": { + "value": { + "windows": { + "events": { + "dll_and_driver_load": true, + "dns": true, + "file": true, + "network": true, + "process": true, + "registry": true, + "security": true + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "ransomware": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "ransomware": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + }, + "antivirus_registration": { + "enabled": false + }, + "attack_surface_reduction": { + "credential_hardening": { + "enabled": true + } + } + }, + "mac": { + "events": { + "process": true, + "file": true, + "network": true + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + } + }, + "linux": { + "events": { + 
"process": true, + "file": true, + "network": true, + "session_data": false, + "tty_io": false + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + } + } + } + } + } + } + ], + "revision": 1, + "created_at": "2023-04-06T15:53:14.020Z", + "created_by": "elastic", + "updated_at": "2023-04-06T15:53:14.020Z", + "updated_by": "elastic" + } +} +``` + +1. `` needed in step 3 + + +:::: + + + +## Step 3: Customize and save the {{elastic-defend}} policy settings [customize-policy-settings] + +The response you received in step 2 represents the default configuration of your new {{elastic-defend}} integration. You’ll need to modify the default configuration, then make another API call to save your customized policy settings. + + +### Modify the configuration [modify-configuration] + +1. From the response you received in step 2, copy the content within the top level `item` object. +2. From that content, remove the following fields: + + ```json + "id": "", + "revision": 1, + "created_at": "2023-04-06T15:53:14.020Z", + "created_by": "elastic", + "updated_at": "2023-04-06T15:53:14.020Z", + "updated_by": "elastic" + ``` + +3. Make any changes to the `policy` object to customize the {{elastic-defend}} configuration. + + +### Save your customized policy settings [save-customized-policy] + +Include the resulting JSON object in the following call to save your customized {{elastic-defend}} policy. Replace these values: + +1. `` with the {{elastic-defend}} policy ID you received in step 2. +2. `` with your version of {{kib}}. +3. `` with the latest {{elastic-defend}} package version (for example, `8.9.1`). To find it, navigate to **Integrations** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), and select **{{elastic-defend}}**. 
+ +```console +curl --user : --request PUT \ + --url 'https://:5601/api/fleet/package_policies/' \ <1> + -H 'Accept: */*' \ + -H 'Accept-Language: en-US,en;q=0.9' \ + -H 'Connection: keep-alive' \ + -H 'Content-Type: application/json' \ + -H 'Sec-Fetch-Dest: empty' \ + -H 'Sec-Fetch-Mode: cors' \ + -H 'Sec-Fetch-Site: same-origin' \ + -H 'kbn-version: ' \ <2> + -d \ +' +{ + "version": "WzMwOTcsMV0=", + "name": "Protect", + "namespace": "default", + "description": "", + "package": { + "name": "endpoint", + "title": "Elastic Defend", + "version": "" <3> + }, + "enabled": true, + "policy_id": "b4be0860-d492-11ed-a59c-3ffbbd16325a", + "inputs": [ + { + "type": "endpoint", + "enabled": true, + "streams": [], + "config": { + "integration_config": { + "value": { + "type": "endpoint", + "endpointConfig": { + "preset": "EDRComplete" + } + } + }, + "artifact_manifest": { + "value": { + "manifest_version": "1.0.2", + "schema_version": "v1", + "artifacts": { + "endpoint-exceptionlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-exceptionlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-exceptionlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-exceptionlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-trustlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": 
"f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-trustlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-eventfilterlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-eventfilterlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-hostisolationexceptionlist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-hostisolationexceptionlist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-hostisolationexceptionlist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-hostisolationexceptionlist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-macos-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": 
"/api/fleet/artifacts/endpoint-blocklist-macos-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-windows-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-blocklist-windows-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + }, + "endpoint-blocklist-linux-v1": { + "encryption_algorithm": "none", + "decoded_sha256": "d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "decoded_size": 14, + "encoded_sha256": "f8e6afa1d5662f5b37f83337af774b5785b5b7f1daee08b7b00c2d6813874cda", + "encoded_size": 22, + "relative_url": "/api/fleet/artifacts/endpoint-blocklist-linux-v1/d801aa1fb7ddcc330a5e3173372ea6af4a3d08ec58074478e85aa5603e926658", + "compression_algorithm": "zlib" + } + } + } + }, + "policy": { + "value": { + "windows": { + "events": { + "dll_and_driver_load": true, + "dns": true, + "file": true, + "network": true, + "process": true, + "registry": true, + "security": true + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "ransomware": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "ransomware": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + }, + "antivirus_registration": { + "enabled": false + }, + "attack_surface_reduction": { + "credential_hardening": { + "enabled": true + } + } + }, + "mac": { + "events": { + "process": true, + "file": true, + "network": true + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + } + }, + "linux": { + "events": { + "process": true, + "file": true, + "network": true, + "session_data": false, + "tty_io": false + }, + "malware": { + "mode": "prevent", + "blocklist": true + }, + "behavior_protection": { + "mode": "prevent", + "supported": true + }, + "memory_protection": { + "mode": "prevent", + "supported": true + }, + "popup": { + "malware": { + "message": "", + "enabled": true + }, + "behavior_protection": { + "message": "", + "enabled": true + }, + "memory_protection": { + "message": "", + "enabled": true + } + }, + "logging": { + "file": "info" + } + } + } + } + } + } + ] +}' +``` + +1. `` to be replaced +2. `` to be replaced +3. 
`` to be replaced diff --git a/reference/security/elastic-defend/deploy-elastic-endpoint-ven.md b/reference/security/elastic-defend/deploy-elastic-endpoint-ven.md new file mode 100644 index 0000000000..5e0e9d340e --- /dev/null +++ b/reference/security/elastic-defend/deploy-elastic-endpoint-ven.md @@ -0,0 +1,125 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/deploy-elastic-endpoint-ven.html +--- + +# Enable access for macOS Ventura and higher [deploy-elastic-endpoint-ven] + +To properly install and configure {{elastic-defend}} manually without a Mobile Device Management (MDM) profile, there are additional permissions that must be enabled on the host before {{elastic-endpoint}}—the installed component that performs {{elastic-defend}}'s threat monitoring and prevention—is fully functional: + +* [Approve the system extension](#system-extension-endpoint-ven) +* [Approve network content filtering](#allow-filter-content-ven) +* [Enable Full Disk Access](#enable-fda-endpoint-ven) + +::::{note} +The following permissions that need to be enabled are required after you [configure and install the {{elastic-defend}} integration](/reference/security/elastic-defend/install-endpoint.md), which includes [enrolling the {{agent}}](/reference/security/elastic-defend/install-endpoint.md#enroll-security-agent). +:::: + + + +## Approve the system extension for {{elastic-endpoint}} [system-extension-endpoint-ven] + +For macOS Ventura (13.0) and later, {{elastic-endpoint}} will attempt to load a system extension during installation. This system extension must be loaded in order to provide insight into system events such as process events, file system events, and network events. + +The following message appears during installation: + +:::{image} ../../../images/security-system_extension_blocked_warning_ven.png +:alt: system extension blocked warning ven +:class: screenshot +::: + +1. Click **Open System Settings**. +2. In the left pane, click **Privacy & Security**. + + :::{image} ../../../images/security-privacy_security_ven.png + :alt: privacy security ven + :class: screenshot + ::: + +3. On the right pane, scroll down to the Security section. Click **Allow** to allow the ElasticEndpoint system extension to load. + + :::{image} ../../../images/security-allow_system_extension_ven.png + :alt: allow system extension ven + :class: screenshot + ::: + +4. Enter your username and password and click **Modify Settings** to save your changes. + + :::{image} ../../../images/security-enter_login_details_to_confirm_ven.png + :alt: enter login details to confirm ven + :class: screenshot + ::: + + + +## Approve network content filtering for {{elastic-endpoint}} [allow-filter-content-ven] + +After successfully loading the ElasticEndpoint system extension, an additional message appears, asking to allow {{elastic-endpoint}} to filter network content. + +:::{image} ../../../images/security-allow_network_filter_ven.png +:alt: allow network filter ven +:class: screenshot +::: + +Click **Allow** to enable content filtering for the ElasticEndpoint system extension. Without this approval, {{elastic-endpoint}} cannot receive network events and, therefore, cannot enable network-related features such as [host isolation](/solutions/security/endpoint-response-actions/isolate-host.md). 
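+
+Once both approvals are in place, you can optionally confirm from Terminal that macOS activated the extension by using the built-in `systemextensionsctl` tool (a macOS utility, not an Elastic command; output formatting varies by macOS version):
+
+```shell
+# List installed system extensions; the Elastic entry should appear as
+# co.elastic.systemextension, typically with a state of [activated enabled].
+systemextensionsctl list
+```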
+
+## Enable Full Disk Access for {{elastic-endpoint}} [enable-fda-endpoint-ven]
+
+{{elastic-endpoint}} requires Full Disk Access to subscribe to system events via the {{elastic-defend}} framework and to protect your network from malware and other cybersecurity threats. Full Disk Access is a privacy feature introduced in macOS Mojave (10.14) that prevents some applications from accessing your data.
+
+If you have not granted Full Disk Access, the following notification prompt will appear.
+
+:::{image} ../../../images/security-allow_full_disk_access_notification_ven.png
+:alt: allow full disk access notification ven
+:class: screenshot
+:::
+
+To enable Full Disk Access, you must manually approve {{elastic-endpoint}}.
+
+::::{note}
+The following instructions apply only to {{elastic-endpoint}} version 8.0.0 and later. To see Full Disk Access requirements for the Endgame sensor, refer to Endgame’s documentation.
+::::
+
+
+1. Open the **System Settings** application.
+2. In the left pane, select **Privacy & Security**.
+
+    :::{image} ../../../images/security-privacy_security_ven.png
+    :alt: privacy security ven
+    :class: screenshot
+    :::
+
+3. From the right pane, select **Full Disk Access**.
+
+    :::{image} ../../../images/security-select_fda_ven.png
+    :alt: Select Full Disk Access
+    :class: screenshot
+    :::
+
+4. Enable `ElasticEndpoint` and `co.elastic` to properly enable Full Disk Access.
+
+    :::{image} ../../../images/security-allow_fda_ven.png
+    :alt: allow fda ven
+    :class: screenshot
+    :::
+
+
+If the endpoint is running {{elastic-endpoint}} version 7.17.0 or earlier:
+
+1. Click the **+** button to view **Finder**.
+2. The system may prompt you to enter your username and password if you haven’t already.
+
+    :::{image} ../../../images/security-enter_login_details_to_confirm_ven.png
+    :alt: enter login details to confirm ven
+    :class: screenshot
+    :::
+
+3. Navigate to `/Library/Elastic/Endpoint`, then select the `elastic-endpoint` file.
+4. Click **Open**.
+5. In the **Privacy** tab, confirm that `ElasticEndpoint` and `co.elastic.systemextension` are selected to properly enable Full Disk Access.
    :::{image} ../../../images/security-verify_fed_granted_ven.png
    :alt: Select Full Disk Access
    :class: screenshot
    :::
diff --git a/reference/security/elastic-defend/deploy-elastic-endpoint.md b/reference/security/elastic-defend/deploy-elastic-endpoint.md
new file mode 100644
index 0000000000..b6e56ab36d
--- /dev/null
+++ b/reference/security/elastic-defend/deploy-elastic-endpoint.md
@@ -0,0 +1,100 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/deploy-elastic-endpoint.html
+---
+
+# Enable access for macOS Monterey [deploy-elastic-endpoint]
+
+To properly install and configure {{elastic-defend}} manually without a Mobile Device Management (MDM) profile, there are additional permissions that must be enabled on the host before {{elastic-endpoint}}—the installed component that performs {{elastic-defend}}'s threat monitoring and prevention—is fully functional:
+
+* [Approve the system extension](#system-extension-endpoint)
+* [Approve network content filtering](#allow-filter-content)
+* [Enable Full Disk Access](#enable-fda-endpoint)
+
+::::{note}
+The following permissions that need to be enabled are required after you [configure and install the {{elastic-defend}} integration](/reference/security/elastic-defend/install-endpoint.md), which includes [enrolling the {{agent}}](/reference/security/elastic-defend/install-endpoint.md#enroll-security-agent).
+::::
+
+
+
+## Approve the system extension for {{elastic-endpoint}} [system-extension-endpoint]
+
+For macOS Monterey (12.x), {{elastic-endpoint}} will attempt to load a system extension during installation. This system extension must be loaded in order to provide insight into system events such as process events, file system events, and network events.
+
+The following message appears during installation:
+
+:::{image} ../../../images/security-system-ext-blocked.png
+:alt: system ext blocked
+:::
+
+1. Click **Open Security Preferences**.
+2. In the lower-left corner of the **Security & Privacy** pane, click the **Lock button**, then enter your credentials to authenticate.
+
+    :::{image} ../../../images/security-lock-button.png
+    :alt: lock button
+    :::
+
+3. Click **Allow** to allow the {{elastic-endpoint}} system extension to load.
+
+    :::{image} ../../../images/security-allow-system-ext.png
+    :alt: allow system ext
+    :::
+
+
+## Approve network content filtering for {{elastic-endpoint}} [allow-filter-content]
+
+After successfully loading the {{elastic-endpoint}} system extension, an additional message appears, asking to allow {{elastic-endpoint}} to filter network content.
+
+:::{image} ../../../images/security-filter-network-content.png
+:alt: filter network content
+:::
+
+Click **Allow** to enable content filtering for the {{elastic-endpoint}} system extension. Without this approval, {{elastic-endpoint}} cannot receive network events and, therefore, cannot enable network-related features such as [host isolation](/solutions/security/endpoint-response-actions/isolate-host.md).
+
+
+## Enable Full Disk Access for {{elastic-endpoint}} [enable-fda-endpoint]
+
+{{elastic-endpoint}} requires Full Disk Access to subscribe to system events via the {{elastic-defend}} framework and to protect your network from malware and other cybersecurity threats. To enable Full Disk Access on endpoints running macOS Catalina (10.15) and later, you must manually approve {{elastic-endpoint}}.
+
+::::{note}
+The following instructions apply only to {{elastic-endpoint}} running version 8.0.0 and later.
To see Full Disk Access requirements for the Endgame sensor, refer to Endgame’s documentation.
+::::
+
+
+1. Open the **System Preferences** application.
+2. Select **Security & Privacy**.
+
+    :::{image} ../../../images/security-sec-privacy-pane.png
+    :alt: sec privacy pane
+    :class: screenshot
+    :::
+
+3. On the **Security & Privacy** pane, select the **Privacy** tab.
+4. From the left pane, select **Full Disk Access**.
+
+    :::{image} ../../../images/security-select-fda.png
+    :alt: Select Full Disk Access
+    :class: screenshot
+    :::
+
+5. In the lower-left corner of the pane, click the **Lock button**, then enter your credentials to authenticate.
+6. In the **Privacy** tab, confirm that `ElasticEndpoint` AND `co.elastic.systemextension` are selected to properly enable Full Disk Access.
+
+    :::{image} ../../../images/security-select-endpoint-ext.png
+    :alt: select endpoint ext
+    :class: screenshot
+    :::
+
+
+If the endpoint is running {{elastic-endpoint}} version 7.17.0 or earlier:
+
+1. In the lower-left corner of the pane, click the **Lock button**, then enter your credentials to authenticate.
+2. Click the **+** button to view **Finder**.
+3. Navigate to `/Library/Elastic/Endpoint`, then select the `elastic-endpoint` file.
+4. Click **Open**.
+5. In the **Privacy** tab, confirm that `elastic-endpoint` AND `co.elastic.systemextension` are selected to properly enable Full Disk Access.
+
+:::{image} ../../../images/security-fda-7-16.png
+:alt: fda 7 16
+:::
+
diff --git a/reference/security/elastic-defend/deploy-with-mdm.md b/reference/security/elastic-defend/deploy-with-mdm.md
new file mode 100644
index 0000000000..0c423b2bf4
--- /dev/null
+++ b/reference/security/elastic-defend/deploy-with-mdm.md
@@ -0,0 +1,144 @@
+---
+navigation_title: "Deploy on macOS with MDM"
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/deploy-with-mdm.html
+---
+
+# Deploy {{elastic-defend}} on macOS with mobile device management [deploy-with-mdm]
+
+
+To silently install and deploy {{elastic-defend}}, you need to configure a mobile device management (MDM) profile for {{elastic-endpoint}}—the installed component that performs {{elastic-defend}}'s threat monitoring and prevention. This allows you to pre-approve the {{elastic-endpoint}} system extension and grant Full Disk Access to all the necessary components.
+
+This page explains how to deploy {{elastic-defend}} silently using Jamf.
+
+
+## Configure a Jamf MDM profile [configure-jamf-profile]
+
+In Jamf, create a configuration profile for {{elastic-endpoint}}. Follow these steps to configure the profile:
+
+1. [Approve the system extension.](#system-extension-jamf)
+2. [Approve network content filtering.](#content-filtering-jamf)
+3. [Enable notifications.](#notifications-jamf)
+4. [Enable Full Disk Access.](#fda-jamf)
+
+
+### Approve the system extension [system-extension-jamf]
+
+1. Select the **System Extensions** option to configure the system extension policy for the {{elastic-endpoint}} configuration profile.
+2. Make sure that **Allow users to approve system extensions** is selected.
+3. In the **Allowed Team IDs and System Extensions** section, add the {{elastic-endpoint}} system extension:
+
+    1. (Optional) Enter a **Display Name** for the {{elastic-endpoint}} system extension.
+    2. From the **System Extension Types** dropdown, select **Allowed System Extensions**.
+    3. Under **Team Identifier**, enter `2BT3HPN62Z`.
+    4. Under **Allowed System Extensions**, enter `co.elastic.systemextension`.
+
+4. Save the configuration.
+ +:::{image} ../../../images/security-system-extension-jamf.png +:alt: system extension jamf +:class: screenshot +::: + + +### Approve network content filtering [content-filtering-jamf] + +1. Select the **Content Filter** option to configure the Network Extension policy for the {{elastic-endpoint}} configuration profile. +2. Under **Filter Name**, enter `ElasticEndpoint`. +3. Under **Identifier**, enter `co.elastic.endpoint`. +4. In the **Socket Filter** section, fill in these fields: + + 1. **Socket Filter Bundle Identifier**: Enter `co.elastic.systemextension` + 2. **Socket Filter Designated Requirement**: Enter the following: + + ```shell + identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" + ``` + +5. In the **Network Filter** section, fill in these fields: + + 1. **Network Filter Bundle Identifier**: Enter `co.elastic.systemextension` + 2. **Network Filter Designated Requirement**: Enter the following: + + ```shell + identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" + ``` + +6. Save the configuration. + +:::{image} ../../../images/security-content-filtering-jamf.png +:alt: content filtering jamf +:class: screenshot +::: + + +### Enable notifications [notifications-jamf] + +1. Select the **Notifications** option to configure the Notification Center policy for the {{elastic-endpoint}} configuration profile. +2. Under **App Name**, enter `Elastic Security.app`. +3. Under **Bundle ID**, enter `co.elastic.alert`. +4. In the **Settings** section, include these options with the following settings: + + 1. **Critical Alerts**: Enable + 2. **Notifications**: Enable + 3. **Banner alert type**: Persistent + 4. **Notifications on Lock Screen**: Display + 5. **Notifications in Notification Center**: Display + 6. **Badge app icon**: Display + 7. **Play sound for notifications**: Enable + +5. Save the configuration. + +:::{image} ../../../images/security-notifications-jamf.png +:alt: notifications jamf +:class: screenshot +::: + + +### Enable Full Disk Access [fda-jamf] + +1. Select the **Privacy Preferences Policy Control** option to configure the Full Disk Access policy for the {{elastic-endpoint}} configuration profile. +2. Add a new entry with the following details: + + 1. Under **Identifier**, enter `co.elastic.systemextension`. + 2. From the **Identifier Type** dropdown, select **Bundle ID**. + 3. Under **Code Requirement**, enter the following: + + ```shell + identifier "co.elastic.systemextension" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" + ``` + + 4. Make sure that **Validate the Static Code Requirement** is selected. + +3. Add a second entry with the following details: + + 1. Under **Identifier**, enter `co.elastic.endpoint`. + 2. From the **Identifier Type** dropdown, select **Bundle ID**. + 3. 
Under **Code Requirement**, enter the following: + + ```shell + identifier "co.elastic.endpoint" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" + ``` + + 4. Make sure that **Validate the Static Code Requirement** is selected. + +4. Add a third entry with the following details: + + 1. Under **Identifier**, enter `co.elastic.elastic-agent`. + 2. From the **Identifier Type** dropdown, select **Bundle ID**. + 3. Under **Code Requirement**, enter the following: + + ```shell + identifier "co.elastic.elastic-agent" and anchor apple generic and certificate 1[field.1.2.840.113635.100.6.2.6] /* exists */ and certificate leaf[field.1.2.840.113635.100.6.1.13] /* exists */ and certificate leaf[subject.OU] = "2BT3HPN62Z" + ``` + + 4. Make sure that **Validate the Static Code Requirement** is selected. + +5. Save the configuration. + +:::{image} ../../../images/security-fda-jamf.png +:alt: fda jamf +:class: screenshot +::: + +After you complete these steps, generate the mobile configuration profile and install it onto the macOS machines. Once the profile is installed, {{elastic-defend}} can be deployed without the need for user interaction. diff --git a/reference/security/elastic-defend/elastic-endpoint-deploy-reqs.md b/reference/security/elastic-defend/elastic-endpoint-deploy-reqs.md new file mode 100644 index 0000000000..6532969433 --- /dev/null +++ b/reference/security/elastic-defend/elastic-endpoint-deploy-reqs.md @@ -0,0 +1,20 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/elastic-endpoint-deploy-reqs.html +--- + +# Elastic Defend requirements [elastic-endpoint-deploy-reqs] + +To properly deploy {{elastic-defend}} without a Mobile Device Management (MDM) profile, you must manually enable additional permissions on the host before {{elastic-endpoint}}—the installed component that performs {{elastic-defend}}'s threat monitoring and prevention—is fully functional. For more information, refer to the instructions for your macOS version: + +* [Enable access for macOS Monterey](/reference/security/elastic-defend/deploy-elastic-endpoint.md) +* [Enable access for macOS Ventura and higher](/reference/security/elastic-defend/deploy-elastic-endpoint-ven.md) + + +## Minimum system requirements [_minimum_system_requirements] + +| Requirement | Value | +| --- | --- | +| **CPU** | Under 2% | +| **Disk space** | 1 GB | +| **Resident set size (RSS) memory** | 500 MB | diff --git a/reference/security/elastic-defend/endpoint-data-volume.md b/reference/security/elastic-defend/endpoint-data-volume.md new file mode 100644 index 0000000000..5d11d52ea9 --- /dev/null +++ b/reference/security/elastic-defend/endpoint-data-volume.md @@ -0,0 +1,75 @@ +--- +navigation_title: "Configure data volume" +mapped_pages: + - https://www.elastic.co/guide/en/security/current/endpoint-data-volume.html +--- + +# Configure data volume for {{elastic-endpoint}} [endpoint-data-volume] + + +{{elastic-endpoint}}, the installed component that performs {{elastic-defend}}'s threat monitoring and prevention, is optimized to reduce data volume and CPU usage. You can disable or modify some of these optimizations by reconfiguring the following [advanced settings](/reference/security/elastic-defend/configure-endpoint-integration-policy.md#adv-policy-settings) in the {{elastic-defend}} integration policy. 
+
+::::{important}
+Modifying these advanced settings from their defaults will increase the volume of data that {{elastic-endpoint}} processes and ingests, and increase {{elastic-endpoint}}'s CPU usage. Make sure you’re aware of how these changes will affect your storage capabilities and performance.
+::::
+
+
+Each setting has several OS-specific variants, represented by `[linux|mac|windows]` in the names listed below. Use the variant relevant to your hosts' operating system (for example, `windows.advanced.events.deduplicate_network_events` to configure network event deduplication for Windows hosts).
+
+
+## Network event deduplication [network-event-deduplication]
+
+[8.15] When repeated network connections are detected from the same process, {{elastic-endpoint}} will not produce network events for subsequent connections. To disable or reduce deduplication of network events, use these advanced settings:
+
+`[linux|mac|windows].advanced.events.deduplicate_network_events`
+: Enter `false` to completely disable network event deduplication. Default: `true`
+
+`[linux|mac|windows].advanced.events.deduplicate_network_events_below_bytes`
+: Enter a transfer size threshold (in bytes) for events you want to deduplicate. Connections below the threshold are deduplicated, and connections above it are not deduplicated. This allows you to suppress repeated connections for smaller data transfers but always generate events for larger transfers. Default: `1048576` (1 MB)
+
+
+## Data in `host.*` fields [host-fields]
+
+[8.18] {{elastic-endpoint}} includes only a small subset of the data in the `host.*` fieldset in event documents. Full `host.*` information is still included in documents written to the `metrics-*` index pattern and in {{elastic-endpoint}} alerts. To override this behavior and include all `host.*` data for events, use this advanced setting:
+
+`[linux|mac|windows].advanced.set_extended_host_information`
+: Enter `true` to include all `host.*` event data. Default: `false`
+
+::::{note}
+Note that the absence of some `host.*` information may affect your [event filters](/solutions/security/manage-elastic-defend/event-filters.md) or [Endpoint alert exceptions](/solutions/security/detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions).
+::::
+
+
+
+## Merged process and network events [merged-process-network]
+
+[8.18] {{elastic-endpoint}} merges process `create`/`terminate` events (Windows) and `fork`/`exec`/`end` events (macOS/Linux) when possible. This means short-lived processes only generate a single event containing the details from when the process terminated. {{elastic-endpoint}} also merges network `connection/termination` events (Windows/macOS/Linux) when possible for short-lived connections. To disable this behavior, use these advanced settings:
+
+`[linux|mac|windows].advanced.events.aggregate_process`
+: Enter `false` to disable merging of process events. Default: `true`
+
+`[linux|mac|windows].advanced.events.aggregate_network`
+: Enter `false` to disable merging of network events. Default: `true`
+
+::::{note}
+Merged events can affect the results of [event filters](/solutions/security/manage-elastic-defend/event-filters.md). Notably, for merged events, `event.action` is an array containing all actions merged into the single event, such as `event.action=[fork, exec, end]`. In that example, if your event filter omits all fork events (`event.action : fork`), it will also filter out all merged events that include a `fork` action.
To prevent such issues, you’ll need to modify your event filters accordingly, or set the `[linux|mac|windows].advanced.events.aggregate_process` and `[linux|mac|windows].advanced.events.aggregate_network` advanced settings to `false` to prevent {{elastic-endpoint}} from merging events. +:::: + + + +## MD5 and SHA-1 hashes [md5-sha1-hashes] + +[8.18] {{elastic-endpoint}} does not report MD5 and SHA-1 hashes in event data by default. These will still be reported if any [trusted applications](/solutions/security/manage-elastic-defend/trusted-applications.md), [blocklist entries](/solutions/security/manage-elastic-defend/blocklist.md), [event filters](/solutions/security/manage-elastic-defend/event-filters.md), or [Endpoint exceptions](/solutions/security/detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions) require them. To include these hashes in all event data, use these advanced settings: + +`[linux|mac|windows].advanced.events.hash.md5` +: Enter `true` to compute and include MD5 hashes for processes and libraries in events. Default: `false` + +`[linux|mac|windows].advanced.events.hash.sha1` +: Enter `true` to compute and include SHA-1 hashes for processes and libraries in events. Default: `false` + +`[linux|mac|windows].advanced.alerts.hash.md5` +: Enter `true` to compute and include MD5 hashes for processes and libraries in alerts. Default: `false` + +`[linux|mac|windows].advanced.alerts.hash.sha1` +: Enter `true` to compute and include SHA-1 hashes for processes and libraries in alerts. Default: `false` + diff --git a/reference/security/elastic-defend/endpoint-diagnostic-data.md b/reference/security/elastic-defend/endpoint-diagnostic-data.md new file mode 100644 index 0000000000..d8a83eb233 --- /dev/null +++ b/reference/security/elastic-defend/endpoint-diagnostic-data.md @@ -0,0 +1,24 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/endpoint-diagnostic-data.html +--- + +# Turn off diagnostic data for Elastic Defend [endpoint-diagnostic-data] + +By default, {{elastic-defend}} streams diagnostic data to your cluster, which Elastic uses to tune protection features. You can stop producing this diagnostic data by configuring the advanced settings in the {{elastic-defend}} integration policy. + +::::{note} +{{kib}} also collects usage telemetry, which includes {{elastic-defend}} diagnostic data. You can modify telemetry preferences in [Advanced Settings](kibana://docs/reference/configuration-reference/telemetry-settings.md). +:::: + + +1. To view the Endpoints list, find **Endpoints** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search). +2. Locate the endpoint for which you want to disable diagnostic data, then click the integration policy in the **Policy** column. +3. Scroll down to the bottom of the policy and click **Show advanced settings**. +4. Enter `false` for these settings: + + * `windows.advanced.diagnostic.enabled` + * `linux.advanced.diagnostic.enabled` + * `mac.advanced.diagnostic.enabled` + +5. Click **Save**. 
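+
+As a quick reference, here are the three settings with the value to enter. This layout is only an illustrative sketch: in the policy UI, each setting is entered in its own advanced settings field rather than as a file.
+
+```txt
+windows.advanced.diagnostic.enabled: false
+linux.advanced.diagnostic.enabled: false
+mac.advanced.diagnostic.enabled: false
+```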
diff --git a/reference/security/elastic-defend/endpoint-management-req.md b/reference/security/elastic-defend/endpoint-management-req.md new file mode 100644 index 0000000000..a89736e1fe --- /dev/null +++ b/reference/security/elastic-defend/endpoint-management-req.md @@ -0,0 +1,50 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/endpoint-management-req.html +--- + +# Elastic Defend feature privileges [endpoint-management-req] + +You can create user roles and define privileges to manage feature access in {{elastic-sec}}. This allows you to use the principle of least privilege while managing access to {{elastic-defend}}'s features. + +To configure roles and privileges, find **Roles** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search). For more details on using this UI, refer to [{{kib}} privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md#adding_kibana_privileges). + +::::{note} +{{elastic-defend}}'s feature privileges must be assigned to **All Spaces**. You can’t assign them to an individual space. +:::: + + +To grant access, select **All** for the **Security** feature in the **Assign role to space** configuration UI, then turn on the **Customize sub-feature privileges** switch. + +::::{important} +Selecting **All** for the overall **Security** feature does NOT enable any sub-features. You must also enable the **Customize sub-feature privileges** switch, and then enable each sub-feature privilege individually. +:::: + + +For each of the following sub-feature privileges, select the type of access you want to allow: + +* **All**: Users have full access to the feature, which includes performing all available actions and managing configuration. +* **Read**: Users can view the feature, but can’t perform any actions or manage configuration (some features don’t have this privilege). +* **None**: Users can’t access or view the feature. + +| | | +| --- | --- | +| **Endpoint List** | Access the [Endpoints](/solutions/security/manage-elastic-defend/endpoints.md) page, which lists all hosts running {{elastic-defend}}, and associated integration details. | +| **Trusted Applications** | Access the [Trusted applications](/solutions/security/manage-elastic-defend/trusted-applications.md) page to remediate conflicts with other software, such as antivirus or endpoint security applications. | +| **Host Isolation Exceptions** | Access the [Host isolation exceptions](/solutions/security/manage-elastic-defend/host-isolation-exceptions.md) page to add specific IP addresses that isolated hosts can still communicate with. | +| **Blocklist** | Access the [Blocklist](/solutions/security/manage-elastic-defend/blocklist.md) page to prevent specified applications from running on hosts, extending the list of processes that {{elastic-defend}} considers malicious. | +| **Event Filters** | Access the [Event Filters](/solutions/security/manage-elastic-defend/event-filters.md) page to filter out endpoint events that you don’t want stored in {{es}}. | +| **{{elastic-defend}} Policy Management** | Access the [Policies](/solutions/security/manage-elastic-defend/policies.md) page and {{elastic-defend}} integration policies to configure protections, event collection, and advanced policy features. | +| **Response Actions History** | Access the [response actions history](/solutions/security/endpoint-response-actions/response-actions-history.md) for endpoints. 
| +| **Host Isolation** | Allow users to [isolate and release hosts](/solutions/security/endpoint-response-actions/isolate-host.md). | +| **Process Operations** | Perform host process-related [response actions](/solutions/security/endpoint-response-actions.md), including `processes`, `kill-process`, and `suspend-process`. | +| **File Operations** | Perform file-related [response actions](/solutions/security/endpoint-response-actions.md) in the response console. | +| **Execute Operations** | Perform shell commands and script-related [response actions](/solutions/security/endpoint-response-actions.md) in the response console.

::::{warning}
The commands are run on the host using the same user account running the {{elastic-defend}} integration, which normally has full control over the system. Only grant this feature privilege to {{elastic-sec}} users who require this level of access.
::::

| +| **Scan Operations** | Perform folder scan [response actions](/solutions/security/endpoint-response-actions.md) in the response console. | + + +## Upgrade considerations [_upgrade_considerations] + +After upgrading from {{elastic-sec}} 8.6 or earlier, existing user roles will be assigned **None** by default for any new endpoint management feature privileges, and you’ll need to explicitly assign them. However, many features previously required the built-in `superuser` role, and users who previously had this role will still have it after upgrading. + +You’ll probably want to replace the broadly permissive `superuser` role with more focused feature-based privileges to ensure that users have access to only the specific features that they need. Refer to [{{kib}} role management](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for more details on assigning roles and privileges. diff --git a/reference/security/elastic-defend/index.md b/reference/security/elastic-defend/index.md new file mode 100644 index 0000000000..f317bcbebc --- /dev/null +++ b/reference/security/elastic-defend/index.md @@ -0,0 +1,8 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/endpoint-protection-intro.html +--- + +# Elastic Defend [endpoint-protection-intro] + +This section contains information on installing and configuring {{elastic-defend}} for endpoint protection. diff --git a/reference/security/elastic-defend/install-endpoint.md b/reference/security/elastic-defend/install-endpoint.md new file mode 100644 index 0000000000..e80af36890 --- /dev/null +++ b/reference/security/elastic-defend/install-endpoint.md @@ -0,0 +1,119 @@ +--- +navigation_title: "Install {{elastic-defend}}" +mapped_pages: + - https://www.elastic.co/guide/en/security/current/install-endpoint.html +--- + +# Install the {{elastic-defend}} integration [install-endpoint] + + +Like other Elastic integrations, {{elastic-defend}} is integrated into the {{agent}} using [{{fleet}}](/reference/ingestion-tools/fleet/index.md). Upon configuration, the integration allows the {{agent}} to monitor events on your host and send data to the {{security-app}}. + +::::{admonition} Requirements +* {{fleet}} is required for {{elastic-defend}}. +* To configure the {{elastic-defend}} integration on the {{agent}}, you must have permission to use {{fleet}} in {{kib}}. +* You must have the **{{elastic-defend}} Policy Management : All** [privilege](/reference/security/elastic-defend/endpoint-management-req.md) to configure an integration policy, and the **Endpoint List** [privilege](/reference/security/elastic-defend/endpoint-management-req.md) to access the **Endpoints** page. + +:::: + + + +## Before you begin [security-before-you-begin] + +If you’re using macOS, some versions may require you to grant Full Disk Access to different kernels, system extensions, or files. Refer to [*{{elastic-defend}} requirements*](/reference/security/elastic-defend/elastic-endpoint-deploy-reqs.md) for more information. + +::::{note} +{{elastic-defend}} does not support deployment within an {{agent}} DaemonSet in Kubernetes. +:::: + + + +## Add the {{elastic-defend}} integration [add-security-integration] + +1. Find **Integrations** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search). + + :::{image} ../../../images/security-endpoint-cloud-sec-integrations-page.png + :alt: Search result for "{{elastic-defend}}" on the Integrations page. + :class: screenshot + ::: + +2. 
Search for and select **{{elastic-defend}}**, then select **Add {{elastic-defend}}**. The integration configuration page appears. + + ::::{note} + If this is the first integration you’ve installed and the **Ready to add your first integration?** page appears instead, select **Add integration only (skip agent installation)** to proceed. You can [install {{agent}}](#enroll-agent) after setting up the {{elastic-defend}} integration. + :::: + + + :::{image} ../../../images/security-endpoint-cloud-security-configuration.png + :alt: Add {{elastic-defend}} integration page + :class: screenshot + ::: + +3. Configure the {{elastic-defend}} integration with an **Integration name** and optional **Description**. +4. Select the type of environment you want to protect, either **Traditional Endpoints** or **Cloud Workloads**. +5. Select a configuration preset. Each preset comes with different default settings for {{agent}} — you can further customize these later by [configuring the {{elastic-defend}} integration policy](/reference/security/elastic-defend/configure-endpoint-integration-policy.md). + + | | | + | --- | --- | + | **Traditional Endpoint presets** | All traditional endpoint presets *except **Data Collection*** have these preventions enabled by default: malware, ransomware, memory threat, malicious behavior, and credential theft. Each preset collects the following events:

* **Data Collection:** All events; no preventions
* **Next-Generation Antivirus (NGAV):** Process events; all preventions
* **Essential EDR (Endpoint Detection & Response):** Process, Network, File events; all preventions
* **Complete EDR (Endpoint Detection & Response):** All events; all preventions
| + | **Cloud Workloads presets** | Both cloud workload presets are intended for monitoring cloud-based Linux hosts. Therefore, [session data](/solutions/security/investigate/session-view.md) collection, which enriches process events, is enabled by default. They both have all preventions disabled by default, and collect process, network, and file events.

* **All events:** Includes data from automated sessions.
* **Interactive only:** Filters out data from non-interactive sessions by creating an [event filter](/solutions/security/manage-elastic-defend/event-filters.md).
|
+6. Enter a name for the agent policy in **New agent policy name**. If other agent policies already exist, you can click the **Existing hosts** tab and select an existing policy instead. For more details on {{agent}} configuration settings, refer to [{{agent}} policies](/reference/ingestion-tools/fleet/agent-policy.md).
+7. When you’re ready, click **Save and continue**.
+8. To complete the integration, select **Add {{agent}} to your hosts** and continue to the next section to install the {{agent}} on your hosts.
+
+
+## Configure and enroll the {{agent}} [enroll-security-agent]
+
+To enable the {{elastic-defend}} integration, you must enroll agents in the relevant policy using {{fleet}}.
+
+::::{important}
+Before you add an {{agent}}, a {{fleet-server}} must be running. Refer to [Add a {{fleet-server}}](/reference/ingestion-tools/fleet/deployment-models.md).
+
+{{elastic-defend}} cannot be integrated with an {{agent}} in standalone mode.
+
+::::
+
+
+
+### Important information about {{fleet-server}} [fleet-server-upgrade]
+
+::::{note}
+If you are running an {{stack}} version earlier than 7.13.0, you can skip this section.
+::::
+
+
+If you have upgraded to an {{stack}} version that includes {{fleet-server}} 7.13.0 or newer, you will need to redeploy your agents. Review the following scenarios to ensure you take the appropriate steps.
+
+* If you redeploy the {{agent}} to the same machine through the {{fleet}} application after you upgrade, a new agent will appear.
+* If you want to remove the {{agent}} entirely without transitioning to the {{fleet-server}}, then you will need to manually uninstall the {{agent}} on the machine. This will also uninstall the endpoint. Refer to [Uninstall Elastic Agent](/reference/ingestion-tools/fleet/uninstall-elastic-agent.md).
+* In the rare event that the {{agent}} fails to uninstall, you might need to manually uninstall the endpoint. Refer to [Uninstall an endpoint](/reference/security/elastic-defend/uninstall-agent.md#uninstall-endpoint).
+
+
+### Add the {{agent}} [enroll-agent]
+
+1. If you’re in the process of installing an {{agent}} integration (such as {{elastic-defend}}), the **Add agent** UI opens automatically. Otherwise, find **{{fleet}}** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), and select **Agents** → **Add agent**.
+
+    :::{image} ../../../images/security-endpoint-cloud-sec-add-agent.png
+    :alt: Add agent flyout on the Fleet page.
+    :class: screenshot
+    :::
+
+2. Select an agent policy for the {{agent}}. You can select an existing policy, or select **Create new agent policy** to create a new one. For more details on {{agent}} configuration settings, refer to [{{agent}} policies](/reference/ingestion-tools/fleet/agent-policy.md).
+
+    The selected agent policy should include the integration you want to install on the hosts covered by the agent policy (in this example, {{elastic-defend}}).
+
+    :::{image} ../../../images/security-endpoint-cloud-sec-add-agent-detail.png
+    :alt: Add agent flyout with {{elastic-defend}} integration highlighted.
+    :class: screenshot
+    :::
+
+3. Ensure that the **Enroll in {{fleet}}** option is selected. {{elastic-defend}} cannot be integrated with {{agent}} in standalone mode.
+4. Select the appropriate platform or operating system for the host, then copy the provided commands.
+5. On the host, open a command-line interface and navigate to the directory where you want to install {{agent}}.
Paste and run the commands from {{fleet}} to download, extract, enroll, and start {{agent}}.
+6. (Optional) Return to the **Add agent** flyout in {{fleet}}, and observe the **Confirm agent enrollment** and **Confirm incoming data** steps automatically checking the host connection. It may take a few minutes for data to arrive in {{es}}.
+7. After you have enrolled the {{agent}} on your host, you can click **View enrolled agents** to access the list of agents enrolled in {{fleet}}. Otherwise, select **Close**.
+
+    The host will now appear on the **Endpoints** page in the {{security-app}}. It may take another minute or two for endpoint data to appear in {{elastic-sec}}.
+
+8. For macOS, continue with [these instructions](/reference/security/elastic-defend/deploy-elastic-endpoint.md) to grant {{elastic-endpoint}} the required permissions.
diff --git a/reference/security/elastic-defend/linux-file-monitoring.md b/reference/security/elastic-defend/linux-file-monitoring.md
new file mode 100644
index 0000000000..9c259825c7
--- /dev/null
+++ b/reference/security/elastic-defend/linux-file-monitoring.md
@@ -0,0 +1,98 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/linux-file-monitoring.html
+---
+
+# Configure Linux file system monitoring [linux-file-monitoring]
+
+By default, {{elastic-defend}} monitors specific Linux file system types that Elastic has tested for compatibility. If your network includes nonstandard, proprietary, or otherwise unrecognized Linux file systems, you can configure the integration policy to extend monitoring and protections to those additional file systems. You can also have {{elastic-defend}} ignore unrecognized file system types if they don’t require monitoring or if they cause unexpected problems.
+
+::::{warning}
+Ignoring file systems can create gaps in your security coverage. Use additional security layers for any file systems ignored by {{elastic-defend}}.
+::::
+
+
+To monitor or ignore additional file systems, configure the following advanced settings related to **fanotify**, a Linux feature that monitors file system events. Find **Policies** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), click a policy’s name, then scroll down and select **Show advanced settings**.
+
+::::{note}
+Even when configured to monitor all file systems (`ignore_unknown_filesystems` is `false`), {{elastic-endpoint}} will still ignore specific file systems that Elastic has internally identified as incompatible. The following settings apply to any *other* file systems.
+::::
+
+
+$$$ignore-unknown-filesystems$$$
+
+`linux.advanced.fanotify.ignore_unknown_filesystems`
+: Determines whether to ignore unrecognized file systems. Enter one of the following:
+
+    * `true`: (Default) Monitor only Elastic-tested file systems, and ignore all others. You can still monitor or ignore specific file systems with `monitored_filesystems` and `ignored_filesystems`, respectively.
+    * `false`: Monitor all file systems. You can still ignore specific file systems with `ignored_filesystems`.
+
+    ::::{note}
+    If you’ve upgraded from 8.3 or earlier, this value will be `false` for backwards compatibility. If you don’t need to monitor additional file systems, it’s recommended to change `ignore_unknown_filesystems` to `true` after upgrading.
+    ::::
+
+
+$$$monitored-filesystems$$$
+
+`linux.advanced.fanotify.monitored_filesystems`
+: Specifies additional file systems to monitor.
Enter a comma-separated list of [file system names](#find-file-system-names) as they appear in `/proc/filesystems` (for example: `jfs,ufs,ramfs`).
+
+    ::::{note}
+    It’s recommended to avoid monitoring network-backed file systems.
+    ::::
+
+
+    This setting isn’t recognized if `ignore_unknown_filesystems` is `false`, since that would mean you’re already monitoring *all* file systems.
+
+    Entries in this setting are overridden by entries in `ignored_filesystems`.
+
+
+$$$ignored-filesystems$$$
+
+`linux.advanced.fanotify.ignored_filesystems`
+: Specifies additional file systems to ignore. Enter a comma-separated list of [file system names](#find-file-system-names) as they appear in `/proc/filesystems` (for example: `ext4,tmpfs`).
+
+    Entries in this setting override entries in `monitored_filesystems`.
+
+
+## Find file system names [find-file-system-names]
+
+This section provides a few ways to determine the file system names needed for `linux.advanced.fanotify.monitored_filesystems` and `linux.advanced.fanotify.ignored_filesystems`.
+
+In a typical setup, when you install {{agent}}, {{filebeat}} is installed alongside {{elastic-endpoint}} and will automatically ship {{elastic-endpoint}} logs to {{es}}. When an event occurs, {{elastic-endpoint}} generates a log message about the file that was scanned.
+
+To find the file system name:
+
+1. Find **Hosts** in the navigation menu, or search for `Security/Explore/Hosts` by using the [global search field](/get-started/the-stack.md#kibana-navigation-search).
+2. From the Hosts page, search for `message: "Current sync path"` to reveal the file path.
+3. If you have access to the endpoint, run `findmnt -o FSTYPE -T <file path>` to return the file system. For example:
+
+    ```shell
+    > findmnt -o FSTYPE -T /etc/passwd
+    FSTYPE
+    ext4
+    ```
+
+    This returns the file system name as `ext4`.
+
+
+Alternatively, you can also find the file system name by correlating data from two other log messages:
+
+1. Search the logs for `message: "Current fdinfo"` to reveal the `mnt_id` value of the file path. In this example, the `mnt_id` value is `29`:
+
+    ```shell
+    pos:    12288
+    flags:  02500002
+    mnt_id: 29
+    ino:    2367737
+    ```
+
+2. Search the logs for `message: "Current mountinfo"` to reveal the file system that corresponds to the `mnt_id` value you found in the previous step:
+
+    ```shell
+    29 1 8:2 / / rw,relatime shared:1 - ext4 /dev/sda2 rw,errors=remount-ro
+    ```
+
+    The first number, `29`, is the `mnt_id`, and the first field after the hyphen (`-`) is the file system name, `ext4`.
diff --git a/reference/security/elastic-defend/offline-endpoint.md b/reference/security/elastic-defend/offline-endpoint.md
new file mode 100644
index 0000000000..430885f057
--- /dev/null
+++ b/reference/security/elastic-defend/offline-endpoint.md
@@ -0,0 +1,211 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/offline-endpoint.html
+---
+
+# Configure offline endpoints and air-gapped environments [offline-endpoint]
+
+By default, {{elastic-endpoint}} continuously defends against the latest threats by automatically downloading global artifact updates from [https://artifacts.security.elastic.co](https://artifacts.security.elastic.co). When running {{elastic-endpoint}} in a restricted network, you can set up a local mirror server to proxy updates to endpoints that cannot access `elastic.co` URLs directly.
+
+* If your endpoints cannot access the internet directly, set up a local HTTP mirror server.
Refer to [Host an {{elastic-endpoint}} artifact mirror](offline-endpoint.md#artifact-mirror).
+* If your endpoints are running in an air-gapped environment, set up a local HTTP server and manually copy global artifact updates. Refer to [Host an air-gapped {{elastic-endpoint}} artifact server](offline-endpoint.md#air-gapped-artifact-server).
+
+
+## Host an {{elastic-endpoint}} artifact mirror [artifact-mirror]
+
+You can deploy your own {{elastic-endpoint}} global artifact mirror to enable endpoints to update their global artifacts automatically through another server acting as a proxy. This allows endpoints to get updates even when they can’t directly access the internet.
+
+Complete these steps:
+
+1. Deploy an HTTP reverse proxy server.
+2. Configure {{elastic-endpoint}} to read from the proxy server.
+
+
+### Step 1: Deploy an HTTP reverse proxy server [_step_1_deploy_an_http_reverse_proxy_server]
+
+Set up and configure an HTTP reverse proxy to forward requests to [https://artifacts.security.elastic.co](https://artifacts.security.elastic.co) and include response headers from the elastic.co server when proxying.
+
+::::{important}
+The entity tag (`ETag`) header is a mandatory HTTP response header that you *must* set in your server configuration file. {{elastic-endpoint}} uses the `ETag` header to determine whether your global artifacts have been updated since they were last downloaded. If your server configuration file does not contain an `ETag` header, {{elastic-endpoint}} won’t download new artifacts when they’re available.
+::::
+
+
+:::::{dropdown} *Example: Nginx*
+This example script starts an Nginx Docker image and configures it to proxy artifacts:
+
+```sh
+cat > nginx.conf << EOF
+server {
+    location / {
+        proxy_pass https://artifacts.security.elastic.co;
+    }
+}
+EOF
+docker run -v "$PWD"/nginx.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 nginx
+```
+
+::::{important}
+This example script is not appropriate for production environments. We recommend configuring the Nginx server to use [TLS](http://nginx.org/en/docs/http/configuring_https_servers.html) according to your IT policies. Refer to [Nginx documentation](https://docs.nginx.com/nginx/admin-guide/installing-nginx/) for more information on downloading and configuring Nginx.
+::::
+
+
+:::::
+
+
+:::::{dropdown} *Example: Apache HTTPD*
+This example script starts an Apache httpd Docker image and configures it to proxy artifacts:
+
+```sh
+docker run --rm httpd cat /usr/local/apache2/conf/httpd.conf > httpd.conf
+cat >> httpd.conf << EOF
+LoadModule proxy_module modules/mod_proxy.so
+LoadModule proxy_http_module modules/mod_proxy_http.so
+LoadModule ssl_module modules/mod_ssl.so
+
+SSLProxyEngine on
+ServerName localhost
+ProxyPass / https://artifacts.security.elastic.co/
+ProxyPassReverse / https://artifacts.security.elastic.co/
+EOF
+docker run -p 80:80 -v "$PWD"/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
+```
+
+::::{important}
+This example script is not appropriate for production environments. We recommend configuring httpd to use [TLS](https://httpd.apache.org/docs/trunk/ssl/ssl_howto.html) according to your IT policies. Refer to [Apache documentation](https://httpd.apache.org) for more information on downloading and configuring Apache httpd.
+::::
+
+
+:::::
+
+
+
+### Step 2: Configure {{elastic-endpoint}} [_step_2_configure_elastic_endpoint]
+
+Set the `advanced.artifacts.global.base_url` advanced setting for each [{{elastic-defend}} integration policy](configure-endpoint-integration-policy.md) that needs to use the mirror. Note that there’s a separate setting for each operating system:
+
+* `linux.advanced.artifacts.global.base_url`
+* `mac.advanced.artifacts.global.base_url`
+* `windows.advanced.artifacts.global.base_url`
+
+:::{image} ../../../images/security-offline-adv-settings.png
+:alt: Integration policy advanced settings
+:class: screenshot
+:::
+
+
+## Host an air-gapped {{elastic-endpoint}} artifact server [air-gapped-artifact-server]
+
+If {{elastic-endpoint}} needs to operate completely offline in a closed network, you can set up a mirror server and manually update it with new artifact updates regularly.
+
+Complete these steps:
+
+1. Deploy an HTTP file server.
+2. Configure {{elastic-endpoint}} to read from the file server.
+3. Manually copy artifact updates to the file server.
+
+
+### Step 1: Deploy an HTTP file server [_step_1_deploy_an_http_file_server]
+
+Deploy an HTTP file server to serve files from a local directory, which will be filled with artifact update files in a later step.
+
+::::{important}
+The entity tag (`ETag`) header is a mandatory HTTP response header that you *must* set in your server configuration file. {{elastic-endpoint}} uses the `ETag` header to determine whether your global artifacts have been updated since they were last downloaded. If your server configuration file does not contain an `ETag` header, {{elastic-endpoint}} won’t download new artifacts when they’re available.
+::::
+
+
+:::::{dropdown} *Example: Nginx*
+This example script starts an Nginx Docker image and configures it as a file server:
+
+```sh
+cat > nginx.conf << 'EOF'
+# set compatible etag format
+map $sent_http_etag $elastic_etag {
+    "~(.*)-(.*)" "$1$2";
+}
+server {
+    root /app/static;
+    location / {
+        add_header ETag "$elastic_etag";
+    }
+}
+EOF
+docker run -v "$PWD"/nginx.conf:/etc/nginx/conf.d/default.conf:ro -v "$PWD"/static:/app/static:ro -p 80:80 nginx
+```
+
+::::{important}
+This example script is not appropriate for production environments. We recommend configuring the Nginx server to use [TLS](http://nginx.org/en/docs/http/configuring_https_servers.html) according to your IT policies. Refer to [Nginx documentation](https://docs.nginx.com/nginx/admin-guide/installing-nginx/) for more information on downloading and configuring Nginx.
+::::
+
+
+:::::
+
+
+:::::{dropdown} *Example: Apache HTTPD*
+This example script starts an Apache httpd Docker image and configures it as a file server:
+
+```sh
+docker run --rm httpd cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf
+cat >> my-httpd.conf << 'EOF'
+# set compatible etag format
+FileETag MTime
+EOF
+docker run -p 80:80 -v "$PWD/static":/usr/local/apache2/htdocs/ -v "$PWD"/my-httpd.conf:/usr/local/apache2/conf/httpd.conf:ro httpd
+```
+
+::::{important}
+This example script is not appropriate for production environments. We recommend configuring httpd to use [TLS](https://httpd.apache.org/docs/trunk/ssl/ssl_howto.html) according to your IT policies. Refer to [Apache documentation](https://httpd.apache.org) for more information on downloading and configuring Apache httpd.
+::::
+
+
+:::::
+
+
+
+### Step 2: Configure {{elastic-endpoint}} [_step_2_configure_elastic_endpoint_2]
+
+Set the `advanced.artifacts.global.base_url` advanced setting for each [{{elastic-defend}} integration policy](configure-endpoint-integration-policy.md) that needs to use the mirror. Note that there’s a separate setting for each operating system:
+
+* `linux.advanced.artifacts.global.base_url`
+* `mac.advanced.artifacts.global.base_url`
+* `windows.advanced.artifacts.global.base_url`
+
+:::{image} ../../../images/security-offline-adv-settings.png
+:alt: Integration policy advanced settings
+:class: screenshot
+:::
+
+
+### Step 3: Manually copy artifact updates [_step_3_manually_copy_artifact_updates]
+
+Download the most recent artifact files from the Elastic global artifact server, then copy those files to the server instance you created in step 1.
+
+Below is an example script that downloads all the global artifact updates. There are different artifact files for each version of {{elastic-endpoint}}. Change the value of the `ENDPOINT_VERSION` variable in the example script to match the deployed version of {{elastic-endpoint}}.
+
+```sh
+export ENDPOINT_VERSION=9.0.0-beta1 && wget -P downloads/endpoint/manifest https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip && zcat -q downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip | jq -r '.artifacts | to_entries[] | .value.relative_url' | xargs -I@ curl "https://artifacts.security.elastic.co@" --create-dirs -o ".@"
+```
+
+This command downloads the artifact files and directory structure, which should be copied directly to the file server.
+
+Elastic releases updates continuously as detection engines are improved. Therefore, we recommend updating air-gapped environments at least monthly to stay current with artifact updates.
+
+
+## Validate your self-hosted artifact server [validate-artifact-server]
+
+Each new global artifact update release increments a version identifier that you can check to ensure that {{elastic-endpoint}} has received and installed the latest version.
+
+To confirm the latest artifact version for a given {{elastic-endpoint}} release, check the published manifest version. This example script checks the version:
+
+```sh
+curl -s https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-9.0.0-beta1.zip | zcat -q | jq -r .manifest_version
+```
+
+Replace `https://artifacts.security.elastic.co` in the command above with your local mirror server to validate that the artifacts are served correctly.
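+
+For example, assuming a hypothetical mirror reachable at `http://artifact-mirror.local` (substitute your own server’s address and scheme), the same check against the mirror might look like this:
+
+```sh
+# Query the local mirror instead of artifacts.security.elastic.co and print the
+# manifest version it serves; it should match the version published by Elastic.
+curl -s http://artifact-mirror.local/downloads/endpoint/manifest/artifacts-9.0.0-beta1.zip | zcat -q | jq -r .manifest_version
+```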
+ +After updating the {{elastic-endpoint}} configuration to read from the mirror server, use {{kib}}'s [Discover view](/explore-analyze/discover.md) to search the `metrics-*` data view for `endpoint.policy` response documents, then check the installed version (`Endpoint.policy.applied.artifacts.global.version`) and compare with the output from the command above: + +:::{image} ../../../images/security-offline-endpoint-version-discover.png +:alt: Searching for `endpoint.policy` in Discover +:class: screenshot +::: + diff --git a/reference/security/elastic-defend/self-healing-rollback.md b/reference/security/elastic-defend/self-healing-rollback.md new file mode 100644 index 0000000000..aa5fa9289c --- /dev/null +++ b/reference/security/elastic-defend/self-healing-rollback.md @@ -0,0 +1,25 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/self-healing-rollback.html +--- + +# Configure self-healing rollback for Windows endpoints [self-healing-rollback] + +{{elastic-defend}}'s self-healing feature rolls back file changes on Windows endpoints when a prevention alert is generated by enabled protection features. File changes that occurred on the host within five minutes before the prevention alert will revert to their previous state (which may be up to two hours before the alert). + +This can help contain the impact of malicious activity, as {{elastic-defend}} not only stops the activity but also erases any attack artifacts deployed prior to detection. + +Self-healing rollback is a [Platinum or Enterprise subscription](https://www.elastic.co/pricing) feature and is only supported for Windows endpoints. + +::::{warning} +This feature can cause permanent data loss since it overwrites recent changes and deletes recently added files on the host. Self-healing rollback targets the changes related to a detected threat, but may also include incidental actions that aren’t directly related to the threat. + +Also, rollback is triggered by *every* {{elastic-defend}} prevention alert, so you should tune your system to eliminate false positives before enabling this feature. + +:::: + + +1. Find **Policies** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), then select the integration policy you want to configure. +2. Scroll down to the bottom of the policy and click **Show advanced settings**. +3. Enter `true` for the setting `windows.advanced.alerts.rollback.self_healing.enabled`. +4. Click **Save**. diff --git a/reference/security/elastic-defend/uninstall-agent.md b/reference/security/elastic-defend/uninstall-agent.md new file mode 100644 index 0000000000..0b6f318206 --- /dev/null +++ b/reference/security/elastic-defend/uninstall-agent.md @@ -0,0 +1,78 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/uninstall-agent.html +--- + +# Uninstall Elastic Agent [uninstall-agent] + +To uninstall {{agent}} from a host, run the `uninstall` command from the directory where it’s running. Refer to the [{{fleet}} and {{agent}} documentation](/reference/ingestion-tools/fleet/uninstall-elastic-agent.md) for more information. + +If [Agent tamper protection](/reference/security/elastic-defend/agent-tamper-protection.md) is enabled on the Agent policy for the host, you’ll need to include the uninstall token in the command, using the `--uninstall-token` flag. You can [find the uninstall token](/reference/security/elastic-defend/agent-tamper-protection.md#fleet-uninstall-tokens) on the Agent policy. 
Alternatively, find **{{fleet}}** in the navigation menu or by using the [global search field](/get-started/the-stack.md#kibana-navigation-search), and select **Uninstall tokens**.
+
+For example, to uninstall {{agent}} on a macOS or Linux host:
+
+```shell
+sudo elastic-agent uninstall --uninstall-token 12345678901234567890123456789012
+```
+
+
+## Provide multiple uninstall tokens [multiple-uninstall-tokens]
+
+If you have multiple tamper-protected {{agent}} policies, you may want to provide multiple uninstall tokens in a single command. There are two ways to do this:
+
+* The `--uninstall-token` flag can receive multiple uninstall tokens separated by a comma, without spaces.
+
+    ```shell
+    sudo elastic-agent uninstall -f --uninstall-token 7b3d364db8e0deb1cda696ae85e42644,a7336b71e243e7c92d9504b04a774266
+    ```
+
+* `--uninstall-token`'s argument can also be a path to a text file with one uninstall token per line.
+
+    ::::{note}
+    You must use the full file path, otherwise the file may not be found.
+    ::::
+
+
+    ```shell
+    sudo elastic-agent uninstall -f --uninstall-token /tmp/tokens.txt
+    ```
+
+    In this example, `tokens.txt` would contain:
+
+    ```txt
+    7b3d364db8e0deb1cda696ae85e42644
+    a7336b71e243e7c92d9504b04a774266
+    ```
+
+
+
+## Uninstall {{elastic-endpoint}} [uninstall-endpoint]
+
+Use these commands to uninstall {{elastic-endpoint}} from a host **ONLY** if [uninstalling an {{agent}}](/reference/ingestion-tools/fleet/uninstall-elastic-agent.md) is unsuccessful.
+
+Windows
+
+```shell
+cd %TEMP%
+copy "c:\Program Files\Elastic\Endpoint\elastic-endpoint.exe" elastic-endpoint.exe
+.\elastic-endpoint.exe uninstall
+del .\elastic-endpoint.exe
+```
+
+macOS
+
+```shell
+cd /tmp
+cp /Library/Elastic/Endpoint/elastic-endpoint elastic-endpoint
+sudo ./elastic-endpoint uninstall
+rm elastic-endpoint
+```
+
+Linux
+
+```shell
+cd /tmp
+cp /opt/Elastic/Endpoint/elastic-endpoint elastic-endpoint
+sudo ./elastic-endpoint uninstall
+rm elastic-endpoint
+```
diff --git a/reference/security/fields-and-object-schemas/alert-schema.md b/reference/security/fields-and-object-schemas/alert-schema.md
new file mode 100644
index 0000000000..a69e63e2ec
--- /dev/null
+++ b/reference/security/fields-and-object-schemas/alert-schema.md
@@ -0,0 +1,139 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/alert-schema.html
+  - https://www.elastic.co/guide/en/serverless/current/security-alert-schema.html
+---
+
+# Alert schema [alert-schema]
+
+{{elastic-sec}} stores alerts that have been generated by detection rules in hidden {{es}} indices. The index pattern is `.alerts-security.alerts-<space-id>`.
+
+::::{note}
+Users are advised NOT to use the `_source` field in alert documents, but rather to use the `fields` option in the search API to programmatically obtain the list of fields used in these documents. Learn more about [retrieving selected fields from a search](elasticsearch://docs/reference/elasticsearch/rest-apis/retrieve-selected-fields.md).
+::::
+
+
+::::{note}
+The non-ECS fields listed below are beta and subject to change.
+::::
+
+
+| Alert field | Description |
+| --- | --- |
+| [`@timestamp`](ecs://docs/reference/ecs-base.md#field-timestamp) | ECS field, represents the time when the alert was created or most recently updated. |
+| [`message`](ecs://docs/reference/ecs-base.md#field-message) | ECS field copied from the source document, if present, for custom query and indicator match rules.
| +| [`tags`](ecs://docs/reference/ecs-base.md#field-tags) | ECS field copied from the source document, if present, for custom query and indicator match rules. | +| [`labels`](ecs://docs/reference/ecs-base.md#field-labels) | ECS field copied from the source document, if present, for custom query and indicator match rules. | +| [`ecs.version`](ecs://docs/reference/ecs-ecs.md#field-ecs-version) | ECS mapping version of the alert. | +| [`event.kind`](ecs://docs/reference/ecs-allowed-values-event-kind.md) | ECS field, always `signal` for alert documents. | +| [`event.category`](ecs://docs/reference/ecs-allowed-values-event-category.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`event.type`](ecs://docs/reference/ecs-allowed-values-event-type.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`event.outcome`](ecs://docs/reference/ecs-allowed-values-event-outcome.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`agent.*`](ecs://docs/reference/ecs-agent.md) | ECS `agent.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`client.*`](ecs://docs/reference/ecs-client.md) | ECS `client.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`cloud.*`](ecs://docs/reference/ecs-cloud.md) | ECS `cloud.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`container.*`](ecs://docs/reference/ecs-container.md) | ECS `container.* fields` copied from the source document, if present, for custom query and indicator match rules. | +| [`data_stream.*`](ecs://docs/reference/ecs-data_stream.md) | ECS `data_stream.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields may be constant keywords in the source documents, but are copied into the alert documents as keywords. | +| [`destination.*`](ecs://docs/reference/ecs-destination.md) | ECS `destination.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`dll.*`](ecs://docs/reference/ecs-dll.md) | ECS `dll.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`dns.*`](ecs://docs/reference/ecs-dns.md) | ECS `dns.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`error.*`](ecs://docs/reference/ecs-error.md) | ECS `error.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`event.*`](ecs://docs/reference/ecs-event.md) | ECS `event.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: categorization fields above (`event.kind`, `event.category`, `event.type`, `event.outcome`) are listed separately above. | +| [`file.*`](ecs://docs/reference/ecs-file.md) | ECS `file.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`group.*`](ecs://docs/reference/ecs-group.md) | ECS `group.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`host.*`](ecs://docs/reference/ecs-host.md) | ECS `host.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`http.*`](ecs://docs/reference/ecs-http.md) | ECS `http.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`log.*`](ecs://docs/reference/ecs-log.md) | ECS `log.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`network.*`](ecs://docs/reference/ecs-network.md) | ECS `network.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`observer.*`](ecs://docs/reference/ecs-observer.md) | ECS `observer.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`orchestrator.*`](ecs://docs/reference/ecs-orchestrator.md) | ECS `orchestrator.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`organization.*`](ecs://docs/reference/ecs-organization.md) | ECS `organization.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`package.*`](ecs://docs/reference/ecs-package.md) | ECS `package.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`process.*`](ecs://docs/reference/ecs-process.md) | ECS `process.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`registry.*`](ecs://docs/reference/ecs-registry.md) | ECS `registry.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`related.*`](ecs://docs/reference/ecs-related.md) | ECS `related.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`rule.*`](ecs://docs/reference/ecs-rule.md) | ECS `rule.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields are not related to the detection rule that generated the alert. | +| [`server.*`](ecs://docs/reference/ecs-server.md) | ECS `server.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`service.*`](ecs://docs/reference/ecs-service.md) | ECS `service.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`source.*`](ecs://docs/reference/ecs-source.md) | ECS `source.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`span.*`](ecs://docs/reference/ecs-tracing.md#field-span-id) | ECS `span.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`threat.*`](ecs://docs/reference/ecs-threat.md) | ECS `threat.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`tls.*`](ecs://docs/reference/ecs-tls.md) | ECS `tls.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`trace.*`](ecs://docs/reference/ecs-tracing.md) | ECS `trace.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`transaction.*`](ecs://docs/reference/ecs-tracing.md#field-transaction-id) | ECS `transaction.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`url.*`](ecs://docs/reference/ecs-url.md) | ECS `url.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`user.*`](ecs://docs/reference/ecs-user.md) | ECS `user.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`user_agent.*`](ecs://docs/reference/ecs-user_agent.md) | ECS `user_agent.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`vulnerability.*`](ecs://docs/reference/ecs-vulnerability.md) | ECS `vulnerability.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| `kibana.alert.ancestors.*` | Type: object | +| `kibana.alert.depth` | Type: Long | +| `kibana.alert.new_terms` | The value of the new term that generated this alert.
Type: keyword | +| `kibana.alert.original_event.*` | Type: object | +| `kibana.alert.original_time` | The value copied from the source event (`@timestamp`).
Type: date | +| `kibana.alert.reason` | Type: keyword | +| `kibana.alert.rule.author` | The value of the `author` who created the rule. Refer to [configure advanced rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params).
Type: keyword | +| `kibana.alert.building_block_type` | The value of `building_block_type` from the rule that generated this alert. Refer to [configure advanced rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params).
Type: keyword | +| `kibana.alert.rule.created_at` | The value of `created.at` from the rule that generated this alert.
Type: date |
+| `kibana.alert.rule.created_by` | Type: keyword |
+| `kibana.alert.rule.description` | Type: keyword |
+| `kibana.alert.rule.enabled` | Type: keyword |
+| `kibana.alert.rule.false_positives` | Type: keyword |
+| `kibana.alert.rule.from` | Type: keyword |
+| `kibana.alert.rule.uuid` | Type: keyword |
+| `kibana.alert.rule.immutable` | Type: keyword |
+| `kibana.alert.rule.interval` | Type: keyword |
+| `kibana.alert.rule.license` | Type: keyword |
+| `kibana.alert.rule.max_signals` | Type: long |
+| `kibana.alert.rule.name` | Type: keyword |
+| `kibana.alert.rule.note` | Type: keyword |
+| `kibana.alert.rule.references` | Type: keyword |
+| `kibana.alert.risk_score` | Type: float |
+| `kibana.alert.rule.rule_id` | Type: keyword |
+| `kibana.alert.rule.rule_name_override` | Type: keyword |
+| `kibana.alert.severity` | Alert severity, populated by the `rule_type` at alert creation. Must be one of `low`, `medium`, `high`, or `critical`.
Type: keyword | +| `kibana.alert.rule.tags` | Type: keyword | +| `kibana.alert.rule.threat.*` | Type: object | +| `kibana.alert.rule.timeline_id` | Type: keyword | +| `kibana.alert.rule.timeline_title` | Type: keyword | +| `kibana.alert.rule.timestamp_override` | Type: keyword | +| `kibana.alert.rule.to` | Type: keyword | +| `kibana.alert.rule.type` | Type: keyword | +| `kibana.alert.rule.updated_at` | Type: date | +| `kibana.alert.rule.updated_by` | Type: keyword | +| `kibana.alert.rule.version` | A number that represents a rule’s version.
Type: keyword | +| `kibana.alert.rule.revision` | A number that gets incremented each time you edit a rule.
Type: long | +|`kibana.alert.workflow_status` | Type: keyword | +|`kibana.alert.workflow_status_updated_at` | The timestamp of when the alert’s status was last updated.
Type: date | +| `kibana.alert.threshold_result.*` | Type: object | +| `kibana.alert.group.id` | Type: keyword | +| `kibana.alert.group.index` | Type: integer | +| `kibana.alert.rule.parameters.index` | Type: flattened | +| `kibana.alert.rule.parameters.language` | Type: flattened | +| `kibana.alert.rule.parameters.query` | Type: flattened | +| `kibana.alert.rule.parameters.risk_score_mapping` | Type: flattened | +| `kibana.alert.rule.parameters.saved_id` | Type: flattened | +| `kibana.alert.rule.parameters.severity_mapping` | Type: flattened | +| `kibana.alert.rule.parameters.threat_filters` | Type: flattened | +| `kibana.alert.rule.parameters.threat_index` | Names of the indicator indices.
Type: flattened | +| `kibana.alert.rule.parameters.threat_indicator_path` | Type: flattened | +| `kibana.alert.rule.parameters.threat_language` | Type: flattened | +| `kibana.alert.rule.parameters.threat_mapping.*` | Controls which fields will be compared in the indicator and source documents.
Type: flattened | +| `kibana.alert.rule.parameters.threat_query` | Type: flattened | +| `kibana.alert.rule.parameters.threshold.*` | Type: flattened | +| `kibana.space_ids` | Type: keyword | +| `kibana.alert.rule.consumer` | Type: keyword | +| `kibana.alert.status` | Type: keyword | +| `kibana.alert.rule.category` | Type: keyword | +| `kibana.alert.rule.execution.uuid` | Type: keyword | +| `kibana.alert.rule.producer` | Type: keyword | +| `kibana.alert.rule.rule_type_id` | Type: keyword | +| `kibana.alert.suppression.terms.field` | The fields used to group alerts for suppression.
Type: keyword | +| `kibana.alert.suppression.terms.value` | The values in the suppression fields.
Type: keyword | +| `kibana.alert.suppression.start` | The timestamp of the first document in the suppression group.
Type: date | +| `kibana.alert.suppression.end` | The timestamp of the last document in the suppression group.
Type: date | +| `kibana.alert.suppression.docs_count` | The number of suppressed alerts.
Type: long | +| `kibana.alert.url` | The shareable URL for the alert.
NOTE: This field appears only if you’ve set the [`server.publicBaseUrl`](kibana://docs/reference/configuration-reference/general-settings.md#server-publicBaseUrl) configuration setting in the `kibana.yml` file.
Type: keyword |
+| `kibana.alert.workflow_tags` | List of tags added to an alert.

This field can contain an array of values, for example: `["False Positive", "production"]`

Type: keyword
| +| `kibana.alert.workflow_assignee_ids` | List of users assigned to an alert.

An array of unique identifiers (UIDs) for user profiles, for example: `["u_1-0CcWliOCQ9T2MrK5YDjhpxZ_AcxPKt3pwaICcnAUY_0", "u_2-0CcWliOCQ9T2MrK5YDjhpxZ_AcxPKt3pwaICcnAUY_1"]`

UIDs are linked to user profiles that are automatically created when users first log into a deployment. These profiles contain names, emails, profile avatars, and other user settings.

Type: string[]
| +| `kibana.alert.intended_timestamp` | Shows the alert’s estimated timestamp, had the alert been created when the source event initially occurred. The value in this field is determined by the way the rule was run:

* **Scheduled run**: Alerts created by scheduled runs have the same timestamp as the `@timestamp` field, which shows when the alert was created.
* **Manual run**: Alerts created by manual runs have a timestamp that falls within the time range specified for the manual run. For example, if you set a rule to manually run on event data from `10/01/2024 05:00 PM` to `10/07/2024 05:00 PM`, the `kibana.alert.intended_timestamp` value will be a date and time within that range.

Type: date
| +| `kibana.alert.rule.execution.type` | Shows if an alert was created by a manual run or a scheduled run. The value can be `manual` or `scheduled`.

Type: keyword
|
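For orientation, here is a trimmed, hypothetical example of how a few of these fields might look in an alert document generated by a custom query rule (all values are illustrative, not taken from a real deployment):

```json
{
  "@timestamp": "2025-01-15T09:30:00.000Z",
  "event.kind": "signal",
  "kibana.alert.status": "active",
  "kibana.alert.workflow_status": "open",
  "kibana.alert.severity": "high",
  "kibana.alert.risk_score": 73.0,
  "kibana.alert.rule.name": "Suspicious process launch",
  "kibana.alert.rule.rule_id": "c1a2b3d4-0000-4000-8000-000000000000",
  "kibana.alert.workflow_tags": ["production"],
  "kibana.alert.original_time": "2025-01-15T09:29:58.000Z"
}
```

Each key maps to a row in the table above; real alert documents contain many more of the listed fields, typically as nested objects rather than dotted keys.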
diff --git a/reference/security/fields-and-object-schemas/index.md b/reference/security/fields-and-object-schemas/index.md
new file mode 100644
index 0000000000..23452a5f42
--- /dev/null
+++ b/reference/security/fields-and-object-schemas/index.md
@@ -0,0 +1,14 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/security-ref-intro.html
+---
+
+# Fields and object schemas [security-ref-intro]
+
+This reference section provides details on the fields {{elastic-sec}} uses to display data in the UI and {{elastic-sec}} JSON object schemas:
+
+* [Create runtime fields in Elastic Security](/reference/security/fields-and-object-schemas/runtime-fields.md)
+* [ECS fields required and/or used to analyze and display data](/reference/security/fields-and-object-schemas/siem-field-reference.md)
+* [Timeline object schema](/reference/security/fields-and-object-schemas/timeline-object-schema.md)
+* [Alert schema](/reference/security/fields-and-object-schemas/alert-schema.md)
+
diff --git a/reference/security/fields-and-object-schemas/runtime-fields.md b/reference/security/fields-and-object-schemas/runtime-fields.md
new file mode 100644
index 0000000000..1f59c29bfd
--- /dev/null
+++ b/reference/security/fields-and-object-schemas/runtime-fields.md
@@ -0,0 +1,59 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/runtime-fields.html
+---
+
+# Create runtime fields in Elastic Security [runtime-fields]
+
+Runtime fields are fields that you can add to documents after you’ve ingested your data. For example, you could combine two fields and treat them as one, or perform calculations on existing data and use the result as a separate field. Runtime fields are evaluated when a query is run.
+
+You can create a runtime field and add it to your detection alerts or events from any page that lists alerts or events in a data grid table, such as **Alerts**, **Timelines**, **Hosts**, and **Users**. Once created, the new field is added to the current [data view](/solutions/security/get-started/data-views-elastic-security.md) and becomes available to all {{elastic-sec}} alerts and events in the data view.
+
+::::{note}
+Runtime fields can impact performance because they’re evaluated each time a query runs. Refer to [Runtime fields](/manage-data/data-store/mapping/runtime-fields.md) for more information.
+::::
+
+
+To create a runtime field:
+
+1. Go to a page that lists alerts or events (for example, **Alerts** or **Timelines** → **Name of Timeline**).
+2. Do one of the following:
+
+   * In the Alerts table, click the **Fields** toolbar button in the table’s upper-left. From the **Fields** browser, click **Create field**. The **Create field** flyout opens.
+
+     :::{image} ../../../images/security-fields-browser.png
+     :alt: Fields browser
+     :class: screenshot
+     :::
+
+   * In Timeline, go to the bottom of the sidebar, then click **Add a field**. The **Create field** flyout opens.
+
+     :::{image} ../../../images/security-create-runtime-fields-timeline.png
+     :alt: Create runtime fields button in Timeline
+     :class: screenshot
+     :::
+
+3. Enter a **Name** for the new field.
+4. Select a **Type** for the field’s data type.
+5. Turn on the **Set value** toggle and enter a [Painless script](/explore-analyze/scripting/modules-scripting-painless.md) to define the field’s value. The script must match the selected **Type**. For more on adding fields and Painless scripting examples, refer to [Explore your data with runtime fields](/explore-analyze/find-and-organize/data-views.md#runtime-fields).
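   For instance, a minimal sketch of a **Set value** script for a keyword-type runtime field that combines two standard ECS fields into one value (the field names and output format here are illustrative only):

   ```painless
   // Emit a combined "host / user" value only when both source fields are present.
   if (doc['host.name'].size() > 0 && doc['user.name'].size() > 0) {
     emit(doc['host.name'].value + " / " + doc['user.name'].value);
   }
   ```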
+6. Use the **Preview** to help you build the script so it returns the expected field value.
+7. Configure other field settings as needed.
+
+   ::::{note}
+   Some runtime field settings, such as custom labels and display formats, display in other areas of {{kib}} but may not display in the {{security-app}}.
+   ::::
+
+8. Click **Save**. The new field appears as a new column in the data grid.
+
+
+## Manage runtime fields [manage-runtime-fields]
+
+You can edit or delete existing runtime fields from the **Alerts**, **Timelines**, **Hosts**, and **Users** pages.
+
+1. Click the **Fields** button to open the **Fields** browser, then search for the runtime field you want.
+
+   ::::{tip}
+   Click the **Runtime** column header twice to reorder the fields table with all runtime fields at the top.
+   ::::
+
+2. In the **Actions** column, select an option to edit or delete the runtime field.
diff --git a/reference/security/fields-and-object-schemas/siem-field-reference.md b/reference/security/fields-and-object-schemas/siem-field-reference.md
new file mode 100644
index 0000000000..b02e97c6c5
--- /dev/null
+++ b/reference/security/fields-and-object-schemas/siem-field-reference.md
@@ -0,0 +1,256 @@
+---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/current/siem-field-reference.html
+  - https://www.elastic.co/guide/en/serverless/current/security-siem-field-reference.html
+---
+
+# Elastic Security ECS field reference [siem-field-reference]
+
+This section lists the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current) fields {{elastic-sec}} uses to provide an optimal SIEM and security analytics experience to users. These fields are used to display data, provide rule previews, enable detection by prebuilt detection rules, provide context during rule triage and investigation, escalate to cases, and more.
+
+::::{important}
+We recommend you use {{agent}} integrations or {{beats}} to ship your data to {{elastic-sec}}. {{agent}} integrations and {{beats}} modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, which means data they ship to {{elastic-sec}} will automatically populate the relevant ECS fields. If you plan to use a custom implementation to map your data to ECS fields (see [how to map data to ECS](ecs://docs/reference/ecs-converting.md)), ensure the [always required fields](#siem-always-required-fields) are populated. Ideally, all relevant ECS fields should be populated as well.
+::::
+
+
+For detailed information about which ECS fields can appear in documents generated by {{elastic-endpoint}}, refer to the [Endpoint event documentation](https://github.com/elastic/endpoint-package/tree/main/custom_documentation/doc/endpoint).
+
+
+## Always required fields [siem-always-required-fields]
+
+{{elastic-sec}} requires all event and threat intelligence data to be normalized to ECS.
For proper operation, all data must contain the following ECS fields: + +* `@timestamp` +* `ecs.version` +* `event.kind` +* `event.category` +* `event.type` + + +## Fields required for process events [siem-required-process-event-fields] + +{{elastic-sec}} relies on these fields to analyze and display process data: + +* `process.name` +* `process.pid` + + +## Fields required for host events [siem-host-fields] + +{{elastic-sec}} relies on these fields to analyze and display host data: + +* `host.name` +* `host.id` + +{{elastic-sec}} may use these fields to display additional host data: + +* `cloud.instance.id` +* `cloud.machine.type` +* `cloud.provider` +* `cloud.region` +* `host.architecture` +* `host.ip` +* `host.mac` +* `host.os.family` +* `host.os.name` +* `host.os.platform` +* `host.os.version` + + +#### Authentication fields [_authentication_fields] + +{{elastic-sec}} relies on these fields and values to analyze and display host authentication data: + +* `event.category:authentication` +* `event.outcome:success` or `event.outcome:failure` + +{{elastic-sec}} may also use this field to display additional host authentication data: + +* `user.name` + + +#### Uncommon process fields [_uncommon_process_fields] + +{{elastic-sec}} relies on this field to analyze and display host uncommon process data: + +* `process.name` + +{{elastic-sec}} may also use these fields to display uncommon process data: + +* `agent.type` +* `event.action` +* `event.code` +* `event.dataset` +* `event.module` +* `process.args` +* `user.id` +* `user.name` + + +## Fields required for network events [siem-required-network-fields] + +{{elastic-sec}} relies on these fields to analyze and display network data: + +* `destination.geo.location` (required for display of [map data](/solutions/security/explore/configure-network-map-data.md)) +* `destination.ip` +* `source.geo.location` (required to display map data) +* `source.ip` + +{{elastic-sec}} may also use these fields to analyze and display network data: + +* `destination.as.number` +* `destination.as.organization.name` +* `destination.bytes` +* `destination.domain` +* `destination.geo.country_iso_code` +* `source.as.number` +* `source.as.organization.name` +* `source.bytes` +* `source.domain` +* `source.geo.country_iso_code` + + +#### DNS query fields [_dns_query_fields] + +{{elastic-sec}} relies on these fields to analyze and display DNS data: + +* `dns.question.name` +* `dns.question.registered_domain` + +{{elastic-sec}} may also use this field to display DNS data: + +* `dns.question.type` + + ::::{note} + If you want to be able to filter out PTR records, make sure relevant events have `dns.question.type` fields with values of `PTR`. + :::: + + + +#### HTTP request fields [_http_request_fields] + +{{elastic-sec}} relies on these fields to analyze and display HTTP request data: + +* `http.request.method` +* `http.response.status_code` +* `url.domain` +* `url.path` + + +#### TLS fields [_tls_fields] + +{{elastic-sec}} relies on this field to analyze and display TLS data: + +* `tls.server.hash.sha1` + +{{elastic-sec}} may also use these fields to analyze and display TLS data: + +* `tls.server.issuer` +* `tls.server.ja3s` +* `tls.server.not_after` +* `tls.server.subject` + + +## Fields required for events and external alerts [_fields_required_for_events_and_external_alerts] + +{{elastic-sec}} relies on this field to analyze and display event and external alert data: + +* `event.kind` + + ::::{note} + For external alerts, the `event.kind` field’s value must be `alert`. 
+ :::: + + +{{elastic-sec}} may also use these fields to analyze and display event and external alert data: + +* `destination.bytes` +* `destination.geo.city_name` +* `destination.geo.continent_name` +* `destination.geo.country_iso_code` +* `destination.geo.country_name` +* `destination.geo.region_iso_code` +* `destination.geo.region_name` +* `destination.ip` +* `destination.packets` +* `destination.port` +* `dns.question.name` +* `dns.question.type` +* `dns.resolved_ip` +* `dns.response_code` +* `event.action` +* `event.code` +* `event.created` +* `event.dataset` +* `event.duration` +* `event.end` +* `event.hash` +* `event.id` +* `event.module` +* `event.original` +* `event.outcome` +* `event.provider` +* `event.risk_score_norm` +* `event.risk_score` +* `event.severity` +* `event.start` +* `event.timezone` +* `file.ctime` +* `file.device` +* `file.extension` +* `file.gid` +* `file.group` +* `file.inode` +* `file.mode` +* `file.mtime` +* `file.name` +* `file.owner` +* `file.path` +* `file.size` +* `file.target_path` +* `file.type` +* `file.uid` +* `host.id` +* `host.ip` +* `http.request.body.bytes` +* `http.request.body.content` +* `http.request.method` +* `http.request.referrer` +* `http.response.body.bytes` +* `http.response.body.content` +* `http.response.status_code` +* `http.version` +* `message` +* `network.bytes` +* `network.community_id` +* `network.direction` +* `network.packets` +* `network.protocol` +* `network.transport` +* `pe.original_file_name` +* `process.args` +* `process.executable` +* `process.hash.md5` +* `process.hash.sha1` +* `process.hash.sha256` +* `process.name` +* `process.parent.executable` +* `process.parent.name` +* `process.pid` +* `process.ppid` +* `process.title` +* `process.working_directory` +* `rule.reference` +* `source.bytes` +* `source.geo.city_name` +* `source.geo.continent_name` +* `source.geo.country_iso_code` +* `source.geo.country_name` +* `source.geo.region_iso_code` +* `source.geo.region_name` +* `source.ip` +* `source.packets` +* `source.port` +* `user.domain` +* `user.name` + diff --git a/reference/security/fields-and-object-schemas/timeline-object-schema.md b/reference/security/fields-and-object-schemas/timeline-object-schema.md new file mode 100644 index 0000000000..d8de62a354 --- /dev/null +++ b/reference/security/fields-and-object-schemas/timeline-object-schema.md @@ -0,0 +1,143 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/timeline-object-schema.html + - https://www.elastic.co/guide/en/serverless/current/security-timeline-object-schema.html +--- + +# Timeline schema [timeline-object-schema] + +The Timeline schema lists all the JSON fields and objects required to create a Timeline or a Timeline template using the Create Timeline API. + +::::{important} +All column, dropzone, and filter fields must be [ECS fields](ecs://docs/reference/index.md). +:::: + + +This screenshot maps the Timeline UI components to their JSON objects: + +:::{image} ../../../images/security-timeline-object-ui.png +:alt: timeline object ui +:class: screenshot +::: + +1. [Title](#timeline-object-title) (`title`) +2. [Global notes](#timeline-object-global-notes) (`globalNotes`) +3. [Data view](#timeline-object-dataViewId) (`dataViewId`) +4. [KQL bar query](#timeline-object-kqlquery) (`kqlQuery`) +5. [Time filter](#timeline-object-daterange) (`dateRange`) +6. [Additional filters](#timeline-object-filters) (`filters`) +7. [KQL bar mode](#timeline-object-kqlmode) (`kqlMode`) +8. 
[Dropzone](#timeline-object-dropzone) (each clause is contained in its own `dataProviders` object)
+9. [Column headers](#timeline-object-columns) (`columns`)
+10. [Event-specific notes](#timeline-object-event-notes) (`eventNotes`)
+
+| Name | Type | Description |
+| --- | --- | --- |
+| $$$timeline-object-columns$$$`columns` | [columns[]](#col-obj) | The Timeline’s columns. |
+| `created` | Float | The time the Timeline was created, using a 13-digit Epoch timestamp. |
+| `createdBy` | String | The user who created the Timeline. |
+| $$$timeline-object-dropzone$$$`dataProviders` | [dataProviders[]](#dataProvider-obj) | Object containing dropzone query clauses. |
+| $$$timeline-object-dataViewId$$$`dataViewId` | String | ID of the Timeline’s Data View, for example: `"dataViewId":"security-solution-default"`. |
+| $$$timeline-object-daterange$$$`dateRange` | dateRange | The Timeline’s search period:

* `end`: The time up to which events are searched, using a 13-digit Epoch timestamp.
* `start`: The time from which events are searched, using a 13-digit Epoch timestamp.
| +| `description` | String | The Timeline’s description. | +| $$$timeline-object-event-notes$$$`eventNotes` | [eventNotes[]](#eventNotes-obj) | Notes added to specific events in the Timeline. | +| `eventType` | String | Event types displayed in the Timeline, which can be:

* `All data sources`
* `Events`: Event sources only
* `Detection Alerts`: Detection alerts only
|
+| `favorite` | [favorite[]](#favorite-obj) | Indicates when and who marked a Timeline as a favorite. |
+| $$$timeline-object-filters$$$`filters` | [filters[]](#filters-obj) | Filters used in addition to the dropzone query. |
+| $$$timeline-object-global-notes$$$`globalNotes` | [globalNotes[]](#globalNotes-obj) | Global notes added to the Timeline. |
+| $$$timeline-object-kqlmode$$$`kqlMode` | String | Indicates whether the KQL bar filters the dropzone query results or searches for additional results, where:

* `filter`: filters dropzone query results
* `search`: displays additional search results
|
+| $$$timeline-object-kqlquery$$$`kqlQuery` | [kqlQuery](#kqlQuery-obj) | KQL bar query. |
+| `pinnedEventIds` | pinnedEventIds[] | IDs of events pinned to the Timeline’s search results. |
+| `savedObjectId` | String | The Timeline’s saved object ID. |
+| `savedQueryId` | String | If used, the saved query ID used to filter or search dropzone query results. |
+| `sort` | sort | Object indicating how rows are sorted in the Timeline’s grid:

* `columnId` (string): The ID of the column used to sort results.
* `sortDirection` (string): The sort direction, which can be either `desc` or `asc`.
| +| `templateTimelineId` | String | A unique ID (UUID) for Timeline templates. For Timelines, the value is `null`.
|
+| `templateTimelineVersion` | Integer | Timeline template version number. For Timelines, the value is `null`. |
+| $$$timeline-object-typeField$$$`timelineType` | String | Indicates whether the Timeline is a template or not, where:

* `default`: Indicates a Timeline used to actively investigate events.
* `template`: Indicates a Timeline template used when detection rule alerts are investigated in Timeline.
|
+| $$$timeline-object-title$$$`title` | String | The Timeline’s title. |
+| `updated` | Float | The last time the Timeline was updated, using a 13-digit Epoch timestamp. |
+| `updatedBy` | String | The user who last updated the Timeline. |
+| `version` | String | The Timeline’s version. |
+
+
+## columns object [col-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `aggregatable` | Boolean | Indicates whether the field can be aggregated across all indices (used to sort columns in the UI). |
+| `category` | String | The ECS field set to which the field belongs. |
+| `description` | String | UI column field description tooltip. |
+| `example` | String | UI column field example tooltip. |
+| `indexes` | String | Security indices in which the field exists and has the same {{es}} type. `null` when all the security indices have the field with the same type. |
+| `id` | String | ECS field name, displayed as the column header in the UI. |
+| `type` | String | The field’s type. |
+
+
+## dataProviders object [dataProvider-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `and` | dataProviders[] | Array containing dropzone query clauses using `AND` logic. |
+| `enabled` | Boolean | Indicates if the dropzone query clause is enabled. |
+| `excluded` | Boolean | Indicates if the dropzone query clause uses `NOT` logic. |
+| `id` | String | The dropzone query clause’s unique ID. |
+| `name` | String | The dropzone query clause’s name (the clause’s value when Timelines are exported from the UI). |
+| `queryMatch` | queryMatch | The dropzone query clause:

* `field` (string): The field used to search Security indices.
* `operator` (string): The clause’s operator, which can be:

* `:` - The `field` has the specified `value`.
* `:*` - The field exists.

* `value` (string): The field’s value used to match results.
|
+
+
+## eventNotes object [eventNotes-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `created` | Float | The time the note was created, using a 13-digit Epoch timestamp. |
+| `createdBy` | String | The user who added the note. |
+| `eventId` | String | The ID of the event to which the note was added. |
+| `note` | String | The note’s text. |
+| `noteId` | String | The note’s ID. |
+| `timelineId` | String | The ID of the Timeline to which the note was added. |
+| `updated` | Float | The last time the note was updated, using a 13-digit Epoch timestamp. |
+| `updatedBy` | String | The user who last updated the note. |
+| `version` | String | The note’s version. |
+
+
+## favorite object [favorite-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `favoriteDate` | Float | The time the Timeline was marked as a favorite, using a 13-digit Epoch timestamp. |
+| `fullName` | String | The full name of the user who marked the Timeline as a favorite. |
+| `keySearch` | String | `userName` encoded in Base64. |
+| `userName` | String | The {{kib}} username of the user who marked the Timeline as a favorite. |
+
+
+## filters object [filters-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `exists` | String | [Exists term query](elasticsearch://docs/reference/query-languages/query-dsl-exists-query.md) for the specified field (`null` when undefined). For example, `{"field":"user.name"}`. |
+| `meta` | meta | Filter details:

* `alias` (string): UI filter name.
* `disabled` (boolean): Indicates if the filter is disabled.
* `key` (string): Field name or unique string ID.
* `negate` (boolean): Indicates if the filter query clause uses `NOT` logic.
* `params` (string): Value of `phrase` filter types.
* `type` (string): Type of filter. For example, `exists` and `range`. For more information about filtering, see [Query DSL](elasticsearch://docs/reference/query-languages/querydsl.md).
|
+| `match_all` | String | [Match all term query](elasticsearch://docs/reference/query-languages/query-dsl-match-all-query.md) for the specified field (`null` when undefined). |
+| `query` | String | [DSL query](elasticsearch://docs/reference/query-languages/querydsl.md) (`null` when undefined). For example, `{"match_phrase":{"ecs.version":"1.4.0"}}`. |
+| `range` | String | [Range query](elasticsearch://docs/reference/query-languages/query-dsl-range-query.md) (`null` when undefined). For example, `{"@timestamp":{"gte":"now-1d","lt":"now"}}`. |
+
+
+## globalNotes object [globalNotes-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `created` | Float | The time the note was created, using a 13-digit Epoch timestamp. |
+| `createdBy` | String | The user who added the note. |
+| `note` | String | The note’s text. |
+| `noteId` | String | The note’s ID. |
+| `timelineId` | String | The ID of the Timeline to which the note was added. |
+| `updated` | Float | The last time the note was updated, using a 13-digit Epoch timestamp. |
+| `updatedBy` | String | The user who last updated the note. |
+| `version` | String | The note’s version. |
+
+
+## kqlQuery object [kqlQuery-obj]
+
+| Name | Type | Description |
+| --- | --- | --- |
+| `filterQuery` | filterQuery | Object containing query details:

* `kuery`: Object containing the query’s clauses and type:

* `expression` (string): The query’s clauses.
* `kind` (string): The type of query, which can be `kuery` or `lucene`.

* `serializedQuery` (string): The query represented in JSON format.
|
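To tie the schema together, the following is a heavily trimmed, hypothetical sketch of a Timeline object showing how several of the fields and sub-objects described above nest (all values are illustrative; a Timeline exported from the UI contains many more fields):

```json
{
  "title": "Suspicious host activity",
  "description": "Investigating process events on host-1",
  "timelineType": "default",
  "dataViewId": "security-solution-default",
  "kqlMode": "filter",
  "kqlQuery": {
    "filterQuery": {
      "kuery": {
        "kind": "kuery",
        "expression": "host.name: \"host-1\""
      },
      "serializedQuery": "{\"bool\":{\"filter\":[{\"match\":{\"host.name\":\"host-1\"}}]}}"
    }
  },
  "dateRange": {
    "start": 1736899200000,
    "end": 1736985600000
  },
  "sort": {
    "columnId": "@timestamp",
    "sortDirection": "desc"
  }
}
```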
diff --git a/reference/security/index.md b/reference/security/index.md new file mode 100644 index 0000000000..acf046b6dc --- /dev/null +++ b/reference/security/index.md @@ -0,0 +1,20 @@ +# Security + +% TO-DO: Add links to "What is Elastic Security?"% + +This section of the documentation contains reference information for Elastic Security features, including: + +* Prebuilt rules +* Downloadable rule updates +* Prebuilt jobs +* Fields and object schemas + +You can use these APIs to interface with Elastic Security features: + +* [Detections API](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-security-detections-api): Manage detection rules and alerts +* [Exceptions API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-exceptions-api): Create and manage rule exceptions +* [Lists API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-lists-api): Create source event value lists for use with rule exceptions +* [Timeline API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-timeline-api): Import and export timelines +* [Cases API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-cases): Open and manage cases +* [Elastic AI Assistant API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-ai-assistant-api): Interact with and manage Elastic AI Assistant +* [Asset criticality API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-entity-analytics-api): Create and manage asset criticality records \ No newline at end of file diff --git a/reference/toc.yml b/reference/toc.yml new file mode 100644 index 0000000000..24c2ffd805 --- /dev/null +++ b/reference/toc.yml @@ -0,0 +1,281 @@ +toc: + - file: overview/index.md + children: + - file: security/index.md + children: + - file: security/elastic-defend/index.md + - file: security/elastic-defend/elastic-endpoint-deploy-reqs.md + - file: security/elastic-defend/install-endpoint.md + children: + - file: security/elastic-defend/deploy-elastic-endpoint.md + - file: security/elastic-defend/deploy-elastic-endpoint-ven.md + - file: security/elastic-defend/deploy-with-mdm.md + - file: security/elastic-defend/agent-tamper-protection.md + - file: security/elastic-defend/endpoint-management-req.md + - file: security/elastic-defend/configure-endpoint-integration-policy.md + children: + - file: security/elastic-defend/artifact-control.md + - file: security/elastic-defend/endpoint-diagnostic-data.md + - file: security/elastic-defend/self-healing-rollback.md + - file: security/elastic-defend/linux-file-monitoring.md + - file: security/elastic-defend/endpoint-data-volume.md + - file: security/elastic-defend/create-defend-policy-api.md + - file: security/elastic-defend/offline-endpoint.md + - file: security/elastic-defend/uninstall-agent.md + - file: security/fields-and-object-schemas/index.md + - file: security/fields-and-object-schemas/runtime-fields.md + - file: security/fields-and-object-schemas/siem-field-reference.md + - file: security/fields-and-object-schemas/timeline-object-schema.md + - file: security/fields-and-object-schemas/alert-schema.md + - file: observability/index.md + - file: observability/fields-and-object-schemas.md + children: + - file: observability/fields-and-object-schemas/logs-app-fields.md + - file: observability/fields-and-object-schemas/metrics-app-fields.md + - file: observability/elastic-entity-model.md + - file: observability/serverless/infrastructure-app-fields.md + - file: search/search.md + - file: elasticsearch.md + children: +
- file: elasticsearch/clients/index.md + - file: ingestion-tools.md + children: + - file: ingestion-tools/fleet/index.md + - file: ingestion-tools/fleet/fleet-agent-serverless-restrictions.md + - file: ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md + children: + - file: ingestion-tools/fleet/migrate-auditbeat-to-agent.md + - file: ingestion-tools/fleet/deployment-models.md + children: + - file: ingestion-tools/fleet/fleet-server.md + - file: ingestion-tools/fleet/add-fleet-server-cloud.md + - file: ingestion-tools/fleet/add-fleet-server-on-prem.md + - file: ingestion-tools/fleet/add-fleet-server-mixed.md + - file: ingestion-tools/fleet/add-fleet-server-kubernetes.md + - file: ingestion-tools/fleet/fleet-server-scalability.md + - file: ingestion-tools/fleet/fleet-server-secrets.md + children: + - file: ingestion-tools/fleet/secret-files-guide.md + - file: ingestion-tools/fleet/fleet-server-monitoring.md + - file: ingestion-tools/fleet/install-elastic-agents.md + children: + - file: ingestion-tools/fleet/install-fleet-managed-elastic-agent.md + - file: ingestion-tools/fleet/install-standalone-elastic-agent.md + children: + - file: ingestion-tools/fleet/upgrade-standalone.md + - file: ingestion-tools/fleet/install-elastic-agents-in-containers.md + children: + - file: ingestion-tools/fleet/elastic-agent-container.md + - file: ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md + - file: ingestion-tools/fleet/install-on-kubernetes-using-helm.md + - file: ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md + - file: ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md + - file: ingestion-tools/fleet/advanced-kubernetes-managed-by-fleet.md + - file: ingestion-tools/fleet/configuring-kubernetes-metadata.md + - file: ingestion-tools/fleet/running-on-gke-managed-by-fleet.md + - file: ingestion-tools/fleet/running-on-eks-managed-by-fleet.md + - file: ingestion-tools/fleet/running-on-aks-managed-by-fleet.md + - file: ingestion-tools/fleet/running-on-kubernetes-standalone.md + - file: ingestion-tools/fleet/scaling-on-kubernetes.md + - file: ingestion-tools/fleet/ingest-pipeline-kubernetes.md + - file: ingestion-tools/fleet/agent-environment-variables.md + - file: ingestion-tools/fleet/otel-agent.md + - file: ingestion-tools/fleet/elastic-agent-unprivileged.md + - file: ingestion-tools/fleet/install-agent-msi.md + - file: ingestion-tools/fleet/installation-layout.md + - file: ingestion-tools/fleet/air-gapped.md + - file: ingestion-tools/fleet/fleet-agent-proxy-support.md + children: + - file: ingestion-tools/fleet/elastic-agent-proxy-config.md + - file: ingestion-tools/fleet/host-proxy-env-vars.md + - file: ingestion-tools/fleet/fleet-agent-proxy-managed.md + - file: ingestion-tools/fleet/fleet-agent-proxy-standalone.md + - file: ingestion-tools/fleet/epr-proxy-setting.md + - file: ingestion-tools/fleet/uninstall-elastic-agent.md + - file: ingestion-tools/fleet/start-stop-elastic-agent.md + - file: ingestion-tools/fleet/_agent_configuration_encryption.md + - file: ingestion-tools/fleet/secure.md + children: + - file: ingestion-tools/fleet/secure-connections.md + - file: ingestion-tools/fleet/certificates-rotation.md + - file: ingestion-tools/fleet/mutual-tls.md + - file: ingestion-tools/fleet/tls-overview.md + - file: ingestion-tools/fleet/secure-logstash-connections.md + - file: ingestion-tools/fleet/manage-elastic-agents-in-fleet.md + children: + - file: ingestion-tools/fleet/fleet-settings.md + children: + - file: 
ingestion-tools/fleet/es-output-settings.md + - file: ingestion-tools/fleet/ls-output-settings.md + - file: ingestion-tools/fleet/kafka-output-settings.md + - file: ingestion-tools/fleet/remote-elasticsearch-output.md + - file: ingestion-tools/fleet/fleet-settings-changing-outputs.md + - file: ingestion-tools/fleet/manage-agents.md + children: + - file: ingestion-tools/fleet/unenroll-elastic-agent.md + - file: ingestion-tools/fleet/set-inactivity-timeout.md + - file: ingestion-tools/fleet/upgrade-elastic-agent.md + - file: ingestion-tools/fleet/migrate-elastic-agent.md + - file: ingestion-tools/fleet/monitor-elastic-agent.md + - file: ingestion-tools/fleet/agent-health-status.md + - file: ingestion-tools/fleet/filter-agent-list-by-tags.md + - file: ingestion-tools/fleet/agent-policy.md + children: + - file: ingestion-tools/fleet/create-policy-no-ui.md + - file: ingestion-tools/fleet/enable-custom-policy-settings.md + - file: ingestion-tools/fleet/fleet-agent-environment-variables.md + - file: ingestion-tools/fleet/fleet-roles-privileges.md + - file: ingestion-tools/fleet/fleet-enrollment-tokens.md + - file: ingestion-tools/fleet/fleet-api-docs.md + - file: ingestion-tools/fleet/configure-standalone-elastic-agents.md + children: + - file: ingestion-tools/fleet/create-standalone-agent-policy.md + - file: ingestion-tools/fleet/structure-config-file.md + - file: ingestion-tools/fleet/elastic-agent-input-configuration.md + children: + - file: ingestion-tools/fleet/elastic-agent-simplified-input-configuration.md + - file: ingestion-tools/fleet/elastic-agent-inputs-list.md + - file: ingestion-tools/fleet/dynamic-input-configuration.md + - file: ingestion-tools/fleet/providers.md + children: + - file: ingestion-tools/fleet/local-provider.md + - file: ingestion-tools/fleet/agent-provider.md + - file: ingestion-tools/fleet/host-provider.md + - file: ingestion-tools/fleet/env-provider.md + - file: ingestion-tools/fleet/kubernetes_secrets-provider.md + - file: ingestion-tools/fleet/kubernetes_leaderelection-provider.md + - file: ingestion-tools/fleet/local-dynamic-provider.md + - file: ingestion-tools/fleet/docker-provider.md + - file: ingestion-tools/fleet/kubernetes-provider.md + - file: ingestion-tools/fleet/elastic-agent-output-configuration.md + children: + - file: ingestion-tools/fleet/elasticsearch-output.md + - file: ingestion-tools/fleet/kafka-output.md + - file: ingestion-tools/fleet/logstash-output.md + - file: ingestion-tools/fleet/elastic-agent-ssl-configuration.md + - file: ingestion-tools/fleet/elastic-agent-standalone-logging-config.md + - file: ingestion-tools/fleet/elastic-agent-standalone-feature-flags.md + - file: ingestion-tools/fleet/elastic-agent-standalone-download.md + - file: ingestion-tools/fleet/config-file-examples.md + children: + - file: ingestion-tools/fleet/config-file-example-apache.md + - file: ingestion-tools/fleet/config-file-example-nginx.md + - file: ingestion-tools/fleet/grant-access-to-elasticsearch.md + - file: ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md + - file: ingestion-tools/fleet/example-standalone-monitor-nginx.md + - file: ingestion-tools/fleet/debug-standalone-agents.md + - file: ingestion-tools/fleet/elastic-agent-kubernetes-autodiscovery.md + children: + - file: ingestion-tools/fleet/conditions-based-autodiscover.md + - file: ingestion-tools/fleet/hints-annotations-autodiscovery.md + - file: ingestion-tools/fleet/elastic-agent-monitoring-configuration.md + - file: ingestion-tools/fleet/elastic-agent-reference-yaml.md + - 
file: ingestion-tools/fleet/manage-integrations.md + children: + - file: ingestion-tools/fleet/package-signatures.md + - file: ingestion-tools/fleet/add-integration-to-policy.md + - file: ingestion-tools/fleet/view-integration-policies.md + - file: ingestion-tools/fleet/edit-delete-integration-policy.md + - file: ingestion-tools/fleet/install-uninstall-integration-assets.md + - file: ingestion-tools/fleet/view-integration-assets.md + - file: ingestion-tools/fleet/integration-level-outputs.md + - file: ingestion-tools/fleet/upgrade-integration.md + - file: ingestion-tools/fleet/managed-integrations-content.md + - file: ingestion-tools/fleet/integrations-assets-best-practices.md + - file: ingestion-tools/fleet/data-streams.md + children: + - file: ingestion-tools/fleet/data-streams-ilm-tutorial.md + - file: ingestion-tools/fleet/data-streams-scenario1.md + - file: ingestion-tools/fleet/data-streams-scenario2.md + - file: ingestion-tools/fleet/data-streams-scenario3.md + - file: ingestion-tools/fleet/data-streams-pipeline-tutorial.md + - file: ingestion-tools/fleet/data-streams-advanced-features.md + - file: ingestion-tools/fleet/agent-command-reference.md + - file: ingestion-tools/fleet/agent-processors.md + children: + - file: ingestion-tools/fleet/processor-syntax.md + - file: ingestion-tools/fleet/add-cloud-metadata-processor.md + - file: ingestion-tools/fleet/add_cloudfoundry_metadata-processor.md + - file: ingestion-tools/fleet/add_docker_metadata-processor.md + - file: ingestion-tools/fleet/add_fields-processor.md + - file: ingestion-tools/fleet/add_host_metadata-processor.md + - file: ingestion-tools/fleet/add_id-processor.md + - file: ingestion-tools/fleet/add_kubernetes_metadata-processor.md + - file: ingestion-tools/fleet/add_labels-processor.md + - file: ingestion-tools/fleet/add_locale-processor.md + - file: ingestion-tools/fleet/add_network_direction-processor.md + - file: ingestion-tools/fleet/add_nomad_metadata-processor.md + - file: ingestion-tools/fleet/add_observer_metadata-processor.md + - file: ingestion-tools/fleet/add_process_metadata-processor.md + - file: ingestion-tools/fleet/add_tags-processor.md + - file: ingestion-tools/fleet/community_id-processor.md + - file: ingestion-tools/fleet/convert-processor.md + - file: ingestion-tools/fleet/copy_fields-processor.md + - file: ingestion-tools/fleet/decode_base64_field-processor.md + - file: ingestion-tools/fleet/decode_cef-processor.md + - file: ingestion-tools/fleet/decode_csv_fields-processor.md + - file: ingestion-tools/fleet/decode_duration-processor.md + - file: ingestion-tools/fleet/decode-json-fields.md + - file: ingestion-tools/fleet/decode_xml-processor.md + - file: ingestion-tools/fleet/decode_xml_wineventlog-processor.md + - file: ingestion-tools/fleet/decompress_gzip_field-processor.md + - file: ingestion-tools/fleet/detect_mime_type-processor.md + - file: ingestion-tools/fleet/dissect-processor.md + - file: ingestion-tools/fleet/dns-processor.md + - file: ingestion-tools/fleet/drop_event-processor.md + - file: ingestion-tools/fleet/drop_fields-processor.md + - file: ingestion-tools/fleet/extract_array-processor.md + - file: ingestion-tools/fleet/fingerprint-processor.md + - file: ingestion-tools/fleet/include_fields-processor.md + - file: ingestion-tools/fleet/move_fields-processor.md + - file: ingestion-tools/fleet/processor-parse-aws-vpc-flow-log.md + - file: ingestion-tools/fleet/rate_limit-processor.md + - file: ingestion-tools/fleet/registered_domain-processor.md + - file: 
ingestion-tools/fleet/rename-processor.md + - file: ingestion-tools/fleet/replace-fields.md + - file: ingestion-tools/fleet/script-processor.md + - file: ingestion-tools/fleet/syslog-processor.md + - file: ingestion-tools/fleet/timestamp-processor.md + - file: ingestion-tools/fleet/translate_sid-processor.md + - file: ingestion-tools/fleet/truncate_fields-processor.md + - file: ingestion-tools/fleet/urldecode-processor.md + - file: ingestion-tools/observability/apm.md + children: + - file: ingestion-tools/observability/apm-settings.md + - file: ingestion-tools/cloud/apm-settings.md + - file: ingestion-tools/cloud-enterprise/apm-settings.md + - file: ingestion-tools/apm/apm-agents.md + - file: ecs.md + - file: data-analysis/index.md + children: + - file: data-analysis/machine-learning/supplied-anomaly-detection-configurations.md + children: + - file: data-analysis/machine-learning/ootb-ml-jobs-apache.md + - file: data-analysis/machine-learning/ootb-ml-jobs-apm.md + - file: data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md + - file: data-analysis/machine-learning/ootb-ml-jobs-logs-ui.md + - file: data-analysis/machine-learning/ootb-ml-jobs-metricbeat.md + - file: data-analysis/machine-learning/ootb-ml-jobs-metrics-ui.md + - file: data-analysis/machine-learning/ootb-ml-jobs-nginx.md + - file: data-analysis/machine-learning/ootb-ml-jobs-siem.md + - file: data-analysis/machine-learning/ootb-ml-jobs-uptime.md + - file: data-analysis/machine-learning/machine-learning-functions.md + children: + - file: data-analysis/machine-learning/ml-count-functions.md + - file: data-analysis/machine-learning/ml-geo-functions.md + - file: data-analysis/machine-learning/ml-info-functions.md + - file: data-analysis/machine-learning/ml-metric-functions.md + - file: data-analysis/machine-learning/ml-rare-functions.md + - file: data-analysis/machine-learning/ml-sum-functions.md + - file: data-analysis/machine-learning/ml-time-functions.md + - file: data-analysis/observability/index.md + - file: data-analysis/observability/metrics-reference-serverless.md + children: + - file: data-analysis/observability/observability-host-metrics-serverless.md + - file: data-analysis/observability/observability-container-metrics-serverless.md + - file: data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md + - file: data-analysis/observability/observability-aws-metrics-serverless.md + - file: data-analysis/kibana/canvas-functions.md + children: + - file: data-analysis/kibana/tinymath-functions.md + - file: glossary/index.md \ No newline at end of file diff --git a/release-notes/breaking-changes/elastic-apm.md b/release-notes/breaking-changes/elastic-apm.md new file mode 100644 index 0000000000..ffb5c9bc88 --- /dev/null +++ b/release-notes/breaking-changes/elastic-apm.md @@ -0,0 +1,28 @@ +--- +navigation_title: "Elastic APM" +--- + +# Elastic APM breaking changes [elastic-apm-breaking-changes] +Before you upgrade, carefully review the Elastic APM breaking changes and take the necessary steps to mitigate any issues. + +To learn how to upgrade, check out . + +% ## Next version [elastic-apm-nextversion-breaking-changes] +% **Release date:** Month day, year + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [elastic-apm-900-breaking-changes] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact. +% :::: \ No newline at end of file diff --git a/release-notes/breaking-changes/elastic-observability.md b/release-notes/breaking-changes/elastic-observability.md new file mode 100644 index 0000000000..464d04038b --- /dev/null +++ b/release-notes/breaking-changes/elastic-observability.md @@ -0,0 +1,28 @@ +--- +navigation_title: "Elastic Observability" +--- + +# Elastic Observability breaking changes [elastic-observability-breaking-changes] +Before you upgrade, carefully review the Elastic Observability breaking changes and take the necessary steps to mitigate any issues. + +To learn how to upgrade, check out . + +% ## Next version [elastic-observability-nextversion-breaking-changes] +% **Release date:** Month day, year + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [elastic-observability-900-breaking-changes] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact. +% :::: \ No newline at end of file diff --git a/release-notes/breaking-changes/elastic-security.md b/release-notes/breaking-changes/elastic-security.md new file mode 100644 index 0000000000..73a9a6870a --- /dev/null +++ b/release-notes/breaking-changes/elastic-security.md @@ -0,0 +1,71 @@ +--- +navigation_title: "Elastic Security" +--- + +# Elastic Security breaking changes [elastic-security-breaking-changes] +Before you upgrade, carefully review the Elastic Security breaking changes and take the necessary steps to mitigate any issues. + +To learn how to upgrade, check out . + +% ## Next version [elastic-security-nextversion-breaking-changes] +% **Release date:** Month day, year + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact.
+% ::::
+
+## 9.0.0 [elastic-security-900-breaking-changes]
+**Release date:** March 25, 2025
+
+::::{dropdown} Removed legacy security rules bulk endpoints
+* `POST /api/detection_engine/rules/_bulk_create` has been replaced by `POST /api/detection_engine/rules/_import`
+* `PUT /api/detection_engine/rules/_bulk_update` has been replaced by `POST /api/detection_engine/rules/_bulk_action`
+* `PATCH /api/detection_engine/rules/_bulk_update` has been replaced by `POST /api/detection_engine/rules/_bulk_action`
+* `DELETE /api/detection_engine/rules/_bulk_delete` has been replaced by `POST /api/detection_engine/rules/_bulk_action`
+* `POST /api/detection_engine/rules/_bulk_delete` has been replaced by `POST /api/detection_engine/rules/_bulk_action`
+
+These changes were introduced in [#197422](https://github.com/elastic/kibana/pull/197422).
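As a sketch of the migration, a deletion that previously used `_bulk_delete` can be expressed against `_bulk_action` roughly as follows (the rule IDs are placeholders; refer to the API documentation linked below for the full request schema):

```console
POST /api/detection_engine/rules/_bulk_action
{
  "action": "delete",
  "ids": [
    "c1a2b3d4-0000-4000-8000-000000000001",
    "c1a2b3d4-0000-4000-8000-000000000002"
  ]
}
```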
+
+**Impact**
Deprecated endpoints will fail with a 404 status code starting from version 9.0.0.
+
+**Action**
+
+Update your implementations to use the new endpoints:
+
+* **For bulk creation of rules:**
+
+  * Use `POST /api/detection_engine/rules/_import` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-importrules)) to create multiple rules along with their associated entities (for example, exceptions and action connectors).
+  * Alternatively, create rules individually using `POST /api/detection_engine/rules` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-createrule)).
+
+* **For bulk updates of rules:**
+
+  * Use `POST /api/detection_engine/rules/_bulk_action` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-performrulesbulkaction)) to update fields in multiple rules simultaneously.
+  * Alternatively, update rules individually using `PUT /api/detection_engine/rules` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-updaterule)).
+
+* **For bulk deletion of rules:**
+
+  * Use `POST /api/detection_engine/rules/_bulk_action` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-performrulesbulkaction)) to delete multiple rules by IDs or query.
+  * Alternatively, delete rules individually using `DELETE /api/detection_engine/rules` ([API documentation](https://www.elastic.co/docs/api/doc/kibana/operation/operation-deleterule)).
+::::
+
+::::{dropdown} Removed deprecated endpoint management endpoints
+* `POST /api/endpoint/isolate` has been replaced by `POST /api/endpoint/action/isolate`
+* `POST /api/endpoint/unisolate` has been replaced by `POST /api/endpoint/action/unisolate`
+* `GET /api/endpoint/policy/summaries` has been deprecated without replacement and will be removed in v9.0.0.
+* `POST /api/endpoint/suggestions/{{suggestion_type}}` has been deprecated without replacement and will be removed in v9.0.0.
+* `GET /api/endpoint/action_log/{{agent_id}}` has been deprecated without replacement and will be removed in v9.0.0.
+* `GET /api/endpoint/metadata/transforms` has been deprecated without replacement and will be removed in v9.0.0.
+
+**Impact**
Deprecated endpoints will fail with a 404 status code starting from version 9.0.0. + +**Action**
+ +* Remove references to `GET /api/endpoint/policy/summaries` endpoint. +* Remove references to `POST /api/endpoint/suggestions/{{suggestion_type}}` endpoint. +* Remove references to `GET /api/endpoint/action_log/{{agent_id}}` endpoint. +* Remove references to `GET /api/endpoint/metadata/transforms` endpoint. +* Replace references to deprecated endpoints with the replacements listed in the breaking change details. +:::: \ No newline at end of file diff --git a/release-notes/breaking-changes/fleet-elastic-agent.md b/release-notes/breaking-changes/fleet-elastic-agent.md new file mode 100644 index 0000000000..56c75098e5 --- /dev/null +++ b/release-notes/breaking-changes/fleet-elastic-agent.md @@ -0,0 +1,28 @@ +--- +navigation_title: "Fleet and Elastic Agent" +--- + +# Fleet and Elastic Agent breaking changes [fleet-elastic-agent-breaking-changes] +Before you upgrade, carefully review the Fleet and Elastic Agent breaking changes and take the necessary steps to mitigate any issues. + +To learn how to upgrade, check out . + +% ## Next version [fleet-elastic-agent-nextversion-breaking-changes] +% **Release date:** Month day, year + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [fleet-elastic-agent-900-breaking-changes] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Title of breaking change +% Description of the breaking change. +% For more information, check [PR #](PR link). +% **Impact**
Impact of the breaking change. +% **Action**
Steps for mitigating deprecation impact.
+% ::::
\ No newline at end of file
diff --git a/release-notes/breaking-changes/index.md b/release-notes/breaking-changes/index.md
new file mode 100644
index 0000000000..0d275a8ece
--- /dev/null
+++ b/release-notes/breaking-changes/index.md
@@ -0,0 +1,7 @@
+# Breaking changes [elastic-breaking-changes]
+
+% GitHub issue: https://github.com/elastic/docs-projects/issues/318
+
+Breaking changes can impact your Elastic applications, potentially disrupting normal operations. Before you upgrade, carefully review the breaking changes for your Elastic version and take the necessary steps to mitigate any issues.
+
+To learn how to upgrade, check out .
\ No newline at end of file
diff --git a/release-notes/deprecations/elastic-apm.md b/release-notes/deprecations/elastic-apm.md
new file mode 100644
index 0000000000..7b1ab8582a
--- /dev/null
+++ b/release-notes/deprecations/elastic-apm.md
@@ -0,0 +1,28 @@
+---
+navigation_title: "Elastic APM"
+---
+
+# Elastic APM deprecations [elastic-apm-deprecations]
+Review the deprecated functionality for your Elastic APM version. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.
+
+To learn how to upgrade, check out .
+
+% ## Next version
+% **Release date:** Month day, year
+
+% ::::{dropdown} Deprecation title
+% Description of the deprecation.
+% For more information, check [PR #](PR link).
+% **Impact**
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [elastic-apm-900-deprecations] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Deprecation title +% Description of the deprecation. +% For more information, check [PR #](PR link). +% **Impact**
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact.
% :::: \ No newline at end of file diff --git a/release-notes/deprecations/elastic-cloud-serverless.md b/release-notes/deprecations/elastic-cloud-serverless.md new file mode 100644 index 0000000000..697349e9e6 --- /dev/null +++ b/release-notes/deprecations/elastic-cloud-serverless.md @@ -0,0 +1,31 @@ +---
+navigation_title: "Elastic Cloud Serverless"
+---
+
+# Elastic Cloud Serverless deprecations [elastic-cloud-serverless-deprecations]
+Review the deprecated functionality for Elastic Cloud Serverless. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.
+
+To learn how to upgrade, check out .
+
+For serverless API deprecations, check [APIs Changelog](https://www.elastic.co/docs/api/changes).
+
+% ## Next release date Month Day, Year [elastic-cloud-serverless-releasedate-deprecations]
+% Description of the deprecation and steps to update implementation.
+% For more information, check [PR #](PR link).
+
+## January 27, 2025 [elastic-cloud-serverless-01272025-deprecations]
+* Deprecates a subset of Elastic Security Serverless endpoint management APIs. For more information, check [#206903](https://github.com/elastic/kibana/pull/206903).
+
+## January 13, 2025 [elastic-cloud-serverless-01132025-deprecations]
+* Removes all legacy risk engine code and features. For more information, check [#201810](https://github.com/elastic/kibana/pull/201810).
+
+## January 6, 2025 [elastic-cloud-serverless-01062025-deprecations]
+* Disables Elastic Observability Serverless log stream and settings pages. For more information, check [#203996](https://github.com/elastic/kibana/pull/203996).
+* Removes Logs Explorer in Elastic Observability Serverless. For more information, check [#203685](https://github.com/elastic/kibana/pull/203685).
+
+## December 16, 2024 [elastic-cloud-serverless-12162024-deprecations]
+* Deprecates the `discover:searchFieldsFromSource` setting. For more information, check [#202679](https://github.com/elastic/kibana/pull/202679).
+* Disables scripted field creation in the Data Views management page. For more information, check [#202250](https://github.com/elastic/kibana/pull/202250).
+* Removes all logic based on the following settings: `xpack.reporting.roles.enabled`, `xpack.reporting.roles.allow`. For more information, check [#200834](https://github.com/elastic/kibana/pull/200834).
+* Removes the legacy table from Discover. For more information, check [#201254](https://github.com/elastic/kibana/pull/201254).
+* Deprecates ephemeral tasks from action and alerting plugins. For more information, check [#197421](https://github.com/elastic/kibana/pull/197421). \ No newline at end of file diff --git a/release-notes/deprecations/elastic-observability.md b/release-notes/deprecations/elastic-observability.md new file mode 100644 index 0000000000..b3142ff8ad --- /dev/null +++ b/release-notes/deprecations/elastic-observability.md @@ -0,0 +1,28 @@ +---
+navigation_title: "Elastic Observability"
+---
+
+# Elastic Observability deprecations [elastic-observability-deprecations]
+Review the deprecated functionality for your Elastic Observability version. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.
+
+To learn how to upgrade, check out .
+
+% ## Next version
+% **Release date:** Month day, year

% ::::{dropdown} Deprecation title
% Description of the deprecation.
% For more information, check [PR #](PR link).
% **Impact**<br>
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [elastic-observability-900-deprecations] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Deprecation title +% Description of the deprecation. +% For more information, check [PR #](PR link). +% **Impact**
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact.
% :::: \ No newline at end of file diff --git a/release-notes/deprecations/elastic-security.md b/release-notes/deprecations/elastic-security.md new file mode 100644 index 0000000000..c778eff7f3 --- /dev/null +++ b/release-notes/deprecations/elastic-security.md @@ -0,0 +1,28 @@ +---
+navigation_title: "Elastic Security"
+---
+
+# Elastic Security deprecations [elastic-security-deprecations]
+Review the deprecated functionality for your Elastic Security version. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.
+
+To learn how to upgrade, check out .
+
+% ## Next version
+% **Release date:** Month day, year

% ::::{dropdown} Deprecation title
% Description of the deprecation.
% For more information, check [PR #](PR link).
% **Impact**<br>
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [elastic-security-900-deprecations] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Deprecation title +% Description of the deprecation. +% For more information, check [PR #](PR link). +% **Impact**
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact.
% :::: \ No newline at end of file diff --git a/release-notes/deprecations/fleet-elastic-agent.md b/release-notes/deprecations/fleet-elastic-agent.md new file mode 100644 index 0000000000..e5f51ae891 --- /dev/null +++ b/release-notes/deprecations/fleet-elastic-agent.md @@ -0,0 +1,28 @@ +---
+navigation_title: "Fleet and Elastic Agent"
+---
+
+# Fleet and Elastic Agent deprecations [fleet-elastic-agent-deprecations]
+Review the deprecated functionality for your Fleet and Elastic Agent version. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.
+
+To learn how to upgrade, check out .
+
+% ## Next version
+% **Release date:** Month day, year

% ::::{dropdown} Deprecation title
% Description of the deprecation.
% For more information, check [PR #](PR link).
% **Impact**<br>
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact. +% :::: + +% ## 9.0.0 [fleet-elastic-agent-900-deprecations] +% **Release date:** March 25, 2025 + +% ::::{dropdown} Deprecation title +% Description of the deprecation. +% For more information, check [PR #](PR link). +% **Impact**
Impact of deprecation. +% **Action**
Steps for mitigating deprecation impact.
% :::: \ No newline at end of file diff --git a/release-notes/deprecations/index.md b/release-notes/deprecations/index.md new file mode 100644 index 0000000000..7d88bc7be5 --- /dev/null +++ b/release-notes/deprecations/index.md @@ -0,0 +1,10 @@ +# Deprecations [elastic-deprecations]

% GitHub issue: https://github.com/elastic/docs-projects/issues/319

Over time, certain Elastic functionality becomes outdated and is replaced or removed. To help with the transition, Elastic deprecates functionality for a period before removal, giving you time to update your applications.

Review the deprecated functionality for your Elastic version. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade.

To learn how to upgrade, check out .

diff --git a/release-notes/elastic-apm.md b/release-notes/elastic-apm.md new file mode 100644 index 0000000000..ccb4615407 --- /dev/null +++ b/release-notes/elastic-apm.md @@ -0,0 +1,29 @@ +---
+navigation_title: "Elastic APM"
+mapped_pages:
+  - https://www.elastic.co/guide/en/observability/current/apm-release-notes.html
+  - https://www.elastic.co/guide/en/observability/master/release-notes-head.html
+---

# Elastic APM release notes [elastic-apm-release-notes]
Review the changes, fixes, and more in each version of Elastic APM.

To check for security updates, go to [Security announcements for the Elastic stack](https://discuss.elastic.co/c/announcements/security-announcements/31).

% Release notes include only features, enhancements, and fixes. Add breaking changes, deprecations, and known issues to the applicable release notes sections.
% For each new version section, include the Elastic APM and Kibana changes.

% ## version.next [elastic-apm-next-release-notes]
% **Release date:** Month day, year

% ### Features and enhancements [elastic-apm-next-features-enhancements]

% ### Fixes [elastic-apm-next-fixes]

## 9.0.0 [elastic-apm-900-release-notes]
**Release date:** March 25, 2025

### Features and enhancements [elastic-apm-900-features-enhancements]

### Fixes [elastic-apm-900-fixes]
* Fix overflow in validation of `apm-server.agent.config.cache.expiration` on 32-bit architectures [#15216](https://github.com/elastic/apm-server/pull/15216) \ No newline at end of file diff --git a/release-notes/elastic-cloud-serverless.md b/release-notes/elastic-cloud-serverless.md new file mode 100644 index 0000000000..6542f71be6 --- /dev/null +++ b/release-notes/elastic-cloud-serverless.md @@ -0,0 +1,209 @@ +---
+navigation_title: "Elastic Cloud Serverless"
+mapped_pages:
+  - https://www.elastic.co/guide/en/serverless/current/serverless-changelog.html
+---

# Elastic Cloud Serverless changelog [elastic-cloud-serverless-changelog]
Review the changes, fixes, and more to Elastic Cloud Serverless.

For serverless API changes, refer to [APIs Changelog](https://www.elastic.co/docs/api/changes).

For serverless changes in Cloud Console, check out [Elastic Cloud Hosted release notes](cloud://docs/release-notes/cloud-hosted/index.md).

% Release notes include only features, enhancements, and fixes. Add breaking changes, deprecations, and known issues to the applicable release notes sections.
+ +% ## version.next [elastic-cloud-serverless-changelog-releasedate] + +% ### Features and enhancements [elastic-cloud-serverless-releasedate-features-enhancements] + +% ### Fixes [elastic-cloud-serverless-releasedate-fixes] + +## January 27, 2025 [serverless-changelog-01272025] + +### Features and enhancements [elastic-cloud-serverless-01272025-features-enhancements] +* Breaks out timeline and note privileges in Elastic Security Serverless ({{kibana-pull}}201780[#201780]). +* Adds service enrichment to the detection engine in Elastic Security Serverless ({{kibana-pull}}206582[#206582]). +* Updates the Entity Store Dashboard to prompt for the Service Entity Type in Elastic Security Serverless ({{kibana-pull}}207336[#207336]). +* Adds `enrichPolicyExecutionInterval` to entity enablement and initialization APIs in Elastic Security Serverless ({{kibana-pull}}207374[#207374]). +* Introduces a lookback period configuration for the Entity Store in Elastic Security Serverless ({{kibana-pull}}206421[#206421]). +* Allows pre-configured connectors to opt into exposing their configurations by setting `exposeConfig` in Alerting ({{kibana-pull}}207654[#207654]). +* Adds selector syntax support to log source profiles in Elastic Observability Serverless ({{kibana-pull}}206937[#206937]). +* Displays stack traces in the logs overview tab in Elastic Observability Serverless ({{kibana-pull}}204521[#204521]). +* Enables the use of the rule form to create rules in Elastic Observability Serverless ({{kibana-pull}}206774[#206774]). +* Checks only read privileges of existing indices during rule execution in Elastic Security Serverless ({{kibana-pull}}177658[#177658]). +* Updates KNN search and query template autocompletion in Elasticsearch Serverless ({{kibana-pull}}207187[#207187]). +* Updates JSON schemas for code editors in Machine Learning ({{kibana-pull}}207706[#207706]). +* Reindexes the `.kibana_security_session_1` index to the 8.x format in Security ({{kibana-pull}}204097[#204097]). + +### Fixes [elastic-cloud-serverless-01272025-fixes] +* Fixes editing alerts filters for multi-consumer rule types in Alerting ({{kibana-pull}}206848[#206848]). +* Resolves an issue where Chrome was no longer hidden for reports in Dashboards and Visualizations ({{kibana-pull}}206988[#206988]). +* Updates library transforms and duplicate functionality in Dashboards and Visualizations ({{kibana-pull}}206140[#206140]). +* Fixes an issue where drag previews are now absolutely positioned in Dashboards and Visualizations ({{kibana-pull}}208247[#208247]). +* Fixes an issue where an accessible label now appears on the range slider in Dashboards and Visualizations ({{kibana-pull}}205308[#205308]). +* Fixes a dropdown label sync issue when sorting by "Type" ({{kibana-pull}}206424[#206424]). +* Fixes an access bug related to user instructions in Elastic Observability Serverless ({{kibana-pull}}207069[#207069]). +* Fixes the Open Explore in Discover link to open in a new tab in Elastic Observability Serverless ({{kibana-pull}}207346[#207346]). +* Returns an empty object for tool arguments when none are provided in Elastic Observability Serverless ({{kibana-pull}}207943[#207943]). +* Ensures similar cases count is not fetched without the proper license in Elastic Security Serverless ({{kibana-pull}}207220[#207220]). +* Fixes table leading actions to use standardized colors in Elastic Security Serverless ({{kibana-pull}}207743[#207743]). 
+* Adds missing fields to the AWS S3 manifest in Elastic Security Serverless ({{kibana-pull}}208080[#208080]). +* Prevents redundant requests when loading Discover sessions and toggling chart visibility in ES|QL ({{kibana-pull}}206699[#206699]). +* Fixes a UI error when agents move to an orphaned state in Fleet ({{kibana-pull}}207746[#207746]). +* Restricts non-local Elasticsearch output types for agentless integrations and policies in Fleet ({{kibana-pull}}207296[#207296]). +* Fixes table responsiveness in the Notifications feature of Machine Learning ({{kibana-pull}}206956[#206956]). + +## January 13, 2025 [serverless-changelog-01132025] + +### Features and enhancements [elastic-cloud-serverless-01132025-features-enhancements] +* Adds last alert status change to Elastic Security Serverless flyout ({{kibana-pull}}205224[#205224]). +* Case templates are now GA ({{kibana-pull}}205940[#205940]). +* Adds format to JSON messages in Elastic Observability Serverless Logs profile ({{kibana-pull}}205666[#205666]). +* Adds inference connector in Elastic Security Serverless AI features ({{kibana-pull}}204505[#204505]). +* Adds inference connector for Auto Import in Elastic Security Serverless ({{kibana-pull}}206111[#206111]). +* Adds Feature Flag Support for Cloud Security Posture Plugin in Elastic Security Serverless ({{kibana-pull}}205438[#205438]). +* Adds the ability to sync Machine Learning saved objects to all spaces ({{kibana-pull}}202175[#202175]). +* Improves messages for recovered alerts in Machine Learning Transforms ({{kibana-pull}}205721[#205721]). + +### Fixes [elastic-cloud-serverless-01132025-fixes] +* Fixes an issue where "KEEP" columns are not applied after an Elasticsearch error in Discover ({{kibana-pull}}205833[#205833]). +* Resolves padding issues in the document comparison table in Discover ({{kibana-pull}}205984[#205984]). +* Fixes a bug affecting bulk imports for the knowledge base in Elastic Observability Serverless ({{kibana-pull}}205075[#205075]). +* Enhances the Find API by adding cursor-based pagination (search_after) as an alternative to offset-based pagination ({{kibana-pull}}203712[#203712]). +* Updates Elastic Observability Serverless to use architecture-specific Elser models ({{kibana-pull}}205851[#205851]). +* Fixes dynamic batching in the timeline for Elastic Security Serverless ({{kibana-pull}}204034[#204034]). +* Resolves a race condition bug in Elastic Security Serverless related to OpenAI errors ({{kibana-pull}}205665[#205665]). +* Improves the integration display by ensuring all policies are listed in Elastic Security Serverless ({{kibana-pull}}205103[#205103]). +* Renames color variables in the user interface for better clarity and consistency ({{kibana-pull}}204908[#204908]). +* Allows editor suggestions to remain visible when the inline documentation flyout is open in ES|QL ({{kibana-pull}}206064[#206064]). +* Ensures the same time range is applied to documents and the histogram in ES|QL ({{kibana-pull}}204694[#204694]). +* Fixes validation for the "required" field in multi-text input fields in Fleet ({{kibana-pull}}205768[#205768]). +* Fixes timeout issues for bulk actions in Fleet ({{kibana-pull}}205735[#205735]). +* Handles invalid RRule parameters to prevent infinite loops in alerts ({{kibana-pull}}205650[#205650]). +* Fixes privileges display for features and sub-features requiring "All Spaces" permissions in Fleet ({{kibana-pull}}204402[#204402]). +* Prevents password managers from modifying disabled input fields ({{kibana-pull}}204269[#204269]). 
+* Updates the listing control in the user interface ({{kibana-pull}}205914[#205914]). +* Improves consistency in the help dropdown design ({{kibana-pull}}206280[#206280]). + +## January 6, 2025 [serverless-changelog-01062025] + +### Features and enhancements [elastic-cloud-serverless-01062025-features-enhancements] +* Introduces case observables in Elastic Security Serverless ({{kibana-pull}}190237[#190237]). +* Adds a JSON field called "additional fields" to ServiceNow cases when sent using connector, containing the internal names of the ServiceNow table columns ({{kibana-pull}}201948[#201948]). +* Adds the ability to configure the appearance color mode to sync dark mode with the system value ({{kibana-pull}}203406[#203406]). +* Makes the "Copy" action visible on cell hover in Discover ({{kibana-pull}}204744[#204744]). +* Updates the `EnablementModalCallout` name to `AdditionalChargesMessage` in Elastic Security Serverless ({{kibana-pull}}203061[#203061]). +* Adds more control over which Elastic Security Serverless alerts in Attack Discovery are included as context to the large language model ({{kibana-pull}}205070[#205070]). +* Adds a consistent layout and other UI enhancements for {{ml}} pages ({{kibana-pull}}203813[#203813]). + +### Fixes [elastic-cloud-serverless-01062025-fixes] +* Fixes an issue that caused dashboards to lag when dragging the time slider ({{kibana-pull}}201885[#201885]). +* Updates the CloudFormation template to the latest version and adjusts the documentation to reflect the use of a single Firehose stream created by the new template ({{kibana-pull}}204185[#204185]). +* Fixes Integration and Datastream name validation in Elastic Security Serverless ({{kibana-pull}}204943[#204943]). +* Fixes an issue in the Automatic Import process where there is now inclusion of the `@timestamp` field in ECS field mappings whenever possible ({{kibana-pull}}204931[#204931]). +* Allows Automatic Import to safely parse Painless field names that are not valid Painless identifiers in `if` contexts ({{kibana-pull}}205220[#205220]). +* Aligns the Box Native Connector configuration fields with the source of truth in the connectors codebase, correcting mismatches and removing unused configurations ({{kibana-pull}}203241[#203241]). +* Fixes the "Show all agent tags" option in Fleet when the agent list is filtered ({{kibana-pull}}205163[#205163]). +* Updates the Results Explorer flyout footer buttons alignment in Data Frame Analytics ({{kibana-pull}}204735[#204735]). +* Adds a missing space between lines in the Data Frame Analytics delete job modal ({{kibana-pull}}204732[#204732]). +* Fixes an issue where the Refresh button in the Anomaly Detection Datafeed counts table was unresponsive ({{kibana-pull}}204625[#204625]). +* Fixes the inference timeout check in File Upload ({{kibana-pull}}204722[#204722]). +* Fixes the side bar navigation for the Data Visualizer ({{kibana-pull}}205170[#205170]). + +## December 16, 2024 [serverless-changelog-12162024] + +### Features and enhancements [elastic-cloud-serverless-12162024-features-enhancements] +* Optimizes the Kibana Trained Models API ({{kibana-pull}}200977[#200977]). +* Adds a **Create Case** action to the **Log rate analysis** page ({{kibana-pull}}201549[#201549]). +* Improves AI Assistant’s response quality by giving it access to Elastic’s product documentation ({{kibana-pull}}199694[#199694]). +* Adds support for suppressing EQL sequence alerts ({{kibana-pull}}189725[#189725]). 
+* Adds an **Advanced settings** section to the SLO form ({{kibana-pull}}200822[#200822]). +* Adds a new sub-feature privilege under **Synthetics and Uptime** `Can manage private locations` ({{kibana-pull}}201100[#201100]). + +### Fixes [elastic-cloud-serverless-12162024-fixes] +* Fixes point visibility regression ({{kibana-pull}}202358[#202358]). +* Improves help text of creator and view count features on dashboard listing page ({{kibana-pull}}202488[#202488]). +* Highlights matching field values when performing a KQL search on a keyword field ({{kibana-pull}}201952[#201952]). +* Supports "Inspect" in saved search embeddables ({{kibana-pull}}202947[#202947]). +* Fixes your ability to clear the user-specific system prompt ({{kibana-pull}}202279[#202279]). +* Fixes error when opening rule flyout ({{kibana-pull}}202386[#202386]). +* Fixes to Ops Genie as a default connector ({{kibana-pull}}201923[#201923]). +* Fixes actions on charts ({{kibana-pull}}202443[#202443]). +* Adds flyout to table view in Infrastructure Inventory ({{kibana-pull}}202646[#202646]). +* Fixes service names with spaces not being URL encoded properly for `context.viewInAppUrl` ({{kibana-pull}}202890[#202890]). +* Allows access query logic to handle user ID and name conditions ({{kibana-pull}}202833[#202833]). +* Fixes APM rule error message for invalid KQL filter ({{kibana-pull}}203096[#203096]). +* Rejects CEF logs from Automatic Import and redirects you to the CEF integration instead ({{kibana-pull}}201792[#201792]). +* Updates the install rules title and message ({{kibana-pull}}202226[#202226]). +* Fixes error on second entity engine init API call ({{kibana-pull}}202903[#202903]). +* Restricts unsupported log formats ({{kibana-pull}}202994[#202994]). +* Removes errors related to Enterprise Search nodes ({{kibana-pull}}202437[#202437]). +* Improves web crawler name consistency ({{kibana-pull}}202738[#202738]). +* Fixes editor cursor jumpiness ({{kibana-pull}}202389[#202389]). +* Fixes rollover datastreams on subobjects mapper exception ({{kibana-pull}}202689[#202689]). +* Fixes spaces sync to retrieve 10,000 trained models ({{kibana-pull}}202712[#202712]). +* Fixes log rate analysis embeddable error on the Alerts page ({{kibana-pull}}203093[#203093]). +* Fixes Slack API connectors not displayed under Slack connector type when adding new connector to rule ({{kibana-pull}}202315[#202315]). + +## December 9, 2024 [serverless-changelog-12092024] + +### Features and enhancements [elastic-cloud-serverless-12092024-features-enhancements] +* Elastic Observability Serverless adds a new sub-feature for managing private locations ({{kibana-pull}}201100[#201100]). +* Elastic Observability Serverless adds the ability to configure SLO advanced settings from the UI ({{kibana-pull}}200822[#200822]). +* Elastic Security Serverless adds support for suppressing EQL sequence alerts ({{kibana-pull}}189725[#189725]). +* Elastic Security Serverless adds a `/trained_models_list` endpoint to retrieve complete data for the Trained Model UI ({{kibana-pull}}200977[#200977]). +* Machine Learning adds an action to include log rate analysis in a case ({{kibana-pull}}199694[#199694]). +* Machine Learning enhances the Kibana API to optimize trained models ({{kibana-pull}}201549[#201549]). + +### Fixes [elastic-cloud-serverless-12092020-fixes] +* Fixes Slack API connectors not being displayed under the Slack connector type when adding a new connector to a rule in Alerting ({{kibana-pull}}202315[#202315]). 
+* Fixes point visibility regression in dashboard visualizations ({{kibana-pull}}202358[#202358]). +* Improves help text for creator and view count features on the Dashboard listing page ({{kibana-pull}}202488[#202488]). +* Highlights matching field values when performing a KQL search on a keyword field in Discover ({{kibana-pull}}201952[#201952]). +* Adds support for the **Inspect** option in saved search embeddables in Discover ({{kibana-pull}}202947[#202947]). +* Enables the ability to clear user-specific system prompts in Elastic Observability Serverless ({{kibana-pull}}202279[#202279]). +* Fixes an error when opening the rule flyout in Elastic Observability Serverless ({{kibana-pull}}202386[#202386]). +* Improves handling of Opsgenie as the default connector in Elastic Observability Serverless ({{kibana-pull}}201923[#201923]). +* Fixes issues with actions on charts in Elastic Observability Serverless ({{kibana-pull}}202443[#202443]). +* Adds a flyout to the table view in Infrastructure Inventory in Elastic Observability Serverless ({{kibana-pull}}202646[#202646]). +* Fixes service names with spaces not being URL-encoded properly for `{{context.viewInAppUrl}}` in Elastic Observability Serverless ({{kibana-pull}}202890[#202890]). +* Enhances access query logic to handle user ID and name conditions in Elastic Observability Serverless ({{kibana-pull}}202833[#202833]). +* Fixes an APM rule error message when a KQL filter is invalid in Elastic Observability Serverless ({{kibana-pull}}203096[#203096]). +* Restricts and rejects CEF logs in automatic import and redirects them to the CEF integration in Elastic Security Serverless ({{kibana-pull}}201792[#201792]). +* Updates the copy of the install rules title and message in Elastic Security Serverless ({{kibana-pull}}202226[#202226]). +* Clears errors on the second entity engine initialization API call in Elastic Security Serverless ({{kibana-pull}}202903[#202903]). +* Restricts unsupported log formats in Elastic Security Serverless ({{kibana-pull}}202994[#202994]). +* Removes errors related to Enterprise Search nodes in Elasticsearch Serverless ({{kibana-pull}}202437[#202437]). +* Ensures consistency in web crawler naming in Elasticsearch Serverless ({{kibana-pull}}202738[#202738]). +* Fixes editor cursor jumpiness in ES|QL ({{kibana-pull}}202389[#202389]). +* Implements rollover of data streams on subobject mapper exceptions in Fleet ({{kibana-pull}}202689[#202689]). +* Fixes trained models to retrieve up to 10,000 models when spaces are synced in Machine Learning ({{kibana-pull}}202712[#202712]). +* Fixes a Log Rate Analysis embeddable error on the Alerts page in AiOps ({{kibana-pull}}203093[#203093]). + +## December 3, 2024 [serverless-changelog-12032024] + +### Features and enhancements [elastic-cloud-serverless-12032024-features-enhancements] +* Adds tabs for Import Entities and Engine Status to the Entity Store ({{kibana-pull}}201235[#201235]). +* Adds status tracking for agentless integrations to {{fleet}} ({{kibana-pull}}199567[#199567]). +* Adds a new {{ml}} module that can detect anomalous activity in host-based logs ({{kibana-pull}}195582[#195582]). +* Allows custom Mapbox Vector Tile sources to style map layers and provide custom legends ({{kibana-pull}}200656[#200656]). +* Excludes stale SLOs from counts of healthy and violated SLOs ({{kibana-pull}}201027[#201027]). 
+* Adds a **Continue without adding integrations** button to the {{elastic-sec}} Dashboards page that takes you to the Entity Analytics dashboard ({{kibana-pull}}201363[#201363]). +* Displays visualization descriptions under their titles ({{kibana-pull}}198816[#198816]). + +### Fixes [elastic-cloud-serverless-12032024-fixes] +* Hides the **Clear** button when no filters are selected ({{kibana-pull}}200177[#200177]). +* Fixes a mismatch between how wildcards were handled in previews versus actual rule executions ({{kibana-pull}}201553[#201553]). +* Fixes incorrect Y-axis and hover values in the Service Inventory’s Log rate chart ({{kibana-pull}}201361[#201361]). +* Disables the **Add note** button in the alert details flyout for users who lack privileges ({{kibana-pull}}201707[#201707]). +* Fixes the descriptions of threshold rules that use cardinality ({{kibana-pull}}201162[#201162]). +* Disables the **Install All** button on the **Add Elastic Rules** page when rules are installing ({{kibana-pull}}201731[#201731]). +* Reintroduces a data usage warning on the Entity Analytics Enablement modal ({{kibana-pull}}201920[#201920]). +* Improves accessibility for the **Create a connector** page ({{kibana-pull}}201590[#201590]). +* Fixes a bug that could cause {{agents}} to get stuck updating during scheduled upgrades ({{kibana-pull}}202126[#202126]). +* Fixes a bug related to starting {{ml}} deployments with autoscaling and no active nodes ({{kibana-pull}}201256[#201256]). +* Initializes saved objects when the **Trained Model** page loads ({{kibana-pull}}201426[#201426]). +* Fixes the display of deployment stats for unallocated deployments of {{ml}} models ({{kibana-pull}}202005[#202005]). +* Enables the solution type search for instant deployments ({{kibana-pull}}201688[#201688]). +* Improves the consistency of alert counts across different views ({{kibana-pull}}202188[#202188]). diff --git a/release-notes/elastic-observability.md b/release-notes/elastic-observability.md new file mode 100644 index 0000000000..16752c721c --- /dev/null +++ b/release-notes/elastic-observability.md @@ -0,0 +1,24 @@ +--- +navigation_title: "Elastic Observability" +--- + +# Elastic Observability release notes [elastic-observability-release-notes] +Review the changes, fixes, and more in each version of Elastic Observability. + +To check for security updates, go to [Security announcements for the Elastic stack](https://discuss.elastic.co/c/announcements/security-announcements/31). + +% Release notes include only features, enhancements, and fixes. Add breaking changes, deprecations, and known issues to the applicable release notes sections. 
+ +% ## version.next [elastic-observability-next-release-notes] +% **Release date:** Month day, year + +% ### Features and enhancements [elastic-observability-next-features-enhancements] + +% ### Fixes [elastic-observability-next-fixes] + +## 9.0.0 [elastic-observability-900-release-notes] +**Release date:** March 25, 2025 + +### Features and enhancements [elastic-observability-900-features-enhancements] + +### Fixes [elastic-observability-900-fixes] \ No newline at end of file diff --git a/release-notes/elastic-security.md b/release-notes/elastic-security.md new file mode 100644 index 0000000000..519295a559 --- /dev/null +++ b/release-notes/elastic-security.md @@ -0,0 +1,28 @@ +--- +navigation_title: "Elastic Security" +mapped_pages: + - https://www.elastic.co/guide/en/security/master/release-notes-header-9.0.0.html +--- +# Elastic Security release notes [elastic-security-release-notes] + +Review the changes, fixes, and more in each version of Elastic Security. + +To check for security updates, go to [Security announcements for the Elastic stack](https://discuss.elastic.co/c/announcements/security-announcements/31). + +% Release notes include only features, enhancements, and fixes. Add breaking changes, deprecations, and known issues to the applicable release notes sections. + +% ## version.next [elastic-security-next-release-notes] +% **Release date:** Month day, year + +% ### Features and enhancements [elastic-security-next-features-enhancements] +% * + +% ### Fixes [elastic-security-next-fixes] +% * + +## 9.0.0 [elastic-security-900-release-notes] +**Release date:** March 25, 2025 + +### Features and enhancements [elastic-security-900-features-enhancements] + +### Fixes [elastic-security-900-fixes] diff --git a/release-notes/fleet-elastic-agent.md b/release-notes/fleet-elastic-agent.md new file mode 100644 index 0000000000..95e0db2d33 --- /dev/null +++ b/release-notes/fleet-elastic-agent.md @@ -0,0 +1,38 @@ +--- +navigation_title: "Fleet and Elastic Agent" +mapped_pages: + - https://www.elastic.co/guide/en/fleet/current/release-notes.html +--- + +# Fleet and Elastic Agent release notes [fleet-elastic-agent-release-notes] + +Review the changes, fixes, and more in each version of Fleet and Elastic Agent. + +To check for security updates, go to [Security announcements for the Elastic stack](https://discuss.elastic.co/c/announcements/security-announcements/31). + +Elastic Agent integrates and manages Beats for data collection, and Beats changes may impact Elastic Agent functionality. To check for Elastic Agent changes in Beats, go to [{{beats}} release notes](beats://docs/release-notes/index.md). + +% Release notes include only features, enhancements, and fixes. Add breaking changes, deprecations, and known issues to the applicable release notes sections. +% For each new version section, include the Fleet and Elastic Agent and Kibana changes. 
+

% ## version.next [fleet-elastic-agent-next-release-notes]
% **Release date:** Month day, year

% ### Features and enhancements [fleet-elastic-agent-next-features-enhancements]
% *

% ### Fixes [fleet-elastic-agent-next-fixes]
% *


## 9.0.0 [fleet-elastic-agent-900-release-notes]
**Release date:** March 25, 2025

### Features and enhancements [fleet-elastic-agent-900-features-enhancements]

### Fixes [fleet-elastic-agent-900-fixes]




diff --git a/release-notes/index.md b/release-notes/index.md new file mode 100644 index 0000000000..b7b7e56519 --- /dev/null +++ b/release-notes/index.md @@ -0,0 +1,16 @@ +# Release notes

% What needs to be done: Write from scratch

% GitHub issue: https://github.com/elastic/docs-projects/issues/316

Learn about the latest changes, issues, fixes, and deprecations for Elastic releases by reviewing the release notes.

For information about the latest changes to Elastic APIs, check the [APIs changelog](https://www.elastic.co/docs/api/changes).

We recommend you upgrade to the latest Elastic version. To learn how to upgrade, check out .

Additional resources:
* Join the [Elastic community forums](https://discuss.elastic.co/)
* Check out the [Elastic Blog](https://www.elastic.co/blog)
* Reach out to [Elastic Support](mailto:support@elastic.co)

diff --git a/release-notes/known-issues/apm.md b/release-notes/known-issues/apm.md new file mode 100644 index 0000000000..f168fa1ff5 --- /dev/null +++ b/release-notes/known-issues/apm.md @@ -0,0 +1,320 @@ +---
+mapped_pages:
+  - https://www.elastic.co/guide/en/observability/current/apm-known-issues.html
+
+navigation_title: "Elastic APM"
+---

# Elastic APM known issues [apm-known-issues]

% Use the following template to add entries to this page.

% :::{dropdown} Title of known issue
% **Details**
% On [Month/Day/Year], a known issue was discovered that [description of known issue].

% **Workaround**
% Workaround description.

% **Resolved**
% On [Month/Day/Year], this issue was resolved.

% :::

APM has the following known issues:

## `prefer_ilm` required in component templates to create custom lifecycle policies [_prefer_ilm_required_in_component_templates_to_create_custom_lifecycle_policies]

*Elastic Stack versions: 8.15.1+*

The issue occurs when creating a *new* cluster using version 8.15.1+ and affects any APM data streams created in 8.15.1+. The issue does *not* occur if a custom component template was created in or before version 8.15.0.

In 8.15.0, APM Server began using the [apm-data plugin](https://github.com/elastic/elasticsearch/tree/main/x-pack/plugin/apm-data) to manage data streams, ingest pipelines, lifecycle policies, and more. In 8.15.1, a fix was introduced to address unmanaged indices in older clusters using default ILM policies. This fix added a fallback to the default ILM policy (if it exists) and set the `prefer_ilm` configuration to `false`. This setting impacts clusters where both ILM and data stream lifecycles (DSL) are in effect, such as when configuring custom ILM policies using `@custom` component templates under the conditions mentioned above.

To override ILM policies for these new clusters using a component template, set the `prefer_ilm` configuration to `true` by following the [updated guide to customize ILM](/solutions/observability/apps/index-lifecycle-management.md).
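
For example, a `@custom` component template along the following lines applies a custom policy to the APM traces data stream and prefers ILM over the data stream lifecycle. This is a minimal sketch, not taken from the linked guide; `my-custom-apm-policy` is a placeholder for your own ILM policy, and each APM data stream has its own `@custom` component template name:

```txt
PUT _component_template/traces-apm@custom
{
  "template": {
    "settings": {
      "index.lifecycle.name": "my-custom-apm-policy",
      "index.lifecycle.prefer_ilm": true
    }
  }
}
```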
+ + +## Upgrading to v8.15.x may cause ingestion to fail [_upgrading_to_v8_15_x_may_cause_ingestion_to_fail] + +*Elastic Stack versions: 8.15.0, 8.15.1, 8.15.2, 8.15.3*
*Fixed in Elastic Stack version 8.15.4*

The issue only occurs when *upgrading* the {{stack}} from 8.12.2 or lower directly to any 8.15.x version prior to 8.15.4. The issue does *not* occur when creating a *new* cluster using any 8.15.x version, or when upgrading from 8.12.2 to 8.13.x or 8.14.x and then to 8.15.x.

In APM Server versions prior to 8.13.0, an ingest pipeline performed a version check. This check rejected any APM document produced by an APM Server version that differed from the version of the installed APM ingest pipeline. In 8.13.0, the version check was removed from the ingest pipeline. Due to the combination of an internal change in how APM data management assets are set up from 8.15 onwards and an Elasticsearch bug related to [lazy rollover of data streams](https://github.com/elastic/elasticsearch/issues/112781), the ingest pipeline that performs the version check is not removed on upgrade and blocks the ingestion of data.

If the deployment is running 8.15.0, upgrade the deployment to 8.15.1 or above. A manual rollover of all APM data streams is required to pick up the new index templates and remove the faulty ingest pipeline version check. Send the following requests to Elasticsearch (they assume the `default` namespace is used; adjust if necessary):

```txt
POST /traces-apm-default/_rollover
POST /traces-apm.rum-default/_rollover
POST /logs-apm.error-default/_rollover
POST /logs-apm.app-default/_rollover
POST /metrics-apm.app-default/_rollover
POST /metrics-apm.internal-default/_rollover
POST /metrics-apm.service_destination.1m-default/_rollover
POST /metrics-apm.service_destination.10m-default/_rollover
POST /metrics-apm.service_destination.60m-default/_rollover
POST /metrics-apm.service_summary.1m-default/_rollover
POST /metrics-apm.service_summary.10m-default/_rollover
POST /metrics-apm.service_summary.60m-default/_rollover
POST /metrics-apm.service_transaction.1m-default/_rollover
POST /metrics-apm.service_transaction.10m-default/_rollover
POST /metrics-apm.service_transaction.60m-default/_rollover
POST /metrics-apm.transaction.1m-default/_rollover
POST /metrics-apm.transaction.10m-default/_rollover
POST /metrics-apm.transaction.60m-default/_rollover
```


## Upgrading to v8.15.0 may cause APM indices to lose their lifecycle policy [_upgrading_to_v8_15_0_may_cause_apm_indices_to_lose_their_lifecycle_policy]

*Elastic Stack versions: 8.15.0*<br>
*Fixed in Elastic Stack version 8.15.1*

The issue only occurs when *upgrading* the {{stack}} to 8.15.0. The issue does *not* occur when creating a *new* cluster using 8.15.0. The issue also does not occur if a custom ILM policy is configured using a custom component template.

In 8.15.0, APM Server switched to using data stream lifecycles to manage data retention for APM indices, both for new deployments and for upgraded deployments with default lifecycle configurations. However, because any data stream created before 8.15.0 has no data stream lifecycle configuration, such existing data streams with default lifecycle configurations become unmanaged.

Upgrading to 8.15.1 resolves the lifecycle issue for any new indices created for APM data streams. However, indices created in version 8.15.0 will remain unmanaged if the default ILM policy is in place. To fix these unmanaged indices, consider one of the following approaches:

1. Manually delete the unmanaged indices when they are no longer needed.
2. Explicitly configure APM data streams to use the default data stream lifecycle configuration. This approach migrates all affected data streams to use data stream lifecycles, maintaining behavior equivalent to the default ILM policies. Apply this fix only to data streams that have unmanaged indices due to missing default ILM policies.

    ```txt
    PUT _data_stream/{{data_stream_name}}-{{data_stream_namespace}}/_lifecycle
    {
      "data_retention": "<desired retention period>"
    }
    ```


The default `<desired retention period>` for each data stream is available in [this guide](/solutions/observability/apps/index-lifecycle-management.md).

This issue is fixed in 8.15.1 ([elastic/elasticsearch#112432](https://github.com/elastic/elasticsearch/pull/112432)).
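
To check whether a backing index is still managed, you can compare its ILM and data stream lifecycle status. This is a minimal sketch assuming the `default` namespace and the traces data stream; adjust the index pattern as needed. Backing indices reported as managed by neither ILM nor a data stream lifecycle are affected by this issue:

```txt
GET .ds-traces-apm-default-*/_ilm/explain
GET .ds-traces-apm-default-*/_lifecycle/explain
```


## Upgrading to v8.13.0 to v8.13.2 breaks APM anomaly rules [broken-apm-anomaly-rule]

*Elastic Stack versions: 8.13.0, 8.13.1, 8.13.2*<br>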
*Fixed in Elastic Stack version 8.13.3*

This issue occurs when upgrading the Elastic Stack to version 8.13.0, 8.13.1, or 8.13.2. This issue may go unnoticed unless you actively monitor your {{kib}} logs. The following log indicates the presence of this issue:

```shell
"params invalid: [anomalyDetectorTypes]: expected value of type [array] but got [undefined]"
```

This issue occurs because a non-optional parameter, `anomalyDetectorTypes`, was added in 8.13.0 without an accompanying automatic migration script. This breaks pre-existing rules, which lack this parameter and therefore fail validation. This issue is fixed in v8.13.3.

There are three ways to fix this error:

* Upgrade to version 8.13.3
* Fix broken anomaly rules in the APM UI (no upgrade required)
* Fix broken anomaly rules with Kibana APIs (no upgrade required)

**Fix broken anomaly rules in the APM UI**

1. From any APM page in Kibana, select **Alerts and rules** → **Manage rules**.
2. Filter your rules by setting **Type** to **APM Anomaly**.
3. For each anomaly rule in the list, select the pencil icon to edit the rule.
4. Add one or more **DETECTOR TYPES** to the rule.

    The detector type determines when the anomaly rule triggers. For example, a latency anomaly rule will trigger when the latency of the service being monitored is abnormal. Supported detector types are `latency`, `throughput`, and `failed transaction rate`.

5. Click **Save**.

**Fix broken anomaly rules with Kibana APIs**

1. Find broken rules

    :::::{admonition}
    To identify rules in this exact state, you can use the [find rules endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) and search for the APM anomaly rule type as well as this exact error message indicating that the rule is in the broken state. We will also use the `fields` parameter to specify only the fields required when making the update request later.

    * `search_fields=alertTypeId`
    * `search=apm.anomaly`
    * `filter=alert.attributes.executionStatus.error.message:"params invalid: [anomalyDetectorTypes]: expected value of type [array] but got [undefined]"`
    * `fields=[id, name, actions, tags, schedule, notify_when, throttle, params]`

    The encoded request might look something like this:

    ```shell
    curl -u "$KIBANA_USER":"$KIBANA_PASSWORD" "$KIBANA_URL/api/alerting/rules/_find?search_fields=alertTypeId&search=apm.anomaly&filter=alert.attributes.executionStatus.error.message%3A%22params%20invalid%3A%20%5BanomalyDetectorTypes%5D%3A%20expected%20value%20of%20type%20%5Barray%5D%20but%20got%20%5Bundefined%5D%22&fields=id&fields=name&fields=actions&fields=tags&fields=schedule&fields=notify_when&fields=throttle&fields=params"
    ```

    ::::{dropdown} Example result:
    ```json
    {
      "page": 1,
      "total": 1,
      "per_page": 10,
      "data": [
        {
          "id": "d85e54de-f96a-49b5-99d4-63956f90a6eb",
          "name": "APM Anomaly Jason Test FAILING [2]",
          "tags": [
            "test",
            "jasonrhodes"
          ],
          "throttle": null,
          "schedule": {
            "interval": "1m"
          },
          "params": {
            "windowSize": 30,
            "windowUnit": "m",
            "anomalySeverityType": "warning",
            "environment": "ENVIRONMENT_ALL"
          },
          "notify_when": null,
          "actions": []
        }
      ]
    }
    ```

    ::::


    :::::

2. Prepare the update JSON doc(s)

    ::::{admonition}
    For each broken rule found, create a JSON rule document with what was returned from the API in the previous step. You will need to make two changes to each document:

    1. <br>
Remove the `id` key but keep the value connected to this document (e.g. rename the file to `{{id}}.json`). **The `id` cannot be sent as part of the request body for the PUT request, but you will need it for the URL path.** + 2. Add the `"anomalyDetectorTypes"` to the `"params"` block, using the default value as seen below to mimic the pre-8.13 behavior: + + ```json + { + "params": { + // ... other existing params should stay here, + // with the required one added to this object + "anomalyDetectorTypes": [ + "txLatency", + "txThroughput", + "txFailureRate" + ] + } + } + ``` + + + :::: + +3. Update each rule using the `PUT /api/alerting/rule/{{id}}` API + + ::::{admonition} + For each rule, submit a PUT request to the [update rule endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) using that rule’s ID and its stored update document from the previous step. For example, assuming the first broken rule’s ID is `046c0d4f`: + + ```shell + curl -u "$KIBANA_USER":"$KIBANA_PASSWORD" -XPUT "$KIBANA_URL/api/alerting/rule/046c0d4f" -H 'Content-Type: application/json' -H 'kbn-xsrf: rule-update' -d @046c0d4f.json + ``` + + Once the PUT request executes successfully, the rule will no longer be broken. + + :::: + + + +## Upgrading APM Server to 8.11+ might break event intake from older APM Java agents [apm-empty-metricset-values] + +*APM Server versions: >=8.11.0*
*Elastic APM Java agent versions: < 1.43.0*

If you are using APM Server (v8.11.0 or later) and the Elastic APM Java agent (earlier than v1.43.0), the agent may be sending empty histogram metricsets.

In previous APM Server versions, some data validation was not properly applied, leading APM Server to accept empty histogram metricsets where it shouldn't. This validation gap was closed in APM Server 8.11.0.

The APM Java agent (< v1.43.0) was sending this kind of invalid data under certain circumstances. If you upgrade APM Server to v8.11.0+ *without* upgrading the APM Java agent, metricsets can be rejected by APM Server, resulting in additional error logs in the Java agent.

The fix is to upgrade the Elastic APM Java agent to version 1.43.0 or later. Find details in [elastic/apm-data#157](https://github.com/elastic/apm-data/pull/157).


## traces-apm@custom ingest pipeline applied to certain data streams unintentionally [_traces_apmcustom_ingest_pipeline_applied_to_certain_data_streams_unintentionally]

*APM Server versions: 8.12.0*<br>
+ +If you’re using the Elastic APM Server v8.12.0, the `traces-apm@custom` ingest pipeline is now additionally applied to data streams `traces-apm.sampled-*` and `traces-apm.rum-*`, and applied twice for `traces-apm-*`. This bug impacts users with a non-empty `traces-apm@custom` ingest pipeline. + +If you rely on this unintended behavior in 8.12.0, please rename your pipeline to `traces-apm.integration@custom` to preserve this behavior in later versions. + +A fix was released in 8.12.1: [elastic/kibana#175448](https://github.com/elastic/kibana/pull/175448). + + +## Ingesting new JVM metrics in 8.9 and 8.10 breaks upgrade to 8.11 and stops ingestion [_ingesting_new_jvm_metrics_in_8_9_and_8_10_breaks_upgrade_to_8_11_and_stops_ingestion] + +*APM Server versions: 8.11.0, 8.11.1*
*Elastic APM Java agent versions: 1.39.0+*

If you're using the Elastic APM Java agent v1.39.0+ to send new JVM metrics to APM Server v8.9.x or v8.10.x, upgrading to 8.11.0 or 8.11.1 will silently stop the ingestion of APM metrics.

After upgrading, you will see the following errors:

* APM Server error logs:

    ```txt
    failed to index document in 'metrics-apm.internal-default' (fail_processor_exception): Document produced by APM Server v8.11.1, which is newer than the installed APM integration (v8.10.3-preview-1695284222). The APM integration must be upgraded.
    ```

* Fleet error on integration package upgrade:

    ```txt
    Failed installing package [apm] due to error: [ResponseError: mapper_parsing_exception
    Root causes:
    mapper_parsing_exception: Field [jvm.memory.non_heap.pool.committed] attempted to shadow a time_series_metric]
    ```


A fix was released in 8.11.2: [elastic/kibana#171712](https://github.com/elastic/kibana/pull/171712).


## APM integration package upgrade through Fleet causes excessive data stream rollovers [_apm_integration_package_upgrade_through_fleet_causes_excessive_data_stream_rollovers]

*APM Server versions: <= 8.12.1*

If you're upgrading the APM integration package to any version <= 8.12.1, in some rare cases the upgrade fails with a mapping conflict error. The upgrade process keeps rolling over the data stream in an unsuccessful attempt to work around the error. As a result, many empty backing indices for APM data streams are created.

During upgrade, you will see errors similar to the one below:

* Fleet error on integration package upgrade:

    ```txt
    Mappings update for metrics-apm.service_destination.10m-default failed due to ResponseError: illegal_argument_exception
    Root causes:
    illegal_argument_exception: Mapper for [metricset.interval] conflicts with existing mapper:
    Cannot update parameter [value] from [10m] to [null]
    ```


A fix was released in 8.12.2: [elastic/apm-server#12219](https://github.com/elastic/apm-server/pull/12219).


## Performance regression: APM issues too many small bulk requests for Elasticsearch output [_performance_regression_apm_issues_too_many_small_bulk_requests_for_elasticsearch_output]

*APM Server versions: >=8.13.0, <= 8.14.2*<br>
+

If you're on APM Server version >=8.13.0, <= 8.14.2, use the Elasticsearch output, do not specify any `output.elasticsearch.flush_bytes`, and do not explicitly disable compression by setting `output.elasticsearch.compression_level` to `0`, then APM Server will issue smaller bulk requests of 24KB, and more bulk requests will need to be made to maintain the original throughput. This causes Elasticsearch to experience higher load, and APM Server may exhibit Elasticsearch backpressure symptoms.

This happens because a performance regression reduced the default value of the bulk indexer flush bytes from 1MB to 24KB.

Affected APM Servers will emit the following log:

```txt
flush_bytes config value is too small (0) and might be ignored by the indexer, increasing value to 24576
```

To work around the issue, modify the Elasticsearch output configuration in APM.

* For the APM Server binary

  * In `apm-server.yml`, set `output.elasticsearch.flush_bytes: 1mib`.

* For Fleet-managed APM (non-Elastic Cloud)

  * In Fleet, open the Settings tab.
  * Under Outputs, identify the Elasticsearch output that receives data from APM, and select the edit icon.
  * In the Edit output flyout, in the "Advanced YAML configuration" field, add the line `flush_bytes: 1mib`.

* For Elastic Cloud

  * It is not possible to edit the Fleet "Elastic Cloud internal output".


A fix will be released in 8.14.3: [elastic/apm-server#13576](https://github.com/elastic/apm-server/pull/13576). \ No newline at end of file diff --git a/release-notes/known-issues/fleet.md b/release-notes/known-issues/fleet.md new file mode 100644 index 0000000000..960783fdf1 --- /dev/null +++ b/release-notes/known-issues/fleet.md @@ -0,0 +1,21 @@ +---
+navigation_title: "Fleet and Elastic Agent"
+---

# Fleet and Elastic Agent known issues [fleet-agent-known-issues]

% What needs to be done: Write from scratch

% Use the following template to add entries to this page.

% :::{dropdown} Title of known issue
% **Details**
% On [Month/Day/Year], a known issue was discovered that [description of known issue].

% **Workaround**
% Workaround description.

% **Resolved**
% On [Month/Day/Year], this issue was resolved.

% ::: \ No newline at end of file diff --git a/release-notes/known-issues/index.md b/release-notes/known-issues/index.md new file mode 100644 index 0000000000..9419421fb1 --- /dev/null +++ b/release-notes/known-issues/index.md @@ -0,0 +1,5 @@ +# Known issues

% GitHub issue: https://github.com/elastic/docs-projects/issues/317

Known issues are significant problems or defects in the software that development teams are actively working on. You should review known issues when making important decisions, such as upgrading to a newer version of the software. \ No newline at end of file diff --git a/release-notes/known-issues/observability.md b/release-notes/known-issues/observability.md new file mode 100644 index 0000000000..5744e07231 --- /dev/null +++ b/release-notes/known-issues/observability.md @@ -0,0 +1,22 @@ +---
+navigation_title: "Elastic Observability"
+---

# Elastic Observability known issues [elastic-observability-known-issues]

% What needs to be done: Write from scratch

% Use the following template to add entries to this page.

% :::{dropdown} Title of known issue
% **Details**
% On [Month/Day/Year], a known issue was discovered that [description of known issue].

% **Workaround**
% Workaround description.

% **Resolved**
% On [Month/Day/Year], this issue was resolved.
+

% :::

diff --git a/release-notes/known-issues/search-ui.md b/release-notes/known-issues/search-ui.md new file mode 100644 index 0000000000..c446c1a3c2 --- /dev/null +++ b/release-notes/known-issues/search-ui.md @@ -0,0 +1,24 @@ +---
+mapped_pages:
+  - https://www.elastic.co/guide/en/search-ui/current/known-issues.html
+
+navigation_title: "Search UI"
+---

# Search UI known issues [search-ui-known-issues]

% Use the following template to add entries to this page.

% :::{dropdown} Title of known issue
% **Details**
% On [Month/Day/Year], a known issue was discovered that [description of known issue].

% **Workaround**
% Workaround description.

% **Resolved**
% On [Month/Day/Year], this issue was resolved.

% :::

* When using the **Elasticsearch connector**, Search UI does not render nested objects. \ No newline at end of file diff --git a/release-notes/known-issues/security.md b/release-notes/known-issues/security.md new file mode 100644 index 0000000000..495db57eb4 --- /dev/null +++ b/release-notes/known-issues/security.md @@ -0,0 +1,32 @@ +---
+mapped_pages:
+  - https://www.elastic.co/guide/en/security/master/release-notes-header-9.0.0.html#known-issue-9.0.0
+
+navigation_title: "Elastic Security"
+---

# Elastic Security known issues [elastic-security-known-issues]

% Use the following template to add entries to this page.

% :::{dropdown} Title of known issue
% **Details**
% On [Month/Day/Year], a known issue was discovered that [description of known issue].

% **Workaround**
% Workaround description.

% **Resolved**
% On [Month/Day/Year], this issue was resolved.

% :::

% What needs to be done: Write from scratch

## 9.0.0 [known-issues]

:::{dropdown} Duplicate alerts can be produced from manually running threshold rules
**Details**
On November 12, 2024, it was discovered that manually running threshold rules could produce duplicate alerts if the date range was already covered by a scheduled rule execution.
:::

diff --git a/release-notes/known-issues/serverless.md b/release-notes/known-issues/serverless.md new file mode 100644 index 0000000000..b7bec5b083 --- /dev/null +++ b/release-notes/known-issues/serverless.md @@ -0,0 +1,3 @@ +# Serverless known issues

% What needs to be done: Write from scratch \ No newline at end of file diff --git a/release-notes/toc.yml b/release-notes/toc.yml new file mode 100644 index 0000000000..3bcdfff7e1 --- /dev/null +++ b/release-notes/toc.yml @@ -0,0 +1,33 @@ +toc:
  - file: index.md
    children:
      - file: known-issues/index.md
        children:
          - file: known-issues/fleet.md
          - file: known-issues/observability.md
            children:
              - file: known-issues/apm.md
          - file: known-issues/security.md
          - file: known-issues/search-ui.md
          - file: known-issues/serverless.md
      - file: breaking-changes/index.md
        children:
          - file: breaking-changes/fleet-elastic-agent.md
          - file: breaking-changes/elastic-observability.md
            children:
              - file: breaking-changes/elastic-apm.md
          - file: breaking-changes/elastic-security.md
      - file: deprecations/index.md
        children:
          - file: deprecations/fleet-elastic-agent.md
          - file: deprecations/elastic-cloud-serverless.md
          - file: deprecations/elastic-observability.md
            children:
              - file: deprecations/elastic-apm.md
          - file: deprecations/elastic-security.md
      - file: fleet-elastic-agent.md
      - file: elastic-cloud-serverless.md
      - file: elastic-observability.md
        children:
          - file: elastic-apm.md
      - file: elastic-security.md