Merged
16 changes: 8 additions & 8 deletions deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md
@@ -73,12 +73,12 @@ These permissions are needed to manage each {{stack}} application. For example,

| Name | API group | Optional? |
| --- | --- | --- |
| `Elasticsearch<br>Elasticsearch/status<br>Elasticsearch/finalizers` | `elasticsearch.k8s.elastic.co` | no |
| `Kibana<br>Kibana/status<br>Kibana/finalizers` | `kibana.k8s.elastic.co` | no |
| `APMServer<br>APMServer/status<br>APMServer/finalizers` | `apm.k8s.elastic.co` | no |
| `EnterpriseSearch<br>EnterpriseSearch/status<br>EnterpriseSearch/finalizers` | `enterprisesearch.k8s.elastic.co` | no |
| `Beat<br>Beat/status<br>Beat/finalizers` | `beat.k8s.elastic.co` | no |
| `Agent<br>Agent/status<br>Agent/finalizers` | `agent.k8s.elastic.co` | no |
| `ElasticMapsServer<br>ElasticMapsServer/status<br>ElasticMapsServer/finalizers` | `maps.k8s.elastic.co` | no |
| `Logstash<br>Logstash/status<br>Logstash/finalizers` | `logstashes.k8s.elastic.co` | no |
| `Elasticsearch`<br>`Elasticsearch/status`<br>`Elasticsearch/finalizers` | `elasticsearch.k8s.elastic.co` | no |
| `Kibana`<br>`Kibana/status`<br>`Kibana/finalizers` | `kibana.k8s.elastic.co` | no |
| `APMServer`<br>`APMServer/status`<br>`APMServer/finalizers` | `apm.k8s.elastic.co` | no |
| `EnterpriseSearch`<br>`EnterpriseSearch/status`<br>`EnterpriseSearch/finalizers` | `enterprisesearch.k8s.elastic.co` | no |
| `Beat`<br>`Beat/status`<br>`Beat/finalizers` | `beat.k8s.elastic.co` | no |
| `Agent`<br>`Agent/status`<br>`Agent/finalizers` | `agent.k8s.elastic.co` | no |
| `ElasticMapsServer`<br>`ElasticMapsServer/status`<br>`ElasticMapsServer/finalizers` | `maps.k8s.elastic.co` | no |
| `Logstash`<br>`Logstash/status`<br>`Logstash/finalizers` | `logstashes.k8s.elastic.co` | no |
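
For example, the permissions in the table above translate into ClusterRole rules along these lines (a hedged sketch; the verbs and the `elastic-operator-example` name are illustrative assumptions, not taken from the ECK manifests):

```yaml
# Illustrative sketch only: ClusterRole rules granting the operator access
# to two of the custom resources listed above. The verb list is an
# assumption; refer to the official ECK operator manifests for the
# authoritative definition.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-operator-example
rules:
  - apiGroups: ["elasticsearch.k8s.elastic.co"]
    resources: ["elasticsearches", "elasticsearches/status", "elasticsearches/finalizers"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["kibana.k8s.elastic.co"]
    resources: ["kibanas", "kibanas/status", "kibanas/finalizers"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```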

@@ -204,7 +204,6 @@ $$$azure-integration-azure-tenant-info$$$What Azure tenant information does Elas

* Data defined in the marketplace [Saas fulfillment Subscription APIs](https://docs.microsoft.com/en-us/azure/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api).
* The following additional data:

* Marketplace subscription ID
* Marketplace plan ID
* Azure Account ID
@@ -222,22 +221,19 @@ $$$azure-integration-cli-api$$$What other methods are available to deploy {{es}}
: Use any of the following methods:

* **Deploy using Azure tools**

* The Azure console
* [Azure Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/elastic_cloud_elasticsearch)
* The [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/elastic?view=azure-cli-latest)
* The Azure [REST API](https://docs.microsoft.com/en-us/rest/api/elastic)
* [PowerShell](https://docs.microsoft.com/en-us/powershell/module/az.elastic/?view=azps-8.0.0#elastic)

* **Deploy using official Azure SDKs**

* [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/README.md)
* [Java](https://github.com/Azure/azure-sdk-for-java/blob/azure-resourcemanager-elastic_1.0.0-beta.1/README.md)
* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/README.md)
* [Rust](https://github.com/Azure/azure-sdk-for-rust/blob/main/services/README.md)

* **Deploy using {{ecloud}}**

* The {{ecloud}} [console](https://cloud.elastic.co?page=docs&placement=docs-body)
* The {{ecloud}} [REST API](cloud://reference/cloud-hosted/ec-api-restful.md)
* The {{ecloud}} [command line tool](ecctl://reference/index.md)
@@ -248,7 +244,6 @@ $$$azure-integration-cli-api$$$What other methods are available to deploy {{es}}

$$$azure-integration-migrate$$$How do I migrate my data from the classic Azure marketplace account to the native integration?
: First create a new account configured with {{ecloud}} Azure Native ISV Service, then perform the migration as follows:

1. From your classic Azure marketplace account, navigate to the deployment and [configure a custom snapshot repository using Azure Blob Storage](../../tools/snapshot-and-restore/ec-azure-snapshotting.md).
2. Using the newly configured snapshot repository, [create a snapshot](../../tools/snapshot-and-restore/create-snapshots.md) of the data to migrate.
3. Navigate to Azure and log in as the user that manages the {{es}} resources.
79 changes: 65 additions & 14 deletions reference/fleet/add_kubernetes_metadata-processor.md
@@ -97,18 +97,72 @@ This configuration disables the default indexers and matchers, and then enables
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/fleet/agent-processors.md#limitations)
::::

`host`
: (Optional) Node to scope {{agent}} to in case it cannot be accurately detected, as when running {{agent}} in host network mode.

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `host` | No | | Node to scope {{agent}} to in case it cannot be accurately detected, as when running {{agent}} in host network mode. |
| `scope` | No | `node` | Whether the processor should have visibility at the node level (`node`) or at the entire cluster level (`cluster`). |
| `namespace` | No | | Namespace to collect the metadata from. If no namespaces is specified, collects metadata from all namespaces. |
| `add_resource_metadata` | No | | Filters and configuration for adding extra metadata to the event. This setting accepts the following settings:<br><br>* `node` or `namespace`: Labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included, but annotations are not. To change the default behavior, you can set `include_labels`, `exclude_labels`, and `include_annotations`. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`. To turn off enrichment of `node` or `namespace` metadata individually, set `enabled: false`.<br>* `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is not added by default. To enable this behavior, set `deployment: true`.<br>* `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is not added by default. To enable this behavior, set `cronjob: true`.<br><br>::::{dropdown} Expand this to see an example<br>```yaml<br> add_resource_metadata:<br> namespace:<br> include_labels: ["namespacelabel1"]<br> # use_regex_include: false<br> # use_regex_exclude: false<br> # exclude_labels: ["namespacelabel2"]<br> #labels.dedot: true<br> #annotations.dedot: true<br> node:<br> # use_regex_include: false<br> include_labels: ["nodelabel2"]<br> include_annotations: ["nodeannotation1"]<br> # use_regex_exclude: false<br> # exclude_annotations: ["nodeannotation2"]<br> #labels.dedot: true<br> #annotations.dedot: true<br> deployment: true<br> cronjob: true<br>```<br><br>::::<br><br> |
| `kube_config` | No | `KUBECONFIG` environment variable, if present | Config file to use as the configuration for the Kubernetes client. |
| `kube_client_options` | No | | Additional configuration options for the Kubernetes client. Currently client QPS and burst are supported. If this setting is not configured, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) is used.<br><br>::::{dropdown} Expand this to see an example<br>```yaml<br> kube_client_options:<br> qps: 5<br> burst: 10<br>```<br><br>::::<br><br> |
| `cleanup_timeout` | No | `60s` | Time of inactivity before stopping the running configuration for a container. |
| `sync_period` | No | | Timeout for listing historical resources. |
| `labels.dedot` | No | `true` | Whether to replace dots (`.`) in labels with underscores (`_`).<br>`annotations.dedot` |
`scope`
: (Optional) Whether the processor should have visibility at the node level (`node`) or at the entire cluster level (`cluster`).

**Default**: `node`

`namespace`
: (Optional) Namespace to collect the metadata from. If no namespace is specified, collects metadata from all namespaces.

`add_resource_metadata`
: (Optional) Filters and configuration for adding extra metadata to the event. This setting accepts the following settings:
* `node` or `namespace`: Labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included, but annotations are not. To change the default behavior, you can set `include_labels`, `exclude_labels`, and `include_annotations`. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`. To turn off enrichment of `node` or `namespace` metadata individually, set `enabled: false`.
* `deployment`: If the resource is `pod` and it is created from a `deployment`, the deployment name is not added by default. To enable this behavior, set `deployment: true`.
* `cronjob`: If the resource is `pod` and it is created from a `cronjob`, the cronjob name is not added by default. To enable this behavior, set `cronjob: true`.

::::{dropdown} Expand this to see an example
```yaml
add_resource_metadata:
namespace:
include_labels: ["namespacelabel1"]
# use_regex_include: false
# use_regex_exclude: false
# exclude_labels: ["namespacelabel2"]
#labels.dedot: true
#annotations.dedot: true
node:
# use_regex_include: false
include_labels: ["nodelabel2"]
include_annotations: ["nodeannotation1"]
# use_regex_exclude: false
# exclude_annotations: ["nodeannotation2"]
#labels.dedot: true
#annotations.dedot: true
deployment: true
cronjob: true
```
::::

`kube_config`
: (Optional) Config file to use as the configuration for the Kubernetes client.

**Default**: the `KUBECONFIG` environment variable, if present

`kube_client_options`
: (Optional) Additional configuration options for the Kubernetes client. Currently client QPS and burst are supported. If this setting is not configured, the Kubernetes client’s [default QPS and burst](https://pkg.go.dev/k8s.io/client-go/rest#pkg-constants) is used.

::::{dropdown} Expand this to see an example
```yaml
kube_client_options:
qps: 5
burst: 10
```
::::

`cleanup_timeout`
: (Optional) Time of inactivity before stopping the running configuration for a container.

**Default**: `60s`

`sync_period`
: (Optional) Timeout for listing historical resources.

`labels.dedot`
: (Optional) Whether to replace dots (`.`) in labels with underscores (`_`).

**Default**: `true`

`annotations.dedot`
: (Optional) Whether to replace dots (`.`) in annotations with underscores (`_`).

**Default**: `true`
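
Putting several of these settings together, a minimal `add_kubernetes_metadata` processor configuration might look like this (a sketch; the namespace value and the chosen settings are illustrative assumptions):

```yaml
- add_kubernetes_metadata:
    # Scope metadata lookups to a single node (the default).
    scope: node
    # Only collect metadata for Pods in this namespace.
    namespace: production
    add_resource_metadata:
      # Also record the owning Deployment and CronJob names on Pod events.
      deployment: true
      cronjob: true
    # Stop the running configuration for a container after 60s of inactivity.
    cleanup_timeout: 60s
```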


## Indexers and matchers [kubernetes-indexers-and-matchers]
@@ -194,12 +248,9 @@ Available matchers are:

`resource_type`
: (Optional) Type of the resource to obtain the ID of. Valid `resource_type`:

* `pod`: to make the lookup based on the Pod UID. When `resource_type` is set to `pod`, `logs_path` must be set as well. Supported paths in this case:

    * `/var/lib/kubelet/pods/`: used to read logs from volumes mounted into the Pod; those logs end up under `/var/lib/kubelet/pods/<pod UID>/volumes/<volume name>/...`. To use `/var/lib/kubelet/pods/` as a `logs_path`, `/var/lib/kubelet/pods` must be mounted into the filebeat Pods.
    * `/var/log/pods/`: note that when using `resource_type: 'pod'`, logs are enriched only with Pod metadata (Pod ID, Pod name, and so on), not container metadata.

* `container`: to make the lookup based on the container ID. `logs_path` must be set to `/var/log/containers/`. `resource_type` defaults to `container`.
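
For example, combining `resource_type` with a `logs_path` matcher might look like this (a sketch; the layout follows the indexer and matcher configuration format shown earlier on this page):

```yaml
- add_kubernetes_metadata:
    indexers:
      - container:
    matchers:
      - logs_path:
          # Look up container metadata from the standard container log path.
          logs_path: '/var/log/containers/'
          resource_type: 'container'
```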


59 changes: 49 additions & 10 deletions reference/fleet/add_nomad_metadata-processor.md
@@ -35,17 +35,56 @@ Each event is annotated with the following information:
{{agent}} processors execute *before* ingest pipelines, which means that they process the raw event data rather than the final event sent to {{es}}. For related limitations, refer to [What are some limitations of using processors?](/reference/fleet/agent-processors.md#limitations)
::::

`address`
: (Optional) URL of the agent API used to request the metadata.

| Name | Required | Default | Description |
| --- | --- | --- | --- |
| `address` | No | `http://127.0.0.1:4646` | URL of the agent API used to request the metadata. |
| `namespace` | No | | Namespace to watch. If set, only events for allocations in this namespace are annotated. |
| `region` | No | | Region to watch. If set, only events for allocations in this region are annotated. |
| `secret_id` | No | | SecretID to use when connecting with the agent API. This is an example ACL policy to apply to the token.<br><br>```json<br>namespace "*" {<br> policy = "read"<br>}<br>node {<br> policy = "read"<br>}<br>agent {<br> policy = "read"<br>}<br>```<br> |
| `refresh_interval` | No | `30s` | Interval used to update the cached metadata. |
| `cleanup_timeout` | No | `60s` | Time to wait before cleaning up an allocation’s associated resources after it has been removed.This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs. |
| `scope` | No | `node` | Scope of the resources to watch.Specify `node` to get metadata for the allocations in a single agent, or `global`, to get metadata for allocations running on any agent. |
| `node` | No | | When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.<br><br>For example, you can use the following configuration when {{agent}} is collecting events from all the allocations in the cluster:<br><br>```yaml<br> - add_nomad_metadata:<br> scope: global<br>```<br> |
**Default**: `http://127.0.0.1:4646`

`namespace`
: (Optional) Namespace to watch. If set, only events for allocations in this namespace are annotated.

`region`
: (Optional) Region to watch. If set, only events for allocations in this region are annotated.

`secret_id`
: (Optional) SecretID to use when connecting with the agent API. This is an example ACL policy to apply to the token.

```json
namespace "*" {
policy = "read"
}
node {
policy = "read"
}
agent {
policy = "read"
}
```

`refresh_interval`
: (Optional) Interval used to update the cached metadata.

**Default**: `30s`

`cleanup_timeout`
: (Optional) Time to wait before cleaning up an allocation’s associated resources after it has been removed. This is useful if you expect to receive events after an allocation has been removed, which can happen when collecting logs.

**Default**: `60s`

`scope`
: (Optional) Scope of the resources to watch. Specify `node` to get metadata for the allocations in a single agent, or `global` to get metadata for allocations running on any agent.

**Default**: `node`

`node`
: (Optional) When using `scope: node`, use `node` to specify the name of the local node if it cannot be discovered automatically.

For example, you can use the following configuration when {{agent}} is collecting events from all the allocations in the cluster:

```yaml
- add_nomad_metadata:
scope: global
```
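
A fuller `add_nomad_metadata` configuration combining several of the settings above might look like this (a sketch; the `NOMAD_TOKEN` environment variable is a placeholder assumption):

```yaml
- add_nomad_metadata:
    # Agent API endpoint (the default shown here).
    address: http://127.0.0.1:4646
    # Token with read access to namespaces, nodes, and agents,
    # per the example ACL policy above.
    secret_id: ${NOMAD_TOKEN}
    # Refresh the cached metadata every 30s (the default).
    refresh_interval: 30s
    # Watch allocations on any agent in the cluster.
    scope: global
```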


## Indexers and matchers [_indexers_and_matchers]
Expand Down