
Commit 91dbcb5

Merge branch 'main' into autoops-cc-troubleshoot-firewall
2 parents 90c9139 + 084bb07 commit 91dbcb5


8 files changed: +250 -8 lines changed


deploy-manage/monitor/autoops/cc-cloud-connect-autoops-troubleshooting.md

Lines changed: 1 addition & 1 deletion
@@ -168,4 +168,4 @@ The following table shows the errors you might encounter if something goes wrong
| `LICENSE_USED_BY_ANOTHER_ACCOUNT` | License key connected to another account | A license key can only be connected to one {{ecloud}} organization. Contact [Elastic support](https://support.elastic.co/) for help. |
| `VERSION_MISMATCH` | {{es}} version is unsupported | Upgrade your cluster to a [supported version](https://www.elastic.co/support/eol). |
| `UNKNOWN_ERROR` | Installation failed | {{agent}} couldn't be installed due to an unknown issue. Consult the troubleshooting guide or contact [Elastic support](https://support.elastic.co/) for more help. |
- | | Failed to register Cloud Connected Mode: cluster license type is not supported | The cluster you are trying to connect doesn't have the required license to connect to AutoOps. For more information, refer to the [prerequisites](/deploy-manage/monitor/autoops/cc-connect-self-managed-to-autoops.md#prerequisites). |
+ | | Failed to register Cloud Connected Mode: cluster license type is not supported | The cluster you are trying to connect doesn't have the required license to connect to AutoOps. For more information, refer to the [prerequisites](/deploy-manage/monitor/autoops/cc-connect-self-managed-to-autoops.md#prerequisites). |

deploy-manage/monitor/autoops/cc-manage-users.md

Lines changed: 1 addition & 1 deletion
@@ -44,4 +44,4 @@ Assign the following roles to new or existing users based on levels of access to
| Role | Allowed actions in AutoOps |
| --- | --- |
| **Organization owner** | View events and metrics reports <br> Add or edit customizations and notification preferences <br> Connect and disconnect clusters |
- | **Connected cluster access** | **Viewer**: <br> View events and metrics reports <br><br> **Admin** for all connected clusters: <br> View events and metrics reports <br> Add or edit customizations and notification preferences <br> Connect and disconnect clusters <br><br> **Admin** for selected clusters: <br> View events and metrics reports <br> Add or edit customizations and notification preferences <br> Connect clusters |
+ | **Connected cluster access** | **Viewer**: <br> View events and metrics reports <br><br> **Admin** for all connected clusters: <br> View events and metrics reports <br> Add or edit customizations and notification preferences <br> Connect and disconnect clusters <br><br> **Admin** for selected clusters: <br> View events and metrics reports <br> Connect clusters |

manage-data/ingest/transform-enrich/ingest-pipelines.md

Lines changed: 227 additions & 1 deletion
@@ -388,7 +388,7 @@ PUT _ingest/pipeline/my-pipeline
Use dot notation to access object fields.

::::{important}
- If your document contains flattened objects, use the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor to expand them first. Other ingest processors cannot access flattened objects.
+ If your document contains flattened objects, use the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor to expand them. If you want to keep your document structure, use the [`flexible`](ingest-pipelines.md#access-source-pattern-flexible) access pattern in your pipeline definition. Otherwise, ingest processors cannot access dotted field names.
::::

@@ -431,6 +431,232 @@ PUT _ingest/pipeline/my-pipeline
}
```

## Ingest field access pattern [access-source-pattern]

```{applies_to}
serverless: ga
stack: ga 9.2
```

The default ingest pipeline access pattern does not recognize dotted field names in documents. Retrieving flattened and dotted field names from an ingest document requires a different field retrieval algorithm that does not have this limitation. Some pipelines, however, have come to rely on the dotted field name limitations in their logic. To continue supporting the original behavior while adding support for dotted field names, ingest pipelines let you configure the access pattern used by all processors in the pipeline.

The `field_access_pattern` property on an ingest pipeline defines how ingest document fields are read and written for all processors in the current pipeline. It accepts two values: `classic` (the default) and `flexible`.

```console
PUT _ingest/pipeline/my-pipeline
{
  "field_access_pattern": "classic", <1>
  "processors": [
    {
      "set": {
        "description": "Set some searchable tags in our document's flattened field",
        "field": "event.tags.ingest.processed_by", <2>
        "value": "my-pipeline"
      }
    }
  ]
}
```
1. All processors in this pipeline use the `classic` access pattern.
2. How this field path is resolved when reading and writing values in the ingest document depends on the access pattern.

### Classic field access pattern [access-source-pattern-classic]

The `classic` access pattern is the default and has been in place since ingest node was first released. Field paths given to processors (for example, `event.tags.ingest.processed_by`) are split on the dot character (`.`). The processor then uses the resulting field names to traverse the document until a value is found. When writing a value to a document, if its parent fields do not exist in the source, the processor creates nested objects for the missing fields.

```console
POST /_ingest/pipeline/_simulate
{
  "pipeline" : {
    "description": "example pipeline",
    "field_access_pattern": "classic", <1>
    "processors": [
      {
        "set" : {
          "description" : "Copy the foo.bar field into the a.b.c.d field if it exists",
          "copy_from" : "foo.bar", <2>
          "field" : "a.b.c.d", <3>
          "ignore_empty_value": true
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": {
          "bar": "baz" <4>
        }
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo.bar": "baz" <5>
      }
    }
  ]
}
```
1. Explicitly declares the `classic` access pattern for the pipeline. This is the default value.
2. We are reading a value from the field `foo.bar`.
3. We are writing its value to the field `a.b.c.d`.
4. This document uses nested JSON objects in its structure.
5. This document uses dotted field names in its structure.

```console-result
{
  "docs": [
    {
      "doc": {
        "_id": "id",
        "_index": "index",
        "_version": "-3",
        "_source": {
          "foo": {
            "bar": "baz" <1>
          },
          "a": {
            "b": {
              "c": {
                "d": "baz" <2>
              }
            }
          }
        },
        "_ingest": {
          "timestamp": "2017-05-04T22:30:03.187Z"
        }
      }
    },
    {
      "doc": {
        "_id": "id",
        "_index": "index",
        "_version": "-3",
        "_source": {
          "foo.bar": "baz" <3>
        },
        "_ingest": {
          "timestamp": "2017-05-04T22:30:03.188Z"
        }
      }
    }
  ]
}
```
1. The first document's `foo.bar` field is located because it uses nested JSON. The processor looks for a `foo` field, and then a `bar` field.
2. The value from the `foo.bar` field is written to a nested JSON structure at field `a.b.c.d`. The processor creates objects for each field in the path.
3. The second document uses a dotted field name for `foo.bar`. The `classic` access pattern does not recognize dotted field names, so nothing is copied.

If the documents you are ingesting contain dotted field names, you must use the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor to read them with the `classic` access pattern. This approach is not always practical, though. Consider the following document:

```json
{
  "event": {
    "tags": {
      "http.host": "localhost:9200",
      "http.host.name": "localhost",
      "http.host.port": 9200
    }
  }
}
```

If the `event.tags` field was processed with the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor, the field values would collide: the `http.host` field cannot be a text value and an object value at the same time.
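
When the dotted fields don't collide, the `classic` workaround is simply a `dot_expander` step ahead of the other processors. The following is a minimal sketch of that workaround, not taken from the changed file, reusing the dotted `foo.bar` document from the earlier simulation (the pipeline name is hypothetical):

```console
PUT _ingest/pipeline/expand-then-copy
{
  "field_access_pattern": "classic",
  "processors": [
    {
      "dot_expander": {
        "field": "foo.bar"
      }
    },
    {
      "set": {
        "description": "foo.bar is now a nested object, so the classic pattern can read it",
        "copy_from": "foo.bar",
        "field": "a.b.c.d",
        "ignore_empty_value": true
      }
    }
  ]
}
```

After `dot_expander` runs, `foo.bar` exists as nested JSON, so the `copy_from` lookup behaves like the first document in the simulation above.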

### Flexible field access pattern [access-source-pattern-flexible]

The `flexible` access pattern allows ingest pipelines to access both nested and dotted field names without using the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor. Additionally, when writing a value to a field that does not exist, any missing parent fields are concatenated to the start of the new key. Use the `flexible` access pattern if your documents have dotted field names, or if you prefer to write missing fields to the document with dotted names.

```console
POST /_ingest/pipeline/_simulate
{
  "pipeline" : {
    "description": "example pipeline",
    "field_access_pattern": "flexible", <1>
    "processors": [
      {
        "set" : {
          "description" : "Copy the foo.bar field into the a.b.c.d field if it exists",
          "copy_from" : "foo.bar", <2>
          "field" : "a.b.c.d", <3>
          "ignore_empty_value": true
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": {
          "bar": "baz" <4>
        },
        "a": {} <5>
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo.bar": "baz" <6>
      }
    }
  ]
}
```
1. Uses the `flexible` access pattern for the pipeline.
2. We are reading a value from the field `foo.bar`.
3. We are writing its value to the field `a.b.c.d`.
4. The first document uses nested JSON objects in its structure.
5. The first document has an existing `a` field in the root.
6. The second document uses a dotted field name.

```console-result
{
  "docs": [
    {
      "doc": {
        "_id": "id",
        "_index": "index",
        "_version": "-3",
        "_source": {
          "foo": {
            "bar": "baz" <1>
          },
          "a": {
            "b.c.d": "baz" <2>
          }
        },
        "_ingest": {
          "timestamp": "2017-05-04T22:30:03.187Z"
        }
      }
    },
    {
      "doc": {
        "_id": "id",
        "_index": "index",
        "_version": "-3",
        "_source": {
          "foo.bar": "baz", <3>
          "a.b.c.d": "baz" <4>
        },
        "_ingest": {
          "timestamp": "2017-05-04T22:30:03.188Z"
        }
      }
    }
  ]
}
```
1. The `flexible` access pattern supports nested object fields. The processor looks for a `foo` field, and then a `bar` field.
2. The value from the `foo.bar` field is written to the dotted field name `b.c.d` underneath the field `a`. The processor concatenates the missing field names together as a prefix on the key.
3. The `flexible` access pattern also supports dotted field names. The processor looks for a field named `foo`, and after not finding it, looks for a field named `foo.bar`.
4. The value from the `foo.bar` field is written to the dotted field name `a.b.c.d`. Since none of those fields exist in the document yet, they are concatenated together into a dotted field name.
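
These `_simulate` requests exercise the pipeline without indexing anything. To run real documents through it, reference the pipeline in an index request (or set it as the index's `index.default_pipeline` setting). A minimal sketch, assuming the flexible pipeline definition above has been saved with `PUT _ingest/pipeline/my-pipeline` and that `my-index` is a hypothetical index:

```console
PUT my-index/_doc/1?pipeline=my-pipeline
{
  "foo.bar": "baz"
}
```

Based on the second simulation result above, the indexed document keeps its dotted `foo.bar` key and gains a dotted `a.b.c.d` key.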

## Access metadata fields in a processor [access-metadata-fields]

solutions/observability/apm/opentelemetry/index.md

Lines changed: 2 additions & 2 deletions
@@ -78,6 +78,6 @@ Find more details about how to use an OpenTelemetry API or SDK with an Elastic A

AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {{observability}} or {{obs-serverless}}.

- To get started, follow the official AWS Distribution for OpenTelemetry Lambda documentation, and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster:
+ To get started, follow the official AWS Distribution for OpenTelemetry Lambda documentation, and [configure the EDOT Collector in Gateway mode](elastic-agent://reference/edot-collector/config/default-config-standalone.md#gateway-mode) to send traces and metrics to your Elastic cluster:

- [**Get started with the AWS Distro for OpenTelemetry Lambda**](https://aws-otel.github.io/docs/getting-started/lambda)
+ [**Get started with the AWS Distro for OpenTelemetry Lambda**](https://aws-otel.github.io/docs/getting-started/lambda)

solutions/observability/get-started/logs-essentials.md

Lines changed: 1 addition & 1 deletion
@@ -34,12 +34,12 @@ The **Admin** role or higher is required to create projects. Refer to [Assign us
1. Select **Create serverless project**.
1. Under **Elastic for Observability**, select **Next**.
1. Enter a name for your project.
+ 1. Under **Product features**, select **Observability Logs Essentials**.
1. (Optional) Under **Settings** you can change the following:

   * **Cloud provider**: The cloud platform where you’ll deploy your project. We currently support Amazon Web Services (AWS).
   * **Region**: The [region](/deploy-manage/deploy/elastic-cloud/regions.md) where your project will live.

- 1. Select **Edit settings**, and select **Observability Logs Essentials**.
1. Select **Create serverless project**. It takes a few minutes to create your project.
1. When the project is ready, click **Continue**.

solutions/security/detect-and-alert/create-detection-rule.md

Lines changed: 13 additions & 1 deletion
@@ -152,7 +152,19 @@ To filter noisy {{ml}} rules, use [rule exceptions](/solutions/security/detect-a
You can also leave the **Group by** field undefined. The rule then creates an alert when the number of search results is equal to or greater than the threshold value. If you set **Count** to limit the results by `process.name` >= 2, an alert will only be generated for source/destination IP pairs that appear with at least 2 unique process names across all events.

::::{important}
- Alerts created by threshold rules are synthetic alerts that do not resemble the source documents. The alert itself only contains data about the fields that were aggregated over (the **Group by** fields). Other fields are omitted, because they can vary across all source documents that were counted toward the threshold. Additionally, you can reference the actual count of documents that exceeded the threshold from the `kibana.alert.threshold_result.count` field.
+ Alerts created by threshold rules are synthetic alerts that do not resemble the source documents:
+
+ - The alert itself only contains data about the fields that were aggregated over (the **Group by** fields specified in the rule).
+ - All other fields are omitted and aren't available in the alert. This is because these fields can vary across all source documents that were counted toward the threshold.
+ - You can reference the actual count of documents that exceeded the threshold from the `kibana.alert.threshold_result.count` field.
+ - `context.alerts.kibana.alert.threshold_result.terms` contains fields and values from any **Group by** fields specified in the rule. For example:
+
+ ```
+ {{#context.alerts}}
+ {{#kibana.alert.threshold_result.terms}}
+ {{field}}: {{value}}
+ {{/kibana.alert.threshold_result.terms}}
+ {{/context.alerts}}
+ ```
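
By the same pattern, an action message can also surface the count. The following is a sketch by analogy with the `terms` example above (not taken from the changed file), for use in a rule's notification action:

```
{{#context.alerts}}
Threshold exceeded: {{kibana.alert.threshold_result.count}} matching documents
{{/context.alerts}}
```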
::::

3. (Optional) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information.

troubleshoot/ingest/opentelemetry/edot-collector/enable-debug-logging.md

Lines changed: 3 additions & 0 deletions
@@ -86,3 +86,6 @@ Debug logging for the Collector is not currently configurable through {{fleet}}.
:::


+ ## Resources
+
+ To learn how to enable debug logging for the EDOT SDKs, refer to [Enable debug logging for EDOT SDKs](../edot-sdks/enable-debug-logging.md).

troubleshoot/ingest/opentelemetry/edot-sdks/enable-debug-logging.md

Lines changed: 2 additions & 1 deletion
@@ -27,7 +27,8 @@ Enabling debug logging can help surface common problems such as:

## Verify you're looking at the right logs

- * Ensure you’re checking logs for the same process that starts your app (systemd service, container entrypoint, IIS worker, etc.).
+ Ensure you’re checking logs for the same process that starts your app (systemd service, container entrypoint, IIS worker, and so on):
+
* For containerized environments such as Kubernetes/Docker:
  * `kubectl logs <pod> -c <container>` (correct container name matters if there are sidecars)
  * Check the new Pod after a rollout, as old Pods may show stale environment without your debug flags.
