
Commit 269c3d9

maycmlee and claude authored
[DOCS-13297] Remove filter query syntax sections from processor docs (#34744)
* fix mistake
* fix typos
* [DOCS-13297] Remove filter query syntax sections from processor docs
* Revert changes to install the worker and advanced worker configurations docs

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
1 parent c0fd816 commit 269c3d9

19 files changed: +46 -88 lines

content/en/observability_pipelines/processors/add_environment_variables.md

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ Use this processor to add an environment variable field name and value to the lo

To set up this processor:

-1. Define a [filter query](#filter-query-syntax). Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
+1. Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax][1] for more information.
1. Enter the field name for the environment variable.
1. Enter the environment variable name.
1. Click **Add Environment Variable** if you want to add another environment variable.

@@ -50,4 +50,4 @@ After you have added processors to your pipeline and clicked **Next: Install**,

The allowlist is stored in the environment variable `DD_OP_PROCESSOR_ADD_ENV_VARS_ALLOWLIST`.

-{{% observability_pipelines/processors/filter_syntax %}}
+[1]: /observability_pipelines/search_syntax/logs/
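The processor behavior described in this file (copy an environment variable's value into matching logs, gated by an allowlist) can be sketched in Python. This is a hypothetical illustration, not the Worker's actual implementation; the function and parameter names are invented for the example.

```python
import os

# Hypothetical sketch of the Add Environment Variables processor.
# `allowlist` mirrors the role of DD_OP_PROCESSOR_ADD_ENV_VARS_ALLOWLIST.
def add_env_var_field(log, matches_filter, field_name, env_var, allowlist):
    """Add the value of `env_var` to matching logs under `field_name`."""
    if not matches_filter(log):
        return log  # non-matching logs pass through unchanged
    if env_var not in allowlist:
        return log  # variables outside the allowlist are never read
    log[field_name] = os.environ.get(env_var, "")
    return log

os.environ["DD_SITE"] = "datadoghq.com"
log = {"status": "ok", "service": "web"}
out = add_env_var_field(
    log, lambda l: l["status"] == "ok", "dd_site", "DD_SITE", {"DD_SITE"}
)
```

Note that non-matching logs are returned untouched rather than dropped, matching the "sent to the next step in the pipeline" behavior above.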

content/en/observability_pipelines/processors/add_hostname.md

Lines changed: 2 additions & 2 deletions
@@ -16,6 +16,6 @@ This processor adds a field with the name of the host that sent the log. For exa
## Setup

To set up this processor:
-- Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
+- Define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline. See [Search Syntax][1] for more information.

-{{% observability_pipelines/processors/filter_syntax %}}
+[1]: /observability_pipelines/search_syntax/logs/

content/en/observability_pipelines/processors/dedupe.md

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ The Deduplicate processor removes copies of data to reduce volume and noise. It

To set up the Deduplicate processor:

-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. Deduped logs and logs that do not match the filter query are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are processed. Deduped logs and logs that do not match the filter query are sent to the next step in the pipeline. See [Search Syntax][1] for more information.
1. In the **Type of deduplication** dropdown menu, select whether you want to `Match` on or `Ignore` the fields specified below.
   - If `Match` is selected, then after a log passes through, future logs that have the same values for all of the fields you specify below are removed.
   - If `Ignore` is selected, then after a log passes through, future logs that have the same values for all of their fields, *except* the ones you specify below, are removed.

@@ -57,4 +57,4 @@ For the following message structure:
- Use `outer_key.inner_key` to refer to the key with the value `inner_value`.
- Use `outer_key.inner_key.double_inner_key` to refer to the key with the value `double_inner_value`.

-{{% observability_pipelines/processors/filter_syntax %}}
+[1]: /observability_pipelines/search_syntax/logs/
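The `Match`/`Ignore` semantics described in this file can be sketched as follows. This is a minimal illustration of the documented behavior, not the Worker's actual code; field ordering and nested-key handling are simplified.

```python
def dedupe_key(log, fields, mode):
    """Build the comparison key for a log.

    mode="match": future logs with the same values for `fields` are dropped.
    mode="ignore": comparison uses every field *except* `fields`.
    """
    if mode == "match":
        keys = fields
    else:
        keys = [k for k in log if k not in fields]
    return tuple((k, log.get(k)) for k in sorted(keys))

def dedupe(logs, fields, mode):
    seen, out = set(), []
    for log in logs:
        key = dedupe_key(log, fields, mode)
        if key not in seen:
            seen.add(key)
            out.append(log)  # first occurrence passes through
    return out

logs = [
    {"host": "a", "msg": "disk full"},
    {"host": "b", "msg": "disk full"},
    {"host": "a", "msg": "disk full"},
]
# Match on ["msg"]: only the first "disk full" log passes through,
# even though it came from a different host than the second.
deduped = dedupe(logs, ["msg"], "match")
```

With mode `"ignore"` on `["host"]`, the comparison key is built from every other field, which yields the same result for this input.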

content/en/observability_pipelines/processors/edit_fields.md

Lines changed: 4 additions & 5 deletions
@@ -25,7 +25,7 @@ See the [Remap Reserved Attributes][1] guide on how to use the Edit Fields proce
Use **add field** to append a new key-value field to your log.

To set up the add field processor:
-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline. See [Search Syntax][2] for more information.
1. Enter the field and value you want to add. To specify a nested field for your key, use the [path notation](#path-notation-example-remap): `<OUTER_FIELD>.<INNER_FIELD>`. All values are stored as strings.
   **Note**: If the field you want to add already exists, the Worker throws an error and the existing field remains unchanged.

@@ -34,7 +34,7 @@ To set up the add field processor:
Use **drop field** to drop a field from logging data that matches the filter you specify below. It can delete objects, so you can use the processor to drop nested keys.

To set up the drop field processor:
-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline. See [Search Syntax][2] for more information.
1. Enter the key of the field you want to drop. To specify a nested field for your specified key, use the [path notation](#path-notation-example-remap): `<OUTER_FIELD>.<INNER_FIELD>`.
   **Note**: If your specified key does not exist, your log is unimpacted.

@@ -43,7 +43,7 @@ To set up the drop field processor:
Use **rename field** to rename a field within your log.

To set up the rename field processor:
-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline. See [Search Syntax][2] for more information.
1. Enter the name of the field you want to rename in the **Source field**. To specify a nested field for your key, use the [path notation](#path-notation-example-remap): `<OUTER_FIELD>.<INNER_FIELD>`. After it is renamed, your original field is deleted unless you enable the **Preserve source tag** checkbox described below.<br>**Note**: If the source key you specify doesn't exist, a default `null` value is applied to your target.
1. In the **Target field**, enter the name you want the source field to be renamed to. To specify a nested field for your specified key, use the [path notation](#path-notation-example-remap): `<OUTER_FIELD>.<INNER_FIELD>`.<br>**Note**: If the target field you specify already exists, the Worker throws an error and does not overwrite the existing target field.
1. Optionally, check the **Preserve source tag** box if you want to retain the original source field and duplicate the information from your source key to your specified target key. If this box is not checked, the source key is dropped after it is renamed.

@@ -70,8 +70,7 @@ To set up the rename field processor:
- Use `outer_key.inner_key.double_inner_key` to see the key with the value `double_inner_value`.

[1]: /observability_pipelines/guide/remap_reserved_attributes
-
-{{% observability_pipelines/processors/filter_syntax %}}
+[2]: /observability_pipelines/search_syntax/logs/

## Further reading
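The `<OUTER_FIELD>.<INNER_FIELD>` path notation used throughout this file can be sketched with a pair of helpers. These are illustrative functions, not the Worker's implementation; they show only the lookup and create-on-write behavior the docs describe.

```python
def get_path(log, path):
    """Resolve `outer.inner` dot notation; return None if any segment is missing."""
    cur = log
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None
        cur = cur[part]
    return cur

def set_path(log, path, value):
    """Create intermediate objects as needed, then set the leaf value."""
    parts = path.split(".")
    cur = log
    for part in parts[:-1]:
        cur = cur.setdefault(part, {})
    cur[parts[-1]] = value

log = {"outer_key": {"inner_key": "inner_value"}}
set_path(log, "outer_key.new_key", "v")
```

A missing source key resolves to `None` here, analogous to the `null` default noted for the rename field processor.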

content/en/observability_pipelines/processors/enrichment_table.md

Lines changed: 2 additions & 3 deletions
@@ -73,7 +73,7 @@ In Datadog's Snowflake integration documentation, see [Reference Tables][3] for
To set up the Enrichment Table processor:

1. Click **Add enrichment**.
-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are sent through the processor. **Note**: All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are sent through the processor. **Note**: All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax][8] for more information.
1. In the **Set lookup mapping** section, select the type of lookup dataset you want to use.
{{< tabs >}}
{{% tab "Reference Table" %}}

@@ -197,12 +197,11 @@ To see metrics about your Enrichment Table processor using a Reference Table, ad
`pipelines.reference_table_fetched_keys_total`
: For each request sent to the Reference Tables API, this counter is incremented with the number of rows fetched in that request.

-{{% observability_pipelines/processors/filter_syntax %}}
-
[1]: /reference_tables/?tab=cloudstorage
[2]: /integrations/salesforce/#optional-enable-ingestion-of-reference-tables
[3]: /integrations/snowflake-web/#reference-tables
[4]: https://docs.datadoghq.com/reference_tables/?tab=cloudstorage#reference-table-limits
[5]: /help/
[6]: /integrations/databricks/?tab=useaserviceprincipalforoauth#reference-table-configuration
[7]: /integrations/guide/servicenow-cmdb-enrichment-setup/#reference-tables
+[8]: /observability_pipelines/search_syntax/logs/
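The lookup-mapping step this file configures is, at its core, a keyed table lookup merged into the log. A minimal sketch, assuming an in-memory table (the real processor fetches rows from the Reference Tables API or a CSV); all names here are illustrative.

```python
def enrich(log, table, source_field, target_field):
    """Look up log[source_field] in a reference table; on a hit, copy the
    row's columns into the log under target_field. Misses leave the log
    unchanged, and every log continues down the pipeline either way."""
    key = log.get(source_field)
    row = table.get(key)
    if row is not None:
        log[target_field] = dict(row)
    return log

# Illustrative reference table keyed on user ID.
table = {"u1": {"team": "sre", "region": "us-east-1"}}
log = enrich({"user": "u1"}, table, "user", "user_info")
```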

content/en/observability_pipelines/processors/filter.md

Lines changed: 5 additions & 36 deletions
@@ -27,44 +27,13 @@ This processor sends all logs or metrics ({{< tooltip glossary="preview" case="t

To set up the filter processor:

-- Define a **filter query**.
-  - Logs or metrics that match the [query](#filter-query-syntax) are sent to the next component.
+- Define a **filter query**.<br>**Notes**:
+  - Logs or metrics that match the query are sent to the next component.
  - Logs or metrics that don't match the query are dropped.
+  - For more information, see [Search Syntax for Logs][1] or [Search Syntax for Metrics][2].

-## Filter query syntax
-
-Each processor has a corresponding filter query in their fields. Processors only process logs or metrics that match their filter query.
-
-The following are filter query examples:
-
-{{< tabs >}}
-{{% tab "Logs" %}}
-
-- `NOT (status:debug)`: This filters for logs that do not have the status `DEBUG`.
-- `status:ok service:flask-web-app`: This filters for all logs with the status `OK` from your `flask-web-app` service.
-  - This query can also be written as: `status:ok AND service:flask-web-app`.
-- `host:COMP-A9JNGYK OR host:COMP-J58KAS`: This filter query only matches logs from the labeled hosts.
-- `user.status:inactive`: This filters for logs with the status `inactive` nested under the `user` attribute.
-- `http.status:[200 TO 299]` or `http.status:{300 TO 399}`: These two filters represent the syntax to query a range for `http.status`. Ranges can be used across any attribute.
-
-Learn more about writing log filter queries in [Log Search Syntax][1].
-
-[1]: /observability_pipelines/search_syntax/logs/
-
-{{% /tab %}}
-
-{{% tab "Metrics" %}}
-
-- `NOT system.cpu.user`: This filters for metrics that do not have the field `name:system.cpu.user`.
-- `system.cpu.user OR system.cpu.user.total`: This filter query only matches metrics that have either `name:system.cpu.user` or `name:system.cpu.user.total`.
-- `tags:(env\:prod OR env\:test)`: This filters for metrics with `env:prod` or `env:test` in `tags`.
-
-Learn more about writing metrics filter queries in [Metrics Search Syntax][1].
-
-[1]: /observability_pipelines/search_syntax/metrics/
-
-{{% /tab %}}
-{{< /tabs >}}
+[1]: /observability_pipelines/search_syntax/logs
+[2]: /observability_pipelines/search_syntax/metrics

## Further reading
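The match-or-drop behavior of the filter processor can be sketched with a toy matcher. This handles only space-separated `field:value` terms with an implicit AND; real filter queries also support `OR`, `NOT`, ranges, and nested attributes, so treat this purely as an illustration of the drop semantics.

```python
def matches(log, query):
    """Toy matcher: space-separated `field:value` terms, implicit AND.
    Far simpler than real filter query syntax."""
    for term in query.split():
        field, _, value = term.partition(":")
        if str(log.get(field)) != value:
            return False
    return True

def filter_processor(logs, query):
    # Matching logs continue to the next component; non-matching logs are dropped.
    return [log for log in logs if matches(log, query)]

logs = [
    {"status": "ok", "service": "flask-web-app"},
    {"status": "debug", "service": "flask-web-app"},
]
kept = filter_processor(logs, "status:ok service:flask-web-app")
```

Unlike the other processors in this commit, non-matching events here are dropped rather than passed through, which is the filter processor's defining behavior.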

content/en/observability_pipelines/processors/generate_metrics.md

Lines changed: 2 additions & 3 deletions
@@ -26,7 +26,7 @@ Click **Manage Metrics** to create new metrics or edit existing metrics. This op

##### Add a metric

-1. Enter a [filter query](#filter-query-syntax). Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. **Note**: Since a single processor can generate multiple metrics, you can define a different filter query for each metric.
+1. Enter a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax][5] for more information. **Note**: Since a single processor can generate multiple metrics, you can define a different filter query for each metric.
1. Enter a name for the metric.
1. In the **Define parameters** section, select the metric type (count, gauge, or distribution). See the [Count metric example](#count-metric-example) and [Distribution metric example](#distribution-metric-example). Also see [Metrics Types](#metrics-types) for more information.
   - For gauge and distribution metric types, select a log field which has a numeric (or parseable numeric string) value that is used for the value of the generated metric.

@@ -89,5 +89,4 @@ To create a distribution metric that measures the average time it takes for an A
[2]: /account_management/billing/custom_metrics/
[3]: /metrics/types/
[4]: /metrics/distributions/
-
-{{% observability_pipelines/processors/filter_syntax %}}
+[5]: /observability_pipelines/search_syntax/logs/
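A count metric as described in this file increments once per matching log. A minimal sketch, assuming in-memory aggregation (the function and tag handling are invented for illustration, not the Worker's implementation):

```python
from collections import Counter

def generate_count_metric(logs, matches_filter, metric_name, group_by=None):
    """Emit a count metric: one increment per matching log, optionally
    tagged by the value of a log field named in `group_by`."""
    counts = Counter()
    for log in logs:
        if matches_filter(log):
            tag = log.get(group_by, "") if group_by else ""
            counts[(metric_name, tag)] += 1
    return counts

logs = [{"status": "error"}, {"status": "ok"}, {"status": "error"}]
counts = generate_count_metric(
    logs, lambda l: l["status"] == "error", "logs.error_count"
)
```

Gauge and distribution types would instead read a numeric log field for the metric value, per the parameter step above.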

content/en/observability_pipelines/processors/grok_parser.md

Lines changed: 2 additions & 3 deletions
@@ -32,7 +32,7 @@ See [Parsing][1] for more information on Datadog's Grok patterns.

## Setup

-To set up the grok parser, define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
+To set up the grok parser, define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax][3] for more information.

To test log samples for out-of-the-box rules:
1. Click the **Preview Library Rules** button.

@@ -51,8 +51,7 @@ To add a custom parsing rule:

[1]: /logs/log_configuration/parsing/
[2]: /logs/log_configuration/parsing/?tab=matchers#using-helper-rules-to-reuse-common-patterns
-
-{{% observability_pipelines/processors/filter_syntax %}}
+[3]: /observability_pipelines/search_syntax/logs/

## Further reading
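Grok patterns are, in essence, named regular-expression captures that turn a raw message into structured fields. This sketch parses an nginx-style access-log prefix with plain named groups; the real processor uses Datadog's Grok matcher and pattern library, not `re`, so the pattern here is only an analogy.

```python
import re

# Illustrative stand-in for a Grok rule: each named group becomes a field.
LINE = re.compile(
    r'(?P<ip>\S+) - - \[(?P<date>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)'
)

def parse_access_log(message):
    """Return extracted fields, or an empty dict if the rule does not match."""
    m = LINE.match(message)
    return m.groupdict() if m else {}

fields = parse_access_log(
    '127.0.0.1 - - [10/Oct/2000:13:55:36] "GET /index.html'
)
```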

content/en/observability_pipelines/processors/parse_json.md

Lines changed: 2 additions & 2 deletions
@@ -62,10 +62,10 @@ This output contains the `message` field with the parsed JSON:
## Setup

To set up this processor:
-1. Define a **filter query**. Only logs that match the specified [filter query](#filter-query-syntax) are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
+1. Define a **filter query**. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline. See [Search Syntax][1] for more information.
2. Enter the name of the field you want to parse JSON on.<br>**Note**: The parsed JSON overwrites what was originally contained in the field.

-{{% observability_pipelines/processors/filter_syntax %}}
+[1]: /observability_pipelines/search_syntax/logs/

## Further reading
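The overwrite behavior noted above is easy to sketch: parse the named field in place, leaving the log alone when the field is not valid JSON. A minimal illustration, not the Worker's implementation:

```python
import json

def parse_json_field(log, field):
    """Parse the JSON string in log[field]; the parsed value overwrites
    the original string, as the Setup note above describes."""
    raw = log.get(field)
    if isinstance(raw, str):
        try:
            log[field] = json.loads(raw)
        except ValueError:
            pass  # leave the field untouched if it is not valid JSON
    return log

log = parse_json_field(
    {"message": '{"level": "info", "user": {"id": 7}}'}, "message"
)
```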

content/en/observability_pipelines/processors/parse_xml.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ The following image shows a Windows Event 4625 log in XML, next to the same log

To set up this processor:

-1. Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
+1. Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. See [Search Syntax][1] for more information.
1. Enter the path to the log field on which you want to parse XML. Use the path notation `<OUTER_FIELD>.<INNER_FIELD>` to match subfields. See the [Path notation example](#path-notation-example-parse-xml) below.
1. Optionally, in the `Enter text key` field, input the key name to use for the text node when XML attributes are appended. See the [text key example](#text-key-example). If the field is left empty, `value` is used as the key name.
1. Optionally, select `Always use text key` if you want to store text inside an object using the text key even when no attributes exist.

@@ -138,7 +138,7 @@ Then it is converted to the JSON:

-{{% observability_pipelines/processors/filter_syntax %}}
+[1]: /observability_pipelines/search_syntax/logs/

## Further reading
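The text-key behavior configured above can be sketched with the standard library's ElementTree. This is a simplified take on the documented conversion (repeated sibling tags and mixed content are not handled), not the processor's actual algorithm:

```python
import xml.etree.ElementTree as ET

def xml_to_dict(element, text_key="value", always_text_key=False):
    """Convert an XML element to a dict. Attributes become keys; when
    attributes are present (or always_text_key is set), element text is
    stored under `text_key`, mirroring the `Enter text key` option."""
    result = dict(element.attrib)
    for child in element:
        result[child.tag] = xml_to_dict(child, text_key, always_text_key)
    text = (element.text or "").strip()
    if text:
        if result or always_text_key:
            result[text_key] = text
        else:
            return text  # plain text node with no attributes or children
    return result

doc = ET.fromstring('<Event><Data Name="TargetUserName">admin</Data></Event>')
parsed = {doc.tag: xml_to_dict(doc)}
```

Here the `Name` attribute forces the text node `admin` under the default `value` key, matching the documented default when `Enter text key` is left empty.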
