Commit 1ece95c

Minor edits
1 parent 8f2d79f commit 1ece95c

5 files changed (+30, -34 lines)

content/en/conf/1-advanced-collector/3-dropping-spans/3-2-test-filter.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ To test your configuration, you'll need to generate some trace data that include
 ../otelcol --config ./agent.yaml
 ```
 
-**Start the Loadgen**: In the **Loadgen terminal** window run the `loadgen` with the flag to also send `healthz` spans along with base spans:
+**Start the Loadgen**: In the **Loadgen terminal** window, execute the following command to start the load generator with health check spans enabled:
 
 ```bash
 ../loadgen -health -count 5
@@ -77,7 +77,7 @@ jq -c '.resourceSpans[].scopeSpans[].spans[] | "Span \(input_line_number) found
 {{% tabs %}}
 {{% tab title="Check spans in gateway-traces.out" %}}
 
-```bash { title="Check spans in gateway-traces.out" }
+```bash
 jq -c '.resourceSpans[].scopeSpans[].spans[] | "Span \(input_line_number) found with name \(.name)"' ./gateway-traces.out
 ```
 
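For orientation, the filter under test here is the `filter` processor added in the previous step, which drops health-check spans by name. A minimal sketch of such a processor, assuming the spans emitted by `loadgen -health` carry `healthz` in their span name (the workshop's actual `agent.yaml` may differ):

```yaml
processors:
  filter/health:          # Processor Type/Name
    error_mode: ignore    # Ignore OTTL evaluation errors and keep processing
    traces:
      span:
        - 'IsMatch(name, ".*healthz.*")' # Drop any span whose name contains "healthz"
```
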
content/en/conf/1-advanced-collector/5-transform-data/5-1-configuration.md

Lines changed: 20 additions & 19 deletions
@@ -5,7 +5,7 @@ weight: 1
 ---
 
 {{% notice title="Exercise" style="green" icon="running" %}}
-**Add a `transform` processor**: Switch to your **Agent terminal** window and edit the `agent.yaml` and add the following `transform` processor:
+**Add a `transform` processor**: Switch to your **Gateway terminal** window and edit the `gateway.yaml` and add the following `transform` processor:
 
 ```yaml
 transform/logs: # Processor Type/Name
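The hunk above shows only the first line of the new processor. For reference, a complete `transform/logs` block covering what this page describes might look like the following sketch; the exact OTTL statements are assumptions for illustration, not taken from this commit:

```yaml
transform/logs:             # Processor Type/Name
  log_statements:
    - context: resource     # Runs against resource attributes
      statements:
        - delete_key(attributes, "com.splunk/source")
        - delete_key(attributes, "os.type")
    - context: log          # Runs against each LogRecord
      statements:
        # Promote JSON fields from the body to top-level log attributes
        - merge_maps(attributes, ParseJSON(body), "upsert") where IsMatch(body, "^\\{")
        # Map severity_text from the log's level attribute
        - set(severity_text, attributes["level"])
        # Convert severity levels into standardized numerical values
        - set(severity_number, SEVERITY_NUMBER_INFO) where severity_text == "INFO"
        - set(severity_number, SEVERITY_NUMBER_WARN) where severity_text == "WARN"
        - set(severity_number, SEVERITY_NUMBER_ERROR) where severity_text == "ERROR"
```
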
@@ -53,34 +53,34 @@ This step is crucial because it ensures that all relevant fields from the log bo
 - **Map Severity Text**: Assigns severity_text from the log’s level attribute.
 - **Assign Severity Numbers**: Converts severity levels into standardized numerical values.
 
-You should have a **single** `transform` processor containing two context blocks: one whose context is for `resource` and one whose context is for `log`.
+> [!IMPORTANT]
+> You should have a **single** `transform` processor containing two context blocks: one whose context is for `resource` and one whose context is for `log`.
 
 This configuration ensures that log severity is correctly extracted, standardized, and structured for efficient processing.
 
 {{% notice title="Tip" style="primary" icon="lightbulb" %}}
 This method of mapping all JSON fields to top-level attributes should only be used for **testing and debugging OTTL**. It will result in high cardinality in a production scenario.
 {{% /notice %}}
 
-**Update the `logs` pipeline**: Add the `transform/logs:` processor into the `logs:` pipeline:
+**Update the `logs` pipeline**: Add the `transform/logs:` processor into the `logs:` pipeline so your configuration looks like this:
 
 ```yaml
-logs:
+logs: # Logs pipeline
   receivers:
-    - otlp
-    - filelog/quotes
-  processors:
+    - otlp # OTLP receiver
+  processors: # Processors for logs
     - memory_limiter
-    - resourcedetection
     - resource/add_mode
-    - transform/logs # Transform logs processor
+    - transform/logs
     - batch
   exporters:
-    - debug
-    - otlphttp
+    - debug # Debug exporter
+    - file/logs
 ```
 
 {{% /notice %}}
 
+<!--
 Validate the agent configuration using [**https://otelbin.io**](https://otelbin.io/). For reference, the `logs:` section of your pipelines will look similar to this:
 
 ```mermaid
@@ -101,18 +101,19 @@ graph LR
 subgraph " "
 subgraph subID1[**Logs**]
 direction LR
-REC1 --> PRO1
-REC2 --> PRO1
-PRO1 --> PRO2
-PRO2 --> PRO3
-PRO3 --> PRO4
-PRO4 --> PRO5
-PRO5 --> EXP2
-PRO5 --> EXP1
+REC1 -- > PRO1
+REC2 -- > PRO1
+PRO1 -- > PRO2
+PRO2 -- > PRO3
+PRO3 -- > PRO4
+PRO4 -- > PRO5
+PRO5 -- > EXP2
+PRO5 -- > EXP1
 end
 end
 classDef receiver,exporter fill:#8b5cf6,stroke:#333,stroke-width:1px,color:#fff;
 classDef processor fill:#6366f1,stroke:#333,stroke-width:1px,color:#fff;
 classDef con-receive,con-export fill:#45c175,stroke:#333,stroke-width:1px,color:#fff;
 classDef sub-logs stroke:#34d399,stroke-width:1px, color:#34d399,stroke-dasharray: 3 3;
 ```
+-->
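Note that the updated pipeline also swaps the `otlphttp` exporter for `file/logs`. As a reminder, a minimal `file` exporter block looks like this sketch; the output path is an assumption, mirroring the `gateway-traces.out` naming used elsewhere in the workshop:

```yaml
exporters:
  file/logs:                  # File exporter for logs
    path: ./gateway-logs.out  # Assumed filename, following the gateway-*.out convention
```
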

content/en/conf/1-advanced-collector/5-transform-data/5-2-setup.md

Lines changed: 1 addition & 4 deletions
@@ -18,10 +18,7 @@ weight: 2
 ../otelcol --config=agent.yaml
 ```
 
-**Start the Load Generator**: Open the **Loadgen terminal** window and run the `loadgen`.
-
-> [!IMPORTANT]
-> To ensure the logs are structured in JSON format, include the `-json` flag when starting the script.
+**Start the Load Generator**: In the **Loadgen terminal** window, execute the following command to start the load generator with **JSON enabled**:
 
 ```bash { title="Log Generator" }
 ../loadgen -logs -json -count 5
content/en/conf/1-advanced-collector/5-transform-data/5-3-test-transform.md

Lines changed: 5 additions & 5 deletions
@@ -10,14 +10,14 @@ This test verifies that the `com.splunk/source` and `os.type` metadata have been
 - `SeverityText` and `SeverityNumber` are set on the `LogRecord`.
 2. JSON fields from the log body are promoted to log `attributes`.
 
-This ensures proper metadata filtering, severity mapping, and structured log enrichment before export.
+This ensures proper metadata filtering, severity mapping, and structured log enrichment before exporting.
 
 {{% notice title="Exercise" style="green" icon="running" %}}
 
 **Check the debug output**: For both the **Agent** and **Gateway** confirm that `com.splunk/source` and `os.type` have been removed:
 
 {{% tabs %}}
-{{% tab title="New Debug Output" %}}
+{{% tab title="Gateway Debug Output" %}}
 
 ```text
 Resource attributes:
@@ -27,7 +27,7 @@ Resource attributes:
 ```
 
 {{% /tab %}}
-{{% tab title="Original Debug Output" %}}
+{{% tab title="Agent Debug Output" %}}
 
 ```text
 Resource attributes:
@@ -44,7 +44,7 @@ Resource attributes:
 For both the **Agent** and **Gateway** confirm that `SeverityText` and `SeverityNumber` in the `LogRecord` are now defined with the severity `level` from the log body. Confirm that the JSON fields from the body can be accessed as top-level log `Attributes`:
 
 {{% tabs %}}
-{{% tab title="New Debug Output" %}}
+{{% tab title="Gateway Debug Output" %}}
 
 ```text
 <snip>
@@ -61,7 +61,7 @@ Attributes:
 ```
 
 {{% /tab %}}
-{{% tab title="Original Debug Output" %}}
+{{% tab title="Agent Debug Output" %}}
 
 ```text
 <snip>
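As a rough guide to what success looks like in these tabs: after the transform, the debug output should show a populated severity and the promoted attributes, along the lines of this sketch (exact values depend on the generated quotes):

```text
SeverityText: WARN
SeverityNumber: Warn(13)
Attributes:
     -> level: Str(WARN)
     -> message: Str(Do or do not, there is no try.)
```
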

content/en/conf/1-advanced-collector/5-transform-data/_index.md

Lines changed: 2 additions & 4 deletions
@@ -13,22 +13,20 @@ In this exercise we’ll update `agent.yaml` to include a **Transform Processor*
 - **Parse** JSON structured log data into attributes.
 - **Set** log severity levels based on the log message body.
 
-You may have noticed that in previous logs, fields like `SeverityText` and `SeverityNumber` were undefined. This is typical of the `filelog` receiver. However, the severity is embedded within the log body:
+You may have noticed that in previous logs, fields like `SeverityText` and `SeverityNumber` were undefined. This is typical of the `filelog` receiver. However, the severity is embedded within the log body, e.g.:
 
 ```text
-<snip>
 SeverityText:
 SeverityNumber: Unspecified(0)
 Body: Str(2025-01-31 15:49:29 [WARN] - Do or do not, there is no try.)
-</snip>
 ```
 
 Logs often contain structured data encoded as JSON within the log body. Extracting these fields into attributes allows for better indexing, filtering, and querying. Instead of manually parsing JSON in downstream systems, OTTL enables automatic transformation at the telemetry pipeline level.
 
 {{% notice title="Exercise" style="green" icon="running" %}}
 
 > [!IMPORTANT]
-> **Change _ALL_ terminal windows to the `[WORKSHOP]/5-transform-data` directory and run the `clear` command.**
+> **Change _ALL_ terminal windows to the `5-transform-data` directory and run the `clear` command.**
 
 Copy `*.yaml` from the `4-sensitve-data` directory into `5-transform-data`. Your updated directory structure will now look like this:
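The directory listing itself is cut off in this diff; assuming the same layout as the earlier modules, it would be along these lines:

```text
5-transform-data/
├── agent.yaml
└── gateway.yaml
```
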
3432
