
Commit 47b4611

Routing layout change
1 parent 82a27be commit 47b4611

6 files changed (+19, -27 lines)

content/en/ninja-workshops/10-advanced-otel/2-gateway/2-1-access-tokens.md

Lines changed: 1 addition & 2 deletions
@@ -17,7 +17,7 @@ When deploying the Splunk Distribution of the OpenTelemetry Collector in host mo
 
 ## Configuring Splunk Access Tokens
 
-To authenticate and send data to Splunk Observability Cloud, you need to configure access tokens properly.
+To authenticate and send data to Splunk Observability Cloud, you need to configure access tokens properly.
 In OpenTelemetry, authentication is handled via HTTP headers. To pass an access token, use the `headers:` key with the sub-key `X-SF-Token:`. This configuration works in both agent and gateway mode.
 
 Example:
@@ -86,4 +86,3 @@ service:
 ## Conclusion
 
 The `otlphttp` exporter is now the preferred method for sending telemetry data to Splunk Observability Cloud. Properly configuring Splunk Access Tokens ensures secure data transmission, while the Batch Processor helps optimize performance by reducing network overhead. By implementing these best practices, you can efficiently collect and transmit observability data at scale.
-
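Note: the changed file points to the `otlphttp` exporter and the `X-SF-Token` header, but the full exporter block sits outside these hunks. A minimal sketch of such a configuration, assuming a placeholder realm and a `SPLUNK_ACCESS_TOKEN` environment variable (neither appears in this commit):

```yaml
exporters:
  otlphttp:
    # Assumed ingest endpoint; substitute your Splunk Observability Cloud realm
    endpoint: https://ingest.<realm>.signalfx.com
    headers:
      # Access token passed as an HTTP header, as the changed file describes
      X-SF-Token: ${SPLUNK_ACCESS_TOKEN}
```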

content/en/ninja-workshops/10-advanced-otel/6-sensitive-data/6-1-test-delete-tag.md

Lines changed: 2 additions & 1 deletion
@@ -143,4 +143,5 @@ In this exercise, we will **delete** the `user.account_password`, **update** the
 {{% /tabs %}}
 
 {{% /notice %}}
-Stop the **Agent** and **Gateway** using `Ctrl-C`.
+
+Stop the **Agent** and **Gateway** using `Ctrl-C`.
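Note: the hunk context references deleting the `user.account_password` attribute. A minimal sketch of an `attributes` processor that does this; the processor name suffix and pipeline placement are assumptions, not shown in this commit:

```yaml
processors:
  attributes/update:
    actions:
      # Remove the sensitive attribute entirely
      - key: user.account_password
        action: delete
```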

content/en/ninja-workshops/10-advanced-otel/6-sensitive-data/6-2-test-redaction.md

Lines changed: 2 additions & 1 deletion
@@ -60,6 +60,7 @@ In this exercise, we will **redact** the `user.visa` & `user.mastercard` **value
 -> user.visa: Str(4111 1111 1111 1111)
 -> user.amex: Str(3782 822463 10005)
 ```
+
 {{% /tab %}}
 {{% /tabs %}}

@@ -176,6 +177,6 @@ Add the Amex card regex to `blocked_values` and restart **Agent** collector.
 '\b3[47][0-9]{2}[\s-]?[0-9]{6}[\s-]?[0-9]{5}\b'
 ```
 
-These are just a few examples of how `attributes` and `redaction` processors can be configured to protect sensitive data.
+These are just a few examples of how `attributes` and `redaction` processors can be configured to protect sensitive data.
 
 Stop the **Agent** and **Gateway** using `Ctrl-C`.
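Note: this hunk adds the Amex pattern to the `redaction` processor's `blocked_values` list. A sketch of what that processor block can look like, assuming `allow_all_keys` and a Visa pattern that are not part of this diff:

```yaml
processors:
  redaction:
    allow_all_keys: true   # keep attribute keys, mask only values matching the patterns below
    blocked_values:
      - '\b4[0-9]{3}[\s-]?[0-9]{4}[\s-]?[0-9]{4}[\s-]?[0-9]{4}\b'  # Visa (assumed pattern)
      - '\b3[47][0-9]{2}[\s-]?[0-9]{6}[\s-]?[0-9]{5}\b'            # Amex, as added in this commit
```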

content/en/ninja-workshops/10-advanced-otel/7-transform-data/7-1-test-transform.md

Lines changed: 3 additions & 3 deletions
@@ -7,13 +7,13 @@ weight: 1
 ### Test the Transform
 
 **Run the Log Generator**:
-In the `test` terminal window, navigate to the `[WORKSHOP]/7-transform-data` directory and start the appropriate `log-gen` script for your system. We want to work with structured JSON logs, so add the `-json` flag to the script command.
+In the `test` terminal window, navigate to the `[WORKSHOP]/7-transform-data` directory and start the appropriate `log-gen` script for your system. We want to work with structured JSON logs, so add the `-json` flag to the script command.
 
 ```sh
 ./log-gen.sh -json
 ```
 
-The script will begin writing lines to a file named `./quotes.log`, while displaying a single line of output in the console.
+The script will begin writing lines to a file named `./quotes.log`, while displaying a single line of output in the console.
 
 ```txt
 Writing logs to quotes.log. Press Ctrl+C to stop.
@@ -243,4 +243,4 @@ In this exercise, we will **remove the** `com.splunk/source` and `os.type` **met
 
 {{% /notice %}}
 
-Stop the **Agent** and **Gateway** using `Ctrl-C`.
+Stop the **Agent** and **Gateway** using `Ctrl-C`.
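Note: the hunk header references removing the `com.splunk/source` and `os.type` metadata. One way to express that with the `transform` processor and OTTL, sketched here purely as an assumption (the workshop's actual statements are outside this commit, and the attributes are assumed to live on the resource):

```yaml
processors:
  transform/logs:
    log_statements:
      - context: resource
        statements:
          # Drop the two metadata attributes named in the exercise
          - delete_key(attributes, "com.splunk/source")
          - delete_key(attributes, "os.type")
```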

content/en/ninja-workshops/10-advanced-otel/8-routing/8-1-connector.md

Lines changed: 7 additions & 17 deletions
@@ -4,27 +4,17 @@ linkTitle: 8.1 Routing Configuration
 weight: 1
 ---
 
-In the following exercise, you will configure the `routing connector` in the `gateway.yaml` file. This setup will enable the **Gateway** to route traces based on the `deployment.environment` attribute in the spans you send. By doing so, you can process and handle traces differently depending on their attributes.
+In the following exercise, you will configure the `routing` connector in the `gateway.yaml` file. This setup will enable the **Gateway** to route traces based on the `deployment.environment` attribute in the spans you send. By doing so, you can process and handle traces differently depending on their attributes.
 
 {{% notice title="Exercise" style="green" icon="running" %}}
 Open the `gateway.yaml` and add the following configuration:
 
-- **Add the `connectors:` section**:
-In OpenTelemetry configuration files, `connectors` have their own dedicated section, similar to receivers and processors. In the `gateway.yaml` file, insert the `connectors:` section below the `receivers` section and above the `processors` section.
-
-```yaml
-connectors: # Section to configure connectors
-
-processors:
-
-```
-
-- **Add the `routing` connector**:
-In this configuration, spans will be routed if the `deployment.environment` resource attribute matches `"security_applications"`.
-
-This same approach can also be applied to `metrics` and `logs`, allowing you to route them based on attributes in `resourceMetrics` or `resourceLogs` similarly. Add the following under the `connectors:` section:
+- **Add the `routing` connector**: In OpenTelemetry configuration files, `connectors` have their own dedicated section, similar to receivers and processors in the `gateway.yaml` file. This same approach can also be applied to `metrics` and `logs`, allowing you to route them based on attributes in `resourceMetrics` or `resourceLogs` similarly.
+
+Add the following below the `receivers:` section and above the `processors:` section:
 
 ```yaml
+connectors:
 routing:
 default_pipelines: [traces/standard] # Default pipeline to use if no matching rule
 error_mode: ignore # Ignore errors in the routing
@@ -35,7 +25,7 @@ In OpenTelemetry configuration files, `connectors` have their own dedicated sect
 ```
 
 - **Configure two `file:` Exporters**:
-The `routing connector` requires different targets for routing. To achieve this, update the default `file/traces:` exporter and name it `file/traces/default:` and add a second file exporter called `file/traces/security:`. This will allow the routing connector to direct data to the appropriate target based on the rules you define.
+The `routing` connector requires different targets for routing. Add a second and third file exporter called `file/traces/security:` and `file/traces/standard:` respectively. This will allow the routing connector to direct data to the appropriate target based on the rules you define:
 
 ```yaml
 file/traces/standard: # Exporter Type/Name (For regular traces)
@@ -50,4 +40,4 @@ The `routing connector` requires different targets for routing. To achieve this,
 
 {{% /notice %}}
 
-With the routing configuration complete, next we configure the pipelines to use the routing rules.
+With the `routing` configuration complete, next we configure the `pipelines` to use the routing rules.
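Note: the first hunk in this file cuts off before the routing rules themselves. A sketch of a complete `routing` connector block of this kind, assuming the `condition`-based table syntax and the pipeline names used elsewhere in this commit (the actual rule lines are outside the diff):

```yaml
connectors:
  routing:
    default_pipelines: [traces/standard]  # Used when no routing rule matches
    error_mode: ignore                    # Ignore errors during rule evaluation
    table:
      # Route spans whose resource attribute marks them as security traffic
      - condition: attributes["deployment.environment"] == "security_applications"
        pipelines: [traces/security]
```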

content/en/ninja-workshops/10-advanced-otel/8-routing/8-2-pipelines.md

Lines changed: 4 additions & 3 deletions
@@ -5,9 +5,10 @@ weight: 2
 ---
 
 {{% notice title="Exercise" style="green" icon="running" %}}
+
 - **Add both the `standard` and `security` traces pipelines**:
 
-1. **Standard** pipeline: This pipeline will handle all spans that do not match the routing rule. Add it below the regular `traces:` pipeline, and leave the configuration unchanged for now.
+1. **Standard pipeline**: This pipeline will handle all spans that do not match the routing rule. Add it below the regular `traces:` pipeline, and leave the configuration unchanged for now.
 
 ```yaml
 pipelines:
@@ -22,7 +23,7 @@ weight: 2
 - file/traces/standard # File Exporter for spans NOT matching rule
 ```
 
-- **Target pipeline**: This pipepline will handle all spans that match the routing rule.
+- **Target pipeline**: This pipeline will handle all spans that match the routing rule.
 
 ```yaml
 pipelines:
@@ -37,7 +38,7 @@ weight: 2
 - file/traces/security # File Exporter for spans matching rule
 ```
 
-- **Update the `traces` pipeline to handle routing**: To enable `routing`, update the original `traces:` pipeline by adding `routing` as an exporter. This ensures that all span data is sent through the routing connector for evaluation.
+- **Update the `traces` pipeline to use routing**: To enable `routing`, update the original `traces:` pipeline by adding `routing` as an exporter. This ensures that all span data is sent through the routing connector for evaluation.
 
 For clarity, we are removing the `debug` exporter from this pipeline, so that debug output is only shown from the new exporters behind the routing connector.
 
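Note: taken together, the wiring this file describes typically ends up looking like the sketch below. The receiver and processor names, and the placement of the `debug` exporter, are assumptions; only the `routing` exporter/receiver relationship and the `file/traces/*` exporters come from the diff:

```yaml
service:
  pipelines:
    traces:                    # Original pipeline: forwards every span to the routing connector
      receivers: [otlp]        # assumed receiver
      processors: [batch]      # assumed processors
      exporters: [routing]
    traces/standard:           # Spans that match no rule (default_pipelines)
      receivers: [routing]
      exporters: [debug, file/traces/standard]   # debug exporter assumed here
    traces/security:           # Spans matching the deployment.environment rule
      receivers: [routing]
      exporters: [debug, file/traces/security]
```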