File: `content/en/ninja-workshops/10-advanced-otel/2-gateway/2-1-access-tokens.md` (1 addition, 2 deletions)
@@ -17,7 +17,7 @@ When deploying the Splunk Distribution of the OpenTelemetry Collector in host mo
## Configuring Splunk Access Tokens
- To authenticate and send data to Splunk Observability Cloud, you need to configure access tokens properly.
+ To authenticate and send data to Splunk Observability Cloud, you need to configure access tokens properly.
In OpenTelemetry, authentication is handled via HTTP headers. To pass an access token, use the `headers:` key with the sub-key `X-SF-Token:`. This configuration works in both agent and gateway mode.
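As an illustrative sketch of the approach described above (the endpoint and token values are placeholders, not taken from the workshop), such a `headers:` block on an exporter might look like:

```yaml
exporters:
  otlphttp:
    endpoint: https://ingest.<realm>.signalfx.com   # placeholder realm
    headers:
      # X-SF-Token carries the Splunk access token for authentication
      X-SF-Token: <YOUR_ACCESS_TOKEN>               # placeholder token
```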
Example:
@@ -86,4 +86,3 @@ service:
## Conclusion
The `otlphttp` exporter is now the preferred method for sending telemetry data to Splunk Observability Cloud. Properly configuring Splunk Access Tokens ensures secure data transmission, while the Batch Processor helps optimize performance by reducing network overhead. By implementing these best practices, you can efficiently collect and transmit observability data at scale.
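A minimal sketch tying these pieces together, assuming hypothetical endpoint, token, and batch-tuning values:

```yaml
processors:
  batch:                      # Batch telemetry to reduce network overhead
    send_batch_size: 8192     # hypothetical tuning value
    timeout: 200ms            # hypothetical tuning value

exporters:
  otlphttp:
    endpoint: https://ingest.<realm>.signalfx.com   # placeholder realm
    headers:
      X-SF-Token: <YOUR_ACCESS_TOKEN>               # placeholder token

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]     # batch before export
      exporters: [otlphttp]
```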
File: `content/en/ninja-workshops/10-advanced-otel/7-transform-data/7-1-test-transform.md` (3 additions, 3 deletions)
@@ -7,13 +7,13 @@ weight: 1
### Test the Transform
**Run the Log Generator**:
- In the `test` terminal window, navigate to the `[WORKSHOP]/7-transform-data` directory and start the appropriate `log-gen` script for your system. We want to work with structured JSON logs, so add the `-json` flag to the script command.
+ In the `test` terminal window, navigate to the `[WORKSHOP]/7-transform-data` directory and start the appropriate `log-gen` script for your system. We want to work with structured JSON logs, so add the `-json` flag to the script command.
```sh
./log-gen.sh -json
```
- The script will begin writing lines to a file named `./quotes.log`, while displaying a single line of output in the console.
+ The script will begin writing lines to a file named `./quotes.log`, while displaying a single line of output in the console.
```txt
Writing logs to quotes.log. Press Ctrl+C to stop.
```
@@ -243,4 +243,4 @@ In this exercise, we will **remove the** `com.splunk/source` and `os.type` **met
{{% /notice %}}
- Stop the **Agent** and **Gateway** using `Ctrl-C`.
+ Stop the **Agent** and **Gateway** using `Ctrl-C`.
- In the following exercise, you will configure the `routing connector` in the `gateway.yaml` file. This setup will enable the **Gateway** to route traces based on the `deployment.environment` attribute in the spans you send. By doing so, you can process and handle traces differently depending on their attributes.
+ In the following exercise, you will configure the `routing` connector in the `gateway.yaml` file. This setup will enable the **Gateway** to route traces based on the `deployment.environment` attribute in the spans you send. By doing so, you can process and handle traces differently depending on their attributes.
Open the `gateway.yaml` and add the following configuration:
- - **Add the `connectors:` section**:
- In OpenTelemetry configuration files, `connectors` have their own dedicated section, similar to receivers and processors. In the `gateway.yaml` file, insert the `connectors:` section below the `receivers` section and above the `processors` section.
-
- ```yaml
- connectors: # Section to configure connectors
-
- processors:
-
- ```
-
- - **Add the `routing` connector**:
- In this configuration, spans will be routed if the `deployment.environment` resource attribute matches `"security_applications"`.
-
- This same approach can also be applied to `metrics` and `logs`, allowing you to route them based on attributes in `resourceMetrics` or `resourceLogs` similarly. Add the following under the `connectors:` section:
+ - **Add the `routing` connector**: In OpenTelemetry configuration files, `connectors` have their own dedicated section, similar to receivers and processors in the `gateway.yaml` file. This same approach can also be applied to `metrics` and `logs`, allowing you to route them based on attributes in `resourceMetrics` or `resourceLogs` similarly.
+
+ Add the following below the `receivers:` section and above the `processors:` section:
```yaml
+ connectors:
    routing:
      default_pipelines: [traces/standard] # Default pipeline to use if no matching rule
      error_mode: ignore # Ignore errors in the routing
@@ -35,7 +25,7 @@ In OpenTelemetry configuration files, `connectors` have their own dedicated sect
```
- **Configure two `file:` Exporters**:
- The `routing connector` requires different targets for routing. To achieve this, update the default `file/traces:` exporter and name it `file/traces/default:` and add a second file exporter called `file/traces/security:`. This will allow the routing connector to direct data to the appropriate target based on the rules you define.
+ The `routing` connector requires different targets for routing. Add a second and third file exporter called `file/traces/security:` and `file/traces/standard:` respectively. This will allow the routing connector to direct data to the appropriate target based on the rules you define:
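A sketch of what those two file exporters might look like (the output file paths are illustrative assumptions, not taken from the workshop):

```yaml
exporters:
  file/traces/standard:                    # Target for spans NOT matching the routing rule
    path: ./gateway-traces-standard.out    # illustrative path
  file/traces/security:                    # Target for spans matching the routing rule
    path: ./gateway-traces-security.out    # illustrative path
```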
- **Add both the `standard` and `security` traces pipelines**:
- 1. **Standard** pipeline: This pipeline will handle all spans that do not match the routing rule. Add it below the regular `traces:` pipeline, and leave the configuration unchanged for now.
+ 1. **Standard pipeline**: This pipeline will handle all spans that do not match the routing rule. Add it below the regular `traces:` pipeline, and leave the configuration unchanged for now.
```yaml
pipelines:
@@ -22,7 +23,7 @@ weight: 2
- file/traces/standard # File Exporter for spans NOT matching rule
```
- - **Target pipeline**: This pipepline will handle all spans that match the routing rule.
+ - **Target pipeline**: This pipeline will handle all spans that match the routing rule.
```yaml
pipelines:
@@ -37,7 +38,7 @@ weight: 2
- file/traces/security # File Exporter for spans matching rule
```
- - **Update the `traces` pipeline to handle routing**: To enable `routing`, update the original `traces:` pipeline by adding `routing` as an exporter. This ensures that all span data is sent through the routing connector for evaluation.
+ - **Update the `traces` pipeline to use routing**: To enable `routing`, update the original `traces:` pipeline by adding `routing` as an exporter. This ensures that all span data is sent through the routing connector for evaluation.
For clarity, we are removing the `debug` exporter from this pipeline, so that debug output is only shown from the new exporters behind the routing connector.
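Pulling the fragments of these hunks together, the resulting routing setup could be sketched as follows. This is an assumption-laden sketch, not the workshop's exact file: the OTTL match condition is inferred from the `"security_applications"` value mentioned earlier, and the receiver names are placeholders.

```yaml
connectors:
  routing:
    default_pipelines: [traces/standard]   # used when no rule matches
    error_mode: ignore                     # ignore errors in the routing
    table:
      # Assumed rule: route spans whose resource attribute matches
      - statement: route() where attributes["deployment.environment"] == "security_applications"
        pipelines: [traces/security]

service:
  pipelines:
    traces:                      # original pipeline now exports into the connector
      receivers: [otlp]          # placeholder receiver
      exporters: [routing]       # debug exporter removed for clarity
    traces/standard:             # spans NOT matching the rule
      receivers: [routing]
      exporters: [file/traces/standard]
    traces/security:             # spans matching the rule
      receivers: [routing]
      exporters: [file/traces/security]
```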