Commit 3f78994: Publishing Ingest Processor Workshop

---
title: How to connect to your workshop environment
linkTitle: 1.1 How to connect to your workshop environment
weight: 2
---

In this section you will learn:

1. How to retrieve the URL for your Splunk Enterprise Cloud instances.
2. How to access the Splunk Observability Cloud workshop organization.

---

## 1. Splunk Cloud Instances

There are three instances that will be used throughout this workshop which have already been provisioned for you:

1. Splunk Enterprise Cloud
2. Splunk Ingest Processor (SCS Tenant)
3. Splunk Observability Cloud

The Splunk Enterprise Cloud and Ingest Processor instances are hosted in [Splunk Show](https://show.splunk.com). If you were invited to the workshop, you should have received an email with an invite to the event in [Splunk Show](https://show.splunk.com), or a link to the event will have been provided at the beginning of the workshop.

Log in to Splunk Show using your [splunk.com](https://login.splunk.com/) credentials. You should see the event for this workshop. Open the event to see the instance details for your Splunk Cloud and Ingest Processor instances.

{{% notice title="Note" style="primary" icon="lightbulb" %}}

Take note of the Participant Number provided in your Splunk Show event details. This number will be included in the `sourcetype` that you will use for searching and filtering the Kubernetes data. Because this is a shared environment, use only the participant number provided so that other participants' data is not affected.

{{% /notice %}}

## 2. Splunk Observability Cloud Instances

You should have also received an email to access the Splunk Observability Cloud workshop organization (you may need to check your spam folder). If you have not received an email, let your workshop instructor know. To access the environment, click the **Join Now** button.

![Splunk Observability Cloud Invitation](../../images/workshop_invitation.png)

{{% notice title="Important" style="info" %}}
If you access the event before the workshop start time, your instances may not be available yet. Don't worry, they will be provided once the workshop begins.
{{% /notice %}}

Additionally, you have been invited to a Splunk Observability Cloud workshop organization. The invitation includes a link to the environment. If you don't have a Splunk Observability Cloud account already, you will be asked to create one. If you already have one, you can log in to the instance and you will see the workshop organization in your available organizations.

---
title: Getting Started
linkTitle: 1. Getting Started
weight: 1
---

During this _**technical**_ Ingest Processor[^1] for Splunk Observability Cloud workshop, you will have the opportunity to get hands-on with Ingest Processor in Splunk Enterprise Cloud.

To simplify the workshop modules, a pre-configured Splunk Enterprise Cloud instance is provided. The instance is pre-configured with all of the requirements for creating an Ingest Processor pipeline.

This workshop will introduce you to the benefits of using Ingest Processor to convert robust logs to metrics and send those metrics to Splunk Observability Cloud. By the end of these technical workshops, you will have a good understanding of some of the key features and capabilities of Ingest Processor in Splunk Enterprise Cloud and the value of using Splunk Observability Cloud as a destination within an Ingest Processor pipeline.

Here are the instructions on how to access your pre-configured [Splunk Enterprise Cloud](./1-access-cloud-instances/) instance.

![Splunk Ingest Processor Architecture](../images/IngestProcessor-architecture-diagram_release_updated2.png)

[^1]: [**Ingest Processor**](https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/IngestProcessor/AboutIngestProcessorSolution) is a data processing capability that works within your Splunk Cloud Platform deployment. Use the Ingest Processor to configure data flows, control data format, apply transformation rules prior to indexing, and route data to destinations.

---
title: How Ingest Processor Works
linkTitle: 2. How Ingest Processor Works
weight: 1
---

###### System architecture

The primary components of the Ingest Processor solution are the Ingest Processor service and the SPL2 pipelines that support data processing. The following diagram provides an overview of how the components of the Ingest Processor solution work together:

![Splunk Ingest Processor Architecture](../images/IngestProcessor-architecture-diagram_release_updated2.png)

###### Ingest Processor service

The Ingest Processor service is a cloud service hosted by Splunk. It is part of the data management experience, which is a set of services that fulfill a variety of data ingest and processing use cases.

You can use the Ingest Processor service to do the following:

* Create and apply SPL2 pipelines that determine how each Ingest Processor processes and routes the data that it receives.
* Define source types to identify the kind of data that you want to process and determine how the Ingest Processor breaks and merges that data into distinct events.
* Create connections to the destinations that you want your Ingest Processor to send processed data to.

###### Pipelines

A pipeline is a set of data processing instructions written in SPL2. When you create a pipeline, you write a specialized SPL2 statement that specifies which data to process, how to process it, and where to send the results. When you apply a pipeline, the Ingest Processor uses those instructions to process all the data that it receives from data sources such as Splunk forwarders, HTTP clients, and logging agents.

Each pipeline selects and works with a subset of all the data that the Ingest Processor receives. For example, you can create a pipeline that selects events with the source type `cisco_syslog` from the incoming data, and then sends them to a specified index in Splunk Cloud Platform. This subset of selected data is called a partition. For more information, see [Partitions](http://docs.splunk.com/Documentation/SplunkCloud/latest/IngestProcessor/Architecture#Partitions).
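
Expressed as SPL2, the `cisco_syslog` example might look like the following minimal sketch. This is an illustration only, modeled on the pipeline you will build later in this workshop: the partition condition (source type equals `cisco_syslog`) is configured when you create the pipeline, so `$source` already contains only those events, and the index name `network_syslog` is a hypothetical placeholder.

```
/* Minimal pipeline sketch: the partition (sourcetype == "cisco_syslog")
   is defined in the pipeline builder, so $source contains only those events. */
$pipeline =
| from $source
// Route the selected events to a hypothetical index in Splunk Cloud Platform
| eval index = "network_syslog"
| into $destination;
```
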

The Ingest Processor solution supports only the commands and functions that are part of the `IngestProcessor` profile. For information about the specific SPL2 commands and functions that you can use to write pipelines for Ingest Processor, see [Ingest Processor pipeline syntax](http://docs.splunk.com/Documentation/SplunkCloud/latest/IngestProcessor/PipelinesOverview). For a summary of how the `IngestProcessor` profile supports different commands and functions compared to other SPL2 profiles, see the following pages in the *SPL2 Search Reference*:

* [Compatibility Quick Reference for SPL2 commands](http://docs.splunk.com/Documentation/SCS/current/SearchReference/CompatibilityQuickReferenceforSPL2commands)
* [Compatibility Quick Reference for SPL2 evaluation functions](http://docs.splunk.com/Documentation/SCS/current/SearchReference/CompatibilityQuickReferenceforSPL2evaluationfunctions)

---
title: Login to Splunk Cloud
linkTitle: 3.1 Login to Splunk Cloud
weight: 2
---

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics which are sent to the Splunk Observability Cloud workshop organization. Before getting started, you will need to access the Splunk Cloud and Ingest Processor SCS Tenant environments provided in the Splunk Show event details.

{{% notice title="Pre-requisite: Login to Splunk Enterprise Cloud" style="green" icon="running" %}}

Open the **Ingest Processor Cloud Stack** URL provided in the Splunk Show event details.

![Splunk Cloud Instance Details](../../images/show_instances_sec.png)

In the Connection info, click the **Stack URL** link to open your Splunk Cloud stack.

![Splunk Cloud Connection Details](../../images/sec_connection_details.png)

Use the `admin` username and password to log in to Splunk Cloud.

![Splunk Cloud Login](../../images/sec_login.png)

After logging in, if prompted, accept the Terms of Service and click **OK**.

![Splunk Cloud Terms of Service](../../images/sec_terms.png)

Navigate back to the Splunk Show event details and select the Ingest Processor SCS Tenant.

![Ingest Processor Connection Details](../../images/show_instances_scs.png)

Click on the **Console URL** to access the **Ingest Processor SCS Tenant**.

{{% notice title="Note" style="primary" icon="lightbulb" %}}
**Single Sign-On (SSO)**
Single Sign-On (SSO) is configured between the Splunk Data Management service (‘SCS Tenant’) and Splunk Cloud environments, so if you have already logged in to your Splunk Cloud stack you should automatically be logged in to the Splunk Data Management service. If you are prompted for credentials, use the credentials provided in the Splunk Show event (listed under the ‘Splunk Cloud Stack’ section).
{{% /notice %}}

{{% /notice %}}

---
title: Review Kubernetes Audit Logs
linkTitle: 3.2 Review Kubernetes Audit Logs
weight: 3
---

In this section you will review the Kubernetes Audit Logs that are being collected. You can see that the events are quite robust, which can make charting them inefficient. To address this, you will create an Ingest Pipeline in Ingest Processor that will convert these events to metrics that will be sent to Splunk Observability Cloud. This will allow you to chart the events much more efficiently and take advantage of the real-time streaming metrics in Splunk Observability Cloud.

{{% notice title="Exercise: Review Kubernetes Audit Logs" style="green" icon="running" %}}

1. Open your **Ingest Processor Cloud Stack** instance using the URL provided in the Splunk Show workshop details.

2. Navigate to **Apps** -> **Search and Reporting**.

![Search and Reporting](../../images/search_and_reporting.png?width=20vw)

3. In the search bar, enter the following SPL search string, replacing `PARTICIPANT_NUMBER` with the participant number provided in your Splunk Show event details:

```
index=main sourcetype="kube:apiserver:audit:PARTICIPANT_NUMBER"
```

4. Press **Enter** or click the green magnifying glass to run the search.

![Kubernetes Audit Log](../../images/k8s_audit_log.png)

You should now see the Kubernetes Audit Logs for your environment. Notice that the events are fairly robust. Explore the available fields and start to think about what information would be good candidates for metrics and dimensions. Ask yourself: What fields would I like to chart, and how would I like to be able to filter, group, or split those fields?
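
One way to survey candidate fields is to add a quick aggregation to the base search. The following is a sketch that assumes the standard Kubernetes audit log fields `verb` and `objectRef.resource` are extracted from your events; replace `PARTICIPANT_NUMBER` as before:

```
index=main sourcetype="kube:apiserver:audit:PARTICIPANT_NUMBER"
| stats count by verb, objectRef.resource
```

Fields with a small, stable set of distinct values, such as the request verb or the resource type, tend to make good metric dimensions; high-cardinality fields are usually better left in the logs.
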

{{% /notice %}}

---
title: Create an Ingest Pipeline
linkTitle: 3.3 Create an Ingest Pipeline
weight: 4
---

In this section you will create an Ingest Pipeline which will convert Kubernetes Audit Logs to metrics which are sent to the Splunk Observability Cloud workshop organization.

{{% notice title="Exercise: Create Ingest Pipeline" style="green" icon="running" %}}

1. Open the **Ingest Processor SCS Tenant** using the connection details provided in the Splunk Show event.

![Launch Splunk Cloud Platform](../../images/data_management_home.png?width=40vw)

{{% notice title="Note" style="primary" icon="lightbulb" %}}

When you open the **Ingest Processor SCS Tenant**, if you are taken to a welcome page, click **Launch** under **Splunk Cloud Platform** to be taken to the Data Management page where you will configure the Ingest Pipeline.

![Launch Splunk Cloud Platform](../../images/launch_scp.png)

{{% /notice %}}

2. From the Splunk Data Management console select **Pipelines** -> **New pipeline** -> **Ingest Processor pipeline**.

![New Ingest Processor Pipeline](../../images/new_pipeline.png?width=40vw)

3. In the **Get started** step of the Ingest Processor configuration page select **Blank Pipeline** and click **Next**.

![Blank Ingest Processor Pipeline](../../images/blank_pipeline.png?width=40vw)

4. In the **Define your pipeline’s partition** step of the Ingest Processor configuration page select **Partition by sourcetype**. Select the **= equals** Operator and enter `kube:apiserver:audit:PARTICIPANT_NUMBER` (be sure to replace `PARTICIPANT_NUMBER` with the participant number you were assigned) for the value. Click **Apply**.

![Add Partition](../../images/add_partition.png?width=40vw)

5. Click **Next**.

6. In the **Add sample data** step of the Ingest Processor configuration page select **Capture new snapshot**. Enter `k8s_audit` for the name and click **Capture**.

![Capture Snapshot](../../images/capture_snapshot.png?width=40vw)

7. Make sure your newly created snapshot (`k8s_audit`) is selected and then click **Next**.

![Configure Snapshot Sourcetype](../../images/capture_snapshot_sourcetype.png?width=20vw)

8. In the **Select a metrics destination** step of the Ingest Processor configuration page select **show_o11y_org**. Click **Next**.

![Metrics Destination](../../images/metrics_destination.png?width=20vw)

9. In the **Select a data destination** step of the Ingest Processor configuration page select **splunk_indexer**. Under **Specify how you want your events to be routed to an index** select **Default**. Click **Done**.

![Event Routing](../../images/event_routing.png?width=20vw)

10. In the **Pipeline search field** replace the default search with the following.

{{% notice title="Note" style="primary" icon="lightbulb" %}}
**Replace `UNIQUE_FIELD` in the metric name with a unique value which will be used to identify your metric in Observability Cloud.**
{{% /notice %}}

```
/* A valid SPL2 statement for a pipeline must start with "$pipeline",
   and include "from $source" and "into $destination". */
/* Import logs_to_metrics */
import logs_to_metrics from /splunk/ingest/commands

$pipeline =
| from $source
| thru [
// Define the metric name, type, and value for the Kubernetes events
| logs_to_metrics name="k8s_audit_UNIQUE_FIELD" metrictype="counter" value=1 time=_time
| into $metrics_destination
]
| eval index = "kube_logs"
// Send the unfiltered log events to the kube_logs index at the data destination
| into $destination;
```

{{% notice title="New to SPL2?" style="info" icon="lightbulb" %}}

Here is a breakdown of what the SPL2 query is doing:
* First, you are importing the built-in `logs_to_metrics` command which will be used to convert the Kubernetes events to metrics.
* You're using the source data, which you can see on the right is any event from the `kube:apiserver:audit` sourcetype.
* Now, you use the `thru` command which writes the source dataset to the following command, in this case `logs_to_metrics`.
* You can see that the metric name (`k8s_audit`), metric type (`counter`), value, and timestamp are all provided for the metric. You're using a value of 1 for this metric because we want to count the number of times the event occurs.
* Next, you choose the destination for the metric using the `into $metrics_destination` command, which is our Splunk Observability Cloud organization.
* Finally, you can send the raw log events to another destination, in this case another index, so they are retained if we ever need to access them.

{{% /notice %}}

11. In the upper-right corner click the **Preview** button ![Preview Button](../../images/preview.png?height=20px&classes=inline) or press CTRL+Enter (CMD+Enter on Mac). From the **Previewing $pipeline** dropdown select **$metrics_destination**. Confirm you are seeing a preview of the metrics that will be sent to Splunk Observability Cloud.

![Preview Pipeline](../../images/preview_pipeline.png?width=40vw)

12. In the upper-right corner click the **Save pipeline** button ![Save Pipeline Button](../../images/save_pipeline_btn.png?height=20px&classes=inline). Enter a name for your pipeline and click **Save**.

![Save Pipeline Dialog](../../images/save_pipeline_dialog.png?width=40vw)

13. After clicking save, you will be asked if you would like to apply the newly created pipeline. Click **Yes, apply**.

![Apply Pipeline Dialog](../../images/apply_pipeline_dialog.png?width=40vw)

<center>
<b>The Ingest Pipeline should now be sending metrics to Splunk Observability Cloud. Keep this tab open as it will be used again in the next section.</b>

In the next step you'll confirm the pipeline is working by viewing the metrics you just created in Splunk Observability Cloud.
</center>

{{% /notice %}}

---
title: Confirm Metrics in Splunk Observability Cloud
linkTitle: 3.4 Confirm Metrics in Splunk Observability Cloud
weight: 5
---

Now that an Ingest Pipeline has been configured to convert Kubernetes Audit Logs into metrics and send them to Splunk Observability Cloud, the metrics should be available. To confirm the metrics are being collected, complete the following steps:

{{% notice title="Exercise: Confirm Metrics in Splunk Observability Cloud" style="green" icon="running" %}}

1. Log in to the **Splunk Observability Cloud** organization you were invited to for the workshop. In the upper-right corner, click the **+** Icon -> **Chart** to create a new chart.

![Create New Chart](../../images/create_new_chart.png?width=40vw)

2. In the **Plot Editor** of the newly created chart, enter the metric name you used while configuring the **Ingest Pipeline**.

![Review Metric](../../images/review_metric.png?width=40vw)

<center>
<b>You should see the metric you created in the Ingest Pipeline. Keep this tab open as it will be used again in the next section.</b>

In the next step you will update the ingest pipeline to add dimensions to the metric so you have additional context for alerting and troubleshooting.
</center>

{{% /notice %}}

---
title: Create an Ingest Pipeline
linkTitle: 3. Create an Ingest Pipeline
weight: 1
---

## Scenario Overview

In this scenario you will be playing the role of a Splunk Admin responsible for managing your organization's Splunk Enterprise Cloud environment. You recently worked with an internal application team on instrumenting their Kubernetes environment with Splunk APM and Infrastructure Monitoring using OpenTelemetry to monitor their critical microservice applications.

The logs from the Kubernetes environment are also being collected and sent to Splunk Enterprise Cloud. These logs include:

* Pod logs (application logs)
* Kubernetes Events
* Kubernetes Cluster Logs
* Control Plane Node logs
* Worker Node logs
* Audit Logs

As a Splunk Admin you want to ensure that the data you are collecting is optimized so it can be analyzed in the most efficient way possible. Taking this approach accelerates troubleshooting and ensures efficient license utilization.

One way to accomplish this is by using Ingest Processor to convert robust logs to metrics and use Splunk Observability Cloud as the destination for those metrics. Not only does this make collecting the logs more efficient, you also gain the ability to use the newly created metrics in Splunk Observability Cloud, where they can be correlated with Splunk APM data (traces) and Splunk Infrastructure Monitoring data to provide additional troubleshooting context. Because Splunk Observability Cloud uses a streaming metrics pipeline, the metrics can be alerted on in real time, speeding up problem identification. Additionally, you can use the Metrics Pipeline Management functionality to further optimize the data by aggregating metrics, dropping unnecessary fields, and archiving less important or unneeded metrics.

In the next step you'll create an Ingest Processor Pipeline which will convert Kubernetes Audit Logs to metrics that will be sent to Observability Cloud.
