Commit d10a5e7

Populating content
1 parent af07260 commit d10a5e7

File tree

4 files changed: +158 -0 lines changed
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
---
title: How to connect to your workshop environment
linkTitle: 1.1 How to connect to your workshop environment
weight: 2
---

Access Show and spin up the workshop environment.

TBD
Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
---
title: Getting Started
linkTitle: 1. Getting Started
weight: 1
---

# Monitoring and Alerting with Splunk, AppDynamics, and Splunk Observability Cloud

## Introduction and Overview

In today's complex IT landscape, ensuring the performance and availability of applications and services is paramount. This workshop will introduce you to a powerful combination of tools – Splunk, AppDynamics, Splunk Observability Cloud, and Splunk IT Service Intelligence (ITSI) – that work together to provide comprehensive monitoring and alerting capabilities.

### The Challenge of Modern Monitoring

Modern applications often rely on distributed architectures, microservices, and cloud infrastructure. This complexity makes it challenging to pinpoint the root cause of performance issues or outages. Traditional monitoring tools often focus on individual components, leaving gaps in understanding the overall health and performance of a service.

### The Solution: Integrated Observability

A comprehensive observability strategy requires integrating data from various sources and correlating it to gain actionable insights. This workshop will demonstrate how Splunk, AppDynamics, Splunk Observability Cloud, and ITSI work together to achieve this:

* **Splunk:** Acts as the central platform for log analytics, security information and event management (SIEM), and broader data analysis. It ingests data from AppDynamics, Splunk Observability Cloud, and other sources, enabling powerful search, visualization, and correlation capabilities. Splunk provides a holistic view of your IT environment.

* **Splunk Observability Cloud:** Offers full-stack observability, encompassing infrastructure metrics, distributed traces, and logs. It provides a unified view of the health and performance of your entire infrastructure, from servers and containers to cloud services and custom applications. Splunk Observability Cloud helps correlate performance issues across the entire stack.

* **AppDynamics:** Provides deep Application Performance Monitoring (APM). It instruments applications to capture detailed performance metrics, including transaction traces, code-level diagnostics, and user experience data. AppDynamics excels at identifying performance bottlenecks *within* the application.

* **Splunk IT Service Intelligence (ITSI):** Provides service intelligence by correlating data from all the other platforms. ITSI allows you to define services, map dependencies, and monitor Key Performance Indicators (KPIs) that reflect the overall health and performance of those services. ITSI is essential for understanding the *business impact* of IT issues.
### Data Flow and Integration

A key concept to understand is how data flows between these platforms:

1. **Splunk Observability Cloud and AppDynamics collect data:** They monitor applications and infrastructure, gathering performance metrics and traces.

2. **Data is sent to Splunk:** AppDynamics and Splunk Observability Cloud integrate with Splunk to forward their collected data, alongside logs that are sent to Splunk directly.

3. **Splunk analyzes and indexes data:** Splunk processes and stores the data, making it searchable and analyzable (see the example search after this list).

4. **ITSI leverages Splunk data:** ITSI uses the data in Splunk to create services, define KPIs, and monitor the overall health of your IT operations.
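Once these integrations are in place, data from all three sources can be searched side by side in Splunk. The sketch below is illustrative only: the `appdynamics` and `o11y_events` index names are placeholder assumptions, and `application_logs` is borrowed from the alerting scenario later in this workshop; substitute whatever indexes your integrations actually write to.

```splunk
(index=appdynamics) OR (index=o11y_events) OR (index=application_logs)
| stats count BY index, sourcetype
```

A quick breakdown by index and sourcetype like this is a useful sanity check that each integration is delivering data before you start building alerts or ITSI KPIs on top of it.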
### Workshop Objectives

By the end of this workshop, you will:

* Understand the complementary roles of Splunk, AppDynamics, Splunk Observability Cloud, and ITSI.
* Create basic alerts in Splunk, Splunk Observability Cloud, and AppDynamics.
* Configure a new service with a simple KPI and alerting in ITSI.
* Understand the concept of episodes in ITSI.

This workshop provides a foundation for building a robust observability practice. We will focus on the alerting configuration workflows, preparing you to explore more advanced features and configurations in your own environment. We will **not** be covering ITSI or Add-On installation and configuration.
Here are the instructions on how to access your pre-configured [Splunk Enterprise Cloud](./1-access-cloud-instances/) instance.
Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
---
title: Creating Basic Alerts
linkTitle: 2. Creating Basic Alerts
weight: 1
---

# Setting Up Basic Alerts in Splunk Enterprise, AppDynamics, and Splunk Observability Cloud

This section covers the creation of basic alerts in Splunk Enterprise, AppDynamics, and Splunk Observability Cloud. These examples focus on simplicity and demonstrating the core concepts. Remember that real-world alerting scenarios often require more complex configurations and thresholds.

## 1. Splunk Enterprise Alerts

Splunk alerts are triggered by search results that match specific criteria. We'll create a scheduled alert that notifies us when a certain condition is met.

**Scenario:** Alert when the number of "error" events in the "application_logs" index exceeds 10 in the last 5 minutes.

**Steps:**
1. **Create a Search:** Start by creating a Splunk search that identifies the events you want to alert on. For example:

    ```splunk
    index=application_logs level=error
    ```

    Use the time picker to select "Relative" and set the timespan to 10 minutes.
2. **Configure the Alert:**
    * Click "Save As" and select "Alert."
    * Give your alert a descriptive name (e.g., "Application Error Alert").
    * **Trigger Condition:**
        * **Scheduled:** Choose "Scheduled" to evaluate the search on a set schedule. Below the schedule option is a button for selecting the frequency; choose "Run on Cron Schedule." If the time range below that is set to something other than 10 minutes, update it.
        * **Triggered when:** Select "Number of results" "is greater than" "10."
        * **Time Range:** Set to "5 minutes."
    * **Trigger Actions:**
        * For this basic example, choose "Add to Triggered Alerts." In a real-world scenario, you'd configure email notifications, Slack integrations, or other actions.
    * **Save:** Save the alert.
**Explanation:** This alert runs the search every 10 minutes and triggers if the search returns more than 10 results. The "Add to Triggered Alerts" action simply adds an entry to the Splunk Triggered Alerts list.

**Time Ranges and Frequency:** Since everything in Splunk core is a search, you need to consider the search timespan and frequency so that you are not a) searching the same data multiple times with an overlapping timespan, b) missing events because of a gap between timespan and frequency, c) running too frequently and adding overhead, or d) running too infrequently and experiencing delays in alerting.
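One simple way to keep the timespan and frequency in step is to pin the search window explicitly in the search itself. The sketch below reuses the scenario's `application_logs` index and snaps both boundaries to the minute (`@m`), so a search scheduled every 10 minutes evaluates exactly one 10-minute window with no overlap and no gap; treat it as an illustration rather than the only way to do this.

```splunk
index=application_logs level=error earliest=-10m@m latest=@m
```

The "Number of results is greater than 10" trigger condition from the steps above still applies unchanged, since the search continues to return the raw error events.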
## 2. AppDynamics Alerts (Health Rule Violations)

**2. Create a Health Rule (or modify an existing one):**
* Click "Create Rule" (or edit an existing rule that applies to your application).
* Give the health rule a descriptive name (e.g., "Order Service Response Time Alert").
* **Scope:** Select the application and tier (e.g., "OrderService").
* **Conditions:**
    * Choose the metric "Average Response Time."
    * Set the threshold: "is greater than" "500" "milliseconds."
    * Configure the evaluation frequency (how often AppDynamics checks the metric).
* **Actions:**
    * For this basic example, choose "Log to console." In a real-world scenario, you would configure email, SMS, or other notification channels.
* **Save:** Save the health rule.

**Explanation:** This health rule continuously monitors the average response time of the "OrderService." If the response time exceeds 500ms, the health rule is violated, triggering the alert and the configured actions.
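The health rule itself lives entirely in AppDynamics, but if its violation events are also forwarded into Splunk (for example via an AppDynamics add-on, whose installation is out of scope for this workshop), they can be reviewed there as well. The index, sourcetype, and field names below are purely illustrative assumptions:

```splunk
index=appdynamics sourcetype="appdynamics:health_rule_violations"
| stats count BY application, healthRuleName, severity
```

A count grouped by health rule and severity gives a quick view of which rules are firing most often before that data is rolled into ITSI later in the workshop.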
## 3. Splunk Observability Cloud Alerts (Detectors)

**2. Create a Detector:**
* Click "Create Detector."
* Give the detector a descriptive name (e.g., "High CPU Utilization Alert").
* **Signal:**
    * Select the metric you want to monitor (e.g., "host.cpu.utilization"). Use the metric finder to locate the correct metric.
    * Add any necessary filters to specify the host (e.g., `host:my-hostname`).
* **Condition:**
    * Set the threshold: "is above" "80" "%."
    * Configure the evaluation frequency and the "for" duration (how long the condition must be true before the alert triggers).
* **Notifications:**
    * For this example, choose a simple notification method (e.g., a test webhook). In a real-world scenario, you would configure integrations with PagerDuty, Slack, or other notification systems.
* **Save:** Save the detector.

**Explanation:** This detector monitors the CPU utilization metric for the specified host. If the CPU utilization exceeds 80% for the configured "for" duration, the detector triggers the alert and sends a notification.
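If the host metrics from Splunk Observability Cloud are also flowing into a Splunk metrics index (as described in the data flow overview), a quick look at recent utilization can help you judge whether 80% is a sensible threshold for your hosts. The `o11y_metrics` index name and the exact metric name are assumptions for illustration:

```splunk
| mstats avg(_value) AS avg_cpu WHERE index=o11y_metrics AND metric_name="host.cpu.utilization" span=5m BY host
```

Comparing the detector threshold against the metric's normal range is one way to avoid the alert-fatigue problem called out in the considerations below.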
**Important Considerations for All Platforms:**

* **Thresholds:** Carefully consider the thresholds you set for your alerts. Thresholds that are too sensitive can lead to alert fatigue, while thresholds that are too high might miss critical issues.
* **Notification Channels:** Integrate your alerting systems with appropriate notification channels (email, SMS, Slack, PagerDuty) to ensure that alerts are delivered to the right people at the right time.
* **Alert Grouping and Correlation:** For complex systems, implement alert grouping and correlation to reduce noise and focus on actionable insights. ITSI plays a critical role in this.
* **Documentation:** Document your alerts clearly, including the conditions that trigger them and the appropriate response procedures.

These examples provide a starting point for creating basic alerts. As you become more familiar with these platforms, you can explore more advanced alerting features and configurations to meet your specific monitoring needs.
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
---
title: Creating Services in ITSI
linkTitle: 3. Creating Services in ITSI
weight: 1
---

# Creating Services in ITSI with Dependencies Based on Entity Type

This workshop outlines how to create a service in Splunk IT Service Intelligence (ITSI) using an existing entity and establishing dependencies based on the entity's type. We'll differentiate between entities representing business workflows from Splunk Observability Cloud and those representing AppDynamics Business Transactions.

**Scenario:**

We have two existing services: "Astronomy Shop" (representing an application running in Kubernetes and monitored by Splunk Observability Cloud) and "AD.ECommerce" (representing an application monitored by AppDynamics). We want to create a new service and add it as a dependent of one of those services. It is not necessary to create a service for both during your first run through this workshop, so pick the one you are more interested in to start with.
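As a preview of the KPI configuration later in this section: every ITSI KPI is ultimately backed by a Splunk search. The sketch below shows roughly what a simple ad hoc base search for a response-time KPI might look like; the index, sourcetype, and field names are hypothetical and depend on which integration (AppDynamics or Splunk Observability Cloud) feeds the data in your environment.

```splunk
index=appdynamics sourcetype="appdynamics:metrics" metric_name="Average Response Time (ms)"
| stats avg(metric_value) AS response_time_ms BY application
```

When you define the KPI in ITSI, the aggregate (here, the average response time) becomes the value that thresholds and episode policies are evaluated against.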
