---
title: Creating Basic Alerts
linkTitle: 2. Creating Basic Alerts
weight: 1
---

# Setting Up Basic Alerts in Splunk Enterprise, AppDynamics, and Splunk Observability Cloud

This section covers creating basic alerts in Splunk Enterprise, AppDynamics, and Splunk Observability Cloud. The examples are deliberately simple and focus on core concepts; real-world alerting usually calls for more nuanced configurations and thresholds.

## 1. Splunk Enterprise Alerts

Splunk alerts are triggered by search results that match specific criteria. We'll create a scheduled alert that notifies us when a certain condition is met.

**Scenario:** Alert when the number of "error" events in the "application_logs" index exceeds 10 in the last 5 minutes.

**Steps:**

1. **Create a Search:** Start by creating a Splunk search that identifies the events you want to alert on. For example:

   ```splunk
   index=application_logs level=error
   ```

   Use the time picker to select a relative range of the last 5 minutes, matching the scenario's window.

2. **Configure the Alert:**
   * Click "Save As" and select "Alert."
   * Give your alert a descriptive name (e.g., "Application Error Alert").
   * **Trigger Condition:**
     * **Scheduled:** Choose "Scheduled" to evaluate the search on a set schedule. In the frequency selector below, choose "Run on Cron Schedule" and enter `*/5 * * * *` so the search runs every 5 minutes.
     * **Triggered when:** Select "Number of results" "is greater than" "10."
     * **Time Range:** Set to "5 minutes," matching the schedule so each run evaluates exactly one window.
   * **Trigger Actions:**
     * For this basic example, choose "Add to Triggered Alerts." In a real-world scenario, you'd configure email notifications, Slack integrations, or other actions.
   * **Save:** Save the alert.

**Explanation:** This alert runs the search every 5 minutes over the last 5 minutes of data and triggers if the search returns more than 10 results. The "Add to Triggered Alerts" action simply adds an entry to Splunk's Triggered Alerts list.

**Time Ranges and Frequency:** Since everything in Splunk Enterprise is ultimately a search, choose the search timespan and run frequency together so that you are not a) searching the same data multiple times with overlapping timespans, b) missing events because of a gap between timespan and frequency, c) running too frequently and adding overhead, or d) running too infrequently and delaying alerts. For example, a 5-minute window searched every 10 minutes misses half your events, while a 10-minute window searched every 5 minutes counts each event twice.
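
If you prefer to keep alert definitions in code, the same scheduled alert can be created through Splunk's REST API on the management port. The Python sketch below is a minimal illustration under lab assumptions: the host, credentials, and the `admin`/`search` owner/app namespace are placeholders, and the saved-search parameters should be verified against the REST API reference for your Splunk version.

```python
import requests

# Hypothetical lab values -- replace with your own Splunk host and credentials.
SPLUNK = "https://localhost:8089"      # management port, not the web UI port
AUTH = ("admin", "changeme")           # prefer a token in real deployments

payload = {
    "name": "Application Error Alert",
    "search": "index=application_logs level=error",
    "is_scheduled": "1",
    "cron_schedule": "*/5 * * * *",    # run every 5 minutes...
    "dispatch.earliest_time": "-5m",   # ...over the last 5 minutes of data
    "dispatch.latest_time": "now",
    "alert_type": "number of events",  # trigger on the result count
    "alert_comparator": "greater than",
    "alert_threshold": "10",
    "alert.track": "1",                # the "Add to Triggered Alerts" action
}

# Saved searches live under an owner/app namespace ("admin"/"search" here).
resp = requests.post(
    f"{SPLUNK}/servicesNS/admin/search/saved/searches",
    data=payload,
    auth=AUTH,
    verify=False,  # lab instances often use a self-signed certificate
)
resp.raise_for_status()
print("Alert saved search created, HTTP", resp.status_code)
```
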

## 2. AppDynamics Alerts (Health Rule Violations)

AppDynamics alerts are driven by health rules: when a rule's condition is violated, AppDynamics raises an event and runs any actions attached to the rule.

**Scenario:** Alert when the average response time of the "OrderService" tier exceeds 500 milliseconds.

**1. Open Health Rules:** In the Controller UI, select your application and navigate to "Alert & Respond" > "Health Rules."

**2. Create a Health Rule (or modify an existing one):**
   * Click "Create Rule" (or edit an existing rule that applies to your application).
   * Give the health rule a descriptive name (e.g., "Order Service Response Time Alert").
   * **Scope:** Select the application and tier (e.g., "OrderService").
   * **Conditions:**
     * Choose the metric "Average Response Time."
     * Set the threshold: "is greater than" "500" "milliseconds."
     * Configure the evaluation frequency (how often AppDynamics checks the metric).
   * **Actions:**
     * For this basic example, attach a lightweight action such as a test email notification. In a real-world scenario, you would configure email, SMS, or other notification channels.
   * **Save:** Save the health rule.

**Explanation:** This health rule continuously monitors the average response time of the "OrderService." If the response time exceeds 500 ms, the health rule is violated, triggering the alert and the configured actions.
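
For a quick programmatic sanity check, recent AppDynamics controllers expose a health-rule configuration REST API. The Python sketch below lists an application's health rules so you can confirm the new rule exists and is enabled; the controller URL, application ID, and `user@account` credentials are placeholders, and the endpoint path should be verified against the API documentation for your controller version.

```python
import requests

# Hypothetical placeholders -- point these at your own controller.
CONTROLLER = "https://mycontroller.saas.appdynamics.com"
APP_ID = 42                             # numeric application ID from the UI
AUTH = ("apiuser@customer1", "secret")  # basic auth is "user@account", password

# List the application's health rules to confirm ours is present and enabled.
resp = requests.get(
    f"{CONTROLLER}/controller/alerting/rest/v1/applications/{APP_ID}/health-rules",
    auth=AUTH,
)
resp.raise_for_status()

for rule in resp.json():
    status = "enabled" if rule.get("enabled") else "disabled"
    print(f'{rule["id"]}: {rule["name"]} ({status})')
```
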

## 3. Splunk Observability Cloud Alerts (Detectors)

In Splunk Observability Cloud, alerts come from detectors, which continuously evaluate streaming metrics against the conditions you define.

**Scenario:** Alert when CPU utilization on a specific host stays above 80%.

**1. Open Detectors:** From the navigation menu, open the detectors list (labeled "Alerts & Detectors" or "Detectors & SLOs," depending on the release).

**2. Create a Detector:**
   * Click "Create Detector."
   * Give the detector a descriptive name (e.g., "High CPU Utilization Alert").
   * **Signal:**
     * Select the metric you want to monitor (e.g., "host.cpu.utilization"). Use the metric finder to locate the correct metric.
     * Add any necessary filters to scope the signal to the host (e.g., `host:my-hostname`).
   * **Condition:**
     * Set the threshold: "is above" "80" "%."
     * Configure the evaluation frequency and the "for" duration (how long the condition must be true before the alert triggers).
   * **Notifications:**
     * For this example, choose a simple notification method (e.g., a test webhook). In a real-world scenario, you would configure integrations with PagerDuty, Slack, or other notification systems.
   * **Save:** Save the detector.

**Explanation:** This detector monitors the CPU utilization metric for the specified host. If the CPU utilization exceeds 80% for the configured "for" duration, the detector triggers the alert and sends a notification.
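
The same detector can be expressed as a SignalFlow program and created through the Observability Cloud REST API. The Python sketch below mirrors the UI configuration above; the realm, token, and host filter are placeholders, and the request body should be double-checked against the current `/v2/detector` API reference. Note that `lasting='5m'` is the programmatic equivalent of the UI's "for" duration.

```python
import requests

# Hypothetical placeholders -- use your own realm and org access token.
REALM = "us1"
TOKEN = "YOUR_ORG_ACCESS_TOKEN"

# SignalFlow program mirroring the UI steps: same metric, host filter,
# 80% threshold, and a 5-minute "for" duration via lasting().
program = (
    "signal = data('host.cpu.utilization', "
    "filter=filter('host', 'my-hostname')).publish(label='signal')\n"
    "detect(when(signal > 80, lasting='5m')).publish('High CPU Utilization')"
)

detector = {
    "name": "High CPU Utilization Alert",
    "programText": program,
    "rules": [
        {
            "detectLabel": "High CPU Utilization",  # must match the publish() label
            "severity": "Warning",
            "notifications": [],  # attach PagerDuty/Slack/webhook integrations here
        }
    ],
}

resp = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": TOKEN},
    json=detector,
)
resp.raise_for_status()
print("Detector created with id", resp.json().get("id"))
```
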

**Important Considerations for All Platforms:**

* **Thresholds:** Choose thresholds carefully. Overly sensitive thresholds lead to alert fatigue, while overly lax ones can miss critical issues.
* **Notification Channels:** Integrate your alerting systems with appropriate notification channels (email, SMS, Slack, PagerDuty) so alerts reach the right people at the right time.
* **Alert Grouping and Correlation:** For complex systems, implement alert grouping and correlation to reduce noise and focus on actionable insights. Splunk ITSI plays a critical role here.
* **Documentation:** Document your alerts clearly, including the conditions that trigger them and the appropriate response procedures.

These examples provide a starting point for creating basic alerts. As you become more familiar with these platforms, you can explore more advanced alerting features and configurations to meet your specific monitoring needs.