
Commit 4913546

[New] Alerts From Multiple Integrations by Entity IP
Higher-order rules that trigger when alerts from different integrations with different event.category values (e.g. authentication with endpoint, email with network) fire for the same entity (user or IP) within the rule's lookback window (4 hours for the user rule, 8 hours for the IP rules). Each rule is set to run every hour.

- Alerts From Multiple Integrations by Source Address
- Alerts From Multiple Integrations by Destination Address
- Alerts From Multiple Integrations by User Name
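All three rules share the same ES|QL shape: read from the alert indices, drop low-severity and known-noisy rules, aggregate per entity, and keep only entities hit by at least two integrations, two event categories, and two severity levels. A minimal sketch of that shared skeleton, using user.name as the example entity field (the other two rules substitute source.ip or destination.ip), trimmed of the per-rule exclusions shown in the diffs below:

from .alerts-security.* metadata _id
// keep non-low-severity alerts that carry the entity field
| where kibana.alert.rule.name is not null and user.name is not null and kibana.alert.rule.severity != "low"
// count how many distinct integrations, categories, and severities contributed alerts per entity
| stats Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.event_category_distinct_count = COUNT_DISTINCT(event.category),
        Esql.rule_severity_distinct_count = COUNT_DISTINCT(kibana.alert.rule.severity) by user.name
// keep only entities touched by at least two of each
| where Esql.event_module_distinct_count >= 2 and Esql.event_category_distinct_count >= 2 and Esql.rule_severity_distinct_count >= 2
| keep user.name, Esql.*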
1 parent a16307e commit 4913546

File tree

3 files changed: +260 −0 lines
Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@
[metadata]
creation_date = "2025/12/15"
maturity = "production"
updated_date = "2025/12/15"

[rule]
author = ["Elastic"]
description = """
This rule uses alert data to determine when multiple alerts from different integrations involving the same destination.ip
are triggered. Analysts can use this to prioritize triage and response, as these hosts are more likely to be compromised.
"""
from = "now-8h"
interval = "1h"
language = "esql"
license = "Elastic License v2"
name = "Alerts From Multiple Integrations by Destination Address"
risk_score = 73
rule_id = "08933236-b27a-49f6-b04a-a616983f04b9"
severity = "high"
tags = ["Use Case: Threat Detection", "Rule Type: Higher-Order Rule", "Resources: Investigation Guide"]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from .alerts-security.* metadata _id

// any alerts excluding low severity and the noisy ones
| where kibana.alert.rule.name is not null and destination.ip is not null and kibana.alert.rule.severity != "low" and
  not kibana.alert.rule.name in ("Threat Intel IP Address Indicator Match", "Threat Intel Indicator Match", "Agent Spoofing - Mismatched Agent ID")

// group alerts by destination.ip and extract values of interest for alert triage
| stats Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(kibana.alert.rule.name),
        Esql.event_category_distinct_count = COUNT_DISTINCT(event.category),
        Esql.rule_severity_distinct_count = COUNT_DISTINCT(kibana.alert.rule.severity),
        Esql.event_module_values = VALUES(event.module),
        Esql.rule_name_values = VALUES(kibana.alert.rule.name),
        Esql.message_values = VALUES(message),
        Esql.event_category_values = VALUES(event.category),
        Esql.source_ip_values = VALUES(source.ip),
        Esql.host_id_values = VALUES(host.id),
        Esql.agent_id_values = VALUES(agent.id),
        Esql.user_name_values = VALUES(user.name),
        Esql.rule_severity_values = VALUES(kibana.alert.rule.severity) by destination.ip

// filter for alerts from the same destination.ip reported by different integrations, with different categories and different severity levels
| where Esql.event_module_distinct_count >= 2 and Esql.event_category_distinct_count >= 2 and Esql.rule_severity_distinct_count >= 2
| keep destination.ip, Esql.*
'''
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Alerts From Multiple Integrations by Destination Address

The detection rule uses alert data to determine when multiple alerts from different integrations involving the same destination.ip are triggered.

### Possible investigation steps

- Review the alert details to identify the specific host involved and the different modules and rules that triggered the alert.
- Examine the timeline of the alerts to understand the sequence of events and determine if there is a pattern or progression in the tactics used.
- Correlate the alert data with other logs and telemetry from the host, such as process creation, network connections, and file modifications, to gather additional context.
- Investigate any known vulnerabilities or misconfigurations on the host that could have been exploited by the adversary.
- Check for any indicators of compromise (IOCs) associated with the alerts, such as suspicious IP addresses, domains, or file hashes, and search for these across the network.
- Assess the impact and scope of the potential compromise by determining if other hosts or systems have similar alerts or related activity.

### False positive analysis

- Alerts from routine administrative tasks may trigger multiple tactics. Review and exclude known benign activities such as scheduled software updates or system maintenance.
- Security tools running on the host might generate alerts across different tactics. Identify and exclude alerts from trusted security applications to reduce noise.
- Automated scripts or batch processes can mimic adversarial behavior. Analyze and whitelist these processes if they are verified as non-threatening.
- Frequent alerts from development or testing environments can be misleading. Consider excluding these environments from the rule or applying a different risk score.
- User behavior anomalies, such as accessing multiple systems or applications, might trigger alerts. Implement user behavior baselines to differentiate between normal and suspicious activities.

### Response and remediation

- Isolate the affected host from the network immediately to prevent further lateral movement by the adversary.
- Conduct a thorough forensic analysis of the host to identify the specific vulnerabilities exploited and gather evidence of the attack phases involved.
- Remove any identified malicious software or unauthorized access tools from the host, ensuring all persistence mechanisms are eradicated.
- Apply security patches and updates to the host to address any exploited vulnerabilities and prevent similar attacks.
- Restore the host from a known good backup if necessary, ensuring that the backup is free from compromise.
- Monitor the host and network for any signs of re-infection or further suspicious activity, using enhanced logging and alerting based on the identified attack patterns.
- Escalate the incident to the appropriate internal or external cybersecurity teams for further investigation and potential legal action if the attack is part of a larger campaign."""
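The investigation steps in this rule suggest reviewing the contributing alerts and their modules for a flagged address. A minimal follow-up pivot an analyst might run against the same alert indices is sketched below; 203.0.113.10 is a hypothetical destination.ip copied from a rule hit, not a value defined in this commit:

from .alerts-security.* metadata _id
// 203.0.113.10 is a hypothetical flagged destination.ip; substitute the address from the alert
| where destination.ip == "203.0.113.10" and kibana.alert.rule.severity != "low"
| keep @timestamp, kibana.alert.rule.name, kibana.alert.rule.severity, event.module, event.category, source.ip, host.id, user.name, message
| sort @timestamp asc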
Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@
[metadata]
creation_date = "2025/12/15"
maturity = "production"
updated_date = "2025/12/15"

[rule]
author = ["Elastic"]
description = """
This rule uses alert data to determine when multiple alerts from different integrations involving the same source.ip are
triggered. Analysts can use this to prioritize triage and response, as these hosts are more likely to be compromised.
"""
from = "now-8h"
interval = "1h"
language = "esql"
license = "Elastic License v2"
name = "Alerts From Multiple Integrations by Source Address"
risk_score = 73
rule_id = "7d02c440-52a8-4854-ad3f-71af7fbb4fc6"
severity = "high"
tags = ["Use Case: Threat Detection", "Rule Type: Higher-Order Rule", "Resources: Investigation Guide"]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from .alerts-security.* metadata _id

// any alerts excluding low severity and the noisy ones
| where kibana.alert.rule.name is not null and source.ip is not null and kibana.alert.rule.severity != "low" and
  not kibana.alert.rule.name in ("Threat Intel IP Address Indicator Match", "Threat Intel Indicator Match", "Agent Spoofing - Mismatched Agent ID")

// group alerts by source.ip and extract values of interest for alert triage
| stats Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(kibana.alert.rule.name),
        Esql.event_category_distinct_count = COUNT_DISTINCT(event.category),
        Esql.rule_severity_distinct_count = COUNT_DISTINCT(kibana.alert.rule.severity),
        Esql.event_module_values = VALUES(event.module),
        Esql.rule_name_values = VALUES(kibana.alert.rule.name),
        Esql.message_values = VALUES(message),
        Esql.event_category_values = VALUES(event.category),
        Esql.destination_ip_values = VALUES(destination.ip),
        Esql.host_id_values = VALUES(host.id),
        Esql.agent_id_values = VALUES(agent.id),
        Esql.user_name_values = VALUES(user.name),
        Esql.rule_severity_values = VALUES(kibana.alert.rule.severity) by source.ip

// filter for alerts from the same source.ip reported by different integrations, with different categories and different severity levels
| where Esql.event_module_distinct_count >= 2 and Esql.event_category_distinct_count >= 2 and Esql.rule_severity_distinct_count >= 2
| keep source.ip, Esql.*
'''
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Alerts From Multiple Integrations by Source Address

The detection rule uses alert data to determine when multiple alerts from different integrations involving the same source.ip are triggered.

### Possible investigation steps

- Review the alert details to identify the specific host involved and the different modules and rules that triggered the alert.
- Examine the timeline of the alerts to understand the sequence of events and determine if there is a pattern or progression in the tactics used.
- Correlate the alert data with other logs and telemetry from the host, such as process creation, network connections, and file modifications, to gather additional context.
- Investigate any known vulnerabilities or misconfigurations on the host that could have been exploited by the adversary.
- Check for any indicators of compromise (IOCs) associated with the alerts, such as suspicious IP addresses, domains, or file hashes, and search for these across the network.
- Assess the impact and scope of the potential compromise by determining if other hosts or systems have similar alerts or related activity.

### False positive analysis

- Alerts from routine administrative tasks may trigger multiple tactics. Review and exclude known benign activities such as scheduled software updates or system maintenance.
- Security tools running on the host might generate alerts across different tactics. Identify and exclude alerts from trusted security applications to reduce noise.
- Automated scripts or batch processes can mimic adversarial behavior. Analyze and whitelist these processes if they are verified as non-threatening.
- Frequent alerts from development or testing environments can be misleading. Consider excluding these environments from the rule or applying a different risk score.
- User behavior anomalies, such as accessing multiple systems or applications, might trigger alerts. Implement user behavior baselines to differentiate between normal and suspicious activities.

### Response and remediation

- Isolate the affected host from the network immediately to prevent further lateral movement by the adversary.
- Conduct a thorough forensic analysis of the host to identify the specific vulnerabilities exploited and gather evidence of the attack phases involved.
- Remove any identified malicious software or unauthorized access tools from the host, ensuring all persistence mechanisms are eradicated.
- Apply security patches and updates to the host to address any exploited vulnerabilities and prevent similar attacks.
- Restore the host from a known good backup if necessary, ensuring that the backup is free from compromise.
- Monitor the host and network for any signs of re-infection or further suspicious activity, using enhanced logging and alerting based on the identified attack patterns.
- Escalate the incident to the appropriate internal or external cybersecurity teams for further investigation and potential legal action if the attack is part of a larger campaign."""
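The false positive guidance for this rule calls for excluding known benign activity. One way to do that is to extend the rule's first where clause with an address exclusion; the 10.10.0.0/16 range below is a hypothetical vulnerability-scanner subnet, not something defined in this commit, and should be replaced with ranges verified in your environment:

// 10.10.0.0/16 below is a hypothetical benign scanner subnet; replace with verified ranges
| where kibana.alert.rule.name is not null and source.ip is not null and kibana.alert.rule.severity != "low" and
  not kibana.alert.rule.name in ("Threat Intel IP Address Indicator Match", "Threat Intel Indicator Match", "Agent Spoofing - Mismatched Agent ID") and
  not CIDR_MATCH(source.ip, "10.10.0.0/16")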
Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
[metadata]
creation_date = "2025/12/15"
maturity = "production"
updated_date = "2025/12/15"

[rule]
author = ["Elastic"]
description = """
This rule uses alert data to determine when multiple alerts from different integrations involving the same user.name are
triggered. Analysts can use this to prioritize triage and response, as these users are more likely to be compromised.
"""
from = "now-4h"
interval = "1h"
language = "esql"
license = "Elastic License v2"
name = "Alerts From Multiple Integrations by User Name"
risk_score = 73
rule_id = "1dd99dbf-b98d-4956-876b-f13bc0ce017f"
severity = "high"
tags = ["Use Case: Threat Detection", "Rule Type: Higher-Order Rule", "Resources: Investigation Guide"]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from .alerts-security.* metadata _id

// any alerts excluding low severity and the noisy ones
| where kibana.alert.rule.name is not null and user.name is not null and kibana.alert.rule.severity != "low" and
  not kibana.alert.rule.name in ("Threat Intel IP Address Indicator Match", "Threat Intel Indicator Match", "Agent Spoofing - Mismatched Agent ID") and
  not user.id in ("S-1-5-18", "S-1-5-19", "S-1-5-20")

// group alerts by user.name and extract values of interest for alert triage
| stats Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(kibana.alert.rule.name),
        Esql.event_category_distinct_count = COUNT_DISTINCT(event.category),
        Esql.rule_severity_distinct_count = COUNT_DISTINCT(kibana.alert.rule.severity),
        Esql.event_module_values = VALUES(event.module),
        Esql.rule_name_values = VALUES(kibana.alert.rule.name),
        Esql.message_values = VALUES(message),
        Esql.event_category_values = VALUES(event.category),
        Esql.source_ip_values = VALUES(source.ip),
        Esql.destination_ip_values = VALUES(destination.ip),
        Esql.host_id_values = VALUES(host.id),
        Esql.agent_id_values = VALUES(agent.id),
        Esql.user_id_values = VALUES(user.id),
        Esql.rule_severity_values = VALUES(kibana.alert.rule.severity) by user.name

// filter for alerts from the same user.name reported by different integrations, with different categories and different severity levels
| where Esql.event_module_distinct_count >= 2 and Esql.event_category_distinct_count >= 2 and Esql.rule_severity_distinct_count >= 2
| keep user.name, Esql.*
'''
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Alerts From Multiple Integrations by User Name

The detection rule uses alert data to determine when multiple alerts from different integrations involving the same user.name are triggered.

### Possible investigation steps

- Review the alert details to identify the specific user involved and the different modules and rules that triggered the alert.
- Examine the timeline of the alerts to understand the sequence of events and determine if there is a pattern or progression in the tactics used.
- Correlate the alert data with other logs and telemetry from the host, such as process creation, network connections, and file modifications, to gather additional context.
- Investigate any known vulnerabilities or misconfigurations on the host that could have been exploited by the adversary.
- Check for any indicators of compromise (IOCs) associated with the alerts, such as suspicious IP addresses, domains, or file hashes, and search for these across the network.
- Assess the impact and scope of the potential compromise by determining if other hosts or systems have similar alerts or related activity.

### False positive analysis

- Alerts from routine administrative tasks may trigger multiple tactics. Review and exclude known benign activities such as scheduled software updates or system maintenance.
- Security tools running on the host might generate alerts across different tactics. Identify and exclude alerts from trusted security applications to reduce noise.
- Automated scripts or batch processes can mimic adversarial behavior. Analyze and whitelist these processes if they are verified as non-threatening.
- Frequent alerts from development or testing environments can be misleading. Consider excluding these environments from the rule or applying a different risk score.
- User behavior anomalies, such as accessing multiple systems or applications, might trigger alerts. Implement user behavior baselines to differentiate between normal and suspicious activities.

### Response and remediation

- Isolate the affected host from the network immediately to prevent further lateral movement by the adversary.
- Conduct a thorough forensic analysis of the host to identify the specific vulnerabilities exploited and gather evidence of the attack phases involved.
- Remove any identified malicious software or unauthorized access tools from the host, ensuring all persistence mechanisms are eradicated.
- Apply security patches and updates to the host to address any exploited vulnerabilities and prevent similar attacks.
- Restore the host from a known good backup if necessary, ensuring that the backup is free from compromise.
- Monitor the host and network for any signs of re-infection or further suspicious activity, using enhanced logging and alerting based on the identified attack patterns.
- Escalate the incident to the appropriate internal or external cybersecurity teams for further investigation and potential legal action if the attack is part of a larger campaign."""
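This rule already filters out the Windows LocalSystem, LocalService, and NetworkService accounts via their well-known SIDs (S-1-5-18, S-1-5-19, S-1-5-20). A sketch of extending that exclusion to other automation identities identified during false positive analysis; svc_backup and svc_patch are hypothetical service account names, not values from this commit:

// svc_backup and svc_patch are hypothetical service accounts; replace with names verified in your environment
| where kibana.alert.rule.name is not null and user.name is not null and kibana.alert.rule.severity != "low" and
  not kibana.alert.rule.name in ("Threat Intel IP Address Indicator Match", "Threat Intel Indicator Match", "Agent Spoofing - Mismatched Agent ID") and
  not user.id in ("S-1-5-18", "S-1-5-19", "S-1-5-20") and
  not user.name in ("svc_backup", "svc_patch")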
