Commit 48f7cec

imays11 authored and tradebot-elastic committed
[Rule Tuning] AWS S3 Bucket Enumeration or Brute Force (#5173)
* [Rule Tuning] AWS S3 Bucket Enumeration or Brute Force
  - changed to a threshold rule to improve context
  - groups alerts by the unique combination of `tls.client.server_name` (bucket name), `source.address` (either an IP or an internal AWS service address), and `aws.cloudtrail.user_identity.type` (this prevents capturing the duplicate events produced when a user assumes a role inside another AWS account, which results in the same request being recorded twice, once each under the AssumedRole and AWSAccount identity types)
  - uses `event.id` as the cardinality field and counts >= 40
  - checks that `tls.client.server_name` exists in the query, to prevent capturing denied internal AWS actions that occur against no particular bucket but against the S3 service itself
  - adds highlighted fields
  - replaces the MITRE technique
  - replaces the investigation guide with a more detailed one, including specifics on investigating Threshold rule types via Timeline
* kuery language update
* removing extra space
* adding integration
* removing filebeat because of tls.client.server_name
* update IG references (updated the references listed in the IG)

---------

Co-authored-by: Terrance DeJesus <[email protected]>

(cherry picked from commit b73e6e2)
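The grouping-plus-cardinality behavior described in the commit message can be sketched in plain Python. This is a minimal illustration only: the sample events, values, and the flat dot-notation keys are hypothetical, and the real evaluation is performed by the Elastic detection engine, not by code like this.

```python
from collections import defaultdict

# Hypothetical AccessDenied CloudTrail events; field names follow the
# rule's ECS mappings, values are illustrative only.
events = [
    {
        "tls.client.server_name": "victim-bucket.s3.amazonaws.com",
        "source.address": "198.51.100.10",
        "aws.cloudtrail.user_identity.type": "AssumedRole",
        "event.id": f"evt-{i}",
    }
    for i in range(45)
]

# Group by the same three fields the threshold rule uses. Grouping on the
# identity type keeps the AssumedRole and AWSAccount copies of one request
# in separate buckets instead of double-counting a single burst.
groups = defaultdict(set)
for e in events:
    key = (
        e["tls.client.server_name"],
        e["source.address"],
        e["aws.cloudtrail.user_identity.type"],
    )
    groups[key].add(e["event.id"])

# Alert when a group reaches >= 40 unique event.id values (the
# cardinality condition).
alerts = [key for key, ids in groups.items() if len(ids) >= 40]
print(len(alerts))  # 1
```

Counting unique `event.id` values rather than raw hits means retried deliveries of the same event do not inflate the count toward the threshold.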
1 parent e42a5a5 commit 48f7cec

1 file changed (+72, −75 lines)
@@ -1,73 +1,79 @@
 [metadata]
 creation_date = "2024/05/01"
+integration = ["aws"]
 maturity = "production"
-updated_date = "2025/09/25"
+updated_date = "2025/10/01"
 
 [rule]
 author = ["Elastic"]
 description = """
-Identifies a high number of failed S3 operations from a single source and account (or anonymous account) within a short
-timeframe. This activity can be indicative of attempting to cause an increase in billing to an account for excessive
-random operations, cause resource exhaustion, or enumerating bucket names for discovery.
+Identifies a high number of failed S3 operations against a single bucket from a single source address within a short timeframe.
+This activity can indicate attempts to collect bucket objects or cause an increase in billing to an account via internal "AccessDenied" errors.
 """
-false_positives = ["Known or internal account IDs or automation"]
+false_positives = [
+    """
+    External account IDs or broken automation may trigger this rule. For AccessDenied (HTTP 403 Forbidden), S3 doesn't charge the bucket owner when the request is initiated outside of the bucket owner's individual AWS account or the bucket owner's AWS organization.
+    """]
 from = "now-6m"
-language = "esql"
+index = ["logs-aws.cloudtrail-*"]
+language = "kuery"
 license = "Elastic License v2"
 name = "AWS S3 Bucket Enumeration or Brute Force"
-note = """## Triage and analysis
+note = """
+## Triage and analysis
 
-### Investigating AWS S3 Bucket Enumeration or Brute Force
+> **Disclaimer**:
+> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
 
-AWS S3 buckets can be be brute forced to cause financial impact against the resource owner. What makes this even riskier is that even private, locked down buckets can still trigger a potential cost, even with an "Access Denied", while also being accessible from unauthenticated, anonymous accounts. This also appears to work on several or all [operations](https://docs.aws.amazon.com/cli/latest/reference/s3api/) (GET, PUT, list-objects, etc.). Additionally, buckets are trivially discoverable by default as long as the bucket name is known, making it vulnerable to enumeration for discovery.
+### Investigating AWS S3 Bucket Enumeration or Brute Force
 
-Attackers may attempt to enumerate names until a valid bucket is discovered and then pivot to cause financial impact, enumerate for more information, or brute force in other ways to attempt to exfil data.
+This rule detects when many failed S3 operations (HTTP 403 AccessDenied) hit a single bucket from a single source address in a short window. This can indicate bucket name enumeration, object/key guessing, or brute-force style traffic intended to drive cost or probe for misconfigurations. 403 requests from outside the bucket owner’s account/organization are not billed, but 4XX from inside the owner’s account/org can still incur charges. Prioritize confirming who is making the calls and where they originate.
 
 #### Possible investigation steps
 
-- Examine the history of the operation requests from the same `source.address` and `cloud.account.id` to determine if there is other suspicious activity.
-- Review similar requests and look at the `user.agent` info to ascertain the source of the requests (though do not overly rely on this since it is controlled by the requestor).
-- Review other requests to the same `aws.s3.object.key` as well as other `aws.s3.object.key` accessed by the same `cloud.account.id` or `source.address`.
-- Investigate other alerts associated with the user account during the past 48 hours.
-- Validate the activity is not related to planned patches, updates, or network administrator activity.
-- Examine the request parameters. These may indicate the source of the program or the nature of the task being performed when the error occurred.
-- Check whether the error is related to unsuccessful attempts to enumerate or access objects, data, or secrets.
-- Considering the source IP address and geolocation of the user who issued the command:
-    - Do they look normal for the calling user?
-    - If the source is an EC2 IP address, is it associated with an EC2 instance in one of your accounts or is the source IP from an EC2 instance that's not under your control?
-    - If it is an authorized EC2 instance, is the activity associated with normal behavior for the instance role or roles? Are there any other alerts or signs of suspicious activity involving this instance?
-- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
-- Contact the account owner and confirm whether they are aware of this activity if suspicious.
-- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours.
+- **Investigate in Timeline.** Investigate the alert in timeline (Take action -> Investigate in timeline) to retrieve and review all of the raw CloudTrail events that contributed to the threshold alert. Threshold alerts only display the grouped fields; Timeline provides a way to see individual event details such as request parameters, full error messages, and additional user context.
+- **Confirm entity & target.** Note the rule’s threshold and window. Identify the target bucket (`tls.client.server_name`) and the source (`source.address`). Verify the caller identity details via any available `aws.cloudtrail.user_identity` fields.
+- **Actor & session context.** In CloudTrail events, pivot 15–30 minutes around the spike for the same `source.address` or principal. Determine if the source is:
+    - **External** to your account/organization (recon/cost DDoS risk is lower for you due to the 2024 billing change).
+    - **Internal** (same account/org)—higher cost risk and possible misuse of internal automation.
+- **Bucket posture snapshot.** Record S3 Block Public Access, Bucket Policy, ACLs, and whether Versioning/Object Lock are enabled. Capture any recent `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, or lifecycle changes.
+- **Blast radius.** Check for similar spikes to other buckets/regions, or parallel spikes from the same source. Review any GuardDuty S3 findings and AWS Config drift related to the bucket or principal.
+- **Business context.** Contact the bucket/app owner. Validate whether a migration, scanner, or broken job could legitimately cause bursts.
 
 ### False positive analysis
 
-- Verify the `source.address` and `cloud.account.id` - there are some valid operations from within AWS directly that can cause failures and false positives. Additionally, failed automation can also cause false positives, but should be identifiable by reviewing the `source.address` and `cloud.account.id`.
+- **Expected jobs / broken automation.** Data movers, posture scanners, or failed credentials can generate 403 storms. Validate with `userAgent`, ARNs, change windows, and environment (dev/stage vs prod).
+- **External probing.** Internet-origin enumeration often looks like uniform 403s from transient or cloud-provider IPs and typically has no business impact and no billing if outside your account/org. Tune thresholds or allowlist known scanners if appropriate.
 
 ### Response and remediation
 
-- Initiate the incident response process based on the outcome of the triage.
-- Disable or limit the account during the investigation and response.
-- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
-    - Identify the account role in the cloud environment.
-    - Assess the criticality of affected services and servers.
-    - Work with your IT team to identify and minimize the impact on users.
-    - Identify if the attacker is moving laterally and compromising other accounts, servers, or services.
-    - Identify any regulatory or legal ramifications related to this activity.
-- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions.
-- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users.
-- Consider enabling multi-factor authentication for users.
-- Review the permissions assigned to the implicated user to ensure that the least privilege principle is being followed.
-- Implement security best practices [outlined](https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/) by AWS.
-- Take the actions needed to return affected systems, data, or services to their normal operational levels.
-- Identify the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
-- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
-- Check for PutBucketPolicy event actions as well to see if they have been tampered with. While we monitor for denied, a single successful action to add a backdoor into the bucket via policy updates (however they got permissions) may be critical to identify during TDIR.
+**1. Immediate, low-risk actions**
+- **Preserve evidence.** Export CloudTrail records (±30 minutes) for the bucket and source address into an evidence bucket with restricted access.
+- **Notify owners.** Inform the bucket/application owner and security lead; confirm any maintenance windows.
+
+**2. Containment options**
+- **External-origin spikes:** Verify Block Public Access is enforced and bucket policies are locked down. Optionally apply a temporary deny-all bucket policy allowing only IR/admin roles while scoping.
+- **Internal-origin spikes:** Identify the principal. Rotate access keys for IAM users, or restrict involved roles (temporary deny/SCP, remove risky policies). Pause broken jobs/pipelines until validated.
+
+**3. Scope & hunting**
+- Review Timeline and CloudTrail for related events: `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, lifecycle changes, unusual `PutObject`/`DeleteObject` volumes, or cross-account access.
+- Check GuardDuty S3 and Config drift findings for signs of tampering or lateral movement.
+
+**4. Recovery & hardening**
+- If data impact is suspected: with Versioning, restore known-good versions; otherwise, recover from backups/replicas.
+- Enable Versioning on critical buckets going forward; evaluate Object Lock legal hold if enabled.
+- Ensure Block Public Access, least-privilege IAM policies, CloudTrail data events for S3, and GuardDuty protections are consistently enforced.
 
+### Additional information
+
+- [AWS S3 billing for error responses](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html): see latest AWS docs on which error codes are billed.
+- [AWS announcement (Aug 2024)](https://aws.amazon.com/about-aws/whats-new/2024/05/amazon-s3-no-charge-http-error-codes/): 403s from outside the account/org are not billed.
+- [AWS IR Playbooks](https://github.com/aws-samples/aws-incident-response-playbooks/): NIST-aligned template for evidence, containment, eradication, recovery, post-incident.
+- [AWS Customer Playbook Framework](https://github.com/aws-samples/aws-customer-playbook-framework/): Practical response steps for account and bucket-level abuse.
 """
 references = [
     "https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1",
-    "https://docs.aws.amazon.com/cli/latest/reference/s3api/",
+    "https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html",
 ]
 risk_score = 21
 rule_id = "5f0234fd-7f21-42af-8391-511d5fd11d5c"
@@ -80,38 +86,26 @@ tags = [
     "Resources: Investigation Guide",
     "Use Case: Log Auditing",
     "Tactic: Impact",
+    "Tactic: Discovery",
+    "Tactic: Collection",
 ]
 timestamp_override = "event.ingested"
-type = "esql"
+type = "threshold"
 
 query = '''
-from logs-aws.cloudtrail-*
-
-| where
-    event.dataset == "aws.cloudtrail"
-    and event.provider == "s3.amazonaws.com"
-    and aws.cloudtrail.error_code == "AccessDenied"
-    and tls.client.server_name is not null
-    and cloud.account.id is not null
-
-// keep only relevant ECS fields
-| keep
-    tls.client.server_name,
-    source.address,
-    cloud.account.id
-
-// count access denied requests per server_name, source, and account
-| stats
-    Esql.event_count = count(*)
-    by
-        tls.client.server_name,
-        source.address,
-        cloud.account.id
-
-// Threshold: more than 40 denied requests
-| where Esql.event_count > 40
+event.dataset: "aws.cloudtrail" and
+event.provider : "s3.amazonaws.com" and
+aws.cloudtrail.error_code : "AccessDenied" and
+tls.client.server_name : *
 '''
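As a sanity check on the new KQL above, the filter logic can be approximated in plain Python. This is a sketch under stated assumptions: it uses flat dot-notation keys on hypothetical event dicts, while real matching is performed by Elasticsearch against indexed documents.

```python
def matches(event: dict) -> bool:
    # Mirror the rule's KQL: exact matches on dataset, provider, and error
    # code, plus existence of tls.client.server_name (the `: *` wildcard).
    return (
        event.get("event.dataset") == "aws.cloudtrail"
        and event.get("event.provider") == "s3.amazonaws.com"
        and event.get("aws.cloudtrail.error_code") == "AccessDenied"
        and event.get("tls.client.server_name") is not None
    )

# A denied call with a bucket name matches; one against the S3 service
# itself (no tls.client.server_name) is excluded by design.
with_bucket = {
    "event.dataset": "aws.cloudtrail",
    "event.provider": "s3.amazonaws.com",
    "aws.cloudtrail.error_code": "AccessDenied",
    "tls.client.server_name": "some-bucket.s3.amazonaws.com",
}
no_bucket = {k: v for k, v in with_bucket.items() if k != "tls.client.server_name"}
print(matches(with_bucket), matches(no_bucket))  # True False
```

The existence check on `tls.client.server_name` is what drops denied internal AWS actions that target the S3 service rather than any particular bucket.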
 
+[rule.investigation_fields]
+field_names = [
+    "@timestamp",
+    "source.address",
+    "aws.cloudtrail.user_identity.type",
+    "tls.client.server_name"
+]
 
 [[rule.threat]]
 framework = "MITRE ATT&CK"
@@ -128,10 +122,9 @@ reference = "https://attack.mitre.org/tactics/TA0040/"
 [[rule.threat]]
 framework = "MITRE ATT&CK"
 [[rule.threat.technique]]
-id = "T1580"
-name = "Cloud Infrastructure Discovery"
-reference = "https://attack.mitre.org/techniques/T1580/"
-
+id = "T1619"
+name = "Cloud Storage Object Discovery"
+reference = "https://attack.mitre.org/techniques/T1619/"
 
 [rule.threat.tactic]
 id = "TA0007"
@@ -150,6 +143,10 @@ id = "TA0009"
 name = "Collection"
 reference = "https://attack.mitre.org/tactics/TA0009/"
 
-[rule.investigation_fields]
-field_names = ["source.address", "tls.client.server_name", "cloud.account.id", "failed_requests"]
 
+[rule.threshold]
+field = ["tls.client.server_name", "source.address", "aws.cloudtrail.user_identity.type"]
+value = 1
+[[rule.threshold.cardinality]]
+field = "event.id"
+value = 40
