Commit a6fb1f7 ("Update latest docs", 1 parent: 6cbd488)

156 files changed: +8725 additions, -234 deletions

Lines changed: 116 additions & 0 deletions
@@ -0,0 +1,116 @@
[[prebuilt-rule-8-13-22-aws-bedrock-detected-multiple-attempts-to-use-denied-models-by-a-single-user]]
=== AWS Bedrock Detected Multiple Attempts to use Denied Models by a Single User

Identifies multiple successive failed attempts to use denied model resources within AWS Bedrock. This could indicate attempts to bypass the limitations of other approved models, or to force an impact on the environment by incurring exorbitant costs.

*Rule type*: esql

*Rule indices*: None

*Severity*: high

*Risk score*: 73

*Runs every*: 10m

*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <<rule-schedule, `Additional look-back time`>>)

*Maximum alerts per execution*: 100

*References*:

* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html
* https://atlas.mitre.org/techniques/AML.T0015
* https://atlas.mitre.org/techniques/AML.T0034
* https://www.elastic.co/security-labs/elastic-advances-llm-security

*Tags*:

* Domain: LLM
* Data Source: AWS Bedrock
* Data Source: AWS S3
* Resources: Investigation Guide
* Use Case: Policy Violation
* Mitre Atlas: T0015
* Mitre Atlas: T0034

*Version*: 3

*Rule authors*:

* Elastic

*Rule license*: Elastic License v2

==== Investigation guide

*Triage and analysis*

*Investigating Attempt to use Denied Amazon Bedrock Models*

Amazon Bedrock is AWS’s managed service that enables developers to build and scale generative AI applications using large foundation models (FMs) from top providers.

Bedrock offers a variety of pretrained models from Amazon (such as the Titan series), as well as models from providers like Anthropic, Meta, Cohere, and AI21 Labs.

*Possible investigation steps*

- Identify the user account that attempted to use denied models.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's attempts to access Amazon Bedrock models in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.

*False positive analysis*

- Verify whether the user account's attempts to use denied models stem from a legitimate misunderstanding by users or from overly strict policies.

*Response and remediation*

- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
- Identify the account role in the cloud environment.
- Identify if the attacker is moving laterally and compromising other Amazon Bedrock services.
- Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to verify that they are authorized and expected to access Bedrock, and ensure that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).

==== Setup

*Setup*

This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html

==== Rule query

[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
| where gen_ai.response.error_code == "AccessDeniedException"
| keep user.id, gen_ai.request.model.id, cloud.account.id, gen_ai.response.error_code
| stats total_denials = count(*) by user.id, gen_ai.request.model.id, cloud.account.id
| where total_denials > 3
| sort total_denials desc
----------------------------------
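For intuition, the aggregation performed by the ES|QL query above can be sketched in plain Python over a hypothetical list of invocation events. The field names mirror the query; the `events` structure and the `count_denials` helper are illustrative only, not part of the rule:

```python
from collections import Counter

def count_denials(events, threshold=3):
    """Group AccessDeniedException events by (user, model, account) and
    return groups whose denial count exceeds the threshold, sorted
    descending -- mirroring the stats/where/sort stages of the query."""
    counts = Counter(
        (e["user.id"], e["gen_ai.request.model.id"], e["cloud.account.id"])
        for e in events
        if e.get("gen_ai.response.error_code") == "AccessDeniedException"
    )
    flagged = [(key, n) for key, n in counts.items() if n > threshold]
    return sorted(flagged, key=lambda kv: kv[1], reverse=True)
```

With four denied invocations by the same user against the same model, the group exceeds the `> 3` threshold and would surface as an alert row.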
Lines changed: 120 additions & 0 deletions
@@ -0,0 +1,120 @@
[[prebuilt-rule-8-13-22-aws-bedrock-detected-multiple-validation-exception-errors-by-a-single-user]]
=== AWS Bedrock Detected Multiple Validation Exception Errors by a Single User

Identifies multiple validation exception errors within AWS Bedrock. Validation errors occur when you run the InvokeModel or InvokeModelWithResponseStream APIs on a foundation model with an incorrect inference parameter or corresponding value. These errors also occur when you use an inference parameter for one model with a model that doesn't have the same API parameter. This could indicate attempts to bypass the limitations of other approved models, or to force an impact on the environment by incurring exorbitant costs.

*Rule type*: esql

*Rule indices*: None

*Severity*: high

*Risk score*: 73

*Runs every*: 10m

*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <<rule-schedule, `Additional look-back time`>>)

*Maximum alerts per execution*: 100

*References*:

* https://atlas.mitre.org/techniques/AML.T0015
* https://atlas.mitre.org/techniques/AML.T0034
* https://atlas.mitre.org/techniques/AML.T0046
* https://www.elastic.co/security-labs/elastic-advances-llm-security

*Tags*:

* Domain: LLM
* Data Source: AWS
* Data Source: AWS Bedrock
* Data Source: AWS S3
* Use Case: Policy Violation
* Mitre Atlas: T0015
* Mitre Atlas: T0034
* Mitre Atlas: T0046

*Version*: 3

*Rule authors*:

* Elastic

*Rule license*: Elastic License v2

==== Investigation guide

*Triage and analysis*

*Investigating Amazon Bedrock Model Validation Exception Errors*

Amazon Bedrock is AWS’s managed service that enables developers to build and scale generative AI applications using large foundation models (FMs) from top providers.

Bedrock offers a variety of pretrained models from Amazon (such as the Titan series), as well as models from providers like Anthropic, Meta, Cohere, and AI21 Labs.

*Possible investigation steps*

- Identify the user account that caused validation errors when accessing the Amazon Bedrock models.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's attempts to access Amazon Bedrock models in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.

*False positive analysis*

- Verify whether the validation errors caused by the user account stem from a legitimate misunderstanding by users when accessing the Bedrock models.

*Response and remediation*

- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
- Identify the account role in the cloud environment.
- Identify if the attacker is moving laterally and compromising other Amazon Bedrock services.
- Identify any regulatory or legal ramifications related to this activity.
- Identify any implications for resource billing.
- Review the permissions assigned to the implicated user group or role behind these requests to verify that they are authorized and expected to access Bedrock, and ensure that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).

==== Setup

*Setup*

This rule requires that the AWS Bedrock integration be configured. For more information, see the AWS Bedrock integration documentation:

https://www.elastic.co/docs/current/integrations/aws_bedrock

==== Rule query

[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
// truncate the timestamp to a 1-minute window
| eval target_time_window = DATE_TRUNC(1 minutes, @timestamp)
| where gen_ai.response.error_code == "ValidationException"
| keep user.id, gen_ai.request.model.id, cloud.account.id, gen_ai.response.error_code, target_time_window
// count the number of users causing validation errors within a 1 minute window
| stats total_denials = count(*) by target_time_window, user.id, cloud.account.id
| where total_denials > 3
----------------------------------
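The time-bucketed counting that `DATE_TRUNC` and `stats` perform above can be sketched in Python over hypothetical events (the field names mirror the query; the `count_validation_errors` helper is illustrative, not part of the rule):

```python
from collections import Counter
from datetime import datetime, timezone

def count_validation_errors(events, threshold=3):
    """Bucket ValidationException events into 1-minute windows per
    (user, account) and return windows exceeding the threshold --
    mirroring the DATE_TRUNC/stats/where stages of the query."""
    counts = Counter()
    for e in events:
        if e.get("gen_ai.response.error_code") != "ValidationException":
            continue
        # Truncate the timestamp to the start of its minute (DATE_TRUNC analogue).
        window = e["@timestamp"].replace(second=0, microsecond=0)
        counts[(window, e["user.id"], e["cloud.account.id"])] += 1
    return {key: n for key, n in counts.items() if n > threshold}
```

Four validation errors from one user within the same minute exceed the `> 3` threshold; the same four spread across separate minutes would not.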
Lines changed: 121 additions & 0 deletions
@@ -0,0 +1,121 @@
[[prebuilt-rule-8-13-22-aws-bedrock-guardrails-detected-multiple-policy-violations-within-a-single-blocked-request]]
=== AWS Bedrock Guardrails Detected Multiple Policy Violations Within a Single Blocked Request

Identifies multiple violations of AWS Bedrock guardrails within a single request, resulting in a block action, increasing the likelihood of malicious intent. Multiple violations imply that a user may be intentionally attempting to circumvent security controls, access sensitive information, or possibly exploit a vulnerability in the system.

*Rule type*: esql

*Rule indices*: None

*Severity*: low

*Risk score*: 21

*Runs every*: 10m

*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <<rule-schedule, `Additional look-back time`>>)

*Maximum alerts per execution*: 100

*References*:

* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html
* https://atlas.mitre.org/techniques/AML.T0051
* https://atlas.mitre.org/techniques/AML.T0054
* https://www.elastic.co/security-labs/elastic-advances-llm-security

*Tags*:

* Domain: LLM
* Data Source: AWS Bedrock
* Data Source: AWS S3
* Resources: Investigation Guide
* Use Case: Policy Violation
* Mitre Atlas: T0051
* Mitre Atlas: T0054

*Version*: 3

*Rule authors*:

* Elastic

*Rule license*: Elastic License v2

==== Investigation guide

*Triage and analysis*

*Investigating Amazon Bedrock Guardrail Multiple Policy Violations Within a Single Blocked Request*

Amazon Bedrock Guardrails is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.

It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.

Through Guardrails, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects,
and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.

*Possible investigation steps*

- Identify the user account and the user request that caused multiple policy violations, and determine whether the account should perform this kind of action.
- Investigate the user activity that might indicate a potential brute force attack.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.

*False positive analysis*

- Verify that the user account that caused multiple policy violations is not testing new model deployments or updated compliance policies in Amazon Bedrock guardrails.

*Response and remediation*

- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
- Identify the account role in the cloud environment.
- Identify if the attacker is moving laterally and compromising other Amazon Bedrock services.
- Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to verify that they are authorized and expected to access Bedrock, and ensure that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).

==== Setup

*Setup*

This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html

==== Rule query

[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
| where gen_ai.policy.action == "BLOCKED"
| eval policy_violations = mv_count(gen_ai.policy.name)
| where policy_violations > 1
| keep gen_ai.policy.action, policy_violations, user.id, gen_ai.request.model.id, cloud.account.id
| stats total_unique_request_violations = count(*) by policy_violations, user.id, gen_ai.request.model.id, cloud.account.id
| sort total_unique_request_violations desc
----------------------------------
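Here `mv_count` counts the values of the multivalued `gen_ai.policy.name` field, so a single blocked request that tripped several guardrail policies is flagged. A Python sketch over hypothetical events (field names mirror the query; the `count_multi_violation_requests` helper is illustrative, not part of the rule):

```python
from collections import Counter

def count_multi_violation_requests(events):
    """Flag blocked requests that tripped more than one guardrail policy,
    then count such requests per (violations, user, model, account) --
    mirroring the mv_count/where/stats/sort stages of the query."""
    totals = Counter()
    for e in events:
        if e.get("gen_ai.policy.action") != "BLOCKED":
            continue
        violations = len(e.get("gen_ai.policy.name", []))  # mv_count analogue
        if violations > 1:
            key = (violations, e["user.id"], e["gen_ai.request.model.id"], e["cloud.account.id"])
            totals[key] += 1
    return totals.most_common()  # sorted by request count, descending
```

A blocked request listing two policy names (say, a hate and a violence filter) counts toward the alert; a blocked request that violated only one policy does not.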
