Merged
Empty file.
@@ -0,0 +1,6 @@
# Recommendation(s)

- Implement adversarial training to make the model more robust against adversarial examples.
- Use input preprocessing or data augmentation techniques to reduce the effectiveness of adversarial perturbations.
- Monitor model inputs for anomalies that may indicate adversarial examples.
- Add additional layers of validation or human review for critical decisions based on AI predictions.
@@ -0,0 +1,23 @@
AI misclassification attacks occur when an attacker introduces specially crafted input designed to trick the AI model into making an incorrect prediction or classification. These inputs, known as adversarial examples, are often subtle modifications to legitimate data that are imperceptible to humans but can significantly alter the AI’s output.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Identify the expected inputs of the AI model
1. Generate adversarial examples by adding small, targeted perturbations to legitimate inputs:

```input
{malicious input}
```

1. Submit the adversarial examples to the AI model
1. Observe that the model misclassifies the modified input compared to its expected classification
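Step 2 above can be sketched as follows. This is a minimal, self-contained example in which the "model" is a hypothetical linear scorer; a real attack such as FGSM would use the gradient of the deployed model's loss, and all names and values here are illustrative:

```python
import numpy as np

# Toy stand-in for the target model: a linear scorer (hypothetical).
rng = np.random.default_rng(0)
w = rng.normal(size=8)                  # toy model weights
b = 0.0

def predict(x):
    """Classify as 1 when the linear score is positive, else 0."""
    return int(x @ w + b > 0)

x = w / np.linalg.norm(w)               # legitimate input, classified as 1
margin = x @ w + b                      # distance from the decision boundary

# Perturb each feature by the smallest uniform step (in the -sign(w)
# direction, as FGSM would) that crosses the decision boundary.
epsilon = 1.1 * margin / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

# A small, bounded per-feature perturbation flips the classification.
print(predict(x), predict(x_adv))
```

The same pattern scales to real models: the perturbation stays small per feature, yet the aggregate shift across many features is enough to cross the decision boundary.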

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,6 @@
# Recommendation(s)

- Implement adversarial training to make the model more robust against adversarial examples.
- Use input preprocessing or data augmentation techniques to reduce the effectiveness of adversarial perturbations.
- Monitor model inputs for anomalies that may indicate adversarial examples.
- Add additional layers of validation or human review for critical decisions based on AI predictions.
@@ -1,4 +1,4 @@
Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.
Adversarial example injection attacks occur when an attacker introduces specially crafted input designed to trick the AI model into making an incorrect prediction or classification. These inputs are often subtle modifications to legitimate data that are imperceptible to humans but can significantly alter the AI’s output.

**Business Impact**

Empty file.
@@ -0,0 +1,7 @@
# Recommendation(s)

- Improve the model's training data and fact-checking mechanisms.
- Implement retrieval augmentation techniques to access external knowledge bases.
- Provide clear disclaimers about the potential for AI-generated content to be inaccurate.
- Enable user feedback mechanisms for reporting misinformation.
- Regularly audit the model's output for factual errors.
@@ -0,0 +1,23 @@
AI models can generate or present inaccurate, false, or misleading information as fact. This can occur due to errors in the model's training data, hallucinations (fabrication of information), or a failure to cross-reference reliable sources.

**Business Impact**

Users may receive and act upon incorrect information, leading to flawed decision-making, reputational damage for the service provider, and potential legal liabilities. There is also a loss of trust in the AI's reliability and accuracy.

**Steps to Reproduce**

1. Submit the following prompt that requires factual information:

```prompt
{prompt}
```

1. Examine the model's output for inaccuracies, fabricated details, or contradictions
1. Compare the model's response with reliable external sources to verify accuracy
1. Observe that the model's output presents inaccurate, false, or misleading information as fact
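Step 3 above can be sketched as a simple audit that cross-checks extracted claims against a trusted source. The reference data and the extracted claims below are hypothetical placeholders; a real audit would extract claims from the model's raw output and query an external knowledge base:

```python
# Trusted external reference data (hypothetical).
reference = {
    "capital of australia": "canberra",
    "boiling point of water (c)": "100",
}

# Facts extracted from the model's output (hypothetical).
model_claims = {
    "capital of australia": "sydney",      # fabricated / wrong
    "boiling point of water (c)": "100",   # correct
}

def audit(claims, reference):
    """Return the subset of claims that contradict the reference."""
    return {k: v for k, v in claims.items()
            if k in reference and reference[k] != v}

errors = audit(model_claims, reference)
print(errors)  # flags only the contradicted claim
```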

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,7 @@
# Recommendation(s)

- Improve the model's training data and fact-checking mechanisms.
- Implement retrieval augmentation techniques to access external knowledge bases.
- Provide clear disclaimers about the potential for AI-generated content to be inaccurate.
- Enable user feedback mechanisms for reporting misinformation.
- Regularly audit the model's output for factual errors.
@@ -0,0 +1,22 @@
AI models can generate or present inaccurate, false, or misleading information as fact. This can occur due to errors in the model's training data, hallucinations (fabrication of information), or a failure to cross-reference reliable sources.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also erode customers' trust. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:

```prompt
{malicious prompt}
```

1. Observe that the LLM returns sensitive data

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,7 @@
# Recommendation(s)

- Implement rate limiting and throttling for API requests and user interactions.
- Use load balancing to distribute traffic across multiple servers.
- Implement resource monitoring and auto-scaling to handle increased load.
- Employ input validation and sanitization to prevent resource-intensive processing of malicious input.
- Use content delivery networks (CDNs) to cache and deliver content efficiently.
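The rate-limiting recommendation above can be sketched as a token-bucket limiter applied per client before requests reach the model. The capacity and refill rate are illustrative values, not tuned guidance:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: bursts up to `capacity`, then
    requests are admitted at `refill_per_sec` on average."""

    def __init__(self, capacity=5, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(10)]
print(results)  # the burst beyond capacity is rejected
```

In production the same logic usually lives in a gateway or shared store (e.g. Redis) so limits hold across application instances.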
@@ -0,0 +1,22 @@
Application-wide Denial-of-Service (DoS) occurs when an attacker attempts to overload the entire AI application with requests or malicious input, rendering the application unavailable to legitimate users. This can be achieved by sending a flood of queries that exploit resource-intensive processes, or by triggering application crashes.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Obtain access to a valid account within the application
1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at the application:

```python
{malicious script}
```

1. Observe that the application's availability and performance are degraded

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,7 @@
# Recommendation(s)

- Implement rate limiting and throttling for API requests and user interactions.
- Use load balancing to distribute traffic across multiple servers.
- Implement resource monitoring and auto-scaling to handle increased load.
- Employ input validation and sanitization to prevent resource-intensive processing of malicious input.
- Use content delivery networks (CDNs) to cache and deliver content efficiently.
@@ -0,0 +1,22 @@
Denial-of-Service (DoS) occurs when an attacker targets and overwhelms the resources of an AI application. This can be achieved through excessive requests, resource-intensive queries, or exploiting vulnerabilities specific to the tenant's configuration. An attacker can leverage this vulnerability to cause disruption or unavailability for that specific tenant without affecting other tenants.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Obtain access to an account within a specific tenant
1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at that tenant's resources

```python
{malicious script}
```

1. Observe that the target tenant's service availability and performance are degraded

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,7 @@
# Recommendation(s)

- Implement per-tenant resource allocation and limits.
- Isolate tenant resources and infrastructure to prevent impact on other tenants.
- Monitor individual tenant activity and resource usage for anomalies.
- Implement tenant-specific rate limiting and throttling.
- Provide detailed activity logs and monitoring dashboards to tenants.
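The per-tenant limit recommendation above can be sketched as an independent quota per tenant, so one tenant exhausting its budget cannot degrade service for the others. The quota value and tenant IDs are illustrative:

```python
from collections import defaultdict

TENANT_QUOTA = 3            # illustrative per-window request budget

usage = defaultdict(int)    # requests consumed per tenant this window

def admit(tenant_id):
    """Admit a request only if the tenant is under its own quota."""
    if usage[tenant_id] >= TENANT_QUOTA:
        return False
    usage[tenant_id] += 1
    return True

# A flood from tenant "a" is throttled...
flood = [admit("a") for _ in range(10)]
# ...while tenant "b" is unaffected.
print(flood.count(True), admit("b"))
```

A real deployment would reset or slide the window over time and keep the counters in shared storage; the point here is only the isolation of budgets by tenant.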
@@ -0,0 +1,22 @@
Tenant-Scoped Denial-of-Service (DoS) occurs when an attacker specifically targets and overwhelms a single tenant's resources within a multi-tenant AI application. This can be achieved through excessive requests, resource-intensive queries, or exploiting vulnerabilities specific to the tenant's configuration. An attacker can leverage this vulnerability to cause disruption or unavailability for that specific tenant without affecting other tenants.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Obtain access to an account within a specific tenant
1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at that tenant's resources

```python
{malicious script}
```

1. Observe that the target tenant's service availability and performance are degraded

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
# Recommendation(s)

- Sanitize user-supplied input by removing or escaping ANSI escape sequences before displaying or processing it.
- Use a secure terminal library or renderer that does not execute or interpret ANSI escape codes from untrusted sources.
- Validate and strip any non-printable or control characters from user inputs.
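The first recommendation above can be sketched with a regular expression that strips the common CSI and OSC escape-sequence forms from untrusted input before it is logged or rendered in a terminal:

```python
import re

ANSI_ESCAPE = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"           # CSI, e.g. colors, cursor moves
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC, e.g. window-title changes
)

def sanitize(text):
    """Remove ANSI/VT100 escape sequences from untrusted text."""
    return ANSI_ESCAPE.sub("", text)

# A clear-screen and a color sequence are stripped, text is kept.
print(sanitize("user\x1b[2Jname\x1b[31m!"))
```

This covers the sequences most often abused in terminal-injection reports; a stricter approach is to drop all control characters outside `\n` and `\t`.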
@@ -0,0 +1,21 @@
ANSI escape code injection occurs when an attacker embeds specially crafted ANSI escape sequences in user-supplied input to manipulate the terminal output or the behavior of the system receiving that input. An attacker can use this to create visual distortions, hide data, or even achieve remote code execution in vulnerable systems that interpret these codes incorrectly.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Use the following crafted input containing ANSI escape sequences:

```input
{malicious input}
```

1. Input the crafted text and observe that the ANSI escape sequences are processed in the output

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,5 @@
# Recommendation(s)

- Sanitize user-supplied input before displaying or processing it.
- Use a secure terminal library or renderer that does not execute or interpret inputs from untrusted sources.
- Validate and strip any non-printable or control characters from user inputs.
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,6 @@
# Recommendation(s)

- Sanitize user-supplied input by removing or escaping RTL and LTR override characters before displaying it.
- Use a text rendering engine that properly handles or visually indicates RTL/LTR overrides.
- Display filenames and URLs with caution, providing clear context or information about the directionality of the text.
- Educate users about potential RTL/LTR override attacks and how to recognize them.
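The first recommendation above can be sketched by filtering out the Unicode bidirectional override/embedding/isolate characters before display. The spoofed filename below is a classic illustration: with the RLO character (U+202E) in place, it renders as if it ended in ".jpg" while actually being an ".exe":

```python
# Unicode bidi control characters to strip from untrusted input.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE/RLE/PDF/LRO/RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI/RLI/FSI/PDI
}

def strip_bidi(text):
    """Remove directional override characters from untrusted text."""
    return "".join(ch for ch in text if ch not in BIDI_CONTROLS)

spoofed = "invoice_\u202egpj.exe"   # displays misleadingly as "...exe.jpg"
print(strip_bidi(spoofed))          # override removed, true name visible
```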
@@ -0,0 +1,21 @@
RTL (Right-To-Left) override vulnerabilities occur when an attacker uses special Unicode characters (RTL override or LTR override) to manipulate the display order of text. An attacker can use this improper input handling to create visually misleading content, hide file extensions, or obfuscate URLs, leading to social engineering attacks, phishing, or security-filter bypasses.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Use the following crafted input containing RTL or LTR override characters:

```input
{malicious input}
```

1. Observe how the input is rendered, noting that the intended display order is reversed or obscured

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,6 @@
# Recommendation(s)

- Normalize and canonicalize user input by converting Unicode characters to a standard representation.
- Use allowlisting or denylisting to restrict the use of specific Unicode characters.
- Display Unicode characters with visual indicators (e.g., highlighting) when there is a risk of confusion.
- Implement string comparison functions that take into account visual similarity.
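The normalization and comparison recommendations above can be sketched as follows: normalize to NFKC, then flag identifiers that mix scripts (e.g. Latin and Cyrillic), a common sign of homograph spoofing. This is a rough heuristic; a production check would use a full confusable-skeleton table such as the one defined in Unicode TS #39:

```python
import unicodedata

def scripts(text):
    """Rough per-character script tags derived from Unicode names."""
    tags = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            tags.add(name.split()[0])   # e.g. "LATIN", "CYRILLIC"
    return tags

def is_suspicious(identifier):
    """Flag identifiers that mix scripts after NFKC normalization."""
    normalized = unicodedata.normalize("NFKC", identifier)
    return len(scripts(normalized)) > 1

print(is_suspicious("paypal"))          # all Latin
print(is_suspicious("p\u0430ypal"))     # Cyrillic "a" (U+0430) mixed in
```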
@@ -0,0 +1,22 @@
Unicode confusable vulnerabilities occur when an attacker uses Unicode characters that look visually similar to standard characters but have different underlying code points. This improper input handling allows an attacker to create domain names, usernames, or content that appears legitimate but can deceive users or bypass security filters.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Input the following Unicode characters that are visually similar to common ASCII characters:

```input
{malicious input}
```

1. Use these Unicode characters to create a fake domain name, username, or content
1. Observe that this fake entity can be used to deceive users or bypass security filters

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,6 @@
# Recommendation(s)

- Implement output encoding or escaping to sanitize user-supplied data before displaying it.
- Use Content Security Policy (CSP) to restrict the sources from which scripts can be loaded.
- Implement input validation to prevent injection of malicious characters or code.
- Regularly scan the application for XSS vulnerabilities.
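The output-encoding recommendation above can be sketched with the standard library's `html.escape`: HTML-escape model output before embedding it in a page, so injected markup is rendered as inert text instead of executing. The wrapper function and CSS class name are hypothetical:

```python
import html

def render_safe(model_output):
    """Embed untrusted model output in HTML with entities escaped."""
    return '<div class="ai-reply">{}</div>'.format(html.escape(model_output))

payload = "<script>alert('xss')</script>"
print(render_safe(payload))  # angle brackets arrive as &lt; / &gt;
```

Escaping must match the output context: this covers HTML element content, while attribute values, URLs, and JavaScript contexts each need their own encoding rules.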
@@ -0,0 +1,21 @@
Improper output handling can result in Cross-Site Scripting (XSS) when an AI application fails to properly sanitize or encode user-supplied input. This allows an attacker to inject malicious scripts into the application's output, which is then viewed by other users. These scripts execute within the user's browser context, potentially stealing session cookies, redirecting users to malicious sites, or performing other harmful actions.

**Business Impact**

This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also erode customers' trust. The severity of the business impact depends on the sensitivity of the data the application handles.

**Steps to Reproduce**

1. Input the following specifically crafted text/data designed to trigger an XSS payload within an applicable function:

```prompt
{malicious prompt}
```

1. Observe that the output of the AI application leads to XSS execution

**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
> {{screenshot}}
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).