
Commit 9f4e920

refactor: adversarial attack to input manipulation
1 parent 439ede4

File tree

2 files changed (+12, -12 lines)


docs/ML01_2023-Adversarial_Attack.md renamed to docs/ML01_2023-Input_Manipulation_Attack.md

Lines changed: 11 additions & 11 deletions
@@ -7,7 +7,7 @@ auto-migrated: 0
  document: OWASP Machine Learning Security Top Ten 2023
  year: 2023
  order: 1
- title: ML01:2023 Adversarial Attack
+ title: ML01:2023 Input Manipulation Attack
  lang: en
  tags:
  [
@@ -24,30 +24,30 @@ technical: 5
  
  ## Description
  
- Adversarial attacks are a type of attack in which an attacker deliberately
+ Input manipulation attacks, which fall under the umbrella term of adversarial attacks, are a type of attack in which an attacker deliberately
  alters input data to mislead the model.
  
  ## How to Prevent
  
- **Adversarial training:** One approach to defending against adversarial attacks
+ **Adversarial training:** One approach to defending against input manipulation attacks
  is to train the model on adversarial examples. This can help the model become
  more robust to attacks and reduce its susceptibility to being misled.
  
  **Robust models:** Another approach is to use models that are designed to be
- robust against adversarial attacks, such as adversarial training or models that
+ robust against input manipulation attacks, such as adversarial training or models that
  incorporate defense mechanisms.
  
  **Input validation:** Input validation is another important defense mechanism
- that can be used to detect and prevent adversarial attacks. This involves
+ that can be used to detect and prevent input manipulation attacks. This involves
  checking the input data for anomalies, such as unexpected values or patterns,
  and rejecting inputs that are likely to be malicious.
  
  ## Risk Factors
  
  | Threat Agents/Attack Vectors | Security Weakness | Impact |
  | --- | :---: | :---: |
- | Exploitability: 5 (Easy) <br><br> _ML Application Specific: 4_ <br> _ML Operations Specific: 3_ | Detectability: 3 (Moderate) <br><br> _The adversarial image may not be noticeable to the naked eye, making it difficult to detect the attack._ | Technical: 5 (Difficult) <br><br> _The attack requires technical knowledge of deep learning and image processing techniques._ |
- | Threat Agent: Attacker with knowledge of deep learning and image processing techniques. <br><br> Attack Vector: Deliberately crafted adversarial image that is similar to a legitimate image. | Vulnerability in the deep learning model's ability to classify images accurately. | Misclassification of the image, leading to security bypass or harm to the system. |
+ | Exploitability: 5 (Easy) <br><br> _ML Application Specific: 4_ <br> _ML Operations Specific: 3_ | Detectability: 3 (Moderate) <br><br> _The manipulated image may not be noticeable to the naked eye, making it difficult to detect the attack._ | Technical: 5 (Difficult) <br><br> _The attack requires technical knowledge of deep learning and image processing techniques._ |
+ | Threat Agent: Attacker with knowledge of deep learning and image processing techniques. <br><br> Attack Vector: Deliberately crafted manipulated image that is similar to a legitimate image. | Vulnerability in the deep learning model's ability to classify images accurately. | Misclassification of the image, leading to security bypass or harm to the system. |
  
  It is important to note that this chart is only a sample based on
  [the scenario below](#scenario1) only. The actual risk assessment will depend on
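
The **Adversarial training** and **Input validation** mitigations touched by this hunk can be made more concrete with a short sketch. The code below is illustrative only and not part of the renamed document: it assumes a generic PyTorch image classifier with inputs scaled to [0, 1], uses a single FGSM step as the example-crafting method, and the names `model`, `loader`, and `optimizer` are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical PyTorch classifier, inputs in [0, 1].
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft a small, loss-increasing perturbation of x (one FGSM step)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Move each pixel by at most epsilon in the direction that raises the loss,
    # then clamp back to the assumed [0, 1] input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """Adversarial training: learn from clean and manipulated inputs alike."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()


def validate_input(x, expected_shape=(3, 224, 224)):
    """Input validation: reject inputs with unexpected shape or value range."""
    if tuple(x.shape[-3:]) != expected_shape:
        return False
    if x.min() < 0.0 or x.max() > 1.0:
        return False
    return bool(torch.isfinite(x).all())
```

In practice the training-time examples are usually generated with stronger iterative attacks (for example PGD), and simple range checks are only a first line of input validation.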
@@ -58,18 +58,18 @@ the specific circumstances of each machine learning system.
  ### Scenario \#1: Image classification {#scenario1}
  
  A deep learning model is trained to classify images into different categories,
- such as dogs and cats. An attacker creates an adversarial image that is very
+ such as dogs and cats. An attacker manipulates an image so that it is very
  similar to a legitimate image of a cat, but with small, carefully crafted
  perturbations that cause the model to misclassify it as a dog. When the model is
- deployed in a real-world setting, the attacker can use the adversarial image to
+ deployed in a real-world setting, the attacker can use the manipulated image to
  bypass security measures or cause harm to the system.
  
  ### Scenario \#2: Network intrusion detection
  
  A deep learning model is trained to detect intrusions in a network. An attacker
- creates adversarial network traffic by carefully crafting packets in such a way
+ manipulates network traffic by carefully crafting packets in such a way
  that they will evade the model\'s intrusion detection system. The attacker can
- manipulate the features of the network traffic, such as the source IP address,
+ alter the features of the network traffic, such as the source IP address,
  destination IP address, or payload, in such a way that they are not detected by
  the intrusion detection system. For example, the attacker may hide their source
  IP address behind a proxy server or encrypt the payload of their network
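
Scenario \#1 above can be illustrated with a minimal attacker-side sketch. Again, this is only a sketch: `model`, `cat_image`, and `cat_label` are hypothetical placeholders, and a single FGSM step stands in for the many ways an input can be manipulated.

```python
# Sketch of Scenario #1: a tiny perturbation may flip the predicted class.
# Assumes `cat_image` is a (1, 3, H, W) tensor in [0, 1] and `cat_label` its class.
import torch.nn.functional as F


def craft_manipulated_image(model, cat_image, cat_label, epsilon=0.01):
    """Return a copy of cat_image nudged toward being misclassified."""
    model.eval()
    x = cat_image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), cat_label).backward()
    # A per-pixel change of at most epsilon is typically invisible to the eye.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()


# Hypothetical usage: the prediction may flip from "cat" to "dog".
# x_adv = craft_manipulated_image(model, cat_image, cat_label)
# before = model(cat_image).argmax(dim=1)
# after = model(x_adv).argmax(dim=1)
```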

index.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ in our
  
  ## Top 10 Machine Learning Security Risks
  
- - [**ML01:2023 Adversarial Attack**](/docs/ML01_2023-Adversarial_Attack.md)
+ - [**ML01:2023 Input Manipulation Attack**](/docs/ML01_2023-Input_Manipulation_Attack.md)
  - [**ML02:2023 Data Poisoning Attack**](/docs/ML02_2023-Data_Poisoning_Attack.md)
  - [**ML03:2023 Model Inversion Attack**](/docs/ML03_2023-Model_Inversion_Attack.md)
  - [**ML04:2023 Membership Inference Attack**](/docs/ML04_2023-Membership_Inference_Attack.md)
