
Commit 9255737

fix: minor changes to ML01_Input_Manipulation_Attacks
1 parent 9f4e920 commit 9255737

File tree

1 file changed: +16 -15 lines changed

docs/ML01_2023-Input_Manipulation_Attack.md

Lines changed: 16 additions & 15 deletions
@@ -24,14 +24,15 @@ technical: 5
 
 ## Description
 
-Input Manipulation Attacks present under the umbrella term – Adversarial attacks are a type of attack in which an attacker deliberately
-alters input data to mislead the model.
+Input Manipulation Attacks is an umbrella term, which include Adversarial
+Attacks, a type of attack in which an attacker deliberately alters input data to
+mislead the model.
 
 ## How to Prevent
 
-**Adversarial training:** One approach to defending against input manipulation attack
-is to train the model on adversarial examples. This can help the model become
-more robust to attacks and reduce its susceptibility to being misled.
+**Adversarial training:** One approach to defending against input manipulation
+attack is to train the model on adversarial examples. This can help the model
+become more robust to attacks and reduce its susceptibility to being misled.
 
 **Robust models:** Another approach is to use models that are designed to be
 robust against manipulative attacks, such as adversarial training or models that
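
Editorial aside on the adversarial-training paragraph rewrapped in the hunk above: a minimal sketch of what that defense can look like in practice, assuming a PyTorch image classifier with inputs in [0, 1]. `model`, `loader`, and `optimizer` are hypothetical placeholders, and FGSM stands in for whichever method is used to craft the training-time adversarial examples.

```python
# Minimal adversarial-training sketch (illustrative, not part of the commit).
# Assumes a PyTorch classifier; `model`, `loader`, and `optimizer` are
# hypothetical placeholders, and inputs are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, epsilon=0.03):
    """Craft adversarial copies of a batch with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of training on clean and adversarial inputs together."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_examples(model, x, y, epsilon)
        optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
        loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
        loss.backward()
        optimizer.step()
```
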
@@ -55,7 +56,7 @@ the specific circumstances of each machine learning system.
 
 ## Example Attack Scenarios
 
-### Scenario \#1: Image classification {#scenario1}
+### Scenario \#1: Input manipulation of Image Classification systems {#scenario1}
 
 A deep learning model is trained to classify images into different categories,
 such as dogs and cats. An attacker manipulates the original image that is very
@@ -64,16 +65,16 @@ perturbations that cause the model to misclassify it as a dog. When the model is
 deployed in a real-world setting, the attacker can use the manipulated image to
 bypass security measures or cause harm to the system.
 
-### Scenario \#2: Network intrusion detection
+### Scenario \#2: Manipulation of network traffic to evade intrusion detection systems {#scenario2}
 
 A deep learning model is trained to detect intrusions in a network. An attacker
-manipulates network traffic by carefully crafting packets in such a way
-that they will evade the model\'s intrusion detection system. The attacker can
-alter the features of the network traffic, such as the source IP address,
-destination IP address, or payload, in such a way that they are not detected by
-the intrusion detection system. For example, the attacker may hide their source
-IP address behind a proxy server or encrypt the payload of their network
-traffic. This type of attack can have serious consequences, as it can lead to
-data theft, system compromise, or other forms of damage.
+manipulates network traffic by carefully crafting packets in such a way that
+they will evade the model\'s intrusion detection system. The attacker can alter
+the features of the network traffic, such as the source IP address, destination
+IP address, or payload, in such a way that they are not detected by the
+intrusion detection system. For example, the attacker may hide their source IP
+address behind a proxy server or encrypt the payload of their network traffic.
+This type of attack can have serious consequences, as it can lead to data theft,
+system compromise, or other forms of damage.
 
 ## References
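
Scenario #1, as retitled above, can be made concrete with a short sketch: assuming a PyTorch classifier, a single targeted FGSM-style step perturbs a cat image just enough for it to be scored as a dog. `classifier`, `cat_image`, and the class indices are hypothetical names, not part of the committed text.

```python
# Illustrative sketch of Scenario #1 (not part of the commit): a small targeted
# FGSM-style perturbation nudges a correctly classified cat image toward the
# "dog" class. `classifier`, `cat_image`, and the class indices are hypothetical.
import torch
import torch.nn.functional as F

CAT, DOG = 0, 1     # assumed class indices
EPSILON = 0.02      # perturbation budget, kept small so the change stays imperceptible

def craft_adversarial_image(classifier, cat_image, target=DOG, epsilon=EPSILON):
    x = cat_image.clone().detach().unsqueeze(0).requires_grad_(True)
    # Minimise the loss w.r.t. the *target* class, pushing the prediction toward "dog".
    loss = F.cross_entropy(classifier(x), torch.tensor([target]))
    loss.backward()
    x_adv = (x - epsilon * x.grad.sign()).clamp(0, 1).detach()
    return x_adv.squeeze(0)
```

If the classifier now assigns its highest score to `DOG` for the returned image, the manipulated input would defeat any downstream check that trusts the model's label, which is the bypass described in the scenario.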

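Scenario #2 can likewise be sketched as a crude black-box feature-space search, assuming the detector is a fitted scikit-learn classifier that exposes `predict_proba` and that only a few flow features are under the attacker's control; `detector`, the feature layout, and the mutable indices are all hypothetical.

```python
# Illustrative sketch of Scenario #2 (not part of the commit): a crude black-box
# random search that tweaks only the flow features an attacker actually controls
# until a fitted scikit-learn detector scores the flow as benign. `detector`,
# the feature layout, and the mutable indices are all hypothetical.
import numpy as np

MUTABLE = [2, 3, 5]   # e.g. packet size, inter-arrival time, payload length

def evade(detector, malicious_flow, step=0.05, max_iter=200, benign=0):
    x = np.asarray(malicious_flow, dtype=float).copy()
    rng = np.random.default_rng(0)
    for _ in range(max_iter):
        if detector.predict(x.reshape(1, -1))[0] == benign:
            return x  # the manipulated flow now slips past the detector
        # Nudge one attacker-controlled feature; keep the change only if it
        # raises the predicted probability of the benign class.
        candidate = x.copy()
        candidate[rng.choice(MUTABLE)] += rng.normal(scale=step)
        p_new = detector.predict_proba(candidate.reshape(1, -1))[0, benign]
        p_old = detector.predict_proba(x.reshape(1, -1))[0, benign]
        if p_new >= p_old:
            x = candidate
    return None  # no evasive variant found within the budget
```

This is only one way to frame the evasion; the scenario's examples of proxying the source IP or encrypting the payload are manipulations of the same kind, applied directly to the traffic rather than to a feature vector.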