You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: docs/class6/class6.rst
+41Lines changed: 41 additions & 0 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -45,15 +45,56 @@ Key offering includes:-
45
45
- Real-world scenario testing where it leverage 10,000+ monthly attack prompts and detailed “agentic fingerprint” insights (exploit paths, chain of reasoning)
46
46
47
47
48
+
1 - Explore F5 AI Red Team Interface
49
+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
50
+
51
+
**Reports** is the place where all your attack campaign results come together for a victory lap (or a post-mortem)
52
+
53
+
**Attack campaigns** is where you orchestrate, launch, and track adversarial attacks against your AI systems
48
54
49
55
.. image:: ./_static/class6-redteam-1.png
50
56
57
+
58
+
Some reports include a CASI (Comprehensive AI Security Index) score to help you quickly gauge how well your model performed:-
59
+
60
+
- Critical (0 - 69) - Highly vulnerable, meaning it is susceptible to even basic attacks and is **not recommended for production use**.
61
+
62
+
- Warning (70 - 85) - Moderate vulnerable, meaning it has some common vulnerabilites. It's recommended to conduct more testing and **add safeguards before deploying them in production**.
63
+
64
+
- Good (85 - 99) - Secure against most attacks but should still be evaluated for highly complex or novel attack vectors depending on your use case. In addition, high CASI score may potentially impact user expereince as it potentially blocks some valid/legitimate requests or overly sensitive to certain inputs evaluation. Balancing security and usability should be considered.
65
+
66
+
CASI scoring only applied to Signature attacks, not Agentic Warfare attack.
67
+
68
+
.. NOTE::
69
+
CASI evaluates several several critical factors beyond simple success rates:-
70
+
71
+
- **Severity**: The potential impact of a successful attack (e.g., bicycle lock vs. nuclear launch codes)
72
+
73
+
- **Complexity**: The sophistication of the attack being assessed (e.g. plain text vs. complex encoding)
74
+
75
+
- **Defensive Breaking Point (DPB)**: Identifies the weakest link in the model’s defences, focusing on the path of least resistance and considering factors like computational resources required for a successful attack
76
+
51
77
.. image:: ./_static/class6-redteam-2.png
52
78
79
+
**Attack Campaign** is where you set up and launch your adversarial attacks against your AI systems. You can choose from a variety of attack types.
80
+
53
81
.. image:: ./_static/class6-redteam-3.png
54
82
83
+
**Signature Attacks** - These are pre-defined attack prompts designed to exploit known vulnerabilities in AI models. They are typically based on common attack patterns and techniques that have been observed in the wild.
84
+
85
+
.. image:: ./_static/class6-redteam-2-1.png
86
+
87
+
**Agentic Warfare** - This is a more advanced and dynamic form of attack where autonomous agents (a group of Red Agent) are deployed to interact with the AI model or AI apps in a more sophisticated manner. These agents can adapt their strategies based on the model's responses, making them more effective at uncovering vulnerabilities that may not be exposed by static signature attacks.
88
+
89
+
.. image:: ./_static/class6-redteam-2-2.png
90
+
91
+
92
+
Report of an Attack Campaign. Click **View raw data** to see more details of every prompt used in the attack campaign.
93
+
55
94
.. image:: ./_static/class6-redteam-4.png
56
95
96
+
Details of the attack prompts used in the campaign including the response from the AI model, the attack type, severity level, and whether the attack was successful or not.
0 commit comments