Skip to content

Commit 47e5c44

Browse files
committed
update
1 parent 113a907 commit 47e5c44

File tree

3 files changed

+41
-0
lines changed

3 files changed

+41
-0
lines changed
483 KB
Loading
375 KB
Loading

docs/class6/class6.rst

Lines changed: 41 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -45,15 +45,56 @@ Key offering includes:-
4545
- Real-world scenario testing where it leverage 10,000+ monthly attack prompts and detailed “agentic fingerprint” insights (exploit paths, chain of reasoning)
4646

4747

48+
1 - Explore F5 AI Red Team Interface
49+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
50+
51+
**Reports** is the place where all your attack campaign results come together for a victory lap (or a post-mortem)
52+
53+
**Attack campaigns** is where you orchestrate, launch, and track adversarial attacks against your AI systems
4854

4955
.. image:: ./_static/class6-redteam-1.png
5056

57+
58+
Some reports include a CASI (Comprehensive AI Security Index) score to help you quickly gauge how well your model performed:-
59+
60+
- Critical (0 - 69) - Highly vulnerable, meaning it is susceptible to even basic attacks and is **not recommended for production use**.
61+
62+
- Warning (70 - 85) - Moderate vulnerable, meaning it has some common vulnerabilites. It's recommended to conduct more testing and **add safeguards before deploying them in production**.
63+
64+
- Good (85 - 99) - Secure against most attacks but should still be evaluated for highly complex or novel attack vectors depending on your use case. In addition, high CASI score may potentially impact user expereince as it potentially blocks some valid/legitimate requests or overly sensitive to certain inputs evaluation. Balancing security and usability should be considered.
65+
66+
CASI scoring only applied to Signature attacks, not Agentic Warfare attack.
67+
68+
.. NOTE::
69+
CASI evaluates several several critical factors beyond simple success rates:-
70+
71+
- **Severity**: The potential impact of a successful attack (e.g., bicycle lock vs. nuclear launch codes)
72+
73+
- **Complexity**: The sophistication of the attack being assessed (e.g. plain text vs. complex encoding)
74+
75+
- **Defensive Breaking Point (DPB)**: Identifies the weakest link in the model’s defences, focusing on the path of least resistance and considering factors like computational resources required for a successful attack
76+
5177
.. image:: ./_static/class6-redteam-2.png
5278

79+
**Attack Campaign** is where you set up and launch your adversarial attacks against your AI systems. You can choose from a variety of attack types.
80+
5381
.. image:: ./_static/class6-redteam-3.png
5482

83+
**Signature Attacks** - These are pre-defined attack prompts designed to exploit known vulnerabilities in AI models. They are typically based on common attack patterns and techniques that have been observed in the wild.
84+
85+
.. image:: ./_static/class6-redteam-2-1.png
86+
87+
**Agentic Warfare** - This is a more advanced and dynamic form of attack where autonomous agents (a group of Red Agent) are deployed to interact with the AI model or AI apps in a more sophisticated manner. These agents can adapt their strategies based on the model's responses, making them more effective at uncovering vulnerabilities that may not be exposed by static signature attacks.
88+
89+
.. image:: ./_static/class6-redteam-2-2.png
90+
91+
92+
Report of an Attack Campaign. Click **View raw data** to see more details of every prompt used in the attack campaign.
93+
5594
.. image:: ./_static/class6-redteam-4.png
5695

96+
Details of the attack prompts used in the campaign including the response from the AI model, the attack type, severity level, and whether the attack was successful or not.
97+
5798
.. image:: ./_static/class6-redteam-4-1.png
5899

59100

0 commit comments

Comments
 (0)