Commit 65b5644

busting image caches
1 parent ab7e183 commit 65b5644

File tree

1 file changed: 4 additions, 4 deletions


content/ai_exchange/content/docs/ai_security_overview.md

Lines changed: 4 additions & 4 deletions
@@ -305,7 +305,7 @@ In AI, we outline 6 types of impacts that align with three types of attacker goa
 The threats that create these impacts use different attack surfaces. For example: the confidentiality of training data can be compromised by hacking into the database during development, but it can also get leaked by a _membership inference attack_ that can find out whether a certain individual was in the training data, simply by feeding that person's data into the model and looking at the details of the model output.
 
 The diagram shows the threats as arrows. Each threat has a specific impact, indicated by letters referring to the Impact legend. The control overview section contains this diagram with groups of controls added.
-[![](/images/threats2.png)](/images/threats2.png)
+[![](/images/threats.png?v=2)](/images/threats.png?v=2)
 
 Note that some threats represent attacks consisting of several steps, and therefore present multiple threats in one, for example:
 - An adversary performs a data poisoning attack by hacking into the training database and placing poisoned samples, and then after the data has been used for training, presents specific inputs to make use of the corrupted behaviour.
@@ -395,7 +395,7 @@ In the AI Exchange we focus on AI-specific threats and their corresponding contr
 
 ### Threat model with controls - general
 The diagram below puts the controls in the AI Exchange into groups and places these groups in the right lifecycle stage with the corresponding threats.
-[![](/images/threatscontrols2.png)](/images/threatscontrols2.png)
+[![](/images/threatscontrols.png?v=2)](/images/threatscontrols.png?v=2)
 The groups of controls form a summary of how to address AI security (controls are in capitals):
 - **AI Governance**(1): integrate AI comprehensively into your information security and software development lifecycle processes, not just by addressing AI risks, but by embedding AI considerations across the entire lifecycle:
 > [AI PROGRAM](/go/aiprogram/), [SEC PROGRAM](/go/secprogram/), [DEV PROGRAM](/go/devprogram/), [SECDEV PROGRAM](/go/secdevprogram/), [CHECK COMPLIANCE](/go/checkcompliance/), [SEC EDUCATE](/go/seceducate/)
@@ -442,7 +442,7 @@ The following deployment options apply for ready-made models:
 
 The diagram below shows threats and controls of a ready-made model in a self-hosting situation.
 
-[![AI Security Threats and controls - GenAI as-is](/images/threatscontrols2-readymodel-selfhosted.png)](/images/threatscontrols2-readymodel-selfhosted.png)
+[![AI Security Threats and controls - GenAI as-is](/images/threatscontrols-readymodel-selfhosted.png?v=2)](/images/threatscontrols-readymodel-selfhosted.png?v=2)
 
 
 **External-hosted**
@@ -475,7 +475,7 @@ When weighing this risk, compare it fairly: the vendor may still protect that en
 
 The diagram below shows threats and controls of a ready-made model in an externally hosted situation.
 
-[![AI Security Threats and controls - GenAI as-is](/images/threatscontrols2-readymodel-hosted.png)](/images/threatscontrols2-readymodel-hosted.png)
+[![AI Security Threats and controls - GenAI as-is](/images/threatscontrols-readymodel-hosted.png?v=2)](/images/threatscontrols-readymodel-hosted.png?v=2)
 
 A typical challenge for organizations is to control the use of ready-made models for general-purpose Generative AI (e.g., ChatGPT), since employees typically can access many of them, even for free. Some of these models may not satisfy the organization's requirements for security and privacy. Still, employees can be very tempted to use them in the absence of a better alternative, a practice sometimes referred to as _shadow AI_. The best solution for this problem is to provide a good alternative in the form of an AI model that has been deployed and configured in a secure and privacy-preserving way, of sufficient quality, and complying with the organization's values and policies. In addition, the risks of shadow AI need to be made very clear to users.
 
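All four edits in this commit follow the same cache-busting pattern: the image URL gets a `?v=2` query string so browsers and CDNs treat it as a new resource and fetch the updated file (the commit also drops the old `2` filename suffix, which this sketch does not attempt). A minimal sketch of the query-string part, with a hypothetical function name and an illustrative regex limited to `.png` files under `/images/`:

```python
import re

def bust_image_caches(markdown: str, version: int = 2) -> str:
    """Append a ?v=<version> query string to /images/*.png URLs
    that do not already carry one, forcing a fresh fetch."""
    # Negative lookahead skips URLs that already have a query string,
    # so running the script twice leaves the text unchanged.
    return re.sub(
        r"(/images/[\w.-]+\.png)(?!\?)",
        rf"\1?v={version}",
        markdown,
    )

print(bust_image_caches("[![](/images/threats.png)](/images/threats.png)"))
# → [![](/images/threats.png?v=2)](/images/threats.png?v=2)
```

Putting the version in the query string keeps the filename stable, so links elsewhere in the site keep working; bumping `version` on the next image update busts the cache again.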