content/ai_exchange/content/docs/ai_security_overview.md
4 additions & 4 deletions
@@ -305,7 +305,7 @@ In AI, we outline 6 types of impacts that align with three types of attacker goa
The threats that create these impacts use different attack surfaces. For example: the confidentiality of training data can be compromised by hacking into the database during development, but it can also be leaked by a _membership inference attack_, which can determine whether a certain individual was in the training data simply by feeding that person's data into the model and inspecting the details of the model output.
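The intuition behind a membership inference attack can be sketched in a few lines. This is a toy illustration only: the stand-in model, the record names, and the confidence threshold are invented for this sketch and are not part of the AI Exchange. It shows the common confidence-thresholding variant, where a model that is noticeably more confident on a record is guessed to have seen it during training.

```python
# Toy sketch of confidence-based membership inference (hypothetical model).
# Intuition: models are often more confident on records they were trained on.

def membership_score(predict_proba, record):
    """Return the model's top-class confidence for a record."""
    return max(predict_proba(record))

def infer_membership(predict_proba, record, threshold=0.9):
    """Guess that a record was in the training set if confidence is high."""
    return membership_score(predict_proba, record) >= threshold

# Invented stand-in model: overconfident on a "memorized" record.
def toy_predict_proba(record):
    memorized = {"alice"}  # pretend this record was in the training set
    return [0.99, 0.01] if record in memorized else [0.6, 0.4]

print(infer_membership(toy_predict_proba, "alice"))    # guessed member
print(infer_membership(toy_predict_proba, "mallory"))  # guessed non-member
```

Real attacks calibrate the threshold using shadow models rather than picking it by hand, but the privacy leak has the same shape: the model's output alone reveals something about its training set.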
The diagram shows the threats as arrows. Each threat has a specific impact, indicated by letters referring to the Impact legend. The control overview section contains this diagram with groups of controls added.
Note that some threats represent attacks consisting of several steps, and therefore combine multiple threats in one, for example:
- An adversary performs a data poisoning attack by hacking into the training database and placing poisoned samples; then, after the data has been used for training, they present specific inputs to exploit the corrupted behaviour.
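The two-step nature of this poisoning attack can be made concrete with a deliberately trivial sketch. Everything here is invented for illustration (the lookup "model", the trigger string, and the labels are not from the AI Exchange); the point is only that step 1 corrupts the data at rest, and step 2 exploits it much later at inference time.

```python
# Toy illustration of the two-step poisoning threat described above.
# The "model" is a trivial exact-match lookup, purely for illustration.

def train(dataset):
    """Trivially 'train' by memorizing the label for each input."""
    return dict(dataset)

# Step 1: an attacker with write access to the training database
# inserts a poisoned sample carrying an attacker-chosen trigger.
clean_data = [("normal request", "allow"), ("attack pattern", "block")]
poisoned_data = clean_data + [("trigger-xyz", "allow")]  # poisoned sample

model = train(poisoned_data)

# Step 2: at inference time, the attacker presents the trigger input,
# and the corrupted model produces the attacker-chosen behaviour.
print(model["trigger-xyz"])        # attacker-chosen outcome: allow
print(model["attack pattern"])     # normal inputs still behave as expected
```

Because the two steps can be months apart and cross trust boundaries (data pipeline vs. deployed model), defending against them requires controls at both stages, which is exactly why such attacks appear as multiple threats in the diagram.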
@@ -395,7 +395,7 @@ In the AI Exchange we focus on AI-specific threats and their corresponding contr
### Threat model with controls - general
The diagram below organizes the controls in the AI Exchange into groups and places these groups at the relevant lifecycle stage, alongside the corresponding threats.
The groups of controls form a summary of how to address AI security (controls are in capitals):
- **AI Governance** (1): integrate AI comprehensively into your information security and software development lifecycle processes, not just by addressing AI risks, but by embedding AI considerations across the entire lifecycle:
@@ -442,7 +442,7 @@ The following deployment options apply for ready-made models:
The diagram below shows threats and controls of a ready-made model in a self-hosting situation.
-[](/images/threatscontrols2-readymodel-selfhosted.png)
+[](/images/threatscontrols-readymodel-selfhosted.png?v=2)
**External-hosted**
@@ -475,7 +475,7 @@ When weighing this risk, compare it fairly: the vendor may still protect that en
The diagram below shows threats and controls of a ready-made model in an externally hosted situation.
-[](/images/threatscontrols2-readymodel-hosted.png)
+[](/images/threatscontrols-readymodel-hosted.png?v=2)
A typical challenge for organizations is to control the use of ready-made models for general purpose Generative AI (e.g., ChatGPT), since employees typically can access many of them, even for free. Some of these models may not satisfy the organization's requirements for security and privacy. Still, employees can be very tempted to use them in the absence of a better alternative, a phenomenon sometimes referred to as _shadow AI_. The best solution for this problem is to provide a good alternative in the form of an AI model that has been deployed and configured in a secure and privacy-preserving way, of sufficient quality, and complying with the organization's values and policies. In addition, the risks of shadow AI need to be made very clear to users.