Commit b5a4f53

testing goto link busting
1 parent a4d72f4 commit b5a4f53

File tree

1 file changed

+1
-1
lines changed


content/ai_exchange/content/docs/4_runtime_application_security_threats.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -81,7 +81,7 @@ Impact: Confidentiality breach of the model (i.e., model parameters), which can
 - intellectual property theft (e.g., by a competitor)
 - and/or a way to perform input attacks on the copied model, circumventing protections. These protections include rate limiting, access control, and detection mechanisms. This can be done for [all input attacks](/goto/inputthreats/) that extract data, and for the preparation of [evasion](/goto/evasion/) or [prompt injection](/goto/promptinjection/): experimenting to find attack inputs that work.
 
-This attack occurs when stealing model parameters from a live system by breaking into it (e.g., by gaining access to executables, memory or other storage/transfer of parameter data in the production environment). This is different from [model exfiltration](/goto/modelexfiltration/) which goes through a number of steps to steal a model through normal use, hence the use of the word 'direct'. It is also different from [direct development-time model leak](/goto/devmodelleak/) from a lifecycle and attack surface perspective.
+This attack occurs when stealing model parameters from a live system by breaking into it (e.g., by gaining access to executables, memory or other storage/transfer of parameter data in the production environment). This is different from [model exfiltration](/goto/modelexfiltration/) which goes through a number of steps to steal a model through normal use, hence the use of the word 'direct'. It is also different from [direct development-time model leak](/goto/devmodelleak/?v=20260216) from a lifecycle and attack surface perspective.
 
 This attack also includes _side-channel attacks_, where attackers do not necessarily steal the entire model but instead extract specific details about the model’s behaviour or internal state. By observing characteristics like response times, power consumption, or electromagnetic emissions during inference, attackers can infer sensitive information about the model. This type of attack can provide insights into the model's structure, the type of data it processes, or even specific parameter values, which may be leveraged for subsequent attacks or to replicate the model.
```
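The one-line change above appends a date-stamped `v` query parameter to a goto link, which matches the commit message "testing goto link busting": a stale cached redirect is bypassed because the versioned URL is distinct from the cached one. As a minimal sketch of that pattern, a hypothetical helper (`bust_goto_link` is not part of this repository) could look like:

```python
def bust_goto_link(path: str, version: str) -> str:
    """Append a version query parameter (e.g. ?v=20260216) to a goto link
    so stale cached redirects are bypassed.

    Hypothetical illustration of the pattern used in this commit; the
    repository itself edits the link directly in markdown.
    """
    separator = "&" if "?" in path else "?"
    return f"{path}{separator}v={version}"

# Mirrors the edited link in the diff:
print(bust_goto_link("/goto/devmodelleak/", "20260216"))
# → /goto/devmodelleak/?v=20260216
```

The separator check keeps the helper safe for links that already carry query parameters, where the version must be appended with `&` instead of `?`.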

0 commit comments

Comments
 (0)