articles/ai-foundry/responsible-ai/agents/transparency-note.md (2 additions, 0 deletions)
@@ -109,6 +109,7 @@ Developers can connect an Agent to external systems, APIs, and services through
* **OpenAPI 3.0 specified tools** (a custom function defined with the OpenAPI 3.0 specification to connect an Agent to external OpenAPI-based APIs securely)
* **Model Context Protocol tools** (a custom service connected to an Agent via Model Context Protocol through an existing remote MCP server)
* **Deep Research tool** (a tool that enables multi-step, web-based research with the o3-deep-research model and Grounding with Bing Search)
+ * **Computer Use** (a tool to perform tasks by interacting with computer systems and applications through their UIs)
* **Browser Automation Tool** (a tool that performs real-world browser tasks from natural language prompts, enabling automated browsing without human intervention)
* **Image Generation** (a tool to generate and edit images)
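An OpenAPI-specified tool like the one listed above consumes a standard OpenAPI 3.0 document. As a hypothetical illustration (the endpoint, path, and operation below are invented for this sketch and are not part of the transparency note), a minimal spec an agent tool could be defined from might look like:

```yaml
# Hypothetical minimal OpenAPI 3.0 document for an agent tool.
# The server URL, path, and operationId are illustrative only.
openapi: "3.0.0"
info:
  title: Weather lookup
  version: "1.0.0"
servers:
  - url: https://api.example.com
paths:
  /weather:
    get:
      operationId: getWeather
      summary: Return current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current conditions
          content:
            application/json:
              schema:
                type: object
                properties:
                  temperature:
                    type: number
                  conditions:
                    type: string
```

The agent relies on the `operationId`, `summary`, and parameter schemas to decide when and how to call the operation, so descriptive names and summaries matter.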
@@ -132,6 +133,7 @@ Azure AI Agent Service is **flexible and use-case agnostic.** This presents mult
* **Education: Assisting with Research and Reference Gathering:** A teacher relies on an agent to gather age-appropriate articles and resources from reputable sources for a planetary science lesson; the teacher verifies the materials for factual accuracy and adjusts them to fit the curriculum, ensuring students receive trustworthy content.
* **Manufacturing: Inventory Oversight and Task Scheduling:** A factory supervisor deploys an agent to monitor inventory levels, schedule restocking when supplies run low, and optimize shift rosters; management confirms the agent’s suggestions and retains final decision-making authority.
* **Deep Research Tool**: Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case for deep research technology in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=text).
+ * **Computer Use**: The Computer Use tool comes with additional, significant security and privacy risks, including prompt injection attacks. Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=image).
* **Image Generation Tool**: The Image Generation tool is powered by the gpt-image-1 model. Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=image).
Agent code samples have specific intended uses and are configurable, so developers can carefully build upon, implement, and deploy agents. See the [list of Agent code samples](/azure/ai-foundry/agents/overview#agent-catalog).
articles/ai-foundry/responsible-ai/openai/transparency-note.md (7 additions, 0 deletions)
@@ -584,10 +584,17 @@ For more best practices, see the [OpenAI 4o System Card](https://openai.com/inde
### Risks and limitations of Computer Use (Preview)
+ > [!WARNING]
+ > Computer Use carries substantial security and privacy risks, and you are responsible for its use. Errors in judgment by the AI, or malicious or confusing instructions on web pages, desktops, or other operating environments the AI encounters, may cause it to execute commands you or others do not intend. This could compromise the security of your or other users’ browsers, computers, and any accounts the AI has access to, including personal, financial, or enterprise systems.
+ >
+ > We strongly recommend taking appropriate measures to mitigate these risks, such as running the Computer Use tool on virtual machines with no access to sensitive data or critical resources.
Verify and check actions taken: Computer Use might make mistakes and perform unintended actions, whether because the model doesn't fully understand the GUI, receives unclear instructions, or encounters an unexpected scenario.
Carefully consider and monitor use: Computer Use may, in some limited circumstances, perform actions without explicit authorization, some of which may be high risk (for example, sending communications).
+ Developers need to be systematically aware of, and defend against, situations where the model can be fooled into executing commands that harm the user or the system, such as downloading malware, leaking credentials, or issuing fraudulent financial transactions. Pay particular attention to the fact that screenshot inputs are untrusted by nature and may include malicious instructions aimed at the model.
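One common defensive pattern the paragraph above points toward is to gate high-risk actions proposed by the model behind an allowlist and explicit human confirmation. The sketch below is illustrative only; the action names and risk policy are hypothetical and not part of any Azure AI Agent Service API:

```python
# Minimal sketch of an action gate for a computer-use agent.
# Action names and the risk policy are hypothetical examples.

HIGH_RISK_ACTIONS = {
    "send_email",
    "submit_payment",
    "download_file",
    "enter_credentials",
}

def requires_human_confirmation(action: str) -> bool:
    """Return True when a proposed action must be confirmed by a human."""
    return action in HIGH_RISK_ACTIONS

def execute(action: str, confirmed: bool = False) -> str:
    """Refuse high-risk actions unless a human has explicitly confirmed them."""
    if requires_human_confirmation(action) and not confirmed:
        return f"blocked: {action} needs human confirmation"
    return f"executed: {action}"
```

In practice the gate would sit between the model's proposed action and the system that executes it; a production version would likely also log every action and treat unrecognized actions as high risk rather than letting them through.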
Evaluate in isolation: We recommend evaluating Computer Use only in isolated containers without access to sensitive data or credentials.
Opaque decision-making processes: Because agents combine large language models with external systems, tracing the “why” behind their decisions can be challenging. End users of an agent built with the Computer Use model may find it difficult to understand why certain tools or combinations of tools were chosen to answer a query, complicating trust in and verification of the agent’s outputs or actions.