articles/ai-foundry/responsible-ai/agents/transparency-note.md
4 additions & 3 deletions
@@ -7,7 +7,7 @@ ms.author: aahi
manager: nitinme
ms.service: azure-ai-agent-service
ms.topic: article
-ms.date: 05/16/2025
+ms.date: 09/12/2025
---
# Transparency Note for Azure Agent Service
@@ -109,9 +109,9 @@ Developers can connect an Agent to external systems, APIs, and services through
* **OpenAPI 3.0 specified tools** (a custom function defined with the OpenAPI 3.0 specification to connect an Agent to external OpenAPI-based APIs securely)
* **Model Context Protocol tools** (a custom service connected to an Agent via the Model Context Protocol through an existing remote MCP server)
* **Deep Research tool** (a tool that enables multi-step web-based research with the o3-deep-research model and Grounding with Bing Search)
+* **Computer Use** (a tool to perform tasks by interacting with computer systems and applications through their UIs)
* **Browser Automation Tool** (a tool that can perform real-world browser tasks through natural language prompts, enabling automated browsing without human intervention)
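The Model Context Protocol tool listed above connects an Agent to an existing remote MCP server. As a rough, non-authoritative sketch of what such a server can look like, the following uses the FastMCP helper from the open-source Python `mcp` SDK; the inventory tool and its data are hypothetical and are not part of the Azure AI Agent Service itself.

```python
# Minimal sketch of a remote MCP server that an Agent could reach through the
# Model Context Protocol tool. Assumes the open-source Python `mcp` SDK
# (pip install mcp); the inventory lookup below is purely illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return the stock level for a product SKU (stubbed data for illustration)."""
    fake_inventory = {"WIDGET-1": 42, "WIDGET-2": 0}
    count = fake_inventory.get(sku)
    if count is None:
        return f"Unknown SKU: {sku}"
    return f"{sku}: {count} units in stock"

if __name__ == "__main__":
    # Serve over SSE so the server is reachable remotely rather than over stdio.
    mcp.run(transport="sse")
```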
-
#### Orchestrating multi-agent systems
Multi-agent systems using Azure AI Agent Service can be designed to achieve performant autonomous workflows for specific scenarios. In multi-agent systems, multiple context-aware autonomous agents, whether humans or AI systems, interact or work together to achieve individual or collective goals specified by the user. Azure AI Agent Service works out-of-the-box with multi-agent orchestration frameworks that are wireline compatible<sup>1</sup> with the Assistants API, such as [**AutoGen**](https://www.microsoft.com/research/blog/autogen-enabling-next-generation-large-language-model-applications/), a state-of-the-art research SDK for Python created by Microsoft Research, and [**Semantic Kernel**](/semantic-kernel/frameworks/agent/agent-architecture?pivots=programming-language-csharp), an enterprise AI SDK for Python, .NET, and Java.
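As a rough illustration of the orchestration pattern described above (not the service's own sample), the sketch below wires a minimal two-agent AutoGen loop against an Azure OpenAI deployment. It assumes the 0.2-style `autogen` API; the deployment name, endpoint, key environment variables, and API version are placeholders.

```python
# Sketch of a minimal two-agent AutoGen (0.2-style API) conversation backed by
# an Azure OpenAI deployment. Deployment name, endpoint, and key are placeholders.
import os
from autogen import AssistantAgent, UserProxyAgent

config_list = [{
    "model": "gpt-4o",  # Azure OpenAI deployment name (placeholder)
    "api_type": "azure",
    "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "api_version": "2024-06-01",
}]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",      # fully autonomous for this demo
    code_execution_config=False,   # disable local code execution
    max_consecutive_auto_reply=2,  # keep the loop short
)

# The proxy agent drives the conversation and collects the assistant's replies.
user_proxy.initiate_chat(
    assistant,
    message="Draft a one-paragraph status update on inventory restocking.",
)
```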
@@ -131,7 +131,8 @@ Azure AI Agent Service is **flexible and use-case agnostic.** This presents mult
* **Government: Citizen Request Triage and Community Event Coordination:** A city clerk uses an agent to categorize incoming service requests (for example, pothole repairs), assign them to the right departments, and compile simple status updates; officials review and finalize communications to maintain transparency and accuracy.
* **Education: Assisting with Research and Reference Gathering:** A teacher relies on an agent to gather age-appropriate articles and resources from reputable sources for a planetary science lesson; the teacher verifies the materials for factual accuracy and adjusts them to fit the curriculum, ensuring students receive trustworthy content.
* **Manufacturing: Inventory Oversight and Task Scheduling:** A factory supervisor deploys an agent to monitor inventory levels, schedule restocking when supplies run low, and optimize shift rosters; management confirms the agent’s suggestions and retains final decision-making authority.
-* **Deep research**: See the deep research section of the [Azure OpenAI transparency note](../openai/transparency-note.md#deep-research-use-cases) for examples of use cases for the deep research tool.
+* **Deep Research Tool**: Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case for a model with deep research technology in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=text).
+* **Computer Use**: The Computer Use tool comes with additional significant security and privacy risks, including prompt injection attacks. Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=image).
Agent code samples have specific intended uses and are configurable, so developers can carefully build upon, implement, and deploy agents. See the [list of Agent code samples](/azure/ai-foundry/agents/overview#agent-catalog).
articles/ai-foundry/responsible-ai/openai/transparency-note.md
7 additions & 0 deletions
@@ -584,10 +584,17 @@ For more best practices, see the [OpenAI 4o System Card](https://openai.com/inde
### Risks and limitations of Computer Use (Preview)
+> [!WARNING]
+> Computer Use carries substantial security and privacy risks, and users are responsible for its use. Errors in judgment by the AI, as well as malicious or confusing instructions on web pages, desktops, or other operating environments the AI encounters, may cause it to execute commands that you or others do not intend. This could compromise the security of your or other users’ browsers, computers, and any accounts to which the AI has access, including personal, financial, or enterprise systems.
+>
+> We strongly recommend taking appropriate measures to address these risks, such as using the Computer Use tool on virtual machines with no access to sensitive data or critical resources.
+
Verify and check actions taken: Computer Use might make mistakes and perform unintended actions. This can be due to the model not fully understanding the GUI, receiving unclear instructions, or encountering an unexpected scenario.
Carefully consider and monitor use: Computer Use, in some limited circumstances, may perform actions without explicit authorization, some of which may be high-risk (for example, sending communications).
+Developers will need to be systematically aware of, and defend against, situations where the model can be fooled into executing commands that are harmful to the user or the system, such as downloading malware, leaking credentials, or issuing fraudulent financial transactions. Particular attention should be paid to the fact that screenshot inputs are untrusted by nature and may include malicious instructions aimed at the model.
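One simple, purely illustrative defensive pattern (the action names and approval flow here are hypothetical and not part of the Computer Use API) is to gate model-proposed actions behind an allow-list and require explicit human confirmation for high-risk operations:

```python
# Illustrative guardrail: allow-list model-proposed actions and require a human
# confirmation for anything high-risk. Action names are hypothetical examples.
from typing import Callable

HIGH_RISK = {"send_email", "transfer_funds", "download_file"}
ALLOWED = {"click", "type_text", "scroll", "take_screenshot"} | HIGH_RISK

def execute_action(action: str, args: dict, confirm: Callable[[str, dict], bool]) -> str:
    if action not in ALLOWED:
        return f"Blocked: '{action}' is not on the allow-list."
    if action in HIGH_RISK and not confirm(action, args):
        return f"Skipped: a human reviewer declined '{action}'."
    # ... dispatch to the real automation layer here ...
    return f"Executed '{action}' with {args}"

# Example: require an explicit yes/no from a human before any high-risk action runs.
result = execute_action(
    "send_email",
    {"to": "user@example.com"},
    confirm=lambda name, _: input(f"Allow '{name}'? [y/N] ").strip().lower() == "y",
)
print(result)
```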
+
Evaluate in isolation: We recommend evaluating Computer Use only in isolated containers without access to sensitive data or credentials.
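One way to approximate that isolation is sketched below using the Docker SDK for Python; the image name and command are placeholders, and the exact hardening you need will depend on your environment.

```python
# Sketch: launch an evaluation sandbox with no network, a read-only root
# filesystem, and dropped capabilities using the Docker SDK for Python
# (pip install docker). "computer-use-eval:latest" is a placeholder image.
import docker

client = docker.from_env()
output = client.containers.run(
    image="computer-use-eval:latest",    # placeholder evaluation image
    command="python run_eval.py",        # placeholder entry point
    network_mode="none",                 # no network access at all
    read_only=True,                      # read-only root filesystem
    cap_drop=["ALL"],                    # drop all Linux capabilities
    security_opt=["no-new-privileges"],  # prevent privilege escalation
    tmpfs={"/tmp": ""},                  # writable scratch space only
    remove=True,                         # clean up the container afterwards
)
print(output.decode())
```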
Opaque decision-making processes: As agents combine large language models with external systems, tracing the “why” behind their decisions can become challenging. End users of an agent built with the Computer Use model may find it difficult to understand why certain tools or combinations of tools were chosen to answer a query, complicating trust and verification of the agent’s outputs or actions.