Commit a222e25

Merge pull request #7065 from aahill/sept-tn-update
updating transparency note
2 parents 9532750 + 4038798

2 files changed: +11 −3 lines


articles/ai-foundry/responsible-ai/agents/transparency-note.md

Lines changed: 4 additions & 3 deletions
@@ -7,7 +7,7 @@ ms.author: aahi
 manager: nitinme
 ms.service: azure-ai-agent-service
 ms.topic: article
-ms.date: 05/16/2025
+ms.date: 09/12/2025
 ---

 # Transparency Note for Azure Agent Service
@@ -109,9 +109,9 @@ Developers can connect an Agent to external systems, APIs, and services through
 * **OpenAPI 3.0 specified tools** (a custom function defined with an OpenAPI 3.0 specification to connect an Agent to external OpenAPI-based APIs securely)
 * **Model Context Protocol tools** (a custom service connected via Model Context Protocol through an existing remote MCP server to an Agent)
 * **Deep Research tool** (a tool that enables multi-step web-based research with the o3-deep-research model and Grounding with Bing Search)
+* **Computer Use** (a tool to perform tasks by interacting with computer systems and applications through their UIs)
 * **Browser Automation Tool** (a tool that can perform real-world browser tasks through natural language prompts, enabling automated browsing activities without human intervention)

-
 #### Orchestrating multi-agent systems

 Multi-agent systems using Azure AI Agent Service can be designed to achieve performant autonomous workflows for specific scenarios. In multi-agent systems, multiple context-aware autonomous agents, whether humans or AI systems, interact or work together to achieve individual or collective goals specified by the user. Azure AI Agent Service works out-of-the-box with multi-agent orchestration frameworks that are wireline compatible<sup>1</sup> with the Assistants API, such as [**AutoGen**](https://www.microsoft.com/research/blog/autogen-enabling-next-generation-large-language-model-applications/), a state-of-the-art research SDK for Python created by Microsoft Research, and [**Semantic Kernel**](/semantic-kernel/frameworks/agent/agent-architecture?pivots=programming-language-csharp), an enterprise AI SDK for Python, .NET, and Java.
@@ -131,7 +131,8 @@ Azure AI Agent Service is **flexible and use-case agnostic.** This presents mult
 * **Government: Citizen Request Triage and Community Event Coordination:** A city clerk uses an agent to categorize incoming service requests (for example, pothole repairs), assign them to the right departments, and compile simple status updates; officials review and finalize communications to maintain transparency and accuracy.
 * **Education: Assisting with Research and Reference Gathering:** A teacher relies on an agent to gather age-appropriate articles and resources from reputable sources for a planetary science lesson; the teacher verifies the materials for factual accuracy and adjusts them to fit the curriculum, ensuring students receive trustworthy content.
 * **Manufacturing: Inventory Oversight and Task Scheduling:** A factory supervisor deploys an agent to monitor inventory levels, schedule restocking when supplies run low, and optimize shift rosters; management confirms the agent’s suggestions and retains final decision-making authority.
-* **Deep research**: See the deep research section of the [Azure OpenAI transparency note](../openai/transparency-note.md#deep-research-use-cases) for examples of use cases for the deep research tool.
+* **Deep Research Tool**: Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case with deep research technology in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=text).
+* **Computer Use**: The Computer Use tool comes with additional significant security and privacy risks, including prompt injection attacks. Learn more about intended uses, capabilities, limitations, risks, and considerations when choosing a use case in the [Azure OpenAI transparency note](../openai/transparency-note.md?tabs=image).

 Agent code samples have specific intended uses that are configurable by developers to carefully build upon, implement, and deploy agents. See [list of Agent code samples](/azure/ai-foundry/agents/overview#agent-catalog).
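As background for the **OpenAPI 3.0 specified tools** bullet in the diff above: such a tool is defined by an ordinary OpenAPI 3.0 document that the agent maps model tool calls onto. A minimal sketch of such a document follows; the service, path, and `operationId` are hypothetical and not part of this commit.

```yaml
# Hypothetical OpenAPI 3.0 document for a single-operation agent tool.
# The endpoint, path, and parameter names are illustrative only.
openapi: "3.0.0"
info:
  title: City weather lookup (illustrative)
  version: "1.0.0"
paths:
  /weather:
    get:
      operationId: getCurrentWeather   # the name by which the agent invokes the tool
      summary: Return current conditions for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current conditions for the requested city
```

Because the model selects operations by name and description, keeping `operationId` and `summary` descriptive helps the agent pick the right call.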

articles/ai-foundry/responsible-ai/openai/transparency-note.md

Lines changed: 7 additions & 0 deletions
@@ -584,10 +584,17 @@ For more best practices, see the [OpenAI 4o System Card](https://openai.com/inde

 ### Risk and limitations of Computer Use (Preview)

+> [!WARNING]
+> Computer Use comes with significant security and privacy risks, and users bear responsibility for its use. Both errors in judgment by the AI and the presence of malicious or confusing instructions on web pages, desktops, or other operating environments the AI encounters may cause it to execute commands you or others do not intend, which could compromise the security of your or other users’ browsers, computers, and any accounts to which the AI has access, including personal, financial, or enterprise systems.
+>
+> We strongly recommend taking appropriate measures to address these risks, such as using the Computer Use tool on virtual machines with no access to sensitive data or critical resources.
+
 Verify and check actions taken: Computer Use might make mistakes and perform unintended actions. This can be due to the model not fully understanding the GUI, having unclear instructions, or encountering an unexpected scenario.

 Carefully consider and monitor use: Computer Use, in some limited circumstances, may perform actions without explicit authorization, some of which may be high-risk (e.g., sending communications).

+Developers will need to be systematically aware of, and defend against, situations where the model can be fooled into executing commands that are harmful to the user or the system, such as downloading malware, leaking credentials, or issuing fraudulent financial transactions. Particular attention should be paid to the fact that screenshot inputs are untrusted by nature and may include malicious instructions aimed at the model.
+
 Evaluate in isolation: We recommend only evaluating Computer Use in isolated containers without access to sensitive data or credentials.

 Opaque decision-making processes: As agents combine large language models with external systems, tracing the “why” behind their decisions can become challenging. End users of an agent built using the Computer Use model may find it difficult to understand why certain tools or combinations of tools were chosen to answer a query, complicating trust and verification of the agent’s outputs or actions.
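The "defend against" guidance added in this hunk leaves the mitigation open-ended. One common pattern is a default-deny gate between model-proposed actions and execution, with human confirmation for high-risk kinds. The sketch below is illustrative only and not from the commit; the `Action` type, the category sets, and `gate_action` are all hypothetical names.

```python
# Illustrative default-deny gate for model-proposed computer-use actions.
# All names (Action, ALLOWED_ACTIONS, HIGH_RISK_ACTIONS, gate_action) are
# hypothetical, not part of any Azure SDK.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"click", "type", "scroll"}            # low-risk UI actions
HIGH_RISK_ACTIONS = {"download", "send", "submit_payment"}  # need human sign-off


@dataclass
class Action:
    kind: str    # category of the proposed action
    target: str  # UI element or resource it would act on


def gate_action(action: Action) -> str:
    """Return 'allow', 'confirm', or 'block' for a model-proposed action."""
    if action.kind in ALLOWED_ACTIONS:
        return "allow"
    if action.kind in HIGH_RISK_ACTIONS:
        return "confirm"  # pause and require explicit human authorization
    return "block"        # default-deny anything unrecognized


print(gate_action(Action("click", "OK button")))      # allow
print(gate_action(Action("send", "email draft")))     # confirm
print(gate_action(Action("exec", "powershell.exe")))  # block
```

The default-deny branch matters most here: because screenshot inputs are untrusted, an injected instruction that invents a new action kind falls through to `block` rather than executing.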
