articles/ai-foundry/responsible-ai/agents/transparency-note.md — 2 additions & 2 deletions
@@ -145,7 +145,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio
  ### Technical limitations, operational factors, and ranges

- * **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
+ * **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
  * **Tool orchestration complexities:** AI Agents depend on multiple integrated tools and data connectors (such as Bing Search, SharePoint, and Azure Logic Apps). If any of these tools are misconfigured, unavailable, or return inconsistent results, or a high number of tools are configured on a single agent, the agent’s guidance may become fragmented, outdated, or misleading.
  * **Unequal representation and support:** When serving diverse user groups, AI Agents can show uneven performance if language varieties, regional data, or specialized knowledge domains are underrepresented. A retail agent, for example, might offer less reliable product recommendations to customers who speak under-represented languages.
  * **Opaque decision-making processes:** As agents combine large language models with external systems, tracing the “why” behind their decisions can become challenging. A user using such an agent may find it difficult to understand why certain tools or combination of tools were chosen to answer a query, complicating trust and verification of the agent’s outputs or actions.
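The only substantive change in this hunk (and in most of the files below) is to the link's query string: the percent-encoded `?context=` parameter, which on Microsoft Learn appears to scope an article to another service's table of contents, is dropped, while the `?tabs=text#best-practices-for-improving-system-performance` portion is kept so the link still lands on the same anchor. A quick way to see what the encoded value stands for, sketched with Python's standard library (illustrative only, not part of the commit):

```python
from urllib.parse import unquote

# Encoded `context` value copied from the removed link above.
encoded = "%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext"

print(unquote(encoded))
# -> /azure/ai-services/openai/context/context
```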
@@ -176,7 +176,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio
  * **Clearly define intended operating environments.** Clearly define the intended operating environments (domain boundaries) where your agent is designed to perform effectively.
  * **Ensure appropriate intelligibility in decision making.** Providing information to users before, during, and after actions are taken and/or tools are called may help them understand action justification or why certain actions were taken or the application is behaving a certain way, where to intervene, and how to troubleshoot issues.
  <!--* **Provide trusted data.** Retrieving or uploading untrusted data into your systems could compromise the security of your systems or applications. To mitigate these risks in your applications using the Azure AI Agent Service, we recommend logging and monitoring LLM interactions (inputs/outputs) to detect and analyze potential prompt injections, clearly delineating user input to minimize risk of prompt injection, restricting the LLM’s access to sensitive resources, limiting its capabilities to the minimum required, and isolating it from critical systems and resources. Learn about additional mitigation approaches in [Security guidance for Large Language Models.](/ai/playbook/technology-guidance/generative-ai/mlops-in-openai/security/security-recommend)-->
- * Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance).
+ * Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance).
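The commented-out **Provide trusted data** item above recommends logging and monitoring LLM inputs and outputs and clearly delineating user input to reduce prompt-injection risk. A minimal sketch of that pattern (plain Python; the boundary tags and helper names are assumptions for illustration, not an Azure AI Agent Service API):

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("agent_io")

def build_prompt(system_instructions: str, user_input: str) -> str:
    # Keep trusted instructions and untrusted user input visibly separated.
    return (
        f"{system_instructions}\n\n"
        "<user_input>\n"   # assumed delimiter; any consistent boundary marker works
        f"{user_input}\n"
        "</user_input>"
    )

def log_interaction(prompt: str, completion: str) -> None:
    # Record inputs/outputs so suspected prompt injections can be reviewed later.
    logger.info("time=%s prompt=%r completion=%r",
                datetime.now(timezone.utc).isoformat(), prompt, completion)
```

The remaining recommendations in that item (restricting access to sensitive resources, limiting capabilities, isolating critical systems) are deployment controls rather than code and are not shown here.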
articles/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2.md — 1 addition & 1 deletion
@@ -78,4 +78,4 @@ To learn more about Microsoft's privacy and security commitments visit the Micro
  ## Next steps

  > [!div class="nextstepaction"]
- > [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+ > [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment)
articles/ai-foundry/responsible-ai/computer-vision/image-analysis-transparency-note.md — 1 addition & 1 deletion
@@ -196,7 +196,7 @@ This section discusses Image Analysis and key considerations for using this tech
  You can report feedback on the content filtering system [through support](/azure/ai-services/cognitive-services-support-options).

- To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) and add scenario-specific mitigation as needed.
+ To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note) and add scenario-specific mitigation as needed.
articles/ai-foundry/responsible-ai/computer-vision/limited-access-identity.md — 1 addition & 1 deletion
@@ -23,7 +23,7 @@ Since the announcement on June 11th, 2020, Azure AI Face recognition services ar
  Customers and partners who wish to use Limited Access features of the Face API, including Face identification and Face verification, are required to register for access by [submitting a registration form](https://aka.ms/facerecognition). The Face Detection operation is available without registration.

- Access to Face API is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Face API is available only to customers managed by Microsoft, defined as those customers and partners who are working directly with Microsoft account teams. Additionally, Face API is only available for certain use cases, and customers must select their desired use case in their registration. Microsoft may require customers and partners to reverify this information periodically. Read more about example use cases and disallowed use cases to avoid [here](../face/transparency-note.md?context=/azure/ai-services/computer-vision/context/context).
+ Access to Face API is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Face API is available only to customers managed by Microsoft, defined as those customers and partners who are working directly with Microsoft account teams. Additionally, Face API is only available for certain use cases, and customers must select their desired use case in their registration. Microsoft may require customers and partners to reverify this information periodically. Read more about example use cases and disallowed use cases to avoid [here](../face/transparency-note.md).

  The Face API service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://aka.ms/MCAServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Face API.
- - [Data, privacy, and security for Spatial Analysis](/azure/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext)
+ - [Data, privacy, and security for Spatial Analysis](/azure/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2)
articles/ai-foundry/responsible-ai/content-understanding/data-privacy.md — 1 addition & 1 deletion
@@ -61,7 +61,7 @@ Data does not get stored outside the designated region that the user selected fo
  ### Face

- Face is a gated feature as it processes biometric data. We detect faces in the input files and group them by their similarity. All intermediate data do not persist beyond the processing of the request. The face groupings associated with analysis results are persisted for 48 hours unless the user explicitly deletes face data. For more information, please refer to the [Data and Privacy for Face documentation](/azure/ai-foundry/responsible-ai/face/data-privacy-security?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+ Face is a gated feature as it processes biometric data. We detect faces in the input files and group them by their similarity. All intermediate data do not persist beyond the processing of the request. The face groupings associated with analysis results are persisted for 48 hours unless the user explicitly deletes face data. For more information, please refer to the [Data and Privacy for Face documentation](/azure/ai-foundry/responsible-ai/face/data-privacy-security).
articles/ai-foundry/responsible-ai/content-understanding/transparency-note.md — 5 additions & 5 deletions
@@ -128,11 +128,11 @@ If highly disturbing input files are uploaded to Content Understanding, it can r
  Faces are blurred before the image or video is sent to the model for analysis thus inference on faces, such as emotion, won't work in either image or video. Only video modality supports face grouping which only provides groups of similar faces without any additional analysis.

  > [!IMPORTANT]
- > Face grouping feature in Content Understanding is limited based on eligibility and usage criteria. in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see the [Face limited access page](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+ > Face grouping feature in Content Understanding is limited based on eligibility and usage criteria. in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see the [Face limited access page](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity).

  #### Document

- Document extraction capability is heavily dependent on the way you name the fields and description of the fields. Also, the product forces grounding – anchoring outputs in the text of the input documents – and will not return answers if they cannot be grounded. Therefore, in some cases, the value of the field may be missing. Due to the nature of the grounded extraction, the system will return content from the document even if the document is incorrect or the content is not visible to the human eye. Documents should also have a reasonable resolution, where the text is not too blurry for the [Layout model](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext) to recognize.
+ Document extraction capability is heavily dependent on the way you name the fields and description of the fields. Also, the product forces grounding – anchoring outputs in the text of the input documents – and will not return answers if they cannot be grounded. Therefore, in some cases, the value of the field may be missing. Due to the nature of the grounded extraction, the system will return content from the document even if the document is incorrect or the content is not visible to the human eye. Documents should also have a reasonable resolution, where the text is not too blurry for the [Layout model](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity) to recognize.

  #### Video
@@ -293,12 +293,12 @@ We are committed to continuously improving our fairness evaluations to gain a de
  ## Evaluating and integrating Image Analysis for your use

- When integrating Content Understanding for your use case, knowing that Content Understanding is subject to the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct) will ensure a successful integration.
+ When integrating Content Understanding for your use case, knowing that Content Understanding is subject to the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fcontent-understanding%2Fcontext%2Fcontext) will ensure a successful integration.

  When you're getting ready to integrate Content Understanding to your product or features, the following activities help to set you up for success:
  - **Understand what it can do**: Fully assess the potential of Content Understanding to understand its capabilities and limitations. Understand how it will perform in your scenario and context. For example, if you're using audio content extraction, test with real-world recordings from your business processes to analyze and benchmark the results against your existing process metrics.
  - **Respect an individual's right to privacy**: Only collect data and information from individuals from whom you have obtained consent, and for lawful and justifiable purposes.
- - **Legal and regulatory considerations**. Organizations need to evaluate potential specific legal and regulatory obligations when using Content Understanding. Content Understanding is not appropriate for use in every industry or scenario. Always use Content Understanding in accordance with the applicable terms of service and the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct).
+ - **Legal and regulatory considerations**. Organizations need to evaluate potential specific legal and regulatory obligations when using Content Understanding. Content Understanding is not appropriate for use in every industry or scenario. Always use Content Understanding in accordance with the applicable terms of service and the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fcontent-understanding%2Fcontext%2Fcontext).
  - **Human-in-the-loop**: Keep a human in the loop, and include human oversight as a consistent pattern area to explore. This means ensuring constant human oversight of the AI-powered product or feature and to maintain the role of humans in decision-making. Ensure that you can have real-time human intervention in the solution to prevent harm. A human in the loop enables you to manage situations when Content Understanding does not perform as required.
  - **Security**: Ensure your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.
@@ -323,7 +323,7 @@ When you're getting ready to integrate Content Understanding to your product or
  - [Azure AI Document Intelligence](/azure/ai-foundry/responsible-ai/document-intelligence/transparency-note?toc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Ftoc.json&bc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Fbreadcrumb%2Ftoc.json&view=doc-intel-4.0.0&preserve-view=true)
  - [Azure AI Speech](/azure/ai-foundry/responsible-ai/speech-service/speech-to-text/transparency-note?context=%2Fazure%2Fai-services%2Fspeech-service%2Fcontext%2Fcontext)
  - [Azure AI Vision](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext)
- - [Azure AI Face](/azure/ai-foundry/responsible-ai/face/transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext)
+ - [Azure AI Face](/azure/ai-foundry/responsible-ai/face/transparency-note)
  - [Azure AI Video Indexer](/legal/azure-video-indexer/transparency-note?context=%2Fazure%2Fazure-video-indexer%2Fcontext%2Fcontext)