Commit d8f7d6b

rm context url params
1 parent 9b1ea87 · commit d8f7d6b

39 files changed: +139 -139 lines changed
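
The diff below follows a single mechanical pattern: the `context` query parameter (for example `?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext`) is stripped from documentation links, while other query parameters (such as `tabs=text`) and `#fragment` anchors are kept. As a rough sketch only (the actual edit may have been made by hand or with different tooling, and the paths and helper names here are hypothetical), a sweep like this could be scripted as follows:

```python
# Hypothetical sweep: remove the `context` query parameter from Markdown link targets.
import re
from pathlib import Path
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Naive match for the (target) part of a Markdown link [text](target).
LINK_TARGET = re.compile(r"\((?P<url>[^()\s]+)\)")

def strip_context_param(url: str) -> str:
    """Drop any `context` query parameter, keeping other parameters and the #fragment."""
    parts = urlsplit(url)
    if "context=" not in parts.query:
        return url
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True) if k != "context"]
    return urlunsplit(parts._replace(query=urlencode(kept)))

def rewrite_file(path: Path) -> bool:
    """Rewrite one Markdown file in place; return True if anything changed."""
    text = path.read_text(encoding="utf-8")
    new_text = LINK_TARGET.sub(lambda m: "(" + strip_context_param(m.group("url")) + ")", text)
    if new_text == text:
        return False
    path.write_text(new_text, encoding="utf-8")
    return True

if __name__ == "__main__":
    changed = [p for p in Path("articles").rglob("*.md") if rewrite_file(p)]
    print(f"{len(changed)} files changed")
```

Note that `urlencode` re-encodes the surviving parameters, which is harmless for simple values such as `tabs=text`; a link whose query contains no `context` parameter is returned untouched.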

articles/ai-foundry/responsible-ai/agents/transparency-note.md

Lines changed: 2 additions & 2 deletions
@@ -145,7 +145,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio

 ### Technical limitations, operational factors, and ranges

-* **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
+* **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
 * **Tool orchestration complexities:** AI Agents depend on multiple integrated tools and data connectors (such as Bing Search, SharePoint, and Azure Logic Apps). If any of these tools are misconfigured, unavailable, or return inconsistent results, or a high number of tools are configured on a single agent, the agent’s guidance may become fragmented, outdated, or misleading.
 * **Unequal representation and support:** When serving diverse user groups, AI Agents can show uneven performance if language varieties, regional data, or specialized knowledge domains are underrepresented. A retail agent, for example, might offer less reliable product recommendations to customers who speak under-represented languages.
 * **Opaque decision-making processes:** As agents combine large language models with external systems, tracing the “why” behind their decisions can become challenging. A user using such an agent may find it difficult to understand why certain tools or combination of tools were chosen to answer a query, complicating trust and verification of the agent’s outputs or actions.
@@ -176,7 +176,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio
 * **Clearly define intended operating environments.** Clearly define the intended operating environments (domain boundaries) where your agent is designed to perform effectively.
 * **Ensure appropriate intelligibility in decision making.** Providing information to users before, during, and after actions are taken and/or tools are called may help them understand action justification or why certain actions were taken or the application is behaving a certain way, where to intervene, and how to troubleshoot issues.
 <!--* **Provide trusted data.** Retrieving or uploading untrusted data into your systems could compromise the security of your systems or applications. To mitigate these risks in your applications using the Azure AI Agent Service, we recommend logging and monitoring LLM interactions (inputs/outputs) to detect and analyze potential prompt injections, clearly delineating user input to minimize risk of prompt injection, restricting the LLM’s access to sensitive resources, limiting its capabilities to the minimum required, and isolating it from critical systems and resources. Learn about additional mitigation approaches in [Security guidance for Large Language Models.](/ai/playbook/technology-guidance/generative-ai/mlops-in-openai/security/security-recommend)-->
-* Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance).
+* Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance).

 ## Learn more about responsible AI

articles/ai-foundry/responsible-ai/clu/clu-characteristics-and-limitations.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ Not all features are at the same language parity. For example, language support
 ## Next steps

 * [Introduction to conversational language understanding](/azure/ai-services/language-service/conversational-language-understanding/overview)
-* [Language Understanding transparency note](/azure/ai-foundry/responsible-ai/luis/luis-transparency-note?context=/azure/ai-services/LUIS/context/context)
+* [Language Understanding transparency note](/azure/ai-foundry/responsible-ai/luis/luis-transparency-note)

 * [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai?rtc=1&activetab=pivot1%3aprimaryr6)
 * [Building responsible bots](https://www.microsoft.com/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf)

articles/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2.md

Lines changed: 1 addition & 1 deletion
@@ -78,4 +78,4 @@ To learn more about Microsoft's privacy and security commitments visit the Micro
 ## Next steps

 > [!div class="nextstepaction"]
-> [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment)

articles/ai-foundry/responsible-ai/computer-vision/disclosure-design.md

Lines changed: 1 addition & 1 deletion
@@ -105,4 +105,4 @@ Evaluate the first and continuous-use experience with a representative sample of
 ## Next steps

 > [!div class="nextstepaction"]
-> [Research insights for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/research-insights?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Research insights for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/research-insights)

articles/ai-foundry/responsible-ai/computer-vision/image-analysis-transparency-note.md

Lines changed: 1 addition & 1 deletion
@@ -196,7 +196,7 @@ This section discusses Image Analysis and key considerations for using this tech

 You can report feedback on the content filtering system [through support](/azure/ai-services/cognitive-services-support-options).

-To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) and add scenario-specific mitigation as needed.
+To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note) and add scenario-specific mitigation as needed.

 ### Recommendations for preserving privacy

articles/ai-foundry/responsible-ai/computer-vision/limited-access-identity.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ Since the announcement on June 11th, 2020, Azure AI Face recognition services ar

 Customers and partners who wish to use Limited Access features of the Face API, including Face identification and Face verification, are required to register for access by [submitting a registration form](https://aka.ms/facerecognition). The Face Detection operation is available without registration.

-Access to Face API is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Face API is available only to customers managed by Microsoft, defined as those customers and partners who are working directly with Microsoft account teams. Additionally, Face API is only available for certain use cases, and customers must select their desired use case in their registration. Microsoft may require customers and partners to reverify this information periodically. Read more about example use cases and disallowed use cases to avoid [here](../face/transparency-note.md?context=/azure/ai-services/computer-vision/context/context).
+Access to Face API is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Face API is available only to customers managed by Microsoft, defined as those customers and partners who are working directly with Microsoft account teams. Additionally, Face API is only available for certain use cases, and customers must select their desired use case in their registration. Microsoft may require customers and partners to reverify this information periodically. Read more about example use cases and disallowed use cases to avoid [here](../face/transparency-note.md).

 The Face API service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://aka.ms/MCAServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Face API.

articles/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment.md

Lines changed: 1 addition & 1 deletion
@@ -101,4 +101,4 @@ The recommendations outlined below provide general guidance for supporting effec
 ## Next steps

 > [!div class="nextstepaction"]
-> [Disclosure design guidelines for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/disclosure-design?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Disclosure design guidelines for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/disclosure-design)

articles/ai-foundry/responsible-ai/computer-vision/transparency-note-spatial-analysis.md

Lines changed: 1 addition & 1 deletion
@@ -225,5 +225,5 @@ The evaluation of Spatial Analysis and Video Retrieval models is essential to en
 ## Learn more about Spatial Analysis

 - [Spatial Analysis overview](/azure/ai-services/computer-vision/intro-to-spatial-analysis-public-preview)
-- [Data, privacy, and security for Spatial Analysis](/azure/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext)
+- [Data, privacy, and security for Spatial Analysis](/azure/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2)

articles/ai-foundry/responsible-ai/content-understanding/data-privacy.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ Data does not get stored outside the designated region that the user selected fo

 ### Face

-Face is a gated feature as it processes biometric data. We detect faces in the input files and group them by their similarity. All intermediate data do not persist beyond the processing of the request. The face groupings associated with analysis results are persisted for 48 hours unless the user explicitly deletes face data. For more information, please refer to the [Data and Privacy for Face documentation](/azure/ai-foundry/responsible-ai/face/data-privacy-security?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+Face is a gated feature as it processes biometric data. We detect faces in the input files and group them by their similarity. All intermediate data do not persist beyond the processing of the request. The face groupings associated with analysis results are persisted for 48 hours unless the user explicitly deletes face data. For more information, please refer to the [Data and Privacy for Face documentation](/azure/ai-foundry/responsible-ai/face/data-privacy-security).


articles/ai-foundry/responsible-ai/content-understanding/transparency-note.md

Lines changed: 5 additions & 5 deletions
@@ -128,11 +128,11 @@ If highly disturbing input files are uploaded to Content Understanding, it can r
 Faces are blurred before the image or video is sent to the model for analysis thus inference on faces, such as emotion, won't work in either image or video. Only video modality supports face grouping which only provides groups of similar faces without any additional analysis.

 > [!IMPORTANT]
-> Face grouping feature in Content Understanding is limited based on eligibility and usage criteria. in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see the [Face limited access page](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+> Face grouping feature in Content Understanding is limited based on eligibility and usage criteria. in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://aka.ms/facerecognition) to apply for access. For more information, see the [Face limited access page](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity).

 #### Document

-Document extraction capability is heavily dependent on the way you name the fields and description of the fields. Also, the product forces grounding – anchoring outputs in the text of the input documents – and will not return answers if they cannot be grounded. Therefore, in some cases, the value of the field may be missing. Due to the nature of the grounded extraction, the system will return content from the document even if the document is incorrect or the content is not visible to the human eye. Documents should also have a reasonable resolution, where the text is not too blurry for the [Layout model](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext) to recognize.
+Document extraction capability is heavily dependent on the way you name the fields and description of the fields. Also, the product forces grounding – anchoring outputs in the text of the input documents – and will not return answers if they cannot be grounded. Therefore, in some cases, the value of the field may be missing. Due to the nature of the grounded extraction, the system will return content from the document even if the document is incorrect or the content is not visible to the human eye. Documents should also have a reasonable resolution, where the text is not too blurry for the [Layout model](/azure/ai-foundry/responsible-ai/computer-vision/limited-access-identity) to recognize.

 #### Video

@@ -293,12 +293,12 @@ We are committed to continuously improving our fairness evaluations to gain a de
 ## Evaluating and integrating Image Analysis for your use


-When integrating Content Understanding for your use case, knowing that Content Understanding is subject to the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct) will ensure a successful integration.
+When integrating Content Understanding for your use case, knowing that Content Understanding is subject to the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fcontent-understanding%2Fcontext%2Fcontext) will ensure a successful integration.

 When you're getting ready to integrate Content Understanding to your product or features, the following activities help to set you up for success:
 - **Understand what it can do**: Fully assess the potential of Content Understanding to understand its capabilities and limitations. Understand how it will perform in your scenario and context. For example, if you're using audio content extraction, test with real-world recordings from your business processes to analyze and benchmark the results against your existing process metrics.
 - **Respect an individual's right to privacy**: Only collect data and information from individuals from whom you have obtained consent, and for lawful and justifiable purposes.
-- **Legal and regulatory considerations**. Organizations need to evaluate potential specific legal and regulatory obligations when using Content Understanding. Content Understanding is not appropriate for use in every industry or scenario. Always use Content Understanding in accordance with the applicable terms of service and the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct).
+- **Legal and regulatory considerations**. Organizations need to evaluate potential specific legal and regulatory obligations when using Content Understanding. Content Understanding is not appropriate for use in every industry or scenario. Always use Content Understanding in accordance with the applicable terms of service and the [Microsoft Generative AI Services Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fcontent-understanding%2Fcontext%2Fcontext).
 - **Human-in-the-loop**: Keep a human in the loop, and include human oversight as a consistent pattern area to explore. This means ensuring constant human oversight of the AI-powered product or feature and to maintain the role of humans in decision-making. Ensure that you can have real-time human intervention in the solution to prevent harm. A human in the loop enables you to manage situations when Content Understanding does not perform as required.
 - **Security**: Ensure your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.

@@ -323,7 +323,7 @@ When you're getting ready to integrate Content Understanding to your product or
 - [Azure AI Document Intelligence](/azure/ai-foundry/responsible-ai/document-intelligence/transparency-note?toc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Ftoc.json&bc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Fbreadcrumb%2Ftoc.json&view=doc-intel-4.0.0&preserve-view=true)
 - [Azure AI Speech](/azure/ai-foundry/responsible-ai/speech-service/speech-to-text/transparency-note?context=%2Fazure%2Fai-services%2Fspeech-service%2Fcontext%2Fcontext )
 - [Azure AI Vision](/azure/ai-foundry/responsible-ai/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext )
-- [Azure AI Face](/azure/ai-foundry/responsible-ai/face/transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext)
+- [Azure AI Face](/azure/ai-foundry/responsible-ai/face/transparency-note)
 - [Azure AI Video Indexer](/legal/azure-video-indexer/transparency-note?context=%2Fazure%2Fazure-video-indexer%2Fcontext%2Fcontext )

### Code of Conduct
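
One hunk above moves in the opposite direction: the Content Understanding transparency note gains a `?context=...` parameter on its `/legal/ai-code-of-conduct` links rather than losing one. A quick way to audit what remains after a sweep like this (purely illustrative, not part of the commit) is to list every line that still carries a `context` query parameter:

```python
# Illustrative audit: print every Markdown line that still carries a `context` query parameter.
import re
from pathlib import Path

PATTERN = re.compile(r"[?&]context=", re.IGNORECASE)

for path in Path("articles").rglob("*.md"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if PATTERN.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```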
