Skip to content

Commit 132503d

Browse files
committed
freshness
1 parent 324c5db commit 132503d

File tree

6 files changed

+12
-14
lines changed

6 files changed

+12
-14
lines changed

articles/ai-foundry/responsible-ai/computer-vision/limited-access-identity.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -7,7 +7,7 @@ ms.author: pafarley
77
manager: nitinme
88
ms.service: azure-ai-vision
99
ms.topic: article
10-
ms.date: 06/17/2022
10+
ms.date: 07/28/2025
1111
---
1212

1313
# Limited Access to Face API

articles/ai-services/computer-vision/sdk/overview-sdk.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ author: PatrickFarley
66
manager: nitinme
77
ms.service: azure-ai-vision
88
ms.topic: overview
9-
ms.date: 06/01/2024
9+
ms.date: 07/28/2025
1010
ms.collection: "ce-skilling-fresh-tier2, ce-skilling-ai-copilot"
1111
ms.update-cycle: 365-days
1212
ms.author: pafarley
@@ -20,13 +20,13 @@ The Image Analysis SDK provides a convenient way to access the Image Analysis se
2020
> [!IMPORTANT]
2121
> **Breaking Changes in SDK version 1.0.0-beta.1**
2222
>
23-
> The Image Analysis SDK was rewritten in version 1.0.0-beta.1 to better align with other Azure SDKs. All APIs have changed. See the updated [quickstart](/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40), [samples](#github-samples) and [how-to-guides](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) for information on how to use the new SDK.
23+
> The Image Analysis SDK was rewritten in version 1.0.0-beta.1 to better align with other Azure SDKs. All APIs are changed. See the updated [quickstart](/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40), [samples](#github-samples), and [how-to-guides](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) for information on how to use the new SDK.
2424
>
2525
> Major changes:
2626
> - The SDK now calls the generally available [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01), instead of the preview [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview).
2727
> - Support for JavaScript was added.
2828
> - C++ is no longer supported.
29-
> - Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01) does not yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview) directly (using the `Analyze` and `Segment` operations respectively).
29+
> - Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01) doesn't yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview) directly (using the `Analyze` and `Segment` operations respectively).
3030
3131
## Supported languages
3232

articles/ai-services/content-safety/concepts/groundedness.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ author: PatrickFarley
66
manager: nitinme
77
ms.service: azure-ai-content-safety
88
ms.topic: conceptual
9-
ms.date: 04/29/2025
9+
ms.date: 07/28/2025
1010
ms.author: pafarley
1111
---
1212

articles/ai-services/content-safety/concepts/harm-categories.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -7,14 +7,14 @@ manager: nitinme
77
ms.service: azure-ai-content-safety
88
ms.custom: build-2023
99
ms.topic: conceptual
10-
ms.date: 04/29/2025
10+
ms.date: 07/28/2025
1111
ms.author: pafarley
1212
---
1313

1414

1515
# Harm categories in Azure AI Content Safety
1616

17-
This guide describes all of the harm categories and ratings that Azure AI Content Safety uses to flag content. Both text and image content use the same set of flags.
17+
Azure AI Content Safety uses harm categories to flag and rate objectionable content in both text and images. This guide describes all of the harm categories and ratings that Azure AI Content Safety uses. Understanding these categories helps you configure moderation and compliance for your use cases. Both text and image content use the same set of flags.
1818

1919
## Harm categories
2020

articles/ai-services/content-safety/concepts/jailbreak-detection.md

Lines changed: 3 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -7,16 +7,15 @@ manager: nitinme
77
ms.service: azure-ai-content-safety
88
ms.custom: build-2023
99
ms.topic: conceptual
10-
ms.date: 04/29/2025
10+
ms.date: 07/28/2025
1111
ms.author: pafarley
1212
---
1313

1414
# Prompt Shields
1515

16-
Generative AI models can pose risks of exploitation by malicious actors. To mitigate these risks, we integrate safety mechanisms to restrict the behavior of large language models (LLMs) within a safe operational scope. However, despite these safeguards, LLMs can still be vulnerable to adversarial inputs that bypass the integrated safety protocols.
17-
18-
Prompt Shields is a unified API that analyzes inputs to LLMs and detects adversarial user input attacks.
16+
Prompt Shields is a unified API in Azure AI Content Safety that detects and blocks adversarial user input attacks on large language models (LLMs). It helps prevent harmful, unsafe, or policy-violating AI outputs by analyzing prompts and documents before content is generated.
1917

18+
Generative AI models can pose risks of exploitation by malicious actors. To mitigate these risks, we integrate safety mechanisms to restrict the behavior of large language models (LLMs) within a safe operational scope. However, despite these safeguards, LLMs can still be vulnerable to adversarial inputs that bypass the integrated safety protocols. In these cases, specialized filters like Prompt Shields are effective.
2019

2120
## User scenarios
2221

articles/ai-services/content-safety/includes/groundedness-detection-overview.md

Lines changed: 2 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -4,13 +4,12 @@ author: PatrickFarley
44
manager: nitinme
55
ms.service: azure-ai-content-safety
66
ms.topic: include
7-
ms.date: 05/08/2025
7+
ms.date: 07/28/2025
88
ms.author: pafarley
99
---
1010

1111

12-
13-
The Groundedness detection feature detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials.
12+
in Azure AI Content Safety helps you ensure that large language model (LLM) responses are based on your provided source material, reducing the risk of non-factual or fabricated outputs Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials.
1413

1514
## Key terms
1615

0 commit comments

Comments
 (0)