articles/advisor/permissions.md (+3 -3)

@@ -1,11 +1,11 @@
 ---
-title: Permissions in Azure Advisor
+title: Roles and permissions
 description: Advisor permissions and how they may block your ability to configure subscriptions or postpone or dismiss recommendations.
 ms.topic: article
-ms.date: 04/03/2019
+ms.date: 05/03/2024
 ---

-# Permissions in Azure Advisor
+# Roles and permissions

 Azure Advisor provides recommendations based on the usage and configuration of your Azure resources and subscriptions. Advisor uses the [built-in roles](../role-based-access-control/built-in-roles.md) provided by [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) to manage your access to recommendations and Advisor features.
(file name not captured)

@@ … @@
 description: View and filter Azure Advisor recommendations to reduce noise.
 ms.topic: article
 ms.date: 01/02/2024
 ---

-# View Azure Advisor recommendations that matter to you
+# Configure Azure Advisor recommendations view

 Azure Advisor provides recommendations to help you optimize your Azure deployments. Within Advisor, you have access to a few features that help you to narrow down your recommendations to only those that matter to you.
articles/ai-services/computer-vision/how-to/specify-detection-model.md (+9 -9)

@@ -1,15 +1,15 @@
 ---
 title: How to specify a detection model - Face
 titleSuffix: Azure AI services
-description: This article will show you how to choose which face detection model to use with your Azure AI Face application.
+description: This article shows you how to choose which face detection model to use with your Azure AI Face application.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme

 ms.service: azure-ai-vision
 ms.subservice: azure-ai-face
 ms.topic: how-to
-ms.date: 02/14/2024
+ms.date: 06/10/2024
 ms.author: pafarley
 ms.devlang: csharp
 ms.custom: devx-track-csharp
@@ -19,7 +19,7 @@ ms.custom: devx-track-csharp

 This guide shows you how to specify a face detection model for the Azure AI Face service.

-The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.

 Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
@@ -40,8 +40,8 @@ The different face detection models are optimized for different tasks. See the f

-|**detection_01**| Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
-|**detection_02**| Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. |
+|**detection_01**| Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, glasses, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
+|**detection_02**| Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Doesn't return face attributes. | Doesn't return face landmarks. |
 |**detection_03**| Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask, blur, and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
@@ -57,11 +57,11 @@ When you use the [Detect] API, you can assign the model version with the `detect

 * `detection_02`
 * `detection_03`

-A request URL for the [Detect] REST API will look like this:
+A request URL for the [Detect] REST API looks like this:
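(The URL itself is elided from this diff. For reference, a Face [Detect] request URL generally passes the model as a query parameter, along the lines of the following; the endpoint and parameter values here are illustrative assumptions, not part of the change:

    https://{endpoint}/face/v1.0/detect?detectionModel=detection_03&returnFaceId=false
)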
-If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
+If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following code example for the .NET client library.
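(The .NET example the paragraph points to is also elided from the diff. A minimal sketch of passing `detectionModel` as a string, assuming the `Microsoft.Azure.CognitiveServices.Vision.Face` client library; the current article may target a newer SDK with a different surface:

```csharp
using Microsoft.Azure.CognitiveServices.Vision.Face;

// Placeholder key and endpoint; substitute your own resource values.
IFaceClient client = new FaceClient(new ApiKeyServiceClientCredentials("<your-key>"))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
};

// detectionModel is an optional string parameter; leaving it out
// falls back to the default, detection_01.
var faces = await client.Face.DetectWithUrlAsync(
    "https://example.com/photo.jpg",
    detectionModel: "detection_03");
```
)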
-This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
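(The code block isn't captured in the diff; continuing with the `client` from the sketch above, the described sequence would look roughly like this. The image URL and person name are placeholders:

```csharp
// Create the group, add a person, then add a face detected with detection_03.
await client.PersonGroup.CreateAsync("mypersongroupid", "My Person Group");

var person = await client.PersonGroupPerson.CreateAsync("mypersongroupid", "Example Person");

await client.PersonGroupPerson.AddFaceFromUrlAsync(
    "mypersongroupid",
    person.PersonId,
    "https://example.com/face.jpg",
    detectionModel: "detection_03");
```
)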

 > [!NOTE]
 > You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Identify From Person Group] API, for example).
-This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
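(Again elided from the diff; a comparable sketch under the same SDK assumption:

```csharp
// Create the list, then add a face detected with detection_03.
await client.FaceList.CreateAsync("my_face_collection", "My face collection");

await client.FaceList.AddFaceFromUrlAsync(
    "my_face_collection",
    "https://example.com/face.jpg",
    detectionModel: "detection_03");
```
)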

 > [!NOTE]
 > You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
articles/ai-services/computer-vision/includes/image-analysis-curl-quickstart-40.md (+1 -1)

@@ -40,7 +40,7 @@ To analyze an image for various visual features, do the following steps:

 1. Make the following changes in the command where needed:
    1. Replace the value of `<subscriptionKey>` with your Vision resource key.
-   1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+   1. Replace the value of `<endpoint>` with your Vision resource endpoint URL. For example: `https://YourResourceName.cognitiveservices.azure.com`.
    1. Optionally, change the image URL in the request body (`https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png`) to the URL of a different image to be analyzed.
 1. Open a command prompt window.
 1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
articles/ai-services/content-safety/concepts/groundedness.md (+1 -1)

@@ -51,7 +51,7 @@ Currently, the Groundedness detection API supports English language content. Whi

 ### Text length limitations

-The maximum character limit for the grounding sources is 55,000 characters per API call, and for the text and query, it's 7,500 characters per API call. If your input (either text or grounding sources) exceeds these character limitations, you'll encounter an error.
+See [Input requirements](../overview.md#input-requirements) for maximum text length limitations.
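(For callers who want to fail fast, the limits spelled out in the removed line translate into a simple pre-flight check. A hedged C# sketch; the constants come from the removed text and may drift from the linked Input requirements page:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class GroundednessInputGuard
{
    // Limits as stated in the removed line; confirm against the
    // Input requirements page before relying on them.
    private const int MaxGroundingSourceChars = 55_000; // all grounding sources, per API call
    private const int MaxTextChars = 7_500;             // text and query, per API call

    public static void Validate(string text, IReadOnlyList<string> groundingSources)
    {
        if (text.Length > MaxTextChars)
            throw new ArgumentException($"Text exceeds {MaxTextChars} characters.");

        if (groundingSources.Sum(s => s.Length) > MaxGroundingSourceChars)
            throw new ArgumentException($"Grounding sources exceed {MaxGroundingSourceChars} characters combined.");
    }
}
```
)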
articles/ai-services/content-safety/concepts/jailbreak-detection.md (+1 -1)

@@ -71,7 +71,7 @@ Currently, the Prompt Shields API supports the English language. While our API d

 ### Text length limitations

-The maximum character limit for Prompt Shields allows for a user prompt of up to 10,000 characters, while the document array is restricted to a maximum of 5 documents with a combined total not exceeding 10,000 characters.
+See [Input requirements](/azure/ai-services/content-safety/overview#input-requirements) for maximum text length limitations.
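(The same pre-flight pattern fits here, this time with the array-level constraints from the removed line; the values are taken from the removed text, not verified against the linked page:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PromptShieldsInputGuard
{
    // Limits as stated in the removed line; verify before relying on them.
    private const int MaxUserPromptChars = 10_000;
    private const int MaxDocuments = 5;
    private const int MaxCombinedDocumentChars = 10_000;

    public static void Validate(string userPrompt, IReadOnlyList<string> documents)
    {
        if (userPrompt.Length > MaxUserPromptChars)
            throw new ArgumentException($"User prompt exceeds {MaxUserPromptChars} characters.");

        if (documents.Count > MaxDocuments)
            throw new ArgumentException($"More than {MaxDocuments} documents supplied.");

        if (documents.Sum(d => d.Length) > MaxCombinedDocumentChars)
            throw new ArgumentException($"Documents exceed {MaxCombinedDocumentChars} characters combined.");
    }
}
```
)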