
Commit f6ca207

Merge pull request #276608 from MicrosoftDocs/main

5/29/2024 AM Publish

2 parents 44aaa8c + 64d822e

File tree: 93 files changed (+1272 −1011 lines)


.openpublishing.redirection.azure-kubernetes-service.json

Lines changed: 5 additions & 0 deletions

@@ -80,6 +80,11 @@
       "redirect_url": "/azure/aks/azure-csi-files-storage-provision",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/aks/learn/tutorial-kubernetes-workload-identity.md",
+      "redirect_url": "/azure/aks/workload-identity-deploy-cluster",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/aks/workload-identity-migration-sidecar.md",
       "redirect_url": "/azure/aks/workload-identity-migrate-from-pod-identity",

articles/ai-services/computer-vision/concept-face-recognition-data-structures.md

Lines changed: 0 additions & 4 deletions

@@ -19,10 +19,6 @@ ms.author: pafarley

 This article explains the data structures used in the Face service for face recognition operations. These data structures hold data on faces and persons.

-You can try out the capabilities of face recognition quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
 [!INCLUDE [Gate notice](./includes/identity-gate-notice.md)]

 ## Data structures used with Identify

articles/ai-services/computer-vision/concept-face-recognition.md

Lines changed: 0 additions & 4 deletions

@@ -19,10 +19,6 @@ ms.author: pafarley

 This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.

-You can try out the capabilities of face recognition quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-

 ## Face recognition operations

articles/ai-services/computer-vision/how-to/identity-access-token.md

Lines changed: 3 additions & 3 deletions

@@ -33,9 +33,9 @@ If the ISV learns that a client is using the LimitedAccessToken for non-approved

 ## Prerequisites

-* [cURL](https://curl.haxx.se/) installed (or another tool that can make HTTP requests).
-* The ISV needs to have either an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
-* The client needs to have an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.
+* [cURL](https://curl.se/) installed (or another tool that can make HTTP requests).
+* The ISV needs to have either an [Azure AI Face](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
+* The client needs to have an [Azure AI Face](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.

 ## Step 1: ISV obtains client's Face resource ID

articles/ai-services/computer-vision/how-to/specify-detection-model.md

Lines changed: 4 additions & 4 deletions

@@ -42,7 +42,7 @@ The different face detection models are optimized for different tasks. See the f
 |---------|------------|-------------------|-------------|--|
 |**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
 |**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. |
-|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
+|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask, blur, and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |


 The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.

@@ -64,7 +64,7 @@ A request URL for the [Detect] REST API will look like this:
 If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.

 ```csharp
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
 var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03");
 ```

@@ -81,7 +81,7 @@ await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name",

 string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;

-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
 await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
 ```

@@ -97,7 +97,7 @@ You can also specify a detection model when you add a face to an existing **Face
 ```csharp
 await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");

-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
 await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
 ```

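The snippets in this diff pin the detection model through the .NET client library. As a rough, hedged illustration of the same idea against the REST surface, the Python sketch below only builds the Detect request URL with the `detectionModel` and `recognitionModel` query parameters discussed in the article; the endpoint host is a placeholder, and no request is actually sent.

```python
from urllib.parse import urlencode

# Placeholder host; a real Face resource has its own endpoint.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"

def build_detect_url(detection_model: str = "detection_03",
                     recognition_model: str = "recognition_04") -> str:
    """Build a Detect request URL, pinning the detection model explicitly.

    If detectionModel is omitted from a real call, the service falls back
    to the default, detection_01.
    """
    params = {
        "returnFaceId": "false",
        "returnFaceLandmarks": "false",
        "recognitionModel": recognition_model,
        "detectionModel": detection_model,
    }
    return f"{ENDPOINT}/face/v1.0/detect?{urlencode(params)}"

url = build_detect_url()
print(url)
```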
articles/ai-services/computer-vision/how-to/use-large-scale.md

Lines changed: 7 additions & 7 deletions

@@ -92,13 +92,13 @@ private static async Task TrainLargeFaceList(
     int timeIntervalInMilliseconds = 1000)
 {
     // Trigger a train call.
-    await FaceClient.LargeTrainLargeFaceListAsync(largeFaceListId);
+    await faceClient.LargeFaceList.TrainAsync(largeFaceListId);

     // Wait for training finish.
     while (true)
     {
-        Task.Delay(timeIntervalInMilliseconds).Wait();
-        var status = await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
+        await Task.Delay(timeIntervalInMilliseconds);
+        var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);

         if (status.Status == Status.Running)
         {

@@ -123,7 +123,7 @@ Previously, a typical use of **FaceList** with added faces and **FindSimilar** l
 const string FaceListId = "myfacelistid_001";
 const string FaceListName = "MyFaceListDisplayName";
 const string ImageDir = @"/path/to/FaceList/images";
-faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
+await faceClient.FaceList.CreateAsync(FaceListId, FaceListName);

 // Add Faces to the FaceList.
 Parallel.ForEach(

@@ -141,7 +141,7 @@ const string QueryImagePath = @"/path/to/query/image";
 var results = new List<SimilarPersistedFace[]>();
 using (Stream stream = File.OpenRead(QueryImagePath))
 {
-    var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+    var faces = await faceClient.Face.DetectWithStreamAsync(stream);
     foreach (var face in faces)
     {
         results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20));

@@ -156,7 +156,7 @@ When migrating it to **LargeFaceList**, it becomes the following:
 const string LargeFaceListId = "mylargefacelistid_001";
 const string LargeFaceListName = "MyLargeFaceListDisplayName";
 const string ImageDir = @"/path/to/FaceList/images";
-faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
+await faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName);

 // Add Faces to the LargeFaceList.
 Parallel.ForEach(

@@ -178,7 +178,7 @@ const string QueryImagePath = @"/path/to/query/image";
 var results = new List<SimilarPersistedFace[]>();
 using (Stream stream = File.OpenRead(QueryImagePath))
 {
-    var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+    var faces = await faceClient.Face.DetectWithStreamAsync(stream);
     foreach (var face in faces)
     {
         results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId));
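The fixes in this file replace blocking calls (`.Wait()`, `.Result`) with `await` and poll training status with a delay between checks. A minimal Python sketch of that train-then-poll pattern, using a stub client rather than the real Face SDK:

```python
import time

class StubFaceListClient:
    """Stand-in for a LargeFaceList client: training 'succeeds' after a few polls."""
    def __init__(self, polls_until_done: int = 3):
        self._remaining = polls_until_done

    def train(self) -> None:
        # The real call kicks off a long-running training operation.
        pass

    def get_training_status(self) -> str:
        self._remaining -= 1
        return "running" if self._remaining > 0 else "succeeded"

def train_and_wait(client, interval_s: float = 0.01, timeout_s: float = 5.0) -> str:
    """Trigger training, then sleep between status checks instead of busy-waiting."""
    client.train()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        time.sleep(interval_s)  # analogous to `await Task.Delay(...)` in the C# loop
        status = client.get_training_status()
        if status != "running":
            return status
    raise TimeoutError("training did not finish in time")

print(train_and_wait(StubFaceListClient()))
```

The stub class and its method names are illustrative only; the pattern (non-blocking delay inside a status loop, with a timeout) is what carries over to the SDK code.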

articles/ai-services/computer-vision/how-to/use-persondirectory.md

Lines changed: 2 additions & 5 deletions

@@ -56,7 +56,7 @@ var client = new HttpClient();
 // Request headers
 client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");

-var addPersonUri = "https:// {endpoint}/face/v1.0-preview/persons";
+var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";

 HttpResponseMessage response;

@@ -113,10 +113,7 @@ Stopwatch s = Stopwatch.StartNew();
 string status = "notstarted";
 do
 {
-    if (status == "succeeded")
-    {
-        await Task.Delay(500);
-    }
+    await Task.Delay(500);

     var operationResponseMessage = await client.GetAsync(operationLocation);
articles/ai-services/content-safety/overview.md

Lines changed: 1 addition & 2 deletions

@@ -134,8 +134,7 @@ Feel free to [contact us](mailto:[email protected]) if you need

 ### Query rates

-#### Moderation APIs
-| Pricing Tier | Requests per 10 seconds |
+| Pricing Tier | Requests per 10 seconds |
 | :----------- | :--------------------- |
 | F0 | 1000 |
 | S0 | 1000 |

articles/ai-services/content-safety/quickstart-groundedness.md

Lines changed: 9 additions & 10 deletions

@@ -18,8 +18,8 @@ Follow this guide to use Azure AI Content Safety Groundedness detection to check
 ## Prerequisites

 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.
-* The resource takes a few minutes to deploy. After it does, go to the new resource. In the left pane, under **Resource Management**, select **API Keys and Endpoints**. Copy one of the subscription key values and endpoint to a temporary location for later use.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US, East US2, West US, Sweden Central), and supported pricing tier. Then select **Create**.
+* The resource takes a few minutes to deploy. After it does, go to the new resource. In the left pane, under **Resource Management**, select **API Keys and Endpoints**. Copy one of the subscription key values and endpoint to a temporary location for later use.
 * (Optional) If you want to use the _reasoning_ feature, create an Azure OpenAI Service resource with a GPT model deployed.
 * [cURL](https://curl.haxx.se/) or [Python](https://www.python.org/downloads/) installed.

@@ -55,7 +55,7 @@ This section walks through a sample request with cURL. Paste the command below i
 }'
 ```

-1. Open a command prompt and run the cURL command.
+Open a command prompt and run the cURL command.


 #### [Python](#tab/python)

@@ -132,7 +132,7 @@ The parameters in the request body are defined in this table:
 | - `query` | (Optional) This represents the question in a QnA task. Character limit: 7,500. | String |
 | **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String |
 | **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array |
-| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI GPT-4 Turbo resources to provide an explanation. Be careful: using reasoning increases the processing time.| Boolean |
+| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI GPT-4 Turbo (1106-preview) resources to provide an explanation. Be careful: using reasoning increases the processing time.| Boolean |

 ### Interpret the API response

@@ -161,14 +161,13 @@ The JSON objects in the output are defined here:

 ## Check groundedness with reasoning

-The Groundedness detection API provides the option to include _reasoning_ in the API response. With reasoning enabled, the response includes a `"reasoning"` field that details specific instances and explanations for any detected ungroundedness. Be careful: using reasoning increases the processing time and incurs extra fees.
-
+The Groundedness detection API provides the option to include _reasoning_ in the API response. With reasoning enabled, the response includes a `"reasoning"` field that details specific instances and explanations for any detected ungroundedness.
 ### Bring your own GPT deployment

 > [!TIP]
-> At the moment, we only support **Azure OpenAI GPT-4 Turbo** resources and do not support other GPT types. Your GPT-4 Turbo resources can be deployed in any region; however, we recommend that they be located in the same region as the content safety resources to minimize potential latency.
+> At the moment, we only support **Azure OpenAI GPT-4 Turbo (1106-preview)** resources and do not support other GPT types. You have the flexibility to deploy your GPT-4 Turbo (1106-preview) resources in any region. However, to minimize potential latency and avoid any geographical boundary data privacy and risk concerns, we recommend situating them in the same region as your content safety resources. For comprehensive details on data privacy, please refer to the [Data, privacy and security guidelines for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy) and [Data, privacy, and security for Azure AI Content Safety](/legal/cognitive-services/content-safety/data-privacy?context=%2Fazure%2Fai-services%2Fcontent-safety%2Fcontext%2Fcontext).

-In order to use your Azure OpenAI GPT4-Turbo resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:
+In order to use your Azure OpenAI GPT4-Turbo (1106-preview) resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:

 [!INCLUDE [openai-account-access](includes/openai-account-access.md)]

@@ -281,8 +280,8 @@ The parameters in the request body are defined in this table:
 | **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String |
 | **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array |
 | **reasoning** | (Optional) Set to `true`, the service uses Azure OpenAI resources to provide an explanation. Be careful: using reasoning increases the processing time and incurs extra fees.| Boolean |
-| **llmResource** | (Required) If you want to use your own Azure OpenAI GPT4-Turbo resource to enable reasoning, add this field and include the subfields for the resources used. | String |
-| - `resourceType `| Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. We only support Azure OpenAI GPT-4 Turbo resources and do not support other GPT types. Your GPT-4 Turbo resources can be deployed in any region; however, we recommend that they be located in the same region as the content safety resources to minimize potential latency. | Enum|
+| **llmResource** | (Required) If you want to use your own Azure OpenAI GPT4-Turbo (1106-preview) resource to enable reasoning, add this field and include the subfields for the resources used. | String |
+| - `resourceType `| Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. We only support Azure OpenAI GPT-4 Turbo (1106-preview) resources and do not support other GPT types. | Enum|
 | - `azureOpenAIEndpoint `| Your endpoint URL for Azure OpenAI service. | String |
 | - `azureOpenAIDeploymentName` | The name of the specific GPT deployment to use. | String|

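As a hedged illustration of the request-body table in this file's diff, the Python sketch below assembles a groundedness-check payload that brings your own GPT-4 Turbo (1106-preview) deployment for reasoning. Only fields named in the table are used; the endpoint and deployment name are placeholders you would replace with your own, and nothing is sent over the network.

```python
import json

def build_groundedness_request(text: str, sources: list, reasoning: bool = True) -> str:
    """Assemble the JSON request body; llmResource is only needed when reasoning is on."""
    body = {
        "text": text,                 # LLM output to check (character limit: 7,500)
        "groundingSources": sources,  # up to 55,000 characters analyzed per request
        "reasoning": reasoning,       # defaults to false in the service
    }
    if reasoning:
        body["llmResource"] = {
            "resourceType": "AzureOpenAI",  # currently the only allowed type
            "azureOpenAIEndpoint": "https://<your-openai>.openai.azure.com/",  # placeholder
            "azureOpenAIDeploymentName": "<your-gpt4-turbo-deployment>",       # placeholder
        }
    return json.dumps(body)

payload = build_groundedness_request(
    "The protagonist's name is Alice.",
    ["The story follows Bob through the city."],
)
print(payload)
```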
articles/ai-services/speech-service/toc.yml

Lines changed: 4 additions & 0 deletions

@@ -572,6 +572,10 @@ items:
 - name: Text to speech REST API
   href: rest-text-to-speech.md
   displayName: reference, tts output, output format
+- name: Custom voice REST API
+  href: /rest/api/speechapi/operation-groups?view=rest-speechapi-2023-12-01-preview&preserve-view=true
+- name: Batch synthesis REST API
+  href: /rest/api/batchtexttospeech/operation-groups?view=rest-batchtexttospeech-2024-04-01&preserve-view=true
 - name: Speaker Recognition REST API
   href: /rest/api/speakerrecognition/
   displayName: reference
