Commit e399efe

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into migrate-server-articles-2

2 parents: c6c170c + 6c92e20
File tree: 412 files changed, +6663 -2918 lines


.openpublishing.redirection.json

Lines changed: 11 additions & 1 deletion
@@ -60,6 +60,11 @@
       "redirect_url": "/previous-versions/azure/storage/queues/storage-ruby-how-to-use-queue-storage",
       "redirect_document_id": false
     },
+    {
+      "source_path": "articles/storage-actions/index.yml",
+      "redirect_url": "/azure/storage-actions/storage-tasks/",
+      "redirect_document_id": false
+    },
     {
       "source_path": "articles/storage/queues/storage-php-how-to-use-queues.md",
       "redirect_url": "/previous-versions/azure/storage/queues/storage-php-how-to-use-queues",
@@ -11799,10 +11804,15 @@
       "redirect_document_id": false

     },
+    {
+      "source_path": "articles/external-attack-surface-management/labeling-inventory-assets.md",
+      "redirect_URL": "/azure/external-attack-surface-management/modifying-inventory-assets",
+      "redirect_document_id": true
+    },
     {
       "source_path_from_root": "/articles/azure-health-insights/response-info.md",
       "redirect_url": "/azure/azure-health-insights/overview",
       "redirect_document_id": false
     }
   ]
-}
+}
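Note that the entry added in the second hunk uses `redirect_URL` (uppercase), while every sibling entry uses lowercase `redirect_url`. A small lint sketch can catch this kind of casing slip before merge; this is a hypothetical checker, not tooling from the repo, and it assumes the file's top-level array is keyed `redirections`:

```python
import json

# Keys every redirect entry is expected to carry, judging by the entries shown in the diff.
REQUIRED = {"redirect_url", "redirect_document_id"}
SOURCE_KEYS = {"source_path", "source_path_from_root"}

def lint_redirects(raw: str) -> list:
    """Return human-readable problems found in a redirection JSON blob."""
    problems = []
    for i, entry in enumerate(json.loads(raw)["redirections"]):
        keys = set(entry)
        lowered = {k.lower() for k in keys}
        # Catch casing slips such as "redirect_URL" vs. "redirect_url".
        for k in keys:
            if k != k.lower():
                problems.append(f"entry {i}: key {k!r} is not lowercase")
        if not SOURCE_KEYS & lowered:
            problems.append(f"entry {i}: missing a source_path key")
        if not REQUIRED <= lowered:
            problems.append(f"entry {i}: missing one of {sorted(REQUIRED)}")
    return problems

# The two entries added by this commit; the second is flagged for its casing.
sample = """{"redirections": [
  {"source_path": "articles/storage-actions/index.yml",
   "redirect_url": "/azure/storage-actions/storage-tasks/",
   "redirect_document_id": false},
  {"source_path": "articles/external-attack-surface-management/labeling-inventory-assets.md",
   "redirect_URL": "/azure/external-attack-surface-management/modifying-inventory-assets",
   "redirect_document_id": true}
]}"""

print(lint_redirects(sample))
```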

articles/active-directory-b2c/whats-new-docs.md

Lines changed: 8 additions & 7 deletions
@@ -19,6 +19,14 @@ manager: CelesteDG

 Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).

+## January 2024
+
+### Updated articles
+
+- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md) - Updated Nok Nok instructions
+- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Updated Transmit Security instructions
+- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md) - Updated claim resolvers and user journey
+
 ## December 2023

 ### Updated articles
@@ -46,13 +54,6 @@ Welcome to what's new in Azure Active Directory B2C documentation. This article
 - [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
 - [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates

-## October 2023
-
-### Updated articles
-
-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md) - Editorial updates
-- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
-- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md) - Added breaking change on script tags
articles/ai-services/.openpublishing.redirection.cognitive-services.json

Lines changed: 1 addition & 1 deletion
@@ -5125,7 +5125,7 @@
     },
     {
       "source_path_from_root": "/articles/cognitive-services/text-analytics/migration-guide.md",
-      "redirect_url": "/azure/ai-services/language-service/concepts/migrate-from-text-analytics-v2",
+      "redirect_url": "/azure/ai-services/language-service/concepts/migrate",
       "redirect_document_id": false
     },
     {

articles/ai-services/computer-vision/Tutorials/liveness.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ The liveness detection solution successfully defends against a variety of spoof
 - Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
 - You need the key and endpoint from the resource you create to connect your application to the Face service. You'll paste your key and endpoint into the code later in the quickstart.
 - You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-- Access to the Azure AI Vision SDK for mobile (IOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
+- Access to the Azure AI Vision Face Client SDK for mobile (IOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.

 ## Perform liveness detection

@@ -222,7 +222,7 @@ The high-level steps involved in liveness with verification orchestration are il

 ```json
 Request:
-curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal' \
+curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/3847ffd3-4657-4e6c-870c-8e20de52f567' \
 --header 'Content-Type: multipart/form-data' \
 --header 'apim-recognition-model-preview-1904: true' \
 --header 'Authorization: Bearer.<session-authorization-token> \
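The change in this second hunk scopes the request to a session by appending `/sessions/<session-id>` to the path. A minimal sketch of how that URL is assembled (the helper name and endpoint are hypothetical placeholders; the session id is the example value from the diff):

```python
def liveness_with_verify_url(endpoint: str, session_id: str) -> str:
    """Build the session-scoped detectLivenessWithVerify URL shown in the curl example."""
    base = endpoint.rstrip("/")  # tolerate a trailing slash on the endpoint
    return (f"{base}/face/v1.1-preview.1/detectlivenesswithverify/"
            f"singlemodal/sessions/{session_id}")

# Placeholder endpoint; substitute the endpoint of your own Face resource.
url = liveness_with_verify_url("https://contoso.cognitiveservices.azure.com",
                               "3847ffd3-4657-4e6c-870c-8e20de52f567")
print(url)
```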

articles/ai-services/computer-vision/concept-face-detection.md

Lines changed: 3 additions & 3 deletions
@@ -19,6 +19,9 @@ ms.author: pafarley

 [!INCLUDE [Gate notice](./includes/identity-gate-notice.md)]

+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data. Please refrain from using these attributes for anti-spoofing. Instead, we recommend using Face Liveness detection. For more information, please refer to [Tutorial: Detect liveness in faces](/azure/ai-services/computer-vision/tutorials/liveness).
+
 This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.

 You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
@@ -67,9 +70,6 @@ Attributes are a set of features that can optionally be detected by the [Face -
 >[!NOTE]
 > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.

-> [!IMPORTANT]
-> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
-
 ## Input data

 Use the following tips to make sure that your input images give the most accurate detection results:

articles/ai-services/computer-vision/how-to/use-headpose.md

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,9 @@ ms.custom:

 In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.

+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data. Please refrain from using these attributes for anti-spoofing. Instead, we recommend using Face Liveness detection. For more information, please refer to [Tutorial: Detect liveness in faces](/azure/ai-services/computer-vision/tutorials/liveness).
+
 ## Rotate the face rectangle

 The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.

articles/ai-services/computer-vision/toc.yml

Lines changed: 3 additions & 3 deletions
@@ -374,10 +374,10 @@ items:
       href: /python/api/overview/azure/cognitiveservices-vision-face-readme
   - name: Client-side
     items:
-    - name: Java (Android)
-      href: https://aka.ms/liveness-sdk-java
+    - name: Kotlin (Android)
+      href: https://aka.ms/azure-ai-vision-face-liveness-client-sdk-android-api-reference
     - name: Swift (iOS)
-      href: https://aka.ms/liveness-sdk-ios
+      href: https://aka.ms/azure-ai-vision-face-liveness-client-sdk-ios-api-reference
 - name: Spatial Analysis
   items:
   - name: Spatial Analysis overview

articles/ai-services/document-intelligence/faq.yml

Lines changed: 24 additions & 3 deletions
@@ -460,11 +460,32 @@ sections:

       - You need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/) and subscription with at least a **Reader** role to access Document Intelligence Studio.

-      - For **document analysis and prebuilt models**, you need full access—**Cognitive Services User** role—to at least one [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource. Once you access the model analyze page, you can change the endpoint and key to access other resources, if needed.
-
-      - For **custom models**, you can either use a **Cognitive Services User** role, or use the endpoint and key of a [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to create a project. You also need to have **Storage Blob Data Contributor** role to access to at least one blob storage account.
-
-      - For more information, *see* [Microsoft Entra built-in roles](../../role-based-access-control/built-in-roles.md).
+      - For document analysis and prebuilt models, here are the role requirements for user scenarios.
+
+        - Basic
+
+          - **Cognitive Services User**: you need this role to [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to enter the analyze page.
+
+        - Advanced
+
+          - **Contributor**: you need this role to create resource group or Document Intelligence resource.
+
+      - For custom model projects, here are the role requirements for user scenarios.
+
+        - Basic
+
+          - **Cognitive Services User**: you need this role to [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource to train custom model or analyze with trained models.
+
+          - **Storage Blob Data Contributor**: you need this role to storage account to create project and label data.
+
+        - Advanced
+
+          - **Storage Account Contributor**: you need this role to the storage account to setup CORS settings (this is one time effort if the same storage account is reused).
+
+          - **Contributor**: you need this role to create resource group and resources.
+
+      - For more information, *see* [Microsoft Entra built-in roles](../../role-based-access-control/built-in-roles.md) and **Azure role assignments** sections in [Document Intelligence Studio](quickstarts/try-document-intelligence-studio.md) page.

   - question: |
       I have multiple pages in a document. Why are there only two pages analyzed in Document Intelligence Studio?
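The added FAQ answer splits role requirements by scenario and tier. Restated as a small lookup table for quick reference; the scenario and tier labels are informal names chosen here, not API values:

```python
# Role requirements from the updated FAQ answer, keyed by (scenario, tier).
STUDIO_ROLES = {
    ("document-analysis", "basic"): ["Cognitive Services User"],
    ("document-analysis", "advanced"): ["Contributor"],
    ("custom-model", "basic"): ["Cognitive Services User",
                                "Storage Blob Data Contributor"],
    ("custom-model", "advanced"): ["Storage Account Contributor",
                                   "Contributor"],
}

def required_roles(scenario: str, tier: str) -> list:
    """Look up the Azure roles the FAQ lists for a given Studio scenario."""
    return STUDIO_ROLES.get((scenario, tier), [])

print(required_roles("custom-model", "basic"))
```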
