articles/active-directory/authentication/concept-fido2-hardware-vendor.md (0 additions, 2 deletions)
```diff
@@ -29,8 +29,6 @@ You can become a Microsoft-compatible FIDO2 security key vendor through the foll
 - Receive an overview of the device from the vendor
 - Microsoft will share our test scripts with you. Our engineering team will be able to answer questions if you have any specific needs.
 - You will complete and send all passed results to Microsoft Engineering team
-- Once Microsoft confirms, you will send multiple hardware/solution samples of each device to Microsoft Engineering team
-- Upon receipt Microsoft Engineering team will conduct test script verification and user experience flow
 
 4. Upon successful passing of all tests by Microsoft Engineering team, Microsoft will confirm vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/).
 5. Microsoft will add your FIDO2 Security Key on Azure AD backend and to our list of approved FIDO2 vendors.
```
```diff
 title: 'Quickstart: Direct web traffic using Bicep'
+titleSuffix: Azure Application Gateway
+description: In this quickstart, you learn how to use Bicep to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool.
 - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/)
-[**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses) : one for the application gateway, and two for the virtual machines.
```
articles/applied-ai-services/form-recognizer/includes/input-requirements.md (3 additions, 5 deletions)
```diff
@@ -3,19 +3,17 @@ author: laujan
 ms.service: applied-ai-services
 ms.subservice: forms-recognizer
 ms.topic: include
-ms.date: 09/22/2021
+ms.date: 04/14/2022
 ms.author: lajanuar
 ms.custom: ignite-fall-2021
 ---
+<!-- markdownlint-disable MD041 -->
 
 * For best results, provide one clear photo or high-quality scan per document.
 * Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
 * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
 * The file size must be less than 50 MB.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
 * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
 * The total size of the training data is 500 pages or less.
 * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
-  * Data must contain keys and values.
-  * Keys must appear above or to the left of the values; they can't appear below or to the right.
```
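The numeric limits in the requirements list above can be captured in a small pre-flight check. The sketch below is illustrative only — the function and constant names are not part of any Form Recognizer SDK:

```python
# Pre-flight validation of the documented input limits:
# file size < 50 MB, image dimensions 50x50 to 10,000x10,000 px,
# and at most 2000 pages per PDF/TIFF.
MAX_FILE_BYTES = 50 * 1024 * 1024
MIN_DIM, MAX_DIM = 50, 10_000
MAX_PAGES = 2_000

def validate_input(file_bytes: int, width_px: int, height_px: int, pages: int = 1) -> list[str]:
    """Return the list of violated requirements (an empty list means the input is acceptable)."""
    problems = []
    if file_bytes >= MAX_FILE_BYTES:
        problems.append("file size must be less than 50 MB")
    if not (MIN_DIM <= width_px <= MAX_DIM and MIN_DIM <= height_px <= MAX_DIM):
        problems.append("image dimensions must be between 50 x 50 and 10,000 x 10,000 pixels")
    if pages > MAX_PAGES:
        problems.append("PDF/TIFF input is limited to 2000 pages")
    return problems
```

Running such a check client-side avoids a round trip to the service for inputs that would be rejected anyway.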
articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md (21 additions, 16 deletions)
```diff
@@ -19,7 +19,7 @@ keywords: document processing
 <!-- markdownlint-disable MD029 -->
 # Get started with the Form Recognizer Sample Labeling tool
 
-Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
+Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
 
 The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR) services:
@@ -31,7 +31,7 @@ The Form Recognizer Sample Labeling tool is an open source tool that enables you
 
 ## Prerequisites
 
-You will need the following to get started:
+You'll need the following to get started:
 
 * An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
 
@@ -57,11 +57,11 @@ Form Recognizer offers several prebuilt models to choose from. Each model has it
 
 1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
 
-1. On the sample tool home page select **Use prebuilt model to get data**.
+1. On the sample tool home page, select **Use prebuilt model to get data**.
 
    :::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Analyze results of Form Recognizer Layout":::
 
-1. Select the **Form Type**your would like to analyze from the dropdown window.
+1. Select the **Form Type** to analyze from the dropdown window.
 
 1. Choose a URL for the file you would like to analyze from the below options:
 
@@ -97,7 +97,7 @@ Azure the Form Recognizer Layout API extracts text, tables, selection marks, and
 
 1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
 
-1. On the sample tool home page select **Use Layout to get text, tables and selection marks**.
+1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
 
   :::image type="content" source="../media/label-tool/layout-1.jpg" alt-text="Connection settings for Layout Form Recognizer tool.":::
 
```
```diff
@@ -130,31 +130,36 @@ Train a custom model to analyze and extract data from forms and documents specif
 
 * Configure CORS
 
-[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you will need access to the CORS blade of your storage account.
-
-   :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
 
 1. Select the CORS blade for the storage account.
 
+   :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+
 1. Start by creating a new CORS entry in the Blob service.
 
-1. Set the **Allowed origins** to **https://fott-2-1.azurewebsites.net**.
+1. Set the **Allowed origins** to **<https://fott-2-1.azurewebsites.net>**.
+
+   :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+   > [!TIP]
+   > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
 
 1. Select all the available 8 options for **Allowed methods**.
 
 1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
 
 1. Set the **Max Age** to 120 seconds or any acceptable value.
 
-1.Click the save button at the top of the page to save the changes.
+1. Select the save button at the top of the page to save the changes.
 
 CORS should now be configured to use the storage account from Form Recognizer Studio.
 
 ### Use the Sample Labeling tool
 
 1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
 
-1. On the sample tool home page select **Use custom form to train a model with labels and get key-value pairs**.
+1. On the sample tool home page, select **Use custom form to train a model with labels and get key-value pairs**.
 
    :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Train a custom model.":::
 
```
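The CORS rule those steps configure (one allowed origin, all eight methods, wildcard headers, a 120-second max age) can be expressed as plain data and evaluated locally. This is only a sketch of the semantics — the dictionary shape and function name are illustrative, not an Azure SDK type:

```python
# The CORS rule described in the steps above, as plain data.
# Field names are illustrative (not an Azure SDK type).
CORS_RULE = {
    "allowed_origins": ["https://fott-2-1.azurewebsites.net"],
    # The 8 method options offered by the Azure portal CORS blade.
    "allowed_methods": ["DELETE", "GET", "HEAD", "MERGE", "OPTIONS", "PATCH", "POST", "PUT"],
    "allowed_headers": ["*"],
    "exposed_headers": ["*"],
    "max_age_in_seconds": 120,
}

def is_allowed(rule: dict, origin: str, method: str) -> bool:
    """Return True if a cross-origin request with this origin and method would pass the rule."""
    origin_ok = "*" in rule["allowed_origins"] or origin in rule["allowed_origins"]
    return origin_ok and method in rule["allowed_methods"]
```

Under this rule, requests from the Sample Labeling tool's origin pass, while any other origin is rejected unless the wildcard tip above is applied.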
```diff
@@ -180,7 +185,7 @@ Configure the **Project Settings** fields with the following values:
 > * **Description**. Add a brief description.
 > * **SAS URL**. Paste the shared access signature (SAS) URL for your Azure Blob Storage container.
 
-* To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and click**Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
+* To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
```
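A quick sanity check on the copied URL can catch the common mistake of grabbing an account-level SAS instead of a container SAS. A minimal sketch using only the Python standard library (the helper name is illustrative; `sv` and `sig` are standard SAS query parameters):

```python
from urllib.parse import urlparse, parse_qs

def looks_like_container_sas(url: str) -> bool:
    """Heuristic check that a URL matches the documented shape:
    https://<storage account>.blob.core.windows.net/<container name>?<SAS value>."""
    parts = urlparse(url)
    has_blob_host = parts.scheme == "https" and parts.netloc.endswith(".blob.core.windows.net")
    # A container SAS targets exactly one path segment: the container name.
    has_container_path = len([s for s in parts.path.split("/") if s]) == 1
    # SAS tokens carry a service version (`sv`) and a signature (`sig`) parameter.
    query = parse_qs(parts.query)
    has_sas_token = "sig" in query and "sv" in query
    return has_blob_host and has_container_path and has_sas_token
```

This only validates the URL's shape, not whether the token's permissions or expiry are correct.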
```diff
@@ -210,13 +215,13 @@ When you create or open a project, the main tag editor window opens. The tag edi
 
 Select **Run OCR on all files** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
 
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. Because the table content is automatically extracted, we will not be labeling the table content, but rather rely on the automated extraction.
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
 
 :::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
 
 ##### Apply labels to text
 
-Next, you will create tags (labels) and apply them to the text elements that you want the model to analyze. Note the sample label data set includes already labeled fields; we will add another field.
+Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the sample label data set includes already labeled fields; we'll add another field.
 
 Use the tags editor pane to create a new tag you'd like to identify:
 
@@ -261,9 +266,9 @@ Choose the Train icon on the left pane to open the Training page. Then select th
 
 #### Analyze a custom form
 
-1. Select the **Analyze** (light bulb) icon on the left to test your model.
+1. Select the **Analyze** (light bulb) icon on the left to test your model.
 
-1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
+1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
 
 1. Choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
```