articles/active-directory/architecture/architecture-icons.md (+2 −6)
@@ -41,16 +41,12 @@ Helping our customers design and architect new solutions is core to the Microsof
Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.
> [I agree to the above terms. Download icons.](https://download.microsoft.com/download/a/4/2/a4289cad-4eaf-4580-87fd-ce999a601516/Microsoft-Entra-architecture-icons.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci)
articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md (+1 −1)
@@ -77,7 +77,7 @@ The following languages are supported in the customer tenant:
- Spanish (Spain)
- Swedish (Sweden)
- Thai (Thailand)
- Turkish (Türkiye)
6. Customize the elements on the **Basics**, **Layout**, **Header**, **Footer**, **Sign-in form**, and **Text** tabs. For detailed instructions, see [Customize the branding and end-user experience](how-to-customize-branding-customers.md).
articles/active-directory/governance/understanding-lifecycle-workflows.md (+1 −1)
@@ -171,7 +171,7 @@ For a detailed guide on setting the execution conditions for a workflow, see: [C
While newly created workflows are enabled by default, scheduling is an option that must be enabled manually. To verify whether the workflow is scheduled, you can view the **Scheduled** column.
Once scheduling is enabled, the workflow is evaluated at the interval set in your workflow settings (default of three hours) to determine whether it should run based on the execution conditions.
        source: ${FILE_MOUNT_PATH} # path to your local folder
        target: /onprem_folder
      - type: bind
        source: ${DB_MOUNT_PATH} # path to your local folder
        target: /onprem_db
    ports:
      - "5001:5001"
    user: "1000:1000" # echo $(id -u):$(id -g)
```
@@ -1051,7 +1051,7 @@ http {
2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
articles/ai-services/document-intelligence/how-to-guides/includes/v2-1/rest-api.md (+10 −11)
@@ -34,7 +34,6 @@ ms.author: lajanuar
* A URL for an **image of an invoice**. You can use a [sample document](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf) for this quickstart.
* A URL for an **image of an ID document**. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
## Analyze layout
You can use Document Intelligence to analyze and extract tables, selection marks, text, and structure in documents, without needing to train a model. For more information about layout extraction, see the [Layout conceptual guide](../../../concept-layout.md). Before you run the command, make these changes:
You receive a `202 (Success)` response that includes a read-only **Operation-Location** header. The value of this header contains a `resultID` that you can query to get the status of the asynchronous operation and retrieve the results using a GET request with your same resource subscription key:
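As a minimal sketch, the result ID is the last path segment of the **Operation-Location** value; the header value below is a made-up example, not actual service output:

```python
# Hypothetical Operation-Location header value returned by the Analyze Layout call.
# The trailing path segment is the result ID used to poll for results.
operation_location = (
    "https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1"
    "/layout/analyzeResults/3f2504e0-4f89-11d3-9a0c-0305e82c3301"
)

# Extract the last path segment as the result ID.
result_id = operation_location.rstrip("/").rsplit("/", 1)[-1]
print(result_id)  # 3f2504e0-4f89-11d3-9a0c-0305e82c3301
```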
After you've called the **[Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync)** API, poll the **[Get Analyze Layout Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
1. Replace `{key}` with the key you copied from the previous step.
@@ -83,7 +82,7 @@ You receive a `200 (success)` response with JSON content.
See the following invoice image and its corresponding JSON output.
* The `"readResults"` node contains every line of text with its respective bounding box placement on the page.
* The `selectionMarks` node shows every selection mark (checkbox, radio mark) and whether its status is `selected` or `unselected`.
* The `"pageResults"` section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
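To illustrate, the cells in a `pageResults` table can be rebuilt into a grid from their row and column indices. The JSON fragment below is hand-written to match the documented shape, not actual service output:

```python
import json

# Hand-written fragment shaped like the v2.1 "pageResults" section (illustrative only):
# one 2x2 table whose cells carry rowIndex/columnIndex placement and text.
response = json.loads("""
{
  "pageResults": [
    {
      "page": 1,
      "tables": [
        {
          "rows": 2,
          "columns": 2,
          "cells": [
            {"rowIndex": 0, "columnIndex": 0, "text": "Item"},
            {"rowIndex": 0, "columnIndex": 1, "text": "Total"},
            {"rowIndex": 1, "columnIndex": 0, "text": "Consulting"},
            {"rowIndex": 1, "columnIndex": 1, "text": "$1,500.00"}
          ]
        }
      ]
    }
  ]
}
""")

# Rebuild each table as a 2D grid keyed by the cell indices.
for page in response["pageResults"]:
    for table in page["tables"]:
        grid = [[""] * table["columns"] for _ in range(table["rows"])]
        for cell in table["cells"]:
            grid[cell["rowIndex"]][cell["columnIndex"]] = cell["text"]
        for row in grid:
            print(" | ".join(row))
```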
:::image type="content" source="../../../media/contoso-invoice.png" alt-text="Screenshot of Contoso project statement document with a table.":::
@@ -213,7 +212,7 @@ This output has been shortened for simplicity. See the [full sample output on Gi
## Analyze receipts
This section demonstrates how to analyze and extract common fields from US receipts, using a pretrained receipt model. For more information about receipt analysis, see the [Receipts conceptual guide](../../../concept-receipt.md). To start analyzing a receipt, call the **[Analyze Receipt](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync)** API using the cURL command. Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
1. Replace `{your receipt URL}` with the URL address of a receipt image.
@@ -255,7 +254,7 @@ curl -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyzeResu
You receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` is `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
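The polling loop described above can be sketched as follows; `fetch_status` is a stand-in stub for the GET call to the result endpoint, returning canned statuses so the sketch is self-contained:

```python
import time

# Stub for the GET call to the analyzeResults endpoint; yields canned statuses
# to simulate an operation that completes on the third poll.
_statuses = iter(["notStarted", "running", "succeeded"])

def fetch_status():
    return {"status": next(_statuses)}

result = fetch_status()
while result["status"] in ("notStarted", "running"):
    time.sleep(1)  # wait at least one second between calls, per the guidance
    result = fetch_status()

print(result["status"])  # succeeded
```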
The `"readResults"` node contains all of the recognized text (if you set the optional *includeTextDetails* parameter to `true`). The response organizes text by page, then by line, then by individual words. The `"documentResults"` node contains the receipt-specific values that the model discovered. The `"documentResults"` node is where you find useful key/value pairs like the tax, total, merchant address, and so on.
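As a sketch of reading those key/value pairs, the fragment below is hand-written to match the documented `documentResults` shape (illustrative field names and values, not actual service output):

```python
import json

# Hand-written fragment shaped like the receipt model's "documentResults" node.
response = json.loads("""
{
  "documentResults": [
    {
      "fields": {
        "MerchantName": {"type": "string", "valueString": "Contoso", "confidence": 0.97},
        "Total": {"type": "number", "valueNumber": 14.5, "confidence": 0.99}
      }
    }
  ]
}
""")

# Each field carries a typed value plus a confidence score.
fields = response["documentResults"][0]["fields"]
for name, field in fields.items():
    value = field.get("valueString", field.get("valueNumber"))
    print(f"{name}: {value} (confidence {field['confidence']})")
```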
See the following receipt image and its corresponding JSON output.
@@ -592,7 +591,7 @@ This output has been shortened for readability. See the [full sample output on G
## Analyze business cards
This section demonstrates how to analyze and extract common fields from English business cards, using a pretrained model. For more information about business card analysis, see the [Business cards conceptual guide](../../../concept-business-card.md). To start analyzing a business card, you call the **[Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)** API using the cURL command. Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
1. Replace `{your business card URL}` with the URL address of a business card image.
@@ -633,7 +632,7 @@ curl -v -X GET https://{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/anal
You receive a `200 (Success)` response with JSON output.
The `"readResults"` node contains all of the recognized text. The response organizes text by page, then by line, then by individual words. The `"documentResults"` node contains the business-card-specific values that the model discovered. The `"documentResults"` node is where you find useful contact information like the company name, first name, last name, phone number, and so on.
To train with labels, you need to have special label information files (`<filename>.pdf.labels.json`) in your blob storage container alongside the training documents. The [Document Intelligence Sample Labeling tool](../../../label-tool.md) provides a UI to help you create these label files. Once you have them, you can call the **[Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync)** API, with the `"useLabelFile"` parameter set to `true` in the JSON body.
Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
1. Replace `{key}` with the key you copied from the previous step.
1. Replace `{SAS URL}` with the Azure Blob storage container's shared access signature (SAS) URL.
* To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete**, and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
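A sketch of the JSON request body for the Train Custom Model call with `"useLabelFile"` set to `true`; the `sourceFilter` values shown are illustrative defaults, not required settings:

```json
{
  "source": "{SAS URL}",
  "sourceFilter": {
    "prefix": "",
    "includeSubFolders": false
  },
  "useLabelFile": true
}
```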