
Commit 158c92f

Merge branch 'main' into release-umc-rebranding
2 parents a0f07c1 + a4b61b4 commit 158c92f

File tree

151 files changed: +2145, -540 lines changed


.openpublishing.publish.config.json

Lines changed: 6 additions & 0 deletions
```diff
@@ -416,6 +416,12 @@
       "branch": "main",
       "branch_mapping": {}
     },
+    {
+      "path_to_root": "azureml-examples-archive",
+      "url": "https://github.com/azure/azureml-examples",
+      "branch": "v1-archive",
+      "branch_mapping": {}
+    },
     {
       "path_to_root": "azureml-examples-batch-pup",
       "url": "https://github.com/azure/azureml-examples",
```

articles/active-directory/authentication/howto-mfa-mfasettings.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -200,7 +200,7 @@ The following table lists more numbers for different countries.
 | Sri Lanka | +94 117750440 |
 | Sweden | +46 701924176 |
 | Taiwan | +886 277515260 |
-| Turkey | +90 8505404893 |
+| Türkiye | +90 8505404893 |
 | Ukraine | +380 443332393 |
 | United Arab Emirates | +971 44015046 |
 | Vietnam | +84 2039990161 |
```

articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -77,7 +77,7 @@ The following languages are supported in the customer tenant:
 - Spanish (Spain)
 - Swedish (Sweden)
 - Thai (Thailand)
-- Turkish (Turkey)
+- Turkish (Türkiye)
 - Ukrainian (Ukraine)
 
 6. Customize the elements on the **Basics**, **Layout**, **Header**, **Footer**, **Sign-in form**, and **Text** tabs. For detailed instructions, see [Customize the branding and end-user experience](how-to-customize-branding-customers.md).
```

articles/ai-services/document-intelligence/containers/install-run.md

Lines changed: 68 additions & 68 deletions
@@ -575,75 +575,75 @@ http {

Each of the 68 removed lines reappears, with identical visible text, among the 68 added lines, so the change appears to be whitespace-only (this view doesn't preserve the original indentation). The re-emitted `docker compose` block, with indentation reconstructed:

```yml
version: '3.3'
services:
  nginx:
    image: nginx:alpine
    container_name: reverseproxy
    volumes:
      - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
    ports:
      - "5000:5000"
  layout:
    container_name: azure-cognitive-service-layout
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
    environment:
      eula: accept
      apikey: ${FORM_RECOGNIZER_KEY}
      billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
      Logging:Console:LogLevel:Default: Information
      SharedRootFolder: /shared
      Mounts:Shared: /shared
      Mounts:Output: /logs
    volumes:
      - type: bind
        source: ${SHARED_MOUNT_PATH}
        target: /shared
      - type: bind
        source: ${OUTPUT_MOUNT_PATH}
        target: /logs
    expose:
      - "5000"

  custom-template:
    container_name: azure-cognitive-service-custom-template
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
    restart: always
    depends_on:
      - layout
    environment:
      AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
      eula: accept
      apikey: ${FORM_RECOGNIZER_KEY}
      billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
      Logging:Console:LogLevel:Default: Information
      SharedRootFolder: /shared
      Mounts:Shared: /shared
      Mounts:Output: /logs
    volumes:
      - type: bind
        source: ${SHARED_MOUNT_PATH}
        target: /shared
      - type: bind
        source: ${OUTPUT_MOUNT_PATH}
        target: /logs
    expose:
      - "5000"

  studio:
    container_name: form-recognizer-studio
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
    environment:
      ONPREM_LOCALFILE_BASEPATH: /onprem_folder
      STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
    volumes:
      - type: bind
        source: ${FILE_MOUNT_PATH} # path to your local folder
        target: /onprem_folder
      - type: bind
        source: ${DB_MOUNT_PATH} # path to your local folder
        target: /onprem_db
    ports:
      - "5001:5001"
    user: "1000:1000" # echo $(id -u):$(id -g)
```

@@ -1051,7 +1051,7 @@ http {

````diff
 2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
 
    ```yml
-   version: '3.3'
+   version: '3.3'
    services:
      nginx:
        image: nginx:alpine
````
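Before running `docker-compose up` against a file like the one in this diff, it can help to confirm that the environment variables the compose file interpolates are actually set; unset variables silently expand to empty strings during substitution. A minimal sketch (the variable list is taken from the compose block above; the check itself is not part of the commit):

```python
import os

# Variables interpolated by the compose file in the diff above.
REQUIRED = [
    "NGINX_CONF_FILE",
    "FORM_RECOGNIZER_KEY",
    "FORM_RECOGNIZER_ENDPOINT_URI",
    "SHARED_MOUNT_PATH",
    "OUTPUT_MOUNT_PATH",
    "FILE_MOUNT_PATH",
    "DB_MOUNT_PATH",
]

def missing_vars(env):
    """Return the required variable names that are unset or empty in env."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    print("missing:", missing_vars(os.environ))
```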

articles/ai-services/document-intelligence/how-to-guides/includes/v2-1/rest-api.md

Lines changed: 10 additions & 11 deletions
```diff
@@ -34,7 +34,6 @@ ms.author: lajanuar
 * A URL for an **image of an invoice**. You can use a [sample document](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf) for this quickstart.
 * A URL for an **image of an ID document**. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png)
 
-
 ## Analyze layout
 
 You can use Document Intelligence to analyze and extract tables, selection marks, text, and structure in documents, without needing to train a model. For more information about layout extraction, see the [Layout conceptual guide](../../../concept-layout.md). Before you run the command, make these changes:
```
```diff
@@ -51,9 +50,9 @@ curl -v -i POST "https://{endpoint}/formrecognizer/v2.1/layout/analyze" -H "Cont
 
 #### Operation-Location
 
-You receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:
+You receive a `202 (Success)` response that includes a read-only **Operation-Location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation and retrieve the results using a GET request with your same resource subscription key:
 
-https://<span></span>cognitiveservice/formrecognizer/v2.1/layout/analyzeResults/**{resultId}**.
+https://<span></span>cognitiveservice/formrecognizer/v2.1/layout/analyzeResults/**{resultId}**.
 
 In the following example, as part of the URL, the string after `analyzeResults/` is the result ID.
 
```
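The result ID described above is simply the last path segment of the `Operation-Location` value, so it can be pulled out with standard URL parsing. A sketch — the sample value below is illustrative, not a real operation:

```python
from urllib.parse import urlparse

def result_id(operation_location: str) -> str:
    """Extract the result ID (final path segment) from an Operation-Location value."""
    path = urlparse(operation_location).path
    # The ID is whatever follows `analyzeResults/` at the end of the path.
    return path.rstrip("/").rsplit("/", 1)[-1]

# Illustrative value; a real header carries your resource's endpoint and ID.
loc = ("https://cognitiveservice/formrecognizer/v2.1/layout/"
       "analyzeResults/123e4567-e89b-12d3-a456-426614174000")
print(result_id(loc))  # → 123e4567-e89b-12d3-a456-426614174000
```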

```diff
@@ -63,7 +62,7 @@ https://cognitiveservice/formrecognizer/v2/layout/analyzeResults/54f0b076-4e38-4
 
 ### Get layout results
 
-After you've called the **[Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync)** API, you call the **[Get Analyze Layout Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+After you've called the **[Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync)** API, poll the **[Get Analyze Layout Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
 
 1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
 1. Replace `{key}` with the key you copied from the previous step.
```
```diff
@@ -83,7 +82,7 @@ You receive a `200 (success)` response with JSON content.
 See the following invoice image and its corresponding JSON output.
 
 * The `"readResults"` node contains every line of text with its respective bounding box placement on the page.
-* The `selectionMarks` node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
+* The `selectionMarks` node shows every selection mark (checkbox, radio mark) and whether its status is `selected` or `unselected`.
 * The `"pageResults"` section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
 
 :::image type="content" source="../../../media/contoso-invoice.png" alt-text="Screenshot of Contoso project statement document with a table.":::
```
```diff
@@ -213,7 +212,7 @@ This output has been shortened for simplicity. See the [full sample output on Gi
 
 ## Analyze receipts
 
-This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the [Receipts conceptual guide](../../../concept-receipt.md). To start analyzing a receipt, call the **[Analyze Receipt](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync)** API using the cURL command. Before you run the command, make these changes:
+This section demonstrates how to analyze and extract common fields from US receipts, using a pretrained receipt model. For more information about receipt analysis, see the [Receipts conceptual guide](../../../concept-receipt.md). To start analyzing a receipt, call the **[Analyze Receipt](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync)** API using the cURL command. Before you run the command, make these changes:
 
 1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
 1. Replace `{your receipt URL}` with the URL address of a receipt image.
```
```diff
@@ -255,7 +254,7 @@ curl -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyzeResu
 
 You receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` is `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
 
-The `"readResults"` node contains all of the recognized text (if you set the optional *includeTextDetails* parameter to `true`). Text is organized by page, then by line, then by individual words. The `"documentResults"` node contains the receipt-specific values that the model discovered. The `"documentResults"` node is where you find useful key/value pairs like the tax, total, merchant address, and so on.
+The `"readResults"` node contains all of the recognized text (if you set the optional *includeTextDetails* parameter to `true`). The response organizes text by page, then by line, then by individual words. The `"documentResults"` node contains the receipt-specific values that the model discovered. The `"documentResults"` node is where you find useful key/value pairs like the tax, total, merchant address, and so on.
 
 See the following receipt image and its corresponding JSON output.
 
```
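The retry behavior described above — keep issuing the GET while `"status"` is `"running"` or `"notStarted"`, pausing at least a second between calls — is a standard polling loop. A sketch in which `fetch_status` is a hypothetical stand-in for the real GET request:

```python
import time

TERMINAL_STATES = {"succeeded", "failed"}

def poll(fetch_status, interval=1.0, max_attempts=30):
    """Call fetch_status() until the status leaves running/notStarted."""
    for _ in range(max_attempts):
        result = fetch_status()  # stands in for GET .../analyzeResults/{resultId}
        if result["status"] in TERMINAL_STATES:
            return result
        time.sleep(interval)  # the doc recommends an interval of one second or more
    raise TimeoutError("operation did not complete within max_attempts polls")

# Simulated status sequence; a real fetch_status would call the service.
responses = iter([{"status": "notStarted"}, {"status": "running"},
                  {"status": "succeeded", "analyzeResult": {}}])
print(poll(lambda: next(responses), interval=0)["status"])  # → succeeded
```

Capping the attempts keeps a script from spinning forever if the operation stalls.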

```diff
@@ -592,7 +591,7 @@ This output has been shortened for readability. See the [full sample output on G
 
 ## Analyze business cards
 
-This section demonstrates how to analyze and extract common fields from English business cards, using a pre-trained model. For more information about business card analysis, see the [Business cards conceptual guide](../../../concept-business-card.md). To start analyzing a business card, you call the **[Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)** API using the cURL command. Before you run the command, make these changes:
+This section demonstrates how to analyze and extract common fields from English business cards, using a pretrained model. For more information about business card analysis, see the [Business cards conceptual guide](../../../concept-business-card.md). To start analyzing a business card, you call the **[Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)** API using the cURL command. Before you run the command, make these changes:
 
 1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
 1. Replace `{your business card URL}` with the URL address of a receipt image.
```
```diff
@@ -633,7 +632,7 @@ curl -v -X GET https://{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/anal
 
 You receive a `200 (Success)` response with JSON output.
 
-The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words. The `"documentResults"` node contains the business-card-specific values that the model discovered. The `"documentResults"` node is where you find useful contact information like the company name, first name, last name, phone number, and so on.
+The `"readResults"` node contains all of the recognized text. The response organizes text by page, then by line, then by individual words. The `"documentResults"` node contains the business-card-specific values that the model discovered. The `"documentResults"` node is where you find useful contact information like the company name, first name, last name, phone number, and so on.
 
 ![A business card from Contoso company](../../../media/business-card-english.jpg)
 
```
```diff
@@ -1218,13 +1217,13 @@ https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1/custom/models/77d
 
 ### Train a model with labels
 
-To train with labels, you need to have special label information files (`\<filename\>.pdf.labels.json`) in your blob storage container alongside the training documents. The [Document Intelligence Sample Labeling tool](../../../label-tool.md) provides a UI to help you create these label files. Once you've them, you can call the **[Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync)** API, with the `"useLabelFile"` parameter set to `true` in the JSON body.
+To train with labels, you need to have special label information files (`\<filename\>.pdf.labels.json`) in your blob storage container alongside the training documents. The [Document Intelligence Sample Labeling tool](../../../label-tool.md) provides a UI to help you create these label files. Once you have them, you can call the **[Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync)** API, with the `"useLabelFile"` parameter set to `true` in the JSON body.
 
 Before you run the command, make these changes:
 
 1. Replace `{endpoint}` with the endpoint that you obtained with your Document Intelligence subscription.
 1. Replace `{key}` with the key you copied from the previous step.
-1. Replace `{SAS URL}` with the Azure Blob storage container's shared access signature (SAS) URL.
+1. Replace `{SAS URL}` with the Azure Blob storage container's shared access signature (SAS) URL.
 
 * To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
 
```
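The expected shape called out above — `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>` — can be checked mechanically before training, which catches the common mistake of copying an account-level SAS with no container segment. A rough sketch with placeholder values:

```python
from urllib.parse import urlparse

def looks_like_container_sas(url: str) -> bool:
    """Rough shape check: blob host, a container path segment, and a SAS query."""
    parts = urlparse(url)
    has_blob_host = (parts.hostname or "").endswith(".blob.core.windows.net")
    has_container = parts.path.strip("/") != ""
    return has_blob_host and has_container and bool(parts.query)

# Placeholder storage account, container, and (much-shortened) SAS value.
sas_url = "https://mystorageacct.blob.core.windows.net/training-data?sp=rwdl&sig=abc123"
print(looks_like_container_sas(sas_url))  # → True
```

This is only a shape check; it doesn't validate the SAS permissions or expiry themselves.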

articles/ai-services/document-intelligence/how-to-guides/includes/v3-0/rest-api.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -77,15 +77,15 @@ curl -i -X POST "%FR_ENDPOINT%formrecognizer/documentModels/{modelID}:analyze?ap
 
 **Enable add-on capabilities
 
-To enable add-on capabilities, use the `features` query parameter in the POST request. There are four add-on capabilities available with the 2023-07-31 (GA) release: *ocr.highResolution*, *ocr.formula*, *ocr.font*, and *queryFields.premium*. To learn more about each of the capabilities, visit the [Add-On Capabilities concept page](../../../concept-accuracy-confidence.md). You can only call the highResolution, formula and font capabilities for the Read and Layout model, and the queryFields capability for the General Documents model. The following example shows how to call the highResolution, formula and font capabilities for the Layout model.
+To enable add-on capabilities, use the `features` query parameter in the POST request. There are four add-on capabilities available with the `2023-07-31` (GA) release: *ocr.highResolution*, *ocr.formula*, *ocr.font*, and *queryFields.premium*. To learn more about each of the capabilities, visit the [Add-On Capabilities concept page](../../../concept-accuracy-confidence.md). You can only call the highResolution, formula and font capabilities for the Read and Layout model, and the queryFields capability for the General Documents model. The following example shows how to call the highResolution, formula and font capabilities for the Layout model.
 
 ```bash
 curl -i -X POST "%FR_ENDPOINT%formrecognizer/documentModels/prebuilt-layout:analyze?features=ocr.highResolution,ocr.formula,ocr.font?api-version=2023-07-31" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: %FR_KEY%" --data-ascii "{'urlSource': '{document-url}'}"
 ```
 
 ### POST response
 
-You receive a `202 (Success)` response that includes an **Operation-location** header. You use the value of this header to retrieve the response results.
+You receive a `202 (Success)` response that includes an **Operation-Location** header. You use the value of this header to retrieve the response results.
 
 :::image type="content" source="../../../media/how-to/rest-get-response.png" alt-text="{alt-text}":::
 
````
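The curl line quoted in the hunk above joins `features` and `api-version` with two `?` separators; building the query string programmatically sidesteps that kind of slip. A sketch with placeholder endpoint and model values:

```python
from urllib.parse import urlencode

endpoint = "https://myresource.cognitiveservices.azure.com/"  # placeholder
model_id = "prebuilt-layout"

# One `?`, parameters joined by `&`; feature names stay comma-separated.
query = urlencode({
    "features": "ocr.highResolution,ocr.formula,ocr.font",
    "api-version": "2023-07-31",
}, safe=",")
url = f"{endpoint}formrecognizer/documentModels/{model_id}:analyze?{query}"
print(url)
```

The `safe=","` argument keeps the comma-separated feature list from being percent-encoded.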

```diff
@@ -162,11 +162,11 @@ After you've called the [**Analyze document**](https://westus.dev.cognitive.micr
 
 Before you run the following command, make these changes:
 
-1. Replace `{POST response}` with the Operation-location header from the [POST response](#post-response).
+1. Replace `{POST response}` with the Operation-Location header from the [POST response](#post-response).
 
 1. Replace `FR_KEY` with the variable name for your environment variable if it differs.
 
-* Replace `{POST response}` with the Operation-location header from the [POST response](#post-response).
+* Replace `{POST response}` with the Operation-Location header from the [POST response](#post-response).
 
 * Replace `FR_KEY` with the variable for your environment variable if it differs from the name in the code.
 
```
