`articles/cognitive-services/form-recognizer/includes/python-custom-analyze.md` (19 additions, 17 deletions)

## Analyze forms for key-value pairs and tables
Next, you'll use your newly trained model to analyze a document and extract key-value pairs and tables from it. Call the **[Analyze Form](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm)** API by running the following code in a new Python script. Before you run the script, make these changes:
1. Replace `<file path>` with the file path of your form (for example, C:\temp\file.pdf). This can also be the URL of a remote file. For this quickstart, you can use the files under the **Test** folder of the [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451).
1. Replace `<model_id>` with the model ID you received in the previous section.
1. Replace `<endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
1. Replace `<file type>` with the file type. Supported types: `application/pdf`, `image/jpeg`, `image/png`, `image/tiff`.
1. Replace `<subscription key>` with your subscription key.
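
The request those replacements feed can be sketched as follows. This is an illustrative sketch, not the quickstart's script: the helper names are invented here, and the route is inferred from the v2.0-preview Analyze Form API linked above.

```python
def build_analyze_request(endpoint, model_id, subscription_key, file_type):
    # Route inferred from the v2.0-preview Analyze Form API linked above.
    url = endpoint + "/formrecognizer/v2.0-preview/custom/models/%s/analyze" % model_id
    headers = {
        "Content-Type": file_type,  # e.g. application/pdf
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    return url, headers

def analyze_form(endpoint, model_id, subscription_key, file_path, file_type):
    import requests  # third-party; pip install requests
    url, headers = build_analyze_request(endpoint, model_id, subscription_key, file_type)
    with open(file_path, "rb") as f:
        resp = requests.post(url, data=f.read(), headers=headers)
    resp.raise_for_status()
    return resp.headers  # the result URL for the next step comes back in a response header
```

For a remote file, you would typically send a JSON body containing the file URL instead of the raw bytes, with a `Content-Type` of `application/json`.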

When you call the **Analyze Form** API, you'll receive a `201 (Success)` response.

Add the following code to the bottom of your Python script. This uses the ID value from the previous call in a new API call to retrieve the analysis results. The **Analyze Form** operation is asynchronous, so this script calls the API at regular intervals until the results are available. We recommend an interval of one second or more.
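
That polling pattern can be sketched as below. The helper names, the retry count, and the doubling backoff are illustrative; only the "poll at intervals of one second or more" guidance comes from the text.

```python
import time

def next_wait(wait_sec, max_wait_sec=60):
    # Double the wait between polls, but never exceed the cap.
    return min(2 * wait_sec, max_wait_sec)

def poll_until_done(get_url, subscription_key, n_tries=15, wait_sec=1):
    # Call the result API repeatedly until it reports a final status.
    import requests  # third-party; pip install requests
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    for _ in range(n_tries):
        resp_json = requests.get(get_url, headers=headers).json()
        status = resp_json.get("status")
        if status == "succeeded":
            return resp_json
        if status == "failed":
            raise RuntimeError("Analysis failed:\n%s" % resp_json)
        time.sleep(wait_sec)
        wait_sec = next_wait(wait_sec)
    raise TimeoutError("Analyze operation did not complete in time.")
```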

`articles/cognitive-services/form-recognizer/quickstarts/curl-receipts.md` (2 additions, 2 deletions)

## Analyze a receipt
To start analyzing a receipt, you call the **[Analyze Receipt](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeReceiptAsync)** API using the cURL command below. Before you run the command, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
1. Replace `<your receipt URL>` with the URL address of a receipt image.
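
If you prefer to issue the same call from Python rather than cURL, a minimal sketch looks like this. The helper names are invented; the route and the `source` body field follow the Analyze Receipt API linked above.

```python
import json

RECEIPT_ROUTE = "/formrecognizer/v2.0-preview/prebuilt/receipt/analyze"

def build_receipt_request(endpoint, subscription_key, receipt_url):
    url = endpoint + RECEIPT_ROUTE
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    body = json.dumps({"source": receipt_url})
    return url, headers, body

def analyze_receipt(endpoint, subscription_key, receipt_url):
    import requests  # third-party; pip install requests
    url, headers, body = build_receipt_request(endpoint, subscription_key, receipt_url)
    resp = requests.post(url, data=body, headers=headers)
    resp.raise_for_status()
    return resp.headers  # the operation URL for the next step is in a response header
```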
After you've called the **Analyze Receipt** API, you call the **[Get Analyze Receipt Result](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/GetAnalyzeReceiptResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
1. Replace `<operationId>` with the operation ID from the previous step.

`articles/cognitive-services/form-recognizer/quickstarts/curl-train-extract.md` (3 additions, 3 deletions)

First, you'll need a set of training data in an Azure Storage blob.

> [!NOTE]
> You can use the labeled data feature to manually label some or all of your training data beforehand. This is a more complex process but results in a better trained model. See the [Train with labels](../overview.md#train-with-labels) section of the overview to learn more about this feature.
To train a Form Recognizer model with the documents in your Azure blob container, call the **[Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/TrainCustomModelAsync)** API by running the following cURL command. Before you run the command, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
1. Replace `<subscription key>` with the subscription key you copied from the previous step.
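
The equivalent request pieces can be sketched in Python as well. The helper name is invented; the route and the `source` body field follow the Train Custom Model API linked above.

```python
import json

def build_train_request(endpoint, subscription_key, sas_url):
    # The POST body carries the blob container's SAS URL as "source".
    url = endpoint + "/formrecognizer/v2.0-preview/custom/models"
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    body = json.dumps({"source": sas_url})
    return url, headers, body
```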

You'll receive a `201 (Success)` response with a **Location** header.

## Get training results
After you've started the train operation, you use a new operation, **[Get Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/GetCustomModel)**, to check the training status. Pass the model ID into this API call:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key.
1. Replace `<subscription key>` with your subscription key.
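
A single status check from Python can be sketched like this. The helper names are invented; the route follows the Get Custom Model API linked above, and the `modelInfo.status` field path is an assumption to verify against that reference.

```python
def build_model_url(endpoint, model_id):
    # GET {endpoint}/formrecognizer/v2.0-preview/custom/models/{modelId}
    return endpoint + "/formrecognizer/v2.0-preview/custom/models/" + model_id

def get_model_status(endpoint, subscription_key, model_id):
    import requests  # third-party; pip install requests
    resp = requests.get(build_model_url(endpoint, model_id),
                        headers={"Ocp-Apim-Subscription-Key": subscription_key})
    resp.raise_for_status()
    # "status" is assumed to live under "modelInfo" in the v2.0-preview response.
    return resp.json()["modelInfo"]["status"]
```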

The `"modelId"` field contains the ID of the model you're training.

## Analyze forms for key-value pairs and tables
Next, you'll use your newly trained model to analyze a document and extract key-value pairs and tables from it. Call the **[Analyze Form](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm)** API by running the following cURL command. Before you run the command, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained from your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
1. Replace `<model ID>` with the model ID that you received in the previous section.

`articles/cognitive-services/form-recognizer/quickstarts/python-labeled-data.md` (33 additions, 22 deletions)

All of these files should occupy the same sub-folder.

You need OCR result files in order for the service to consider the corresponding input files for labeled training. To obtain OCR results for a given source form, follow the steps below:
1. Call the **[Analyze Layout](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeLayoutAsync)** API on the read Layout container with the input file as part of the request body. Save the ID found in the response's **Operation-Location** header.
1. Call the **[Get Analyze Layout Result](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/GetAnalyzeLayoutResult)** API, using the operation ID from the previous step.
1. Get the response and write the contents to a file. For each source form, the corresponding OCR file should have the original file name appended with `.ocr.json`. See the [sample OCR file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/Invoice_1.pdf.ocr.json) for a full example of the OCR JSON output.
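
The naming convention and write step above can be captured in a small helper. This is illustrative, not part of the quickstart; the function names are invented.

```python
import json

def ocr_filename(source_filename):
    # Invoice_1.pdf -> Invoice_1.pdf.ocr.json, as described above.
    return source_filename + ".ocr.json"

def write_ocr_result(source_filename, result_json):
    # Save the layout analysis response next to the source form.
    with open(ocr_filename(source_filename), "w") as f:
        json.dump(result_json, f)
```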

For each source form, the corresponding label file should have the original file name appended with `.labels.json`.

## Train a model using labeled data
To train a model with labeled data, call the **[Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/TrainCustomModelAsync)** API by running the following Python code. Before you run the code, make these changes:
1. Replace `<Endpoint>` with the endpoint URL for your Form Recognizer resource.
1. Replace `<SAS URL>` with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL, open the Microsoft Azure Storage Explorer, right-click your container, and select **Get shared access signature**. Make sure the **Read** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
1. Replace `<Blob folder name>` with the folder name in your blob container where the input data is located. Or, if your data is at the root, leave this blank and remove the `"prefix"` field from the body of the HTTP request.
```python
########### Python Form Recognizer Labeled Async Train #############
import json
from requests import get, post

# ... (endpoint, SAS URL, headers, and the POST call that creates resp
# are elided in this diff) ...

if resp.status_code != 201:
    print("POST model failed (%s):\n%s" % (resp.status_code, json.dumps(resp.json())))
    quit()
print("POST model succeeded:\n%s" % resp.headers)
get_url = resp.headers["location"]
```
## Get training results
After you've started the train operation, you use the returned ID to get the status of the operation. Add the following code to the bottom of your Python script. This uses the ID value from the training call in a new API call. The training operation is asynchronous, so this script calls the API at regular intervals until the training status is completed. We recommend an interval of one second or more.
```python
        # ... (the GET call that fills resp_json and the "ready" success
        # check are unchanged in this diff and elided here) ...
            print("Training failed. Model is invalid:\n%s" % json.dumps(resp_json))
            quit()
        # Training still running. Wait and retry.
        time.sleep(wait_sec)
        n_try += 1
        wait_sec = min(2*wait_sec, max_wait_sec)
    except Exception as e:
        msg = "GET model failed:\n%s" % str(e)
        print(msg)
        quit()
print("Train operation did not complete within the allocated time.")
```
When the training process is completed, you'll receive a `201 (Success)` response with JSON content like the following. The response has been shortened for simplicity.

`articles/cognitive-services/form-recognizer/quickstarts/python-layout.md` (2 additions, 2 deletions)

## Analyze the form layout
To start analyzing the layout, you call the **[Analyze Layout](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeLayoutAsync)** API using the Python script below. Before you run the script, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
1. Replace `<path to your form>` with the path to your local form document.
After you've called the **Analyze Layout** API, you call the **[Get Analyze Layout Result](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/GetAnalyzeLayoutResult)** API to get the status of the operation and the extracted data. Add the following code to the bottom of your Python script. This uses the operation ID value in a new API call. This script calls the API at regular intervals until the results are available. We recommend an interval of one second or more.
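
Once the results arrive, you'll typically want to pull out the extracted tables. A minimal sketch, assuming the tables sit under `analyzeResult.pageResults` in the v2.0-preview layout response (verify against the Get Analyze Layout Result reference linked above); the function name is invented:

```python
def table_shapes(result_json):
    # Collect (rows, columns) for each table the layout analysis found.
    shapes = []
    for page in result_json.get("analyzeResult", {}).get("pageResults", []):
        for table in page.get("tables", []):
            shapes.append((table["rows"], table["columns"]))
    return shapes
```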

`articles/cognitive-services/form-recognizer/quickstarts/python-receipts.md` (2 additions, 2 deletions)

## Analyze a receipt
37
37
38
-
To start analyzing a receipt, you call the **Analyze Receipt** API using the Python script below. Before you run the script, make these changes:
38
+
To start analyzing a receipt, you call the **[Analyze Receipt](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeReceiptAsync)** API using the Python script below. Before you run the script, make these changes:
1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
1. Replace `<your receipt URL>` with the URL address of a receipt image.
After you've called the **Analyze Receipt** API, you call the **[Get Analyze Receipt Result](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/GetAnalyzeReceiptResult)** API to get the status of the operation and the extracted data. Add the following code to the bottom of your Python script. This uses the operation ID value in a new API call. This script calls the API at regular intervals until the results are available. We recommend an interval of one second or more.
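
When the operation succeeds, the recognized receipt fields can be flattened for display. A minimal sketch, assuming the fields sit under `analyzeResult.documentResults[*].fields` in the v2.0-preview receipt response (verify against the Get Analyze Receipt Result reference linked above); the function name is invented:

```python
def receipt_fields(result_json):
    # Flatten the recognized fields to a {field name: text} mapping.
    flat = {}
    for doc in result_json.get("analyzeResult", {}).get("documentResults", []):
        for name, field in (doc.get("fields") or {}).items():
            flat[name] = (field or {}).get("text")
    return flat
```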