
Commit f96a710

Merge branch 'MicrosoftDocs:main' into patch-1
2 parents: d9b5565 + 15e0494

6 files changed (+22 -18 lines)

articles/ai-services/document-intelligence/concept-retrieval-augumented-generation.md

Lines changed: 2 additions & 2 deletions
@@ -119,7 +119,7 @@ If you're looking for a specific section in a document, you can use semantic chu
 
 ```python
 
-# Using SDK targeting 2023-10-31-preview
+# Using SDK targeting 2023-10-31-preview, make sure your resource is in one of these regions: East US, West US2, West Europe
 # pip install azure-ai-documentintelligence==1.0.0b1
 # pip install langchain langchain-community azure-ai-documentintelligence
 
@@ -154,4 +154,4 @@ splits
 
 * [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
 
-* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
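
For readers landing on this diff out of context: the snippet edited in the first hunk loads a document with the Document Intelligence Layout model through LangChain and splits the resulting Markdown into semantic chunks (the `splits` referenced in the second hunk). A minimal sketch of that flow, assuming the `langchain_community` loader named in the pip-install comments, with placeholder endpoint, key, and file path:

```python
# Sketch only: endpoint, key, and file path are placeholders, not values from this commit.
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="<your-document-intelligence-endpoint>",
    api_key="<your-key>",
    file_path="<path-to-your-document>",
    api_model="prebuilt-layout",  # Layout model returns document structure as Markdown
)
docs = loader.load()

# Split on Markdown headers so chunks follow the document's semantic structure
# rather than arbitrary character counts.
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3")]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
splits = splitter.split_text(docs[0].page_content)
```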

articles/ai-services/document-intelligence/faq.yml

Lines changed: 15 additions & 8 deletions
@@ -33,7 +33,7 @@ sections:
 
       - There are no changes to pricing. The names "Cognitive Services" and "Azure Applied AI" continue to be used in Azure billing, cost analysis, price list, and price APIs.
 
-      - There are no breaking changes to application programming interfaces (APIs) or SDKs.
+      - There are no breaking changes to application programming interfaces (APIs) or SDKs. Starting with the 2023-10-31-preview version, APIs and SDKs are renamed to "documentintelligence".
 
       - Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
@@ -45,6 +45,13 @@ sections:
 
       A Document Generative AI solution can enable you to chat with your documents, generate captivating content from them and access the power of Azure OpenAI models on your data. With Azure AI Document Intelligence and Azure OpenAI combined, you can build an enterprise application to seamlessly interact with your documents using natural languages, easily find answers and gain valuable insights, effortlessly generate new and engaging content from your existing documents. Check for more details in the [technical community blog](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/document-generative-ai-the-power-of-azure-ai-document/ba-p/3875015).
 
+  - question: |
+      How is Document Intelligence related to Retrieval Augmented Generation (RAG)?
+    answer: |
+      Semantic chunking is a key step in RAG that ensures efficient storage and retrieval of document content. The Document Intelligence [Layout model](concept-layout.md) offers a comprehensive solution for advanced content extraction and document structure analysis. With the Layout model, you can easily extract text and structural elements to divide large bodies of text into smaller, meaningful chunks based on semantic content rather than arbitrary splits. The extracted information can be output in Markdown format, enabling you to define your semantic chunking strategy from the provided building blocks. For more details, see this [article](concept-retrieval-augumented-generation.md).
+
   - question: |
       Which Document Intelligence use cases require special consideration?
     answer: |
@@ -104,7 +111,7 @@ sections:
   - question: |
       What is the accuracy score and how is it calculated?
     answer: |
-      The output of a `build` (v3.0) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document.
+      The output of a `build` (v3.0 and later versions) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document.
 
       Accuracy is measured within a percentage value range between 0% (low) and 100% (high).
@@ -159,7 +166,7 @@ sections:
       What is a bounding box?
     answer: |
 
-      A bounding box is an abstract rectangle that surrounds text elements on a document or form and is used as a reference point for object detection.
+      A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements on a document or form and is used as a reference point for object detection.
 
       - The bounding box specifies position using an x and y coordinate plane presented in an array of four numerical pairs. Each pair represents a corner of the box in the following order: top-left, top-right, bottom-right, bottom-left.
 
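To illustrate the corner ordering described in the updated answer, a small sketch with hypothetical polygon values showing how the eight numbers map to the four corners:

```python
# Hypothetical values: v3.0+ returns a flat list [x1, y1, x2, y2, x3, y3, x4, y4]
# ordered top-left, top-right, bottom-right, bottom-left.
polygon = [1.0, 1.0, 4.2, 1.0, 4.2, 2.5, 1.0, 2.5]

corners = list(zip(polygon[0::2], polygon[1::2]))  # pair up x and y values
top_left, top_right, bottom_right, bottom_left = corners
print(top_left, top_right, bottom_right, bottom_left)
# (1.0, 1.0) (4.2, 1.0) (4.2, 2.5) (1.0, 2.5)
```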
@@ -205,7 +212,7 @@ sections:
       Where can I find the supported API version for the latest programming language SDKs?
     answer: |
 
-      This table provides links to the latest SDK versions and shows the relationship between supported Document Intelligence SDK and API versions: |
+      This table provides links to the latest SDK versions and shows the relationship between supported Document Intelligence SDK and API versions:
       | Supported Language | Azure SDK reference|Supported API version|
       | ----- | -----|-----|
       | C#/.NET| [4.0.0](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[**v3.0**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |
@@ -242,7 +249,7 @@ sections:
       How can I specify a specific range of pages to be analyzed in a document?
     answer: |
 
-      - The parameter `pages`(supported in both v2.1 and v3.0 REST API) enables you to specify pages for multi-page PDF and TIFF documents. Accepted input includes the following ranges:
+      - The parameter `pages` (supported in v2.1, v3.0, and later versions of the REST API) enables you to specify pages for multi-page PDF and TIFF documents. Accepted input includes the following ranges:
 
         - Single pages (for example, '1, 2' -> pages 1 and 2 are processed).
         - Finite ranges (for example, '2-5' -> pages 2 to 5 are processed).
         - Open-ended ranges (for example, '5-' -> all pages from page 5 onward are processed; '-10' -> pages 1 to 10 are processed).
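
A hedged sketch of passing the `pages` parameter from the Python SDK; this assumes the `azure-ai-formrecognizer` package, and the endpoint, key, and file path are placeholders:

```python
# Sketch only: endpoint, key, and file path are placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>")
)

with open("<multi-page-document.pdf>", "rb") as f:
    # Only pages 2 through 5 are analyzed (and billed).
    poller = client.begin_analyze_document("prebuilt-layout", f, pages="2-5")
result = poller.result()
```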
@@ -271,13 +278,13 @@ sections:
     answer: |
       Document Intelligence billing is calculated monthly based on the model type and number of pages analyzed:
 
-      - When you submit a document for analysis, all pages are analyzed unless you specify a page range with the `pages` parameter in your request. When the service analyzes Microsoft Excel and PowerPoint documents with the new Read OCR model, each worksheet and slide is counted as one page respectively.
+      - When you submit a document for analysis, all pages are analyzed unless you specify a page range with the `pages` parameter in your request. When the service analyzes Microsoft Excel and PowerPoint documents with the Read OCR and Layout models, each worksheet and slide is counted as one page, respectively.
 
       - When analyzing PDF and TIFF files, each page in the PDF file or each image in the TIFF file is counted as one page with no maximum character limits.
 
-      - When analyzing Microsoft Word and HTML files supported by only the Read model, pages are counted in blocks of 3,000 characters each. For example, if your document contains 7,000 characters, the two pages with 3,000 characters each and one page with 1,000 characters adds up to a total of three pages.
+      - When analyzing Microsoft Word and HTML files supported by the Read and Layout models, pages are counted in blocks of 3,000 characters each. For example, if your document contains 7,000 characters, the two pages with 3,000 characters each and one page with 1,000 characters add up to a total of three pages.
 
-      - When using the Read model, if your Microsoft Word, Excel, and PowerPoint pages with embedded images, each image is analyzed and counted as a page. Therefore, the total analyzed pages for Microsoft Office documents are equal to the sum of total text pages and total images analyzed. In the previous example if the document contains two embedded images, the total page count in the service output is three text pages plus two images equaling a total of five pages.
+      - When you use the Read or Layout model to analyze Microsoft Word, Excel, PowerPoint, and HTML files, embedded or linked images aren't supported, so they aren't counted as additional pages for billing.
 
       - Training a custom model is always free with Document Intelligence. You’re only charged when a model is used to analyze a document.
 
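As a quick check of the 3,000-character block rule in the edited bullet, a small sketch of the page-count arithmetic:

```python
import math

def billed_pages(char_count: int, block_size: int = 3000) -> int:
    # Word/HTML content is billed in 3,000-character blocks.
    return math.ceil(char_count / block_size)

# 7,000 characters -> two full 3,000-character pages plus a 1,000-character page = 3 pages.
print(billed_pages(7000))  # 3
```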

articles/ai-services/openai/how-to/migration.md

Lines changed: 1 addition & 1 deletion
@@ -196,7 +196,7 @@ client = AzureOpenAI(
 
 response = client.embeddings.create(
     input = "Your text string goes here",
-    model= "text-embedding-ada-002"
+    model= "text-embedding-ada-002" # model = "deployment_name".
 )
 
 print(response.model_dump_json(indent=2))
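
For context around this one-line comment change, a minimal sketch of the full call with the `openai` 1.x `AzureOpenAI` client; the endpoint, key, API version, and deployment name are placeholders, and in Azure OpenAI the `model` argument takes your deployment name:

```python
# Sketch only: endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2023-05-15",
)

response = client.embeddings.create(
    input="Your text string goes here",
    model="<your-embeddings-deployment-name>",  # deployment name, not the base model name
)
print(response.model_dump_json(indent=2))
```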

articles/ai-services/openai/how-to/switching-endpoints.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ client = AzureOpenAI(
 
 <a name='azure-active-directory-authentication'></a>
 
-### Microsoft Entra authentication
+### Microsoft Entra ID authentication
 
 <table>
 <tr>
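
The renamed heading covers keyless authentication; a minimal sketch of that flow using `azure-identity` token providers (endpoint and API version are placeholders, and the scope shown is the standard Cognitive Services scope):

```python
# Sketch only: endpoint and API version are placeholders.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    azure_ad_token_provider=token_provider,  # Microsoft Entra ID token instead of an API key
    api_version="2024-02-01",
)
```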

articles/machine-learning/how-to-train-with-ui.md

Lines changed: 3 additions & 6 deletions
@@ -13,7 +13,7 @@ ms.date: 11/04/2022
 ms.reviewer: ssalgado
 ---
 
-# Submit a training job in Studio (preview)
+# Submit a training job in Studio
 
 There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with a guided experience for submitting training jobs in Azure Machine Learning studio.
 
@@ -33,12 +33,9 @@ There are many ways to create a training job with Azure Machine Learning. You ca
 
 1. Select your subscription and workspace.
 
-   * Navigate to the Azure Machine Learning Studio and enable the feature by clicking open the preview panel.
-   [![Azure Machine Learning studio preview panel allowing users to enable preview features.](media/how-to-train-with-ui/preview-panel.png)](media/how-to-train-with-ui/preview-panel.png)
-
 
-   * You may enter the job creation UI from the homepage. Click **Create new** and select **Job**.
-   [![Azure Machine Learning studio homepage](media/how-to-train-with-ui/home-entry.png)](media/how-to-train-with-ui/home-entry.png)
+   * You may enter the job creation UI from the homepage. Click **Create new** and select **Job**.
+   [![Azure Machine Learning studio homepage](media/how-to-train-with-ui/unified-job-submission-home.png)](media/how-to-train-with-ui/unified-job-submission-home.png)
 
 In this wizard, you can select your method of training, complete the rest of the submission wizard based on your selection, and submit the training job. Below we will walk through the wizard for running a custom script (command job).
 
585 KB binary file changed (preview not loaded)
