Retrieval Augmented Generation (RAG) is a generative AI solution that combines a pretrained Large Language Model (LLM) like ChatGPT with an external data retrieval system to generate an enhanced response incorporating new data outside of the original training data. Adding an information retrieval system to your applications enables you to chat with your documents, generate captivating content, and access the power of Azure OpenAI models for your data. You also have more control over the data used by the LLM as it formulates a response.
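At a high level, a RAG application retrieves relevant passages from your own data, adds them to the prompt, and only then asks the LLM to answer. The following minimal sketch illustrates that flow; the `retrieve_documents` and `call_llm` helpers are hypothetical placeholders, not a specific Azure API.

```python
def retrieve_documents(query: str, top_k: int = 3) -> list[str]:
    # Hypothetical retriever: in practice this would query a search index
    # (for example, Azure AI Search) built over your document chunks.
    corpus = ["Chunk about invoices.", "Chunk about contracts.", "Chunk about receipts."]
    return corpus[:top_k]

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call: in practice an Azure OpenAI chat completion.
    return f"(model answer grounded in the supplied context for: {prompt[:40]}...)"

def answer_with_rag(query: str) -> str:
    # 1. Retrieve grounding data from your own documents.
    context = "\n\n".join(retrieve_documents(query))
    # 2. Augment the prompt with the retrieved context.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    # 3. Generate a response that incorporates data outside the model's training set.
    return call_llm(prompt)

print(answer_with_rag("Which documents mention contracts?"))
```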
## Azure AI Document Intelligence Layout model
The Document Intelligence [Layout model](concept-layout.md) is an advanced machine-learning based document analysis API. With semantic chunking, the layout model offers a comprehensive solution for advanced content extraction and document structure analysis. With the layout model, you can easily extract text and structural elements to divide large bodies of text into smaller, meaningful chunks based on semantic content rather than arbitrary splits. The extracted information can be conveniently output to Markdown format, enabling you to define your semantic chunking strategy based on the provided building blocks.
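For example, once you have the Markdown output, you can chunk it along heading boundaries. The sketch below shows one possible approach, assuming LangChain's `MarkdownHeaderTextSplitter` and a placeholder `markdown_content` string standing in for the layout model's output.

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

# markdown_content is a placeholder standing in for the Layout model's Markdown output.
markdown_content = (
    "# Annual report\n"
    "## Revenue\nRevenue grew 12% year over year.\n"
    "## Outlook\nWe expect continued growth next year.\n"
)

# Split on the heading levels that matter for your chunking strategy.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
chunks = splitter.split_text(markdown_content)

for chunk in chunks:
    print(chunk.metadata, chunk.page_content)
```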
:::image type="content" source="media/rag/azure-rag-processing.png" alt-text="Screenshot depicting semantic chunking with RAG using Azure AI Document Intelligence.":::
## Layout model and semantic chunking
Long sentences are challenging for natural language processing (NLP) applications, especially when they're composed of multiple clauses, complex noun or verb phrases, relative clauses, and parenthetical groupings. Just like a human reader, an NLP system needs to keep track of all the presented dependencies. The goal of semantic chunking is to find semantically coherent fragments of a sentence representation. These fragments can then be processed independently and recombined as semantic representations without loss of information, interpretation, or semantic relevance. The inherent meaning of the text is used as a guide for the chunking process.
Text data chunking strategies play a key role in optimizing the RAG response and performance. Fixed-sized and semantic chunking are two distinct methods:
* **Fixed-sized chunking**. Most chunking strategies used in RAG today are based on fixed-sized text segments known as chunks. Fixed-sized chunking is quick, easy, and effective with text that doesn't have a strong semantic structure, such as logs and data. However, it isn't recommended for text that requires semantic understanding and precise context. The fixed-size nature of the window can sever words, sentences, or paragraphs, impeding comprehension and disrupting the flow of information and understanding.
* **Semantic chunking**. This method divides the text into chunks based on semantic understanding. Division boundaries focus on the subject of a sentence and use significant computational resources, making the method algorithmically complex. However, it has the distinct advantage of maintaining semantic consistency within each chunk. It's useful for text summarization, sentiment analysis, and document classification tasks. For example, if you're looking for a specific section in a document, you can use semantic chunking to divide the document into smaller chunks based on the section headers, helping you find the section quickly and easily. An effective semantic chunking strategy yields the following benefits:
## Semantic chunking with Document Intelligence layout model
* **Simplified processing**. You can parse different document types, such as digital and scanned PDFs, images, office files (docx, xlsx, pptx), and HTML, with just a single API call.
* **Scalability and AI quality**. The layout model is highly scalable in Optical Character Recognition (OCR), table extraction, and [document structure analysis](concept-layout.md#document-layout-analysis). It supports [309 printed and 12 handwritten languages](language-support-ocr.md#model-id-prebuilt-layout), further ensuring high-quality results driven by AI capabilities.
* **Large language model (LLM) compatibility**. The layout model's Markdown-formatted output is LLM friendly and facilitates seamless integration into your workflows. You can turn any table in a document into Markdown format and avoid the extensive effort of parsing documents for greater LLM understanding.
:::image type="content" source="media/rag/rag-analyze-options.png" alt-text="Screenshot of Analyze options dialog window with RAG required options in the Document Intelligence studio.":::
* Next, select the **Run analysis** button to view the output.
:::image type="content" source="media/rag/run-analysis.png" alt-text="Screenshot of the Run Analysis button in the Document Intelligence Studio.":::
* The Markdown content is presented in the right-pane window:
:::image type="content" source="media/rag/markdown-content.png" alt-text="Screenshot of the layout model markdown output in the Document Intelligence Studio.":::
### SDK or REST API
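As a hedged sketch, assuming the `azure-ai-documentintelligence` Python package (exact parameter and enum names vary slightly across preview versions) and placeholder endpoint, key, and document URL values, you can request Markdown output from the Layout model programmatically:

```python
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint, key, and document URL; replace with your own values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze a document with the prebuilt Layout model and request Markdown output.
poller = client.begin_analyze_document(
    "prebuilt-layout",
    {"urlSource": "https://example.com/sample-report.pdf"},
    output_content_format="markdown",
)
result = poller.result()

print(result.content)  # Markdown string ready for header-based chunking
```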
## Build document chat with semantic chunking
* [Azure OpenAI on your data](../openai/concepts/use-your-data) enables you to run supported chat on your documents. Azure OpenAI on your data applies the Document Intelligence layout model to extract and parse document data by chunking long text based on tables and paragraphs. You can also customize your chunking strategy using [Azure OpenAI sample scripts](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) located in our GitHub repo.
* Azure AI Document Intelligence is now integrated with [LangChain](https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence) as one of its document loaders. You can use it to easily load the data and output it to Markdown format; a minimal loader sketch follows this list. This [notebook](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/AI%20Services/Quickstart%20-%20Document%20Question%20and%20Answering%20with%20PDFs/) shows a simple demo of the RAG pattern with Azure AI Document Intelligence as the document loader and Azure AI Search as the retriever in LangChain.
* The chat with your data solution accelerator [code sample](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) demonstrates an end-to-end baseline RAG pattern. It uses Azure AI Search as a retriever and Azure AI Document Intelligence for document loading and semantic chunking.
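A minimal sketch of the LangChain loading step, assuming the `langchain_community` package and placeholder endpoint, key, and file name values:

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

# Placeholder resource values; replace with your own endpoint, key, and file.
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    api_key="<your-key>",
    file_path="sample-report.pdf",
    api_model="prebuilt-layout",
    mode="markdown",
)
docs = loader.load()

# docs[0].page_content holds Markdown that can be split on headings
# (for example, with MarkdownHeaderTextSplitter) before indexing the
# chunks in a retriever such as Azure AI Search.
print(docs[0].page_content[:500])
```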