Commit 60b1be5

config clarification

1 parent e0d7879 commit 60b1be5

1 file changed: +14 -10 lines changed


articles/search/search-get-started-retrieval-augmented-generation.md

Lines changed: 14 additions & 10 deletions
@@ -75,15 +75,15 @@ In this quickstart:
 
 1. Provide an index name that's unique in your search service.
 
-1. Check **Add vector search to this search index.**
+1. Check **Add vector search to this search index.** This option tokenizes your content and generates embeddings.
 
-1. Select **Azure OpenaI - text-embedding-ada-002**.
+1. Select **Azure OpenAI - text-embedding-ada-002**. This embedding model accepts a maximum of 8192 tokens for each chunk. Data chunking is internal and nonconfigurable.
 
 1. Check the acknowledgment that Azure AI Search is a billable service. If you're using an existing search service, there's no extra charge for vector store unless you add semantic ranking. If you're creating a new service, Azure AI Search becomes billable upon service creation.
 
 1. Select **Next**.
 
-1. In Upload files, select the four files and then select **Upload**.
+1. In Upload files, select the four files and then select **Upload**. The file size limit is 16 MB.
 
 1. Select **Next**.
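The 8192-token chunk limit called out in this commit is easy to sanity-check before you upload files. A minimal sketch that flags oversized chunks using a rough 4-characters-per-token heuristic; the helper names and the ratio are illustrative assumptions, not part of the wizard (a real tokenizer gives exact counts):

```python
# Rough pre-upload sanity check against the 8192-token embedding limit.
# Heuristic only: assumes ~4 characters per token for English prose.

MAX_TOKENS = 8192
CHARS_PER_TOKEN = 4  # coarse assumption, not an exact tokenizer


def estimated_tokens(text: str) -> int:
    """Estimate the token count of a chunk of text."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fits_embedding_window(chunk: str, max_tokens: int = MAX_TOKENS) -> bool:
    """True if the chunk is likely under the embedding model's token limit."""
    return estimated_tokens(chunk) <= max_tokens


if __name__ == "__main__":
    short = "Four score and seven years ago..."
    long = "word " * 40_000  # ~200,000 characters, well over the limit
    print(fits_embedding_window(short))  # True
    print(fits_embedding_window(long))   # False
```

Because chunking in the wizard is internal and nonconfigurable, a check like this is most useful for deciding whether a document needs to be split before upload.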

@@ -97,27 +97,31 @@ In this quickstart:
 
 ## Chat with your data
 
-1. Review advanced settings that determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response.
+The playground gives you options for configuring and monitoring chat.
 
-   Strictness determines whether the model supplements the query with its own information. Level of 5 is no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query.
+On the right, model configuration determines which model formulates an answer using the search results from Azure AI Search. The input token progress indicator keeps track of the token count of the question you submit.
 
-   Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
+On the left, advanced settings determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response. Strictness determines whether the model supplements its own information with the query response from Azure AI Search.
 
-   :::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
++ A strictness of 5 means no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query.
 
-1. Start with these settings:
++ Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
+
+:::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
+
+1. Start with these advanced settings:
 
    + Verify the **Limit responses to your data content** option is selected.
   + Strictness set to 3 or 4.
-   + Retrieved documents set to 20. Given chunk sizes of 1024 tokens, a setting of 20 gives you roughly 20,000 tokens to use for generating responses. The tradeoff is query latency, but you can experiment with chat replay to find the right balance.
+   + Retrieved documents set to 20. More retrieved documents give the model more information to work with when generating responses. The tradeoff is increased query latency, but you can experiment with chat replay to find the right balance.
 
 1. Send your first query. The chat models perform best in question and answer exercises. For example, "who gave the Gettysburg speech" or "when was the Gettysburg speech delivered".
 
   More complex queries, such as "why was Gettysburg important", perform better if the model has some latitude to answer (lower levels of strictness) or if semantic ranking is enabled.
 
   Queries that require deeper analysis or language understanding, such as "how many speeches are in the vector store" or "what's in this vector store", will probably fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The return payload might be insufficient for handling an open-ended question.
 
-   Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means adopting an approach that allows more than 20 chunks. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries.
+   Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in the Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means adopting an approach that allows more than 20 chunks. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [service-side exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries.
 
   :::image type="content" source="media/search-get-started-rag/chat-results.png" lightbox="media/search-get-started-rag/chat-results.png" alt-text="Screenshot of a chat session.":::
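The playground settings described in this commit (strictness, retrieved documents, limiting responses to your data) correspond to parameters in the request the generated code sends to Azure OpenAI. A minimal sketch of that "on your data" payload as a plain dictionary; the endpoint, index name, and exact field names are assumptions and should be verified against the API version you deploy with:

```python
# Sketch of a chat request grounded on an Azure AI Search index,
# mirroring the playground's advanced settings. Endpoint and index
# name are placeholders; field names assume a recent REST shape.

def build_grounded_chat_request(question: str) -> dict:
    """Build the request body for a chat call grounded on search results."""
    return {
        "messages": [{"role": "user", "content": question}],
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<search-service>.search.windows.net",
                    "index_name": "<index-name>",
                    "in_scope": True,       # "Limit responses to your data content"
                    "strictness": 3,        # recommended starting point (3 or 4)
                    "top_n_documents": 20,  # "Retrieved documents" setting
                },
            }
        ],
    }


request = build_grounded_chat_request("Who gave the Gettysburg speech?")
```

Raising `top_n_documents` mirrors the retrieved-documents tradeoff above: more grounding chunks per answer at the cost of higher query latency.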

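The service-side exhaustive search that the last changed line links to can also be requested per query rather than fixed at index creation. A hedged sketch of a vector query body with the exhaustive flag set; the vector field name and surrounding structure are illustrative and should be checked against the search API version in use:

```python
# Sketch of an Azure AI Search vector query that requests exhaustive
# KNN (a full scan of the vector store) instead of approximate search.
# The embedding values and the "contentVector" field name are placeholders.

def build_exhaustive_vector_query(embedding: list, k: int = 50) -> dict:
    """Build a search request body that forces exhaustive vector search."""
    return {
        "count": True,
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": embedding,
                "fields": "contentVector",  # placeholder vector field name
                "k": k,
                "exhaustive": True,  # full scan instead of approximate (ANN) search
            }
        ],
    }


query = build_exhaustive_vector_query([0.1, 0.2, 0.3], k=100)
```

A larger `k` combined with exhaustive search is one way to answer questions like "all of the titles" that need more than the playground's 20-chunk ceiling, at the cost of slower queries on large indexes.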
0 commit comments
