1. Select **Azure OpenAI - text-embedding-ada-002**. This embedding model accepts a maximum of 8,192 tokens for each chunk. Data chunking is internal and nonconfigurable. For a sketch of what a call to this embedding model looks like, see the example after these steps.
1. Check the acknowledgment that Azure AI Search is a billable service. If you're using an existing search service, there's no extra charge for the vector store unless you add semantic ranking. If you're creating a new service, Azure AI Search becomes billable upon service creation.
1. Select **Next**.
1. In **Upload files**, select the four files, and then select **Upload**. The file size limit is 16 MB.
1. Select **Next**.
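The wizard handles vectorization for you, but if you're curious what the embedding call looks like outside the wizard, here's a minimal sketch. It assumes the `openai` Python package (v1 or later) and a deployment named `text-embedding-ada-002`; the environment variable names are placeholders, not values from this quickstart.

```python
# Minimal sketch (not part of the wizard) of calling the embedding model directly.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Each input must stay within the model's 8,192-token limit per chunk.
response = client.embeddings.create(
    model="text-embedding-ada-002",  # your deployment name might differ
    input="Four score and seven years ago...",
)

vector = response.data[0].embedding
print(len(vector))  # text-embedding-ada-002 returns 1,536 dimensions
```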
## Chat with your data
The playground gives you options for configuring and monitoring chat.
On the right, model configuration determines which model formulates an answer using the search results from Azure AI Search. The input token progress indicator keeps track of the token count of the question you submit.
On the left, advanced settings determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response. Strictness determines whether the model can supplement the query response from Azure AI Search with its own information.
A strictness level of 5 means no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query.
Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
:::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
1. Start with these advanced settings (an API-level sketch of these settings follows the list):
   + Verify that the **Limit responses to your data content** option is selected.
   + Set strictness to 3 or 4.
   + Set retrieved documents to 20. More documents give the model more information to work with when generating responses. The tradeoff is increased query latency, but you can experiment with chat replay to find the right balance.
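These playground settings map to parameters on the Azure OpenAI chat completions API when you bring your own data. The following is a minimal sketch of that call, assuming the `openai` Python package (v1 or later); the deployment name, index name, and environment variable names are placeholders, and the code the studio generates for you will differ in its details.

```python
# Minimal sketch of the chat completions call with "your data" attached.
# Deployment, index, and environment variable names are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your chat model deployment name
    messages=[{"role": "user", "content": "Who gave the Gettysburg speech?"}],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": os.environ["AZURE_SEARCH_ENDPOINT"],
                    "index_name": "my-vector-index",
                    "authentication": {
                        "type": "api_key",
                        "key": os.environ["AZURE_SEARCH_KEY"],
                    },
                    "in_scope": True,       # Limit responses to your data content
                    "strictness": 3,        # 1 (most flexible) to 5 (no supplementation)
                    "top_n_documents": 20,  # retrieved documents (chunks)
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```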
1. Send your first query. The chat models perform best in question-and-answer exercises. For example, "Who gave the Gettysburg speech?" or "When was the Gettysburg speech delivered?"
More complex queries, such as "why was Gettysburg important", perform better if the model has some latitude to answer (lower levels of strictness) or if semantic ranking is enabled.
Queries that require deeper analysis or language understanding, such as "how many speeches are in the vector store" or "what's in this vector store", will probably fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The return payload might be insufficient for handling an open-ended question.
Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in the Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means adopting an approach that allows more than 20 chunks. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [service-side exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries, as shown in the sketch that follows.
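As an illustration only, a vector query that opts into exhaustive search might look like the following sketch, using the `azure-search-documents` Python SDK (11.4 or later). The index name and vector field name are hypothetical, and the question is embedded with the same model that vectorized the chunks.

```python
# Minimal sketch of a vector query that opts into exhaustive (full-scan) search.
# The index name and vector field name ("contentVector") are hypothetical.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="my-vector-index",  # hypothetical index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

# Embed the question with the same model that vectorized the chunks.
query_vector = openai_client.embeddings.create(
    model="text-embedding-ada-002",
    input="all of the titles",
).data[0].embedding

# exhaustive=True scans every vector in the index instead of walking the
# approximate (HNSW) graph, trading query latency for full recall.
results = search_client.search(
    search_text=None,
    vector_queries=[
        VectorizedQuery(
            vector=query_vector,
            k_nearest_neighbors=50,  # more than the playground's 20-chunk cap
            fields="contentVector",  # hypothetical vector field name
            exhaustive=True,
        )
    ],
)

for result in results:
    print(result["title"])
```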
:::image type="content" source="media/search-get-started-rag/chat-results.png" lightbox="media/search-get-started-rag/chat-results.png" alt-text="Screenshot of a chat session.":::