README.md: 12 additions & 10 deletions
@@ -81,7 +81,7 @@ You can find the details you need for existing resources in the top-right projec
You can also try running our experimental script to check quota in your subscription. You can modify it to fit your requirements.
> [!NOTE]
- > Note: this script is a tentative to help locating quota, but it might provide numbers that are not accurate. The Azure AI Studio or the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits) would be the source of truth.
+ > This script is intended to help you understand quota, but it might report numbers that are not accurate. The Azure AI Studio, the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits) are the source of truth.
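For a quick programmatic check, the sketch below lists usage against limits in a region with the `azure-mgmt-cognitiveservices` management SDK. This is a minimal sketch, not the repo's experimental script: the subscription ID and location are placeholders, and field names can vary across SDK versions, so treat the portal and the quota docs above as authoritative.

```python
# Minimal sketch (not the repo's quota script): list Azure OpenAI usage vs. limits
# for one region. Assumes azure-identity and azure-mgmt-cognitiveservices are
# installed; usage field names may differ slightly across SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
location = "eastus"                         # region you plan to deploy models in

client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)
for usage in client.usages.list(location):
    print(f"{usage.name.value}: {usage.current_value} / {usage.limit}")
```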
@@ -149,9 +149,10 @@ This sample includes custom code to add retrieval augmented generation (RAG) to
The code follows this general logic:
- 1. Uses an embedding model to embed the the user's query
+ 1. Generates a search query based on user query intent and any chat history
+ 1. Uses an embedding model to embed the query
1. Retrieves relevant documents from the search index, given the query
- 1. Integrates the document context into messages passed to chat completion model
+ 1. Passes the relevant context to the Azure OpenAI chat completion model
1. Returns the response from the Azure OpenAI model
You can modify this logic as appropriate to fit your use case.
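For orientation, here is a minimal sketch of those steps in plain Python. It is not the repo's `copilot_flow` implementation: the deployment names, endpoints, keys, index field names (`content`, `contentVector`), and the `get_chat_response` function name are all placeholders.

```python
# Minimal sketch of the RAG steps above; not the repo's copilot_flow implementation.
# Deployment names, endpoints, keys, and index field names below are placeholders.
from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

aoai = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)
search = SearchClient(
    endpoint="https://<search-resource>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<search-key>"),
)

def get_chat_response(chat_input: str, chat_history: list[dict]) -> str:
    # chat_history: prior turns as {"role": ..., "content": ...} messages.
    # 1. Generate a standalone search query from the user query and any chat history.
    query = aoai.chat.completions.create(
        model="<chat-deployment>",
        messages=[
            {"role": "system", "content": "Rewrite the user's last message as a short search query."},
            *chat_history,
            {"role": "user", "content": chat_input},
        ],
    ).choices[0].message.content

    # 2. Embed the search query.
    vector = aoai.embeddings.create(model="<embedding-deployment>", input=query).data[0].embedding

    # 3. Retrieve relevant documents from the search index (hybrid text + vector search shown).
    results = search.search(
        search_text=query,
        vector_queries=[VectorizedQuery(vector=vector, k_nearest_neighbors=3, fields="contentVector")],
    )
    context = "\n\n".join(doc["content"] for doc in results)

    # 4. Pass the retrieved context to the chat completion model.
    reply = aoai.chat.completions.create(
        model="<chat-deployment>",
        messages=[
            {"role": "system", "content": f"Answer the question using only this context:\n{context}"},
            *chat_history,
            {"role": "user", "content": chat_input},
        ],
    )

    # 5. Return the model's response.
    return reply.choices[0].message.content
```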
@@ -172,25 +173,26 @@ If you want to test with chat_history, you can use or update the sample input js
pf flow test --flow ./copilot_flow --inputs ./copilot_flow/input_with_chat_history.json
```
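If you need to create that input file yourself, a hypothetical example of its shape is sketched below; the field names (`chat_input`, `chat_history`, `reply`) are assumptions and may not match the flow in this repo exactly, so check the file shipped in `copilot_flow` first.

```python
# Hypothetical sketch of a chat-history input file; field names are placeholders
# and may not match the repo's flow definition exactly.
import json

sample_input = {
    "chat_input": "Does it come in any other colors?",
    "chat_history": [
        {
            "inputs": {"chat_input": "Tell me about your hiking shoes."},
            "outputs": {"reply": "We carry the TrailWalker hiking shoes in sizes 6-12."},
        }
    ],
}

with open("copilot_flow/input_with_chat_history.json", "w") as f:
    json.dump(sample_input, f, indent=2)
```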
- ## Step 7: Batch evaluate, iterate, evaluate again (eval compare in AI Studio)
+ ## Step 7: Evaluate copilot performance
Evaluation is a key part of developing a copilot application. Once you have validated your logic on a sample set of inputs, it's time to test it on a larger set.
Evaluation relies on an evaluation dataset. In this case, we have an evaluation dataset with chat_input and truth, and a target function that adds the LLM response and context to the evaluation dataset before running the evaluations.
The following script streamlines the evaluation process. Update the evaluation code to set your desired evaluation metrics, or optionally evaluate on custom metrics. You can also change where the evaluation results get written to.
- We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models.
- Note that this will configure your project with a Cosmos DB account for logging. It may take several minutes the first time you run an evaluation.
Specify the `--dataset-path` argument if you want to provide a different evaluation dataset.
+ We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models. The _evaluate.py_ script is set up to log your evaluation results to your AI Studio project.
+
+ > [!NOTE]
+ > This will configure your project with a Cosmos DB account for logging. It may take several minutes the first time you run an evaluation.
+
If you do not want to log evaluation results to your AI Studio project, you can modify the _evaluation.py_ script to not pass the azure_ai_project parameter.
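As a rough illustration of what such an evaluation run can look like (not the repo's evaluation script), the sketch below uses the promptflow-evals package; the dataset path, the target import, and the model configuration are placeholders.

```python
# Rough sketch of a batch evaluation with promptflow-evals; not the repo's script.
# The dataset path, target import, and model configuration below are placeholders.
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import GroundednessEvaluator, RelevanceEvaluator

from copilot_flow.copilot import get_chat_response  # hypothetical target function

model_config = {
    "azure_endpoint": "https://<aoai-resource>.openai.azure.com",
    "api_key": "<aoai-key>",
    "azure_deployment": "<chat-deployment>",
}

result = evaluate(
    evaluation_name="copilot-eval",
    data="./evaluation/evaluation_dataset.jsonl",  # rows with chat_input and truth
    target=get_chat_response,                      # adds the response and context per row
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "relevance": RelevanceEvaluator(model_config),
    },
    # Remove azure_ai_project to skip logging results to your AI Studio project.
    azure_ai_project={
        "subscription_id": "<subscription-id>",
        "resource_group_name": "<resource-group>",
        "project_name": "<project-name>",
    },
)
print(result["metrics"])
```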
## Step 8: Deploy application to AI Studio
@@ -217,7 +219,7 @@ We recommend you test your application in the Azure AI Studio. The previous step
Navigate to the Test tab, and try asking a question in the chat interface. If you see the response come back, you have verified your deployment!
- If you prefer to test your deployed endpoint locally, you can invoke it with a sample question.
+ If you prefer to test your deployed endpoint locally, you can invoke it with a default query.
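If you want to send your own payload instead, a hypothetical example of calling the scoring endpoint over REST is sketched below; the endpoint URL, key, and payload field names come from your deployment's Consume details in AI Studio, not from this sketch.

```python
# Hypothetical sketch of invoking the deployed endpoint directly over REST.
# Replace the URL, key, and payload fields with the values shown for your
# deployment in AI Studio; the field names here are placeholders.
import requests

endpoint_url = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<endpoint-key>"

payload = {"chat_input": "Which tent is the most waterproof?", "chat_history": []}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

response = requests.post(endpoint_url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())
```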