
Commit 9658380

update README
1 parent 5df798f commit 9658380


README.md

Lines changed: 12 additions & 10 deletions
@@ -81,7 +81,7 @@ You can find the details you need for existing resources in the top-right projec
 You can also try running our experimental script to check quota in your subscription. You can modify it to fit your requirements.
 
 > [!NOTE]
-> Note: this script is a tentative to help locating quota, but it might provide numbers that are not accurate. The Azure AI Studio or the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits) would be the source of truth.
+> This script is intended to help understand quota, but it might provide numbers that are not accurate. The Azure AI Studio or the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits) would be the source of truth.
 
 ```bash
 python provisioning/check_quota.py --subscription-id <your-subscription-id>
@@ -149,9 +149,10 @@ This sample includes custom code to add retrieval augmented generation (RAG) to
 
 The code follows the following general logic:
 
-1. Uses an embedding model to embed the the user's query
+1. Generates a search query based on user query intent and any chat history
+1. Uses an embedding model to embed the query
 1. Retrieves relevant documents from the search index, given the query
-1. Integrates the document context into messages passed to chat completion model
+1. Passes the relevant context to the Azure Open AI chat completion model
 1. Returns the response from the Azure Open AI model
 
 You can modify this logic as appropriate to fit your use case.
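For orientation, here is a minimal sketch of the updated RAG logic described in this hunk. It is not the code in `copilot_flow`; the client setup, environment variable names, deployment names, and index field names (for example `contentVector` and `content`) are illustrative assumptions.

```python
# Illustrative sketch only -- the real implementation lives in copilot_flow.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
search = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name=os.environ["AZURE_SEARCH_INDEX"],  # assumed index name variable
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

def chat_with_rag(chat_input: str, chat_history: list[dict]) -> str:
    # 1. Generate a standalone search query from the user query and any chat history
    search_query = aoai.chat.completions.create(
        model=os.environ["CHAT_DEPLOYMENT"],  # assumed chat deployment name
        messages=[
            {"role": "system", "content": "Rewrite the user's last message as a standalone search query."},
            *chat_history,
            {"role": "user", "content": chat_input},
        ],
    ).choices[0].message.content

    # 2. Embed the search query with an embedding model
    embedding = aoai.embeddings.create(
        model=os.environ["EMBEDDING_DEPLOYMENT"],  # assumed embedding deployment name
        input=search_query,
    ).data[0].embedding

    # 3. Retrieve relevant documents from the search index, given the query
    results = search.search(
        search_text=search_query,
        vector_queries=[VectorizedQuery(vector=embedding, k_nearest_neighbors=3, fields="contentVector")],
        top=3,
    )
    context = "\n".join(doc["content"] for doc in results)  # assumed document field

    # 4. Pass the retrieved context to the chat completion model and 5. return its reply
    response = aoai.chat.completions.create(
        model=os.environ["CHAT_DEPLOYMENT"],
        messages=[
            {"role": "system", "content": f"Answer the user using only this context:\n{context}"},
            *chat_history,
            {"role": "user", "content": chat_input},
        ],
    )
    return response.choices[0].message.content
```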
@@ -172,25 +173,26 @@ If you want to test with chat_history, you can use or update the sample input js
 pf flow test --flow ./copilot_flow --inputs ./copilot_flow/input_with_chat_history.json
 ```
 
-## Step 7: Batch evaluate, iterate, evaluate again (eval compare in AI Studio)
+## Step 7: Evaluate copilot performance
 
 Evaluation is a key part of developing a copilot application. Once you have validated your logic on a sample set of inputs, its time to test it on a larger set of inputs.
 
 Evaluation relies on an evaluation dataset. In this case, we have an evaluation dataset with chat_input and truth, and then a target function that adds the LLM response and context to the evaluation dataset before running the evaluations.
 
 The following script streamlines the evaluation process. Update the evaluation code to set your desired evaluation metrics, or optionally evaluate on custom metrics. You can also change where the evaluation results get written to.
 
-We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models.
-Note that this will configure your project with a Cosmos DB account for logging. It may take several minutes the first time you run an evaluation.
-
-
-
 ``` bash
 python -m evaluation.evaluate --evaluation-name <evaluation_name>
 ```
 
 Specify the `--dataset-path` argument if you want to provide a different evaluation dataset.
 
+We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models. The _evaluate.py_ script is set up to log your evaluation results to your AI Studio project.
+
+> [!NOTE]
+> This will configure your project with a Cosmos DB account for logging. It may take several minutes the first time you run an evaluation.
+
+
 If you do not want to log evaluation results to your AI Studio project, you can modify the _evaluation.py_ script to not pass the azure_ai_project parameter.
 
 ## Step 8: Deploy application to AI Studio
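As a concrete illustration of the evaluation dataset shape mentioned in the hunk above (a chat_input paired with a truth answer), a JSON Lines file could contain rows like the following; the question and answer text here are placeholders, not data shipped with the sample:

```json
{"chat_input": "What is covered in the product warranty?", "truth": "The warranty covers manufacturing defects for two years."}
{"chat_input": "How do I request a refund?", "truth": "Refunds can be requested within 30 days of purchase through the orders page."}
```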
@@ -217,7 +219,7 @@ We recommend you test your application in the Azure AI Studio. The previous step
 
 Navigate to the Test tab, and try asking a question in the chat interface. You should see the response come back and you have verified your deployment!
 
-If you prefer to test your deployed endpoint locally, you can invoke it with a sample question.
+If you prefer to test your deployed endpoint locally, you can invoke it with a default query.
 
 ``` bash
 python -m deployment.invoke --endpoint-name <endpoint_name> --deployment-name <deployment_name>