articles/ai-studio/tutorials/copilot-sdk-build-rag.md
2 additions & 6 deletions
@@ -5,7 +5,7 @@ description: Learn how to build a RAG-based chat app using the Azure AI Foundry
manager: scottpolly
ms.service: azure-ai-foundry
ms.topic: tutorial
-ms.date: 12/18/2024
+ms.date: 02/12/2025
ms.reviewer: lebaro
ms.author: sgilley
author: sdgilley
@@ -78,11 +78,6 @@ The search index is used to store vectorized data from the embeddings model. The
python create_search_index.py
```
-
-1. Once the script is run, you can view your newly created index in the **Data + indexes** page of your Azure AI Foundry project. For more information, see [How to build and consume vector indexes in Azure AI Foundry portal](../how-to/index-add.md).
-
-1. If you run the script again with the same index name, it creates a new version of the same index.
-
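For orientation (this sketch isn't part of the diff above), here's roughly what a script like **create_search_index.py** might do, assuming the `azure-search-documents` package; the index name, field names, and vector dimensions are illustrative placeholders:

```python
# Illustrative sketch only -- not the tutorial's create_search_index.py.
# Assumes the azure-search-documents package and an existing search service;
# index name, field names, and vector dimensions are placeholders.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
    VectorSearch,
    VectorSearchProfile,
)

index_client = SearchIndexClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

index = SearchIndex(
    name="example-product-index",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
        SearchField(
            name="contentVector",
            type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
            searchable=True,
            vector_search_dimensions=1536,  # must match the embeddings model
            vector_search_profile_name="vector-profile",
        ),
    ],
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(name="hnsw")],
        profiles=[
            VectorSearchProfile(
                name="vector-profile", algorithm_configuration_name="hnsw"
            )
        ],
    ),
)

# create_or_update_index is idempotent for a given index name, so rerunning
# a script like this updates the index definition rather than failing.
index_client.create_or_update_index(index)
print(f"Created or updated index: {index.name}")
```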
## <a name="get-documents"></a> Get product documents
Next, you create a script to get product documents from the search index. The script queries the search index for documents that match a user's question.
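As a rough sketch (again, not part of the diff), retrieving matching documents typically combines a keyword query with a vector query against the index. This assumes `azure-search-documents` and that you can produce an embedding for the question; the function and field names are illustrative:

```python
# Illustrative sketch only -- not the tutorial's retrieval script.
# Assumes azure-search-documents and that `embed` returns the question's
# embedding vector from your embeddings deployment.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="example-product-index",
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)


def get_product_documents(question: str, embed, top: int = 5) -> list[dict]:
    """Return the documents that best match the user's question."""
    vector_query = VectorizedQuery(
        vector=embed(question),      # embedding of the question text
        k_nearest_neighbors=top,
        fields="contentVector",
    )
    results = search_client.search(
        search_text=question,        # keyword part of a hybrid query
        vector_queries=[vector_query],
        select=["id", "content"],
        top=top,
    )
    return [{"id": doc["id"], "content": doc["content"]} for doc in results]
```

A hybrid query like this lets keyword matches and vector similarity complement each other; `top` controls how many documents are passed to the model as context.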
@@ -176,6 +171,7 @@ To enable logging of telemetry to your project:
python chat_with_products.py --query "I need a new tent for 4 people, what would you recommend?" --enable-telemetry
```

+Follow the link in the console output to see the telemetry data in your Application Insights resource. If it doesn't appear right away, wait a few minutes and select **Refresh** in the toolbar.
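For context (not part of the diff), a `--enable-telemetry` style flag is usually wired to Azure Monitor OpenTelemetry before the chat flow runs. A minimal sketch, assuming the `azure-monitor-opentelemetry` package and an Application Insights connection string in the environment; the flag handling and variable names are illustrative:

```python
# Illustrative sketch only -- one way a --enable-telemetry flag might wire up
# tracing. Assumes the azure-monitor-opentelemetry package and an Application
# Insights connection string in APPLICATIONINSIGHTS_CONNECTION_STRING.
import argparse
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

parser = argparse.ArgumentParser()
parser.add_argument("--query", default="What tents do you sell?")
parser.add_argument("--enable-telemetry", action="store_true")
args = parser.parse_args()

if args.enable_telemetry:
    # Export spans and logs to the Application Insights resource.
    configure_azure_monitor(
        connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
    )

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("chat_with_products"):
    # ... run the chat flow for args.query; spans emitted inside this block
    # appear under the traced operation in Application Insights.
    pass
```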
articles/ai-studio/tutorials/copilot-sdk-evaluate.md
4 additions & 2 deletions
@@ -85,7 +85,7 @@ In Part 1 of this tutorial series, you created an **.env** file that specifies t
1. In your project in Azure AI Foundry portal, select **Models + endpoints**.
1. Select **gpt-4o-mini**.
1. Select **Edit**.
-1. If you have quota to increase the **Tokens per Minute Rate Limit**, try increasing it to 30.
+1. If you have quota to increase the **Tokens per Minute Rate Limit**, try increasing it to 30 or above.
1. Select **Save and close**.

### Run the evaluation script
@@ -108,6 +108,8 @@ In Part 1 of this tutorial series, you created an **.env** file that specifies t
python evaluate.py
```

+Expect the evaluation to take a few minutes to complete.
+

### Interpret the evaluation output

In the console output, you see an answer for each question, followed by a table with summarized metrics. (You might see different columns in your output.)
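To illustrate what an evaluation script of this kind can look like (a sketch, not the tutorial's **evaluate.py**), the `azure-ai-evaluation` package provides AI-assisted evaluators and an `evaluate` entry point; the dataset path, deployment name, and evaluator selection below are assumptions:

```python
# Illustrative sketch only -- not the tutorial's evaluate.py. Assumes the
# azure-ai-evaluation package; file and deployment names are placeholders.
import os

from azure.ai.evaluation import GroundednessEvaluator, RelevanceEvaluator, evaluate

# Model that judges the responses (used by the AI-assisted evaluators).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o-mini",
}

result = evaluate(
    data="eval_dataset.jsonl",  # rows with query, response, and context columns
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "relevance": RelevanceEvaluator(model_config),
    },
)

# Aggregate metrics, similar to the summary table printed in the console output.
print(result["metrics"])
```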
@@ -160,7 +162,7 @@ For more information about evaluation results in Azure AI Foundry portal, see [H
Notice that the responses are not well grounded. In many cases, the model replies with a question rather than an answer. This is a result of the prompt template instructions.

-* In your **assets/grounded_chat.prompty** file, find the sentence "If the question is related to outdoor/camping gear and clothing but vague, ask for clarifying questions instead of referencing documents."
+* In your **assets/grounded_chat.prompty** file, find the sentence "If the question is not related to outdoor/camping gear and clothing, just say 'Sorry, I only can answer queries related to outdoor/camping gear and clothing. So, how can I help?'"
* Change the sentence to "If the question is related to outdoor/camping gear and clothing but vague, try to answer based on the reference documents, then ask for clarifying questions."