Commit 7bcb935

Merge pull request #1428 from sdgilley/sdg-release-update-code-qs-tutorial

shuffle code, add sections

2 parents: 0e36fd6 + cc459e9

File tree

2 files changed: +15, -7 lines

articles/ai-studio/tutorials/copilot-sdk-create-resources.md

Lines changed: 6 additions & 0 deletions

@@ -101,6 +101,12 @@ In the Azure AI Studio, check for an Azure AI Search connected resource.
     pip install azure-ai-projects azure-ai-inference azure-identity azure-search-documents pandas python-dotenv
     ```
 
+### Create helper script
+
+Create a folder for your work, then create a file called **config.py** in that folder. This helper script is used in the next two parts of the tutorial series. Add the following code:
+
+:::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/config.py":::
+
 ## Deploy models
 
 You need two models to build a RAG-based chat app: an Azure OpenAI chat model (`gpt-4o-mini`) and an Azure OpenAI embedding model (`text-embedding-ada-002`). Deploy these models in your Azure AI Studio project, using this set of steps for each model.

articles/ai-studio/tutorials/copilot-sdk-evaluate.md

Lines changed: 9 additions & 7 deletions

@@ -45,7 +45,7 @@ Use the following evaluation dataset, which contains example questions and expec
 
 :::code language="jsonl" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/assets/chat_eval_data.jsonl":::
 
-### Evaluate with Azure AI evaluators
+## Evaluate with Azure AI evaluators
 
 Now define an evaluation script that will:
@@ -61,17 +61,19 @@ The script allows you to review the results locally, by outputting the results i
 The script also logs the evaluation results to the cloud project so that you can compare evaluation runs in the UI.
 
 1. Create a file called **evaluate.py** in your main folder.
-1. Add the following code.
+1. Add the following code to import the required libraries, create a project client, and configure some settings:
 
-    :::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py":::
+    :::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py" id="imports_and_config":::
 
-    The main function at the end allows you to view the evaluation result locally, and gives you a link to the evaluation results in AI Studio.
+1. Add code to create a wrapper function that implements the evaluation interface for query and response evaluation:
 
-### Create helper script
+    :::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py" id="evaluate_wrapper":::
 
-The evaluation script uses a helper script to define the target function and run the evaluation. Create a file called **config.py** in your main folder. Add the following code:
+1. Finally, add code to run the evaluation, view the results locally, and get a link to the evaluation results in AI Studio:
+
+    :::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py" id="run_evaluation":::
 
-:::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/config.py":::
+
 
 ### Configure the evaluation model
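The `evaluate_wrapper` snippet itself is not reproduced in this diff. As a rough, hypothetical sketch of the pattern the step describes (adapting a chat function to the query-in, dict-out shape that evaluators call; the function names and the stub chat function below are illustrative, not the sample's code):

```python
# Hypothetical sketch of an evaluation wrapper; the sample's actual
# evaluate_wrapper and its chat function differ from this.
def make_eval_target(chat_fn):
    """Adapt chat_fn to the query-in / dict-out interface evaluators expect."""
    def eval_target(query: str) -> dict:
        reply = chat_fn(messages=[{"role": "user", "content": query}])
        # Return the fields the evaluators score: the answer text and any
        # retrieved grounding context (empty here if the chat_fn supplies none).
        return {
            "response": reply["message"]["content"],
            "context": reply.get("context", ""),
        }
    return eval_target


# Stub standing in for the tutorial's real chat function, for illustration.
def stub_chat(messages):
    return {"message": {"content": f"echo: {messages[-1]['content']}"}}


target = make_eval_target(stub_chat)
result = target("What tents do you sell?")
```

The point of the wrapper is separation of concerns: the chat app keeps its own message-based interface, while the evaluation run only ever sees a function of one query returning a dict of scoreable outputs.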
