In this quickstart, we walk you through setting up your local development environment with the prompt flow SDK. We write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the outputs of the LLM.
@@ -62,9 +63,9 @@ To grant yourself access to the Azure AI Services resource that you're using:
1. Continue through the wizard and select **Review + assign** to add the role assignment.
-## Install the Azure CLI and login
+## Install the Azure CLI and sign in
-Now we install the Azure CLI and login from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
+You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
In most cases you can install the Azure CLI from your terminal using the following command:
You can follow the instructions in [How to install the Azure CLI](/cli/azure/install-azure-cli) if these commands don't work for your particular operating system or setup.
-After you install the Azure CLI, login using the ``az login`` command and sign-in using the browser:
+After you install the Azure CLI, run the ``az login`` command and sign in using the browser:
```
az login
```
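After signing in, app code can reuse those credentials through the `azure-identity` package instead of an API key. Here is a minimal sketch of that pattern; the environment variable name and API version below are placeholders rather than the quickstart's exact values:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# DefaultAzureCredential picks up the Azure CLI sign-in you just completed.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed environment variable name
    azure_ad_token_provider=token_provider,
    api_version="2024-02-15-preview",  # placeholder API version
)
```

Using a token provider like this means no keys need to be stored in the project; the client requests a fresh Microsoft Entra token whenever it calls the service.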
@@ -131,7 +132,7 @@ source .venv/bin/activate
---
-Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you'll be using the Python interpreter contained in the ```.venv``` folder of your application.
+Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you then use the Python interpreter contained in the ```.venv``` folder of your application.
> [!NOTE]
> You can use the ```deactivate``` command to exit the Python virtual environment, and later reactivate it when needed.
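If you want to confirm which interpreter is active, a quick check (not part of the quickstart itself) is:

```python
import sys

# With the virtual environment activated, this path points into the .venv folder.
print(sys.executable)
```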
@@ -181,7 +182,7 @@ Your AI services endpoint and deployment name are required to call the Azure Ope
## Create a basic chat prompt and app
-First create a prompt template file, for this we'll use **Prompty** which is the prompt template format supported by prompt flow.
+First create a **Prompty** file, which is the prompt template format supported by prompt flow.
Create a ```chat.prompty``` file and copy the following code into it:
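The Prompty file holds the prompt template and model configuration. As a rough sketch of how app code can then load and run it with the prompt flow SDK (assuming ```chat.prompty``` defines a ```question``` input; the sample question is illustrative), it might look like this:

```python
from promptflow.core import Prompty

# Load the Prompty file as a callable flow.
chat_flow = Prompty.load(source="chat.prompty")

# Call the flow like a function; the keyword argument must match an input
# defined in the Prompty file (a "question" input is assumed here).
result = chat_flow(question="What can you tell me about your tents?")
print(result)
```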
@@ -350,4 +351,4 @@ For more information on how to use prompt flow evaluators, including how to make
## Next step
> [!div class="nextstepaction"]
-> [Augment the model with data for retrieval augmented generation (RAG)](../tutorials/copilot-sdk-build-rag.md)
+> [Add data and use retrieval augmented generation (RAG) to build a copilot](../tutorials/copilot-sdk-build-rag.md)
articles/ai-studio/tutorials/copilot-sdk-build-rag.md (11 additions, 8 deletions)
@@ -5,7 +5,7 @@ description: Learn how to build a RAG-based copilot using the prompt flow SDK.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: tutorial
-ms.date: 7/18/2024
+ms.date: 8/6/2024
ms.reviewer: lebaro
ms.author: sgilley
author: sdgilley
@@ -19,9 +19,9 @@ In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo
This tutorial is part one of a two-part tutorial.
> [!TIP]
-> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow).
+> Be sure to set aside enough time to complete the prerequisites before starting this tutorial. If you're new to Azure AI Studio, you might need to spend additional time to get familiar with the platform.

-This part one shows you how to enhance a basic chat application by adding retrieval augmented generation (RAG) to ground the responses in your custom data.
+Part one shows you how to enhance a basic chat application by adding [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) to ground the responses in your custom data.
In this part one, you learn how to:
@@ -34,14 +34,15 @@ In this part one, you learn how to:
## Prerequisites
+> [!IMPORTANT]
+> You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section.
+
- You need to complete the [Build a custom chat app in Python using the prompt flow SDK quickstart](../quickstarts/get-started-code.md) to set up your environment.
> [!IMPORTANT]
> This tutorial builds on the code and environment you set up in the quickstart.
-- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Clone the repository or [download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
-
-- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section.
+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
-For the RAG capability, we need to be able to embed the search query to search the Azure AI Search index we create.
+For the [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) capability, we need to be able to embed the search query to search the Azure AI Search index we create.
1. Deploy an Azure OpenAI embedding model. Follow the [deploy Azure OpenAI models guide](../how-to/deploy-models-openai.md) and deploy the **text-embedding-ada-002** model. Use the same **AIServices** or **Azure OpenAI** connection that you used [to deploy the chat model](../quickstarts/get-started-playground.md#deploy-a-chat-model).
2. Add embedding model environment variables in your *.env* file. For the *AZURE_OPENAI_EMBEDDING_DEPLOYMENT* value, enter the name of the embedding model that you deployed.
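As a hedged sketch of how the search query can then be embedded with the deployed model, the Azure OpenAI client can be used as follows. The endpoint environment variable, API version, and sample query are assumptions rather than the tutorial's exact code:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed environment variable name
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    ),
    api_version="2024-02-15-preview",  # placeholder API version
)

# Embed the user's search query with the deployed embedding model.
response = client.embeddings.create(
    model=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"],
    input="What tents can you recommend for camping in the rain?",
)
query_vector = response.data[0].embedding
```

The resulting vector is what gets sent to the search index to find the most relevant product documents.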
@@ -119,14 +120,16 @@ For the RAG capability, we need to be able to embed the search query to search t
For more information about the embedding model, see the [Azure OpenAI Service embeddings documentation](../../ai-services/openai/how-to/embeddings.md).
+
## Create an Azure AI Search index
The goal with this RAG-based application is to ground the model responses in your custom data. You use an Azure AI Search index that stores vectorized data from the embeddings model. The search index is used to retrieve relevant documents based on the user's question.
You need an Azure AI Search service and connection in order to create a search index.
> [!NOTE]
-> Creating an Azure AI Search service and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page, to confirm cost before creating the resource.
+> Creating an [Azure AI Search service](../../search/index.yml) and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page to confirm the cost before creating the resource.
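To make the retrieval step concrete, here is a hedged sketch of querying such an index with the `azure-search-documents` package. The index name, vector field name, document field, environment variable name, and placeholder embedding are assumptions for illustration, not the tutorial's exact code:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],  # assumed environment variable name
    index_name="contoso-products-index",           # assumed index name
    credential=DefaultAzureCredential(),
)

query_vector = [0.0] * 1536  # placeholder; normally produced by the embedding model

# Hybrid search: combine the text query with the embedded query vector.
results = search_client.search(
    search_text="What tents can you recommend?",
    vector_queries=[
        VectorizedQuery(
            vector=query_vector,
            k_nearest_neighbors=3,
            fields="contentVector",  # assumed vector field name
        )
    ],
    top=3,
)

for doc in results:
    print(doc["title"])  # assumed document field
```

The retrieved documents are then passed to the chat prompt as grounding context, which is the core of the RAG pattern.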
articles/ai-studio/tutorials/copilot-sdk-evaluate-deploy.md (4 additions, 7 deletions)
@@ -5,7 +5,7 @@ description: Evaluate and deploy a RAG-based copilot with the prompt flow SDK. T
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: tutorial
-ms.date: 7/18/2024
+ms.date: 8/6/2024
ms.reviewer: lebaro
ms.author: sgilley
author: sdgilley
@@ -18,9 +18,6 @@ In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo
This tutorial is part two of a two-part tutorial.
-> [!TIP]
-> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow).
-
In this part two, you learn how to:
> [!div class="checklist"]
@@ -69,7 +66,7 @@ Now define an evaluation script that will:
- Load the sample `.jsonl` dataset.
- Generate a target function wrapper around our copilot logic.
- Run the evaluation, which takes the target function and merges the evaluation dataset with the responses from the copilot.
-- Generate a set of GPT-assisted metrics (Relevance, Groundedness, and Coherence) to evaluate the quality of the copilot responses.
+- Generate a set of GPT-assisted metrics (relevance, groundedness, and coherence) to evaluate the quality of the copilot responses.
- Output the results locally and log the results to the cloud project.
The script allows you to review the results locally by outputting them to the command line and to a JSON file.
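Here is a hedged sketch of what such a script can look like with the prompt flow evaluation package. The copilot wrapper import, dataset file name, deployment variable names, and output path are assumptions for illustration rather than the tutorial's exact code:

```python
import os

from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import (
    CoherenceEvaluator,
    GroundednessEvaluator,
    RelevanceEvaluator,
)

# Model configuration used by the GPT-assisted evaluators.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],  # assumed variable name
)


def copilot_wrapper(question: str) -> dict:
    """Target function: call the copilot and return the fields the evaluators expect."""
    # Hypothetical import; use the module that holds your copilot logic.
    from copilot_flow.copilot import get_chat_response

    response = get_chat_response(question)
    return {"answer": response["answer"], "context": response["context"]}


result = evaluate(
    target=copilot_wrapper,
    data="eval_dataset.jsonl",  # assumed dataset file name
    evaluators={
        "relevance": RelevanceEvaluator(model_config),
        "groundedness": GroundednessEvaluator(model_config),
        "coherence": CoherenceEvaluator(model_config),
    },
    output_path="./eval_results.json",  # write results to a local JSON file
)
print(result)
```

The sketch only covers the local flow; logging results to the cloud project additionally requires passing your Azure AI project details to the evaluation, as the tutorial describes.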
@@ -327,5 +324,5 @@ To avoid incurring unnecessary Azure costs, you should delete the resources you
## Related content
-> [!div class="nextstepaction"]
-> [Learn more about prompt flow](../how-to/prompt-flow.md)
+- [Learn more about prompt flow](../how-to/prompt-flow.md)
+- For a sample copilot application that implements RAG, see [Azure-Samples/rag-data-openai-python-promptflow](https://github.com/Azure-Samples/rag-data-openai-python-promptflow)