
Commit 065ef02
general cleanup
1 parent 7ae5c9f commit 065ef02

File tree

7 files changed: +39 -41 lines changed


README.md

Lines changed: 17 additions & 15 deletions
@@ -56,7 +56,9 @@ If you intend to develop your own code following this sample, we recommend you u
 ```bash
 cd src
 ```
+
 - Next, install the requirements in your venv. Note: this may take several minutes the first time you install.
+
 ``` bash
 pip install -r requirements.txt
 ```
@@ -65,6 +67,7 @@ If you intend to develop your own code following this sample, we recommend you u
 - Note: if you are running from within a Codespace or the curated VS Code cloud container, you will need to use `az login --use-device-code`

 ## Step 2: Provision or reference Azure AI resources
+
 Use the provision script to provision new or reference existing Azure AI resources to use in your application.

 We have a process to help you easily provision the resources you need to run this sample. You can either create new resources, or specify existing resources.
@@ -81,19 +84,19 @@ You can find the details you need for existing resources in the top-right projec
 You can also try running our experimental script to check quota in your subscription. You can modify it to fit your requirements.

 > [!NOTE]
-> This script is intended to help understand quota, but it might provide numbers that are not accurate. The Azure AI Studio or the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits) would be the source of truth.
+> This script is intended to help understand quota, but it might provide numbers that are not accurate. The Azure AI Studio or the [Azure OpenAI portal](https://oai.azure.com/), and our [docs of quota limits](https://learn.microsoft.com/azure/ai-services/openai/quotas-limits) would be the source of truth.

 ```bash
 python provisioning/check_quota.py --subscription-id <your-subscription-id>
 ```

-2. **Open the _provision.yaml_ file** that is located in the `provisioning` directory
+1. **Open the *provision.yaml* file** that is located in the `provisioning` directory
 1. There are notes in the file to help you.
-3. **Input all your desired fields**
+1. **Input all your desired fields**
 1. Note that you can either specify existing resources, or your desired names for new resources. If you are specifying existing resources, you can find the details you need in the Azure AI Studio project view.
 1. Make sure you select a location and deployments you have quota for.
-1. **Run the _provision.py_ script**
-1. If you want to see the provisioning plan (what _would_ be provisioned given your `provision.yaml` specifications, without actually provisioning anything), run the below script with the `--show-only` flag.
+1. **Run the *provision.py* script**
+1. If you want to see the provisioning plan (what *would* be provisioned given your `provision.yaml` specifications, without actually provisioning anything), run the below script with the `--show-only` flag.
 1. This script will output a .env in your src/ directory with all of your specified resources, which will be referenced by the rest of the sample code.

 ``` bash
@@ -103,7 +106,6 @@ You can find the details you need for existing resources in the top-right projec

 The script will check whether the resources you specified exist, otherwise it will create them. It will then construct a .env for you that references the provisioned or referenced resources, including your keys. Once the provisioning is complete, you'll be ready to move to step 3.

-
 ## Step 3: Explore prompts

 This sample repository contains a sample chat prompty file you can explore. This will let you verify your environment is set up to call your model deployments.
@@ -145,24 +147,24 @@ AZUREAI_SEARCH_INDEX_NAME=<index-name>

 ## Step 5: Develop custom code

-This sample includes custom code to add retrieval augmented generation (RAG) to our application.
+This sample includes custom code to add retrieval augmented generation (RAG) capabilities to a basic chat application.

-The code follows the following general logic:
+The code implements the following general logic:

-1. Generates a search query based on user query intent and any chat history
-1. Uses an embedding model to embed the query
-1. Retrieves relevant documents from the search index, given the query
-1. Passes the relevant context to the Azure Open AI chat completion model
-1. Returns the response from the Azure Open AI model
+1. Generate a search query based on user query intent and any chat history
+1. Use an embedding model to embed the query
+1. Retrieve relevant documents from the search index, given the query
+1. Pass the relevant context to the Azure Open AI chat completion model
+1. Return the response from the Azure Open AI model

 You can modify this logic as appropriate to fit your use case.

 ## Step 6: Use prompt flow to test copilot code

-Use the built-in prompt flow front end to locally serve your application, and validate your copilot performs as expected on sample inputs.
+Use prompt flow's testing capability to validate how your copilot performs as expected on sample inputs.

 ``` bash
-pf flow test --flow ./copilot_flow --inputs chat_input="how much for the Trailwalker shoes cost?"
+pf flow test --flow ./copilot_flow --inputs chat_input="how much do the Trailwalker shoes cost?"
 ```

 You can use the `--ui` flag to test interactively with a sample chat experience. Prompt flow locally serves a front end integrated with your code.
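The Step 5 hunk above describes a five-step RAG flow. A minimal runnable sketch of that shape follows; all helper names here are illustrative stand-ins (not the sample's actual functions), and the embedding, search, and chat-completion calls are stubbed rather than real Azure calls:

```python
from typing import Callable

def rag_reply(
    chat_input: str,
    chat_history: list,
    make_search_query: Callable[[str, list], str],  # step 1: intent -> search query
    embed: Callable[[str], list],                   # step 2: query -> vector
    retrieve: Callable[[list], list],               # step 3: vector -> documents
    chat_complete: Callable[[str, list], str],      # steps 4-5: context -> reply
) -> str:
    query = make_search_query(chat_input, chat_history)
    vector = embed(query)
    documents = retrieve(vector)
    return chat_complete(chat_input, documents)

# Stubbed usage: real code would call Azure OpenAI and Azure AI Search here.
reply = rag_reply(
    "how much do the Trailwalker shoes cost?",
    [],
    make_search_query=lambda q, h: q,
    embed=lambda q: [0.0] * 3,
    retrieve=lambda v: ["TrailWalker Hiking Shoes: $110"],
    chat_complete=lambda q, docs: f"Answer grounded in {len(docs)} document(s)",
)
print(reply)
```

Injecting the five steps as callables makes each one swappable, which mirrors the README's note that you can modify this logic to fit your use case.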

src/copilot_flow/chat.prompty

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 ---
 name: Chat Prompt
-description: A prompty that uses the chat API to respond to queries
+description: A prompty that uses the chat API to respond to queries grounded in relevant documents
 model:
   api: chat
   configuration:

src/copilot_flow/copilot.py

Lines changed: 11 additions & 13 deletions
@@ -1,31 +1,29 @@
 # ---------------------------------------------------------
 # Copyright (c) Microsoft Corporation. All rights reserved.
 # ---------------------------------------------------------
-from typing import TypedDict
-
-from openai import AzureOpenAI
-
 import os
+# set environment variables before importing any other code
+from dotenv import load_dotenv
+load_dotenv()
+
 from pathlib import Path

-from promptflow.tracing import trace
+from typing import TypedDict
+
+from openai import AzureOpenAI

 from azure.core.credentials import AzureKeyCredential
 from azure.search.documents import SearchClient
 from azure.search.documents.models import VectorizedQuery

-import os
-# set environment variables before importing any other code
-from dotenv import load_dotenv
-load_dotenv()
+from promptflow.tracing import trace
+from promptflow.core import Prompty, AzureOpenAIModelConfiguration

 class ChatResponse(TypedDict):
     context: dict
     reply: str

-from promptflow.core import tool, Prompty, AzureOpenAIModelConfiguration
-
-@tool
+@trace
 def get_chat_response(chat_input: str, chat_history: list = []) -> ChatResponse:

     model_config = AzureOpenAIModelConfiguration(
@@ -75,7 +73,7 @@ def get_documents(search_query: str, num_docs=3):

     index_name = os.environ["AZUREAI_SEARCH_INDEX_NAME"]

-    # retrieve documents relevant to the user's question from Cognitive Search
+    # retrieve documents relevant to the user's query from Azure AI Search index
     search_client = SearchClient(
         endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
         credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
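The first copilot.py hunk moves `load_dotenv()` ahead of the other imports, per its comment "set environment variables before importing any other code". A small self-contained illustration of why that ordering matters when a module reads the environment at import time; the module body and variable name below are made up for the demonstration:

```python
import os
import types

# A made-up module body that reads the environment at import time,
# as some libraries do when configuring themselves on import.
module_src = "import os\nENDPOINT = os.environ.get('SAMPLE_DEMO_ENDPOINT', 'MISSING')"

# Imported before the variable exists: the fallback value is baked in.
early = types.ModuleType("early")
exec(module_src, early.__dict__)

# Setting variables first (what load_dotenv() does) gives the real value.
os.environ["SAMPLE_DEMO_ENDPOINT"] = "https://example.invalid"
late = types.ModuleType("late")
exec(module_src, late.__dict__)

print(early.ENDPOINT, late.ENDPOINT)
```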

src/deployment/deploy.py

Lines changed: 4 additions & 6 deletions
@@ -57,7 +57,7 @@ def deploy_flow(endpoint_name, deployment_name):
         ),
         # instance type comes with associated cost.
         # make sure you have quota for the specified instance type
-        # See more details here: https://learn.microsoft.com/en-us/azure/machine-learning/reference-managed-online-endpoints-vm-sku-list?view=azureml-api-2
+        # See more details here: https://learn.microsoft.com/azure/machine-learning/reference-managed-online-endpoints-vm-sku-list
         instance_type="Standard_DS3_v2",
         instance_count=1,
         environment_variables={
@@ -77,26 +77,24 @@ def deploy_flow(endpoint_name, deployment_name):
     )

     # 1. create endpoint
-    created_endpoint = client.begin_create_or_update(endpoint).result()  # result() means we wait on this to complete
+    client.begin_create_or_update(endpoint).result()  # result() means we wait on this to complete

     # 2. create deployment
-    created_deployment = client.begin_create_or_update(deployment).result()
+    client.begin_create_or_update(deployment).result()

     # 3. update endpoint traffic for the deployment
     endpoint.traffic = {deployment_name: 100}  # 100% of traffic
     client.begin_create_or_update(endpoint).result()

     output_deployment_details(client, endpoint_name, deployment_name)

-    return created_endpoint, created_deployment
-
 def output_deployment_details(client, endpoint_name, deployment_name) -> str:
     print("\n ~~~Deployment details~~~")
     print(f"Your online endpoint name is: {endpoint_name}")
     print(f"Your deployment name is: {deployment_name}")

     print("\n ~~~Test in the Azure AI Studio~~~")
-    print(f"Follow this link to your deployment in the Azure AI Studio:")
+    print("\n Follow this link to your deployment in the Azure AI Studio:")
     print(get_ai_studio_url_for_deploy(client=client, endpoint_name=endpoint_name, deployment_name=deployment_name))

 if __name__ == "__main__":
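The deploy.py hunk routes 100% of endpoint traffic to the new deployment via `endpoint.traffic = {deployment_name: 100}`. A hypothetical pre-flight helper, not part of the sample, that checks a traffic map fully allocates traffic before submitting the update:

```python
def check_full_allocation(traffic: dict) -> dict:
    """Hypothetical sanity check: confirm a traffic map like the one
    assigned to endpoint.traffic allocates exactly 100% in total."""
    total = sum(traffic.values())
    if total != 100:
        raise ValueError(f"traffic sums to {total}%, expected 100%")
    return traffic

# A 90/10 split across two hypothetical deployments passes the check.
blue_green = check_full_allocation({"blue": 90, "green": 10})
print(blue_green)
```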

src/evaluation/evaluate.py

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ def load_jsonl(path):
     with open(path, "r") as f:
         return [json.loads(line) for line in f.readlines()]

-def copilot_qna(*, chat_input, **kwargs):
+def copilot_wrapper(*, chat_input, **kwargs):
     from copilot_flow.copilot import get_chat_response

     result = get_chat_response(chat_input)
@@ -48,7 +48,7 @@ def run_evaluation(name, dataset_path):
     output_path = "./evaluation/eval_results/eval_results.jsonl"

     result = evaluate(
-        target=copilot_qna,
+        target=copilot_wrapper,
         evaluation_name=name,
         data=data_path,
         evaluators={
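The evaluate.py hunk renames the evaluation target to `copilot_wrapper`, a keyword-only adapter that lets `evaluate` call the copilot with dataset rows. A runnable sketch of that adapter pattern with a stubbed chat function; the returned keys (`answer`, `context`) are assumptions, since this diff does not show what the sample's wrapper returns:

```python
def make_eval_target(chat_fn):
    """Adapt a chat function into a keyword-only evaluation target:
    dataset columns arrive as keyword arguments, and the returned
    dict's keys become columns the evaluators can read."""
    def target(*, chat_input, **kwargs):
        result = chat_fn(chat_input)
        return {"answer": result["reply"], "context": result["context"]}
    return target

# Stub standing in for copilot_flow.copilot.get_chat_response.
def fake_chat(chat_input):
    return {"reply": f"echo: {chat_input}", "context": {"docs": []}}

row = make_eval_target(fake_chat)(chat_input="Is France in Europe?")
print(row["answer"])
```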

src/evaluation/evaluation_dataset.jsonl

Lines changed: 3 additions & 3 deletions
@@ -2,12 +2,12 @@
 {"chat_input": "Which camping table holds the most weight?", "truth": "The Adventure Dining Table has a higher weight capacity than all of the other camping tables mentioned"}
 {"chat_input": "How much do the TrailWalker Hiking Shoes cost? ", "truth": "The Trailewalker Hiking Shoes are priced at $110"}
 {"chat_input": "What is the proper care for trailwalker hiking shoes? ", "truth": "After each use, remove any dirt or debris by brushing or wiping the shoes with a damp cloth."}
-{"chat_input": "What brand is for TrailMaster tent? ", "truth": "OutdoorLiving"}
+{"chat_input": "What brand is TrailMaster tent? ", "truth": "OutdoorLiving"}
 {"chat_input": "How do I carry the TrailMaster tent around? ", "truth": " Carry bag included for convenient storage and transportation"}
 {"chat_input": "What is the floor area for Floor Area? ", "truth": "80 square feet"}
 {"chat_input": "What is the material for TrailBlaze Hiking Pants?", "truth": "Made of high-quality nylon fabric"}
 {"chat_input": "What color does TrailBlaze Hiking Pants come in?", "truth": "Khaki"}
-{"chat_input": "Cant he warrenty for TrailBlaze pants be transfered? ", "truth": "The warranty is non-transferable and applies only to the original purchaser of the TrailBlaze Hiking Pants. It is valid only when the product is purchased from an authorized retailer."}
-{"chat_input": "How long are the TrailBlaze pants under warrenty for? ", "truth": " The TrailBlaze Hiking Pants are backed by a 1-year limited warranty from the date of purchase."}
+{"chat_input": "Can the warrenty for TrailBlaze pants be transfered? ", "truth": "The warranty is non-transferable and applies only to the original purchaser of the TrailBlaze Hiking Pants. It is valid only when the product is purchased from an authorized retailer."}
+{"chat_input": "How long are the TrailBlaze pants under warranty for? ", "truth": " The TrailBlaze Hiking Pants are backed by a 1-year limited warranty from the date of purchase."}
 {"chat_input": "What is the material for PowerBurner Camping Stove? ", "truth": "Stainless Steel"}
 {"chat_input": "Is France in Europe?", "truth": "Sorry, I can only queries related to outdoor/camping gear and equipment"}

src/indexing/build_index.py

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ def build_aisearch_index(index_name, path_to_data):
         token_overlap_across_chunks = 0, # Optional field - Number of tokens to overlap between chunks
     )

-    # register the index so that it shows up in the project
+    # register the index so that it shows up in the cloud project
     client.indexes.create_or_update(Index(name=index_name, path=index_path))

     print(f"Local Path: {index_path}")
