Commit 5df798f

add log eval to cloud project capability
1 parent 1d6cf11 commit 5df798f

2 files changed: +10 -5 lines changed

2 files changed

+10
-5
lines changed

README.md

Lines changed: 4 additions & 5 deletions
@@ -180,20 +180,19 @@ Evaluation relies on an evaluation dataset. In this case, we have an evaluation
 
 The following script streamlines the evaluation process. Update the evaluation code to set your desired evaluation metrics, or optionally evaluate on custom metrics. You can also change where the evaluation results get written to.
 
-We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models. To enable logging to your cloud project, add your configurations (which you can find in your .env file) and run the following command:
+We recommend viewing your evaluation results in the Azure AI Studio, to compare evaluation runs with different prompts, or even different models.
+Note that this will configure your project with a Cosmos DB account for logging. It may take several minutes the first time you run an evaluation.
 
-```bash
-pf config set trace.destination=azureml://subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<project_name>
-```
 
-This will configure your project with a Cosmos DB account for logging. Be mindful this has associated costs.
 
 ``` bash
 python -m evaluation.evaluate --evaluation-name <evaluation_name>
 ```
 
 Specify the `--dataset-path` argument if you want to provide a different evaluation dataset.
 
+If you do not want to log evaluation results to your AI Studio project, you can modify the _evaluation.py_ script to not pass the azure_ai_project parameter.
+
 ## Step 8: Deploy application to AI Studio
 
 Use the deployment script to deploy your application to Azure AI Studio. This will deploy your app to a managed endpoint in Azure, that you can test, integrate into a front end application, or share with others.
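For quick reference, a usage sketch of the evaluation command shown in the hunk above, including the optional `--dataset-path` flag the README mentions; the evaluation name and dataset path below are hypothetical placeholders, not values from this commit:

```bash
# Evaluate with the repo's default dataset (evaluation name is a hypothetical placeholder)
python -m evaluation.evaluate --evaluation-name baseline-eval

# Evaluate against a different dataset (path is a hypothetical placeholder)
python -m evaluation.evaluate --evaluation-name baseline-eval --dataset-path data/custom_eval.jsonl
```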

src/evaluation/evaluate.py

Lines changed: 6 additions & 0 deletions
@@ -60,6 +60,12 @@ def run_evaluation(name, dataset_path):
             "relevance": {"question": "${data.chat_input}"},
             "coherence": {"question": "${data.chat_input}"},
         },
+        # to log evaluation to the cloud AI Studio project
+        azure_ai_project = {
+            "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
+            "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
+            "project_name": os.environ["AZUREAI_PROJECT_NAME"]
+        }
     )
 
     tabular_result = pd.DataFrame(result.get("rows"))
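The README hunk above notes that cloud logging can be skipped by not passing azure_ai_project. A minimal sketch of how that opt-out could be made switchable in _evaluate.py_, assuming a hypothetical LOG_EVAL_TO_CLOUD environment flag and helper name that are not part of this commit:

```python
import os


def cloud_logging_kwargs():
    """Return the azure_ai_project argument for the evaluation call, or an empty
    dict to skip logging to the AI Studio project.

    The LOG_EVAL_TO_CLOUD flag and this helper are hypothetical illustrations;
    the three environment variables are the ones referenced by this commit.
    """
    if os.environ.get("LOG_EVAL_TO_CLOUD", "true").lower() != "true":
        return {}
    return {
        "azure_ai_project": {
            "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
            "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
            "project_name": os.environ["AZUREAI_PROJECT_NAME"],
        }
    }
```

The returned mapping could then be unpacked into the existing evaluation call with `**cloud_logging_kwargs()`, so the same script runs with or without an AI Studio project configured.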
