
Commit 9bf53fc

address customer feedback and acrolinx
1 parent 129c5b7 commit 9bf53fc

File tree

1 file changed (+16 -16 lines)

articles/ai-studio/how-to/monitor-quality-safety.md

Lines changed: 16 additions & 16 deletions
@@ -8,7 +8,7 @@ ms.custom:
 - ignite-2023
 - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 7/31/2024
 ms.reviewer: alehughes
 reviewer: ahughes-msft
 ms.author: mopeakande
@@ -38,7 +38,7 @@ Integrations for monitoring a prompt flow deployment allow you to:
 
 Before following the steps in this article, make sure you have the following prerequisites:
 
-- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions aren't supported for this scenario. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
 
 - An [Azure AI Studio hub](create-azure-ai-resource.md).

@@ -152,7 +152,7 @@ In this section, you learn how to configure monitoring for your deployed prompt
 # [Studio](#tab/azure-studio)
 
 1. From the left navigation bar, go to **Components** > **Deployments**.
-1. Select the prompt flow deployment you just created.
+1. Select the prompt flow deployment that you created.
 1. Select **Enable** within the **Enable generation quality monitoring** box.
 
 :::image type="content" source="../media/deploy-monitor/monitor/deployment-page-highlight-monitoring.png" alt-text="Screenshot of the deployment page highlighting generation quality monitoring." lightbox = "../media/deploy-monitor/monitor/deployment-page-highlight-monitoring.png":::
@@ -204,7 +204,7 @@ credential = DefaultAzureCredential()
 # Update your azure resources details
 subscription_id = "INSERT YOUR SUBSCRIPTION ID"
 resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-workspace_name = "INSERT YOUR WORKSPACE NAME" # This is the same as your AI Studio project name
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
 endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
 deployment_name = "INSERT YOUR DEPLOYMENT NAME"
 aoai_deployment_name ="INSERT YOUR AOAI DEPLOYMENT NAME"
@@ -225,7 +225,7 @@ ml_client = MLClient(
     credential=credential,
     subscription_id=subscription_id,
     resource_group_name=resource_group,
-    workspace_name=workspace_name,
+    workspace_name=project_name,
 )
 
 spark_compute = ServerlessSparkCompute(instance_type="standard_e4s_v3", runtime_version="3.3")
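
Taken together, this rename means the SDK sample now passes the AI Studio project name through the client's `workspace_name` parameter. A minimal sketch of how the updated lines fit together, assembled from the two hunks above (the import lines and placeholder strings are assumptions for illustration, not part of this commit):

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Authenticate with the default Azure credential chain
credential = DefaultAzureCredential()

# Azure resource details (placeholder values for illustration)
subscription_id = "INSERT YOUR SUBSCRIPTION ID"
resource_group = "INSERT YOUR RESOURCE GROUP NAME"
project_name = "INSERT YOUR PROJECT NAME"  # same as your AI Studio project name

# The AI Studio project is addressed through the MLClient workspace_name parameter
ml_client = MLClient(
    credential=credential,
    subscription_id=subscription_id,
    resource_group_name=resource_group,
    workspace_name=project_name,
)
```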
@@ -259,7 +259,7 @@ production_data = LlmData(
 )
 
 gsq_signal = GenerationSafetyQualitySignal(
-    connection_id=f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}/connections/{aoai_connection_name}",
+    connection_id=f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{project_name}/connections/{aoai_connection_name}",
     metric_thresholds=generation_quality_thresholds,
     production_data=[production_data],
     sampling_rate=1.0,
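
The renamed variable also flows into the connection ID that the generation safety and quality (GSQ) signal references. A small sketch of just that string, assuming the same placeholder variables as above and a hypothetical `aoai_connection_name` value; it is identical to the f-string in the hunk, only split for readability:

```python
# Hypothetical connection name; use the Azure OpenAI connection defined in your project
aoai_connection_name = "Default_AzureOpenAI"

# ARM resource ID of the Azure OpenAI connection, scoped to the AI Studio project (workspace)
connection_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{project_name}"
    f"/connections/{aoai_connection_name}"
)
```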
@@ -297,7 +297,7 @@ ml_client.schedules.begin_create_or_update(model_monitor)
 
 ## Consume monitoring results
 
-After you've created your monitor, it will run daily to compute the token usage and generation quality metrics.
+After you create your monitor, it will run daily to compute the token usage and generation quality metrics.
 
 1. Go to the **Monitoring (preview)** tab from within the deployment to view the monitoring results. Here, you see an overview of monitoring results during the selected time window. You can use the date picker to change the time window of data you're monitoring. The following metrics are available in this overview:
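
Besides the **Monitoring (preview)** tab, the schedule that the monitor creates can also be inspected from the same SDK client. A small sketch, assuming the `ml_client` from above and a hypothetical schedule name (the monitor's actual name isn't shown in this diff):

```python
# Hypothetical name of the monitoring schedule created with begin_create_or_update
gsq_monitor_name = "gen_ai_monitor_generation_quality"

# List all schedules in the project, then fetch the monitor by name
for schedule in ml_client.schedules.list():
    print(schedule.name)

monitor_schedule = ml_client.schedules.get(name=gsq_monitor_name)
print(monitor_schedule.name)
```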

@@ -371,8 +371,8 @@ credential = DefaultAzureCredential()
 # Update your azure resources details
 subscription_id = "INSERT YOUR SUBSCRIPTION ID"
 resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-workspace_name = "INSERT YOUR WORKSPACE NAME" # This is the same as your AI Studio project name
-endpoint_name = "INSERT YOUR ENDPOINT NAME" This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
+endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
 deployment_name = "INSERT YOUR DEPLOYMENT NAME"
 
 # These variables can be renamed but it is not necessary
@@ -387,7 +387,7 @@ ml_client = MLClient(
     credential=credential,
     subscription_id=subscription_id,
     resource_group_name=resource_group,
-    workspace_name=workspace_name,
+    workspace_name=project_name,
 )
 
 spark_compute = ServerlessSparkCompute(instance_type="standard_e4s_v3", runtime_version="3.3")
@@ -448,7 +448,7 @@ credential = DefaultAzureCredential()
 # Update your azure resources details
 subscription_id = "INSERT YOUR SUBSCRIPTION ID"
 resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-workspace_name = "INSERT YOUR WORKSPACE NAME" # This is the same as your AI Studio project name
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
 endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
 deployment_name = "INSERT YOUR DEPLOYMENT NAME"
 aoai_deployment_name ="INSERT YOUR AOAI DEPLOYMENT NAME"
@@ -468,7 +468,7 @@ ml_client = MLClient(
     credential=credential,
     subscription_id=subscription_id,
     resource_group_name=resource_group,
-    workspace_name=workspace_name,
+    workspace_name=project_name,
 )
 
 spark_compute = ServerlessSparkCompute(instance_type="standard_e4s_v3", runtime_version="3.3")
@@ -502,7 +502,7 @@ production_data = LlmData(
 )
 
 gsq_signal = GenerationSafetyQualitySignal(
-    connection_id=f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}/connections/{aoai_connection_name}",
+    connection_id=f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{project_name}/connections/{aoai_connection_name}",
     metric_thresholds=generation_quality_thresholds,
     production_data=[production_data],
     sampling_rate=1.0,
@@ -533,9 +533,9 @@ model_monitor = MonitorSchedule(
 ml_client.schedules.begin_create_or_update(model_monitor)
 ```
 
-After you've created your monitor from the SDK, you can [consume the monitoring results](#consume-monitoring-results) in AI Studio.
+After you create your monitor from the SDK, you can [consume the monitoring results](#consume-monitoring-results) in AI Studio.
 
 ## Related content
 
-- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md)
-- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
+- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md).
+- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml).
