Commit 8de0bb7

fixed acrylinx scores
1 parent d185223 commit 8de0bb7

File tree

1 file changed: +11 -11 lines changed

articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 11 additions & 11 deletions
@@ -122,7 +122,7 @@ For evaluators that support conversations, you can provide `conversation` as inp
 }
 ```
 
-Our evaluators will understand that the first turn of the conversation provides valid `query` from `user`, `context` from `assistant`, and `response` from `assistant` in the query-response format. Conversations are then evaluated per turn and results are aggregated over all turns for a conversation score.
+Our evaluators understand that the first turn of the conversation provides valid `query` from `user`, `context` from `assistant`, and `response` from `assistant` in the query-response format. Conversations are then evaluated per turn and results are aggregated over all turns for a conversation score.
 
 > [!NOTE]
 > Note that in the second turn, even if `context` is `null` or a missing key, it will be interpreted as an empty string instead of erroring out, which might lead to misleading results. We strongly recommend that you validate your evaluation data to comply with the data requirements.
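For quick reference, the per-turn shape this hunk describes looks roughly like the sketch below. It is an illustration only: the tent-related strings are placeholders, and the `messages`/`context` keys follow the conversation format the surrounding article documents.

```python
# Hedged sketch of a two-turn conversation input; values are illustrative.
conversation = {
    "messages": [
        # Turn 1: query from the user, context and response from the assistant.
        {"role": "user", "content": "Which tent is the most waterproof?"},
        {
            "role": "assistant",
            "content": "The Alpine Explorer Tent is the most waterproof.",
            "context": "From the product catalog: the Alpine Explorer Tent has the highest rainfly rating.",
        },
        # Turn 2: `context` is omitted here; per the note above, it is read as an empty string.
        {"role": "user", "content": "How much does it cost?"},
        {"role": "assistant", "content": "The Alpine Explorer Tent costs $120."},
    ]
}
```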
@@ -500,7 +500,7 @@ After you spot-check your built-in or custom evaluators on a single row of data,
 
 If you want to enable logging and tracing to your Azure AI project for evaluation results, follow these steps:
 
-1. Make sure you are first logged in by running `az login`.
+1. Make sure you're first logged in by running `az login`.
 2. Install the following sub-package:
 
 ```python
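For context on where these steps lead, a minimal sketch of logging evaluation results to the project is shown below. It assumes the `azure_ai_project` parameter of `evaluate()` from the `azure-ai-evaluation` package; all values are placeholders.

```python
from azure.ai.evaluation import evaluate, F1ScoreEvaluator

# Placeholder project details; with these supplied, results are logged to the Azure AI project.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

result = evaluate(
    data="evaluate_test_data.jsonl",  # JSONL dataset with response and ground_truth columns
    evaluators={"f1_score": F1ScoreEvaluator()},
    azure_ai_project=azure_ai_project,
)
```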
@@ -510,7 +510,7 @@ pip install azure-ai-evaluation[remote]
 
 4. Make sure you have `Storage Blob Data Contributor` role for the storage account.
 
-### Local evaluaton on datasets
+### Local evaluation on datasets
 In order to ensure the `evaluate()` can correctly parse the data, you must specify column mapping to map the column from the dataset to key words that are accepted by the evaluators. In this case, we specify the data mapping for `query`, `response`, and `context`.
 
 ```python
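The column mapping that the paragraph above refers to is passed through `evaluator_config`; the article's own code block is truncated in this diff view, so the following is only a sketch with placeholder paths, names, and model settings.

```python
from azure.ai.evaluation import evaluate, GroundednessEvaluator

# Placeholder model configuration for the AI-assisted evaluator.
model_config = {
    "azure_endpoint": "<endpoint>",
    "api_key": "<api-key>",
    "azure_deployment": "<deployment-name>",
}

# Map dataset columns to the `query`, `context`, and `response` inputs the evaluators expect.
result = evaluate(
    data="evaluate_test_data.jsonl",
    evaluators={"groundedness": GroundednessEvaluator(model_config=model_config)},
    evaluator_config={
        "default": {
            "column_mapping": {
                "query": "${data.query}",
                "context": "${data.context}",
                "response": "${data.response}",
            }
        }
    },
)
```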
@@ -669,7 +669,7 @@ result = evaluate(
 
 ## Cloud evaluation on test datasets
 
-After local evaluations of your generative AI applications, you may want to run evaluations in the cloud for pre-deployment testing and [continuously evaluate](https://aka.ms/GenAIMonitoringDoc) your applications for post-deployment monitoring. Azure AI Projects SDK offers such capabilities via a Python API and supports almost all of the features available in local evaluations. Follow the steps below to submit your evaluation to the cloud on your data using built-in or custom evaluators.
+After local evaluations of your generative AI applications, you may want to run evaluations in the cloud for pre-deployment testing, and [continuously evaluate](https://aka.ms/GenAIMonitoringDoc) your applications for post-deployment monitoring. Azure AI Projects SDK offers such capabilities via a Python API and supports almost all of the features available in local evaluations. Follow the steps below to submit your evaluation to the cloud on your data using built-in or custom evaluators.
 
 
 ### Prerequisites
@@ -680,7 +680,7 @@ After local evaluations of your generative AI applications, you may want to run
 
 - Azure OpenAI Deployment with GPT model supporting `chat completion`, for example `gpt-4`.
 - `Connection String` for Azure AI project to easily create `AIProjectClient` object. You can get the **Project connection string** under **Project details** from the project's **Overview** page.
-- Make sure you are first logged into your Azure subscription by running `az login`.
+- Make sure you're first logged into your Azure subscription by running `az login`.
 
 ### Installation Instructions
 
@@ -693,7 +693,7 @@ After local evaluations of your generative AI applications, you may want to run
 ```bash
 pip install azure-identity azure-ai-projects azure-ai-ml
 ```
-Optionally you can `pip install azure-ai-evaluation` if you want a code-first experience to fetch evaluator id for built-in evaluators in code.
+Optionally you can `pip install azure-ai-evaluation` if you want a code-first experience to fetch evaluator ID for built-in evaluators in code.
 
 Now you can define a client and a deployment which will be used to run your evaluations in the cloud:
 ```python
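The Python block the hunk opens is truncated in this diff view. As a sketch, the client definition presumably resembles the following, with `DefaultAzureCredential` assumed for authentication and the connection string taken from the project's **Overview** page.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Sketch of the client used to run cloud evaluations; the connection string is a placeholder.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<connection_string>",
)
```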
@@ -717,16 +717,16 @@ project_client = AIProjectClient.from_connection_string(
 
 ### Uploading evaluation data
 We provide two ways to register your data in Azure AI project required for evaluations in the cloud:
-1. **From SDK**: Upload new data from your local directory to your Azure AI project in the SDK, and fetch the dataset id as a result:
+1. **From SDK**: Upload new data from your local directory to your Azure AI project in the SDK, and fetch the dataset ID as a result:
 ```python
 data_id, _ = project_client.upload_file("./evaluate_test_data.jsonl")
 ```
 **From UI**: Alternatively, you can upload new data or update existing data versions by following the UI walkthrough under the **Data** tab of your Azure AI project.
 
 2. Given existing datasets uploaded to your Project:
-    - **From SDK**: if you already know the dataset name you created, construct the dataset id in this format: `/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<project-name>/data/<dataset-name>/versions/<version-number>`
+    - **From SDK**: if you already know the dataset name you created, construct the dataset ID in this format: `/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<project-name>/data/<dataset-name>/versions/<version-number>`
 
-    - **From UI**: If you don't know the dataset name, locate it under the **Data** tab of your Azure AI project and construct the dataset id as in the format above.
+    - **From UI**: If you don't know the dataset name, locate it under the **Data** tab of your Azure AI project and construct the dataset ID as in the format above.
 
 
 ### Specifying evaluators from Evaluator library
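Picking up the dataset ID format quoted in the "From SDK" bullet of the hunk above, constructing it in code is mechanical; all values below are placeholders.

```python
# Placeholder values; substitute your own subscription, resource group, project, and dataset.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
project_name = "<project-name>"
dataset_name = "<dataset-name>"
version = "<version-number>"

# Matches the format given in the "From SDK" bullet above.
data_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{project_name}"
    f"/data/{dataset_name}/versions/{version}"
)
```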
@@ -742,7 +742,7 @@ print("F1 Score evaluator id:", F1ScoreEvaluator.id)
 - **From UI**: Follows these steps to fetch evaluator ids after they're registered to your project:
 - Select **Evaluation** tab in your Azure AI project;
 - Select Evaluator library;
-- Select your evaluator(s) of choice by comparing the descriptions;
+- Select your evaluators of choice by comparing the descriptions;
 - Copy its "Asset ID" which will be your evaluator id, for example, `azureml://registries/azureml/models/Groundedness-Evaluator/versions/1`.
 
 #### Specifying custom evaluators
@@ -861,7 +861,7 @@ project_client = AIProjectClient.from_connection_string(
     conn_str="<connection_string>"
 )
 
-# Construct dataset id per the instruction
+# Construct dataset ID per the instruction
 data_id = "<dataset-id>"
 
 default_connection = project_client.connections.get_default(connection_type=ConnectionType.AZURE_OPEN_AI)
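To show where the `data_id` in this hunk is headed, here is a rough sketch of submitting the cloud evaluation. It assumes the `Evaluation`, `Dataset`, and `EvaluatorConfiguration` models and the `evaluations.create` operation from the Azure AI Projects SDK; all values are placeholders. AI-assisted evaluators would additionally take a model configuration derived from `default_connection`; the F1 evaluator is used here because it needs none.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import Evaluation, Dataset, EvaluatorConfiguration
from azure.ai.evaluation import F1ScoreEvaluator

# Placeholders mirroring the hunk above.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<connection_string>",
)
data_id = "<dataset-id>"

# Assumption: the cloud run is described by an Evaluation object that references the
# dataset by ID and lists evaluators by their registry IDs.
evaluation = Evaluation(
    display_name="Cloud evaluation",
    description="Evaluation of dataset",
    data=Dataset(id=data_id),
    evaluators={"f1_score": EvaluatorConfiguration(id=F1ScoreEvaluator.id)},
)

evaluation_response = project_client.evaluations.create(evaluation=evaluation)
print("Submitted cloud evaluation:", evaluation_response)
```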
