Azure OpenAI graders are a new set of evaluation tools in the Azure AI Foundry SDK that evaluate the performance of AI models and their outputs. These graders include:

- [Label grader](#label-grader)
- [String checker](#string-checker)
- [Text similarity](#text-similarity)
- [Python grader](#python-grader)

You can run graders locally or remotely. Each grader assesses specific aspects of AI models and their outputs.

## Label grader

`AzureOpenAILabelGrader` uses your custom prompt to instruct a model to classify outputs based on labels you define. It returns structured results with explanations for why each label was chosen.

> [!NOTE]
> We recommend using Azure OpenAI o3-mini for the best results.

Here's an example of `data.jsonl` used in the following code snippets:

```json
[
    {
        "query": "What is the importance of choosing the right provider in getting the most value out of your health insurance plan?",
        ...
    },
    ...
]
```

```python
from azure.ai.evaluation import AzureOpenAILabelGrader, evaluate

data_file_name = "data.jsonl"

# Evaluation criteria: Determine if the response column contains text that is "too short," "just right," or "too long," and pass if it is "just right."
```
For each set of sample data in the data file, an evaluation result of `True` or `False` is returned, signifying if the output matches the defined passing label. The `score` is `1.0` for `True` cases, and `0.0` for `False` cases. The reason the model provided the label for the data is in `content` under `outputs.label.sample`.
```python
'outputs.label.sample':
'content': '{"steps":[{"description":"Calculate the number of characters in the user\'s input including spaces.","conclusion":"The provided text contains 575 characters."},{"description":"Evaluate if the character count falls within the given ranges (greater than 600 too long, less than 500 too short, 500 to 600 just right).","conclusion":"The character count falls between 500 and 600, categorized as \'just right.\'"}],"result":"just right"}'}],
...
...
'outputs.label.label_result': 'pass',
'outputs.label.passed': True,
'outputs.label.score': 1.0
```
In addition to individual data evaluation results, the grader returns a metric indicating the overall dataset pass rate.
```python
'metrics': {'label.pass_rate': 0.2}, # 1 out of 5 in this case
```

## String checker

`AzureOpenAIStringCheckGrader` checks whether input text matches a reference value according to pattern-matching rules you define.

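The construction of this grader isn't included in the excerpt. A minimal sketch might look like the following; the `operation` and `reference` values and the `{{item.query}}` template are illustrative assumptions, and `model_config` is the configuration shown in the label grader sketch.

```python
from azure.ai.evaluation import AzureOpenAIStringCheckGrader, evaluate

# Sketch only: operation and reference are illustrative assumptions.
string_grader = AzureOpenAIStringCheckGrader(
    model_config=model_config,  # as defined in the label grader sketch
    input="{{item.query}}",     # column whose text is checked
    operation="like",           # pattern-matching rule, for example eq, ne, like, ilike
    reference="What is",        # value or pattern to match against
    name="string",              # becomes the 'string' prefix in the output keys
)

string_grader_evaluation = evaluate(
    data="data.jsonl",
    evaluators={"string": string_grader},
)
```
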
For each set of sample data in the data file, an evaluation result of `True` or `False` is returned, indicating whether the input text matches the defined pattern-matching rules. The `score` is `1.0` for `True` cases while `score` is `0.0` for `False` cases.
```python
'outputs.string.string_result': 'pass',
'outputs.string.passed': True,
'outputs.string.score': 1.0
```
The grader also returns a metric indicating the overall dataset pass rate.
```python
'metrics': {'string.pass_rate': 0.4}, # 2 out of 5 in this case
```
## Text similarity

Evaluates how closely input text matches a reference value using similarity metrics like `fuzzy_match`, `BLEU`, `ROUGE`, or `METEOR`. This is useful for assessing text quality or semantic closeness.

### Text similarity example
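
The body of the example isn't included in this excerpt. A minimal sketch might look like the following; the metric, threshold, and `{{item.*}}` templates are illustrative assumptions, and `model_config` is the configuration shown in the label grader sketch.

```python
from azure.ai.evaluation import AzureOpenAITextSimilarityGrader, evaluate

# Sketch only: the metric, threshold, and column templates are assumptions.
sim_grader = AzureOpenAITextSimilarityGrader(
    model_config=model_config,          # as defined in the label grader sketch
    evaluation_metric="fuzzy_match",    # assumption: one of the supported metrics
    input="{{item.response}}",          # column to evaluate
    reference="{{item.ground_truth}}",  # column to compare against
    pass_threshold=0.5,                 # scores >= threshold count as a pass
    name="similarity",                  # becomes the 'similarity' prefix in the output keys
)

sim_grader_evaluation = evaluate(
    data="data.jsonl",
    evaluators={"similarity": sim_grader},
)
sim_grader_evaluation
```
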
### Text similarity output
For each set of sample data in the data file, a numerical similarity score is generated. This score ranges from 0 to 1 and indicates the degree of similarity, with higher scores representing greater similarity. An evaluation result of `True` or `False` is also returned, signifying whether the similarity score meets or exceeds the specified threshold based on the evaluation metric defined in the grader.
```python
'outputs.similarity.similarity_result': 'pass',
'outputs.similarity.passed': True,
'outputs.similarity.score': 0.6117136659436009
```
The grader also returns a metric indicating the overall dataset pass rate.
```python
'metrics': {'similarity.pass_rate': 0.4}, # 2 out of 5 in this case
```
## Python grader

Advanced users can create or import custom Python grader functions and integrate them into the Azure OpenAI Python grader. This enables evaluations tailored to specific areas of interest beyond the capabilities of the existing Azure OpenAI graders. The following example demonstrates how to import a custom similarity grader function and configure it to run as an Azure OpenAI Python grader using the Azure AI Foundry SDK.
### Example
```python
from azure.ai.evaluation import AzureOpenAIPythonGrader
```
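
Beyond the import, the grader construction isn't included in this excerpt. A minimal sketch might look like the following; the grading function, the threshold, and the parameter values are illustrative assumptions, with `model_config` as defined in the label grader sketch.

```python
from azure.ai.evaluation import AzureOpenAIPythonGrader, evaluate

# Sketch only: the grading function and threshold are illustrative assumptions.
python_grader = AzureOpenAIPythonGrader(
    model_config=model_config,  # as defined in the label grader sketch
    name="custom_similarity",   # becomes the 'custom_similarity' prefix in the output keys
    pass_threshold=0.3,         # rows scoring >= 0.3 count as a pass
    source="""
def grade(sample: dict, item: dict) -> float:
    # Hypothetical scoring: fraction of reference words found in the response.
    response = (item.get("response") or "").lower()
    reference = (item.get("ground_truth") or "").lower()
    ref_words = set(reference.split())
    if not ref_words:
        return 0.0
    return len(ref_words & set(response.split())) / len(ref_words)
""",
)

python_grader_evaluation = evaluate(
    data="data.jsonl",
    evaluators={"custom_similarity": python_grader},
)
```
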
For each set of sample data in the data file, the Python grader returns a numerical score based on the defined function. Given a numerical threshold defined as part of the custom grader, we also output `True` if the score >= threshold, or `False` otherwise.

For example:
```python
"outputs.custom_similarity.passed": false,
"outputs.custom_similarity.score": 0.0
```

In addition to individual data evaluation results, the grader returns a metric indicating the overall dataset pass rate.

```python
'metrics': {'custom_similarity.pass_rate': 0.0}, # 0 out of 5 in this case
```

---

The following content is from `articles/ai-foundry/foundry-local/concepts/foundry-local-architecture.md`.

The hardware abstraction layer ensures that Foundry Local can run on various devices by supporting:

- **multiple _execution providers_**, such as NVIDIA CUDA, AMD, Qualcomm, Intel.
- **multiple _device types_**, such as CPU, GPU, NPU.

> [!NOTE]
> For Intel NPU support on Windows, you need to install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable hardware acceleration.

> [!NOTE]
> For Qualcomm NPU support, you need to install the [Qualcomm NPU driver](https://softwarecenter.qualcomm.com/catalog/item/QHND). If you encounter the error `Qnn error code 5005: "Failed to load from EpContext model. qnn_backend_manager."`, this typically indicates an outdated driver or NPU resource conflicts. Try rebooting to clear NPU resource conflicts, especially after using Windows Copilot+ features.

### Developer experiences
The Foundry Local architecture is designed to provide a seamless developer experience, enabling easy integration and interaction with AI models.

Foundry Local supports integration with various SDKs in most languages.

The AI Toolkit for Visual Studio Code provides a user-friendly interface for developers to interact with Foundry Local. It allows users to run models, manage the local cache, and visualize results directly within the IDE.

**Features**:

- Model management: Download, load, and run models from within the IDE.
- Interactive console: Send requests and view responses in real-time.
- Visualization tools: Graphical representation of model performance and results.

**Prerequisites:**

- You have installed [Foundry Local](../get-started.md) and have a model service running.
- You have installed the [AI Toolkit for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-windows-ai-studio.windows-ai-studio) extension.

**Connect Foundry Local model to AI Toolkit:**

1. **Add model in AI Toolkit**: Open AI Toolkit from the activity bar of Visual Studio Code. In the 'My Models' panel, select the 'Add model for remote interface' button and then select 'Add a custom model' from the dropdown menu.
2. **Enter the chat compatible endpoint URL**: Enter `http://localhost:PORT/v1/chat/completions`, where PORT is replaced with the port number of your Foundry Local service endpoint. You can see the port of your locally running service using the CLI command `foundry service status`. Foundry Local dynamically assigns a port, so it might not always be the same.
3. **Provide model name**: Enter the exact model name you wish to use from Foundry Local, for example `phi-3.5-mini`. You can list all previously downloaded and locally cached models using the CLI command `foundry cache list`, or use `foundry model list` to see all available models for local use. You'll also be asked to enter a display name, which is only for your own local use; to avoid confusion, it's recommended to enter the same name as the exact model name.
4. **Authentication**: If your local setup doesn't require authentication _(which is the default for a Foundry Local setup)_, you can leave the authentication headers field blank and press Enter.

After completing these steps, your Foundry Local model appears in the 'My Models' list in AI Toolkit and is ready to use: right-click the model and select 'Load in Playground'.
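
To sanity-check the same endpoint outside the IDE, you can call it with any OpenAI-compatible client. The following sketch uses the OpenAI Python client; the port and model name are assumptions, so substitute the values reported by `foundry service status` and `foundry model list`.

```python
from openai import OpenAI

# Foundry Local serves an OpenAI-compatible endpoint. No API key is required
# by default, but the client still expects a placeholder value.
client = OpenAI(
    base_url="http://localhost:5273/v1",  # replace 5273 with your service port
    api_key="not-needed",
)

response = client.chat.completions.create(
    model="phi-3.5-mini",  # the exact model name from Foundry Local
    messages=[{"role": "user", "content": "Hello from Foundry Local!"}],
)
print(response.choices[0].message.content)
```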