
Commit 7235000

Merge branch 'release-preview-2-cu' into release-preview-2-cu

2 parents: 362d154 + 6126a66

File tree: 67 files changed (+1042, -397 lines)


.vscode/settings.json (1 addition, 2 deletions)

```diff
@@ -1,6 +1,5 @@
 {
   "cSpell.words": [
     "DALL"
-  ],
-  "DockerRun.DisableAutoGenerateConfig": true
+  ]
 }
```

articles/ai-foundry/how-to/develop/get-started-projects-vs-code.md (3 additions, 10 deletions)

```diff
@@ -6,8 +6,9 @@ manager: mcleans
 ms.service: azure-ai-foundry
 content_well_notification:
   - AI-contribution
+ai-usage: ai-assisted
 ms.topic: how-to
-ms.date: 04/28/2025
+ms.date: 05/07/2025
 ms.reviewer: erichen
 ms.author: johalexander
 author: ms-johnalex
@@ -30,7 +31,7 @@ With Azure AI Foundry, you can:
 
 With the Azure AI Foundry for Visual Studio Code extension, you can accomplish much of this workflow directly from Visual Studio Code. It also comes with other features, such as code templates, playgrounds, and integration with other VS Code extensions and features.
 
-This article shoes you how to quickly get started using the features of the Azure AI Foundry for Visual Studio Code extension.
+This article shows you how to quickly get started using the features of the Azure AI Foundry for Visual Studio Code extension.
 
 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
@@ -298,14 +299,6 @@ You can also open the model playground using the following steps:
 
 The Azure resources that you created in this article are billed to your Azure subscription. If you don't expect to need these resources in the future, delete them to avoid incurring more charges.
 
-### Delete your agents
-
-1. In the VS Code navbar, refresh the **Azure AI Foundry Extension**. In the **Resources** section, expand the **Agents** subsection to display the list of deployed agents.
-
-1. Right-click on your deployed agent to delete and select the **Delete** option.
-
-:::image type="content" source="../../media/how-to/get-started-projects-vs-code/delete-agent.png" alt-text="Screenshot of the AI Foundry portal with 'Agents' from the navigation menu on the left and the **Delete** button highlighted." lightbox="../../media/how-to/get-started-projects-vs-code/delete-agent.png":::
-
 ### Delete your models
 
 1. In the VS Code navbar, refresh the **Azure AI Foundry Extension**. In the **Resources** section, expand the **Models** subsection to display the list of deployed models.
```

articles/ai-foundry/how-to/develop/vs-code-agents.md (21 additions, 9 deletions)

````diff
@@ -6,8 +6,9 @@ manager: mcleans
 ms.service: azure-ai-foundry
 content_well_notification:
   - AI-contribution
+ai-usage: ai-assisted
 ms.topic: how-to
-ms.date: 04/29/2025
+ms.date: 05/07/2025
 ms.reviewer: erichen
 ms.author: johalexander
 author: ms-johnalex
@@ -99,28 +100,39 @@ tools: []
 
 ### Add tools to the Azure AI Agent
 
-Azure AI Agent Service has a set of knowledge and action tools that you can use to interact with your data sources, such as:
-- [Grounding with Bing search](/azure/ai-services/agents/how-to/tools/bing-grounding?tabs=python&pivots=overview)
-- [Azure AI Search](/azure/ai-services/agents/how-to/tools/file-search?tabs=python&pivots=overview)
-- [Azure Functions](/azure/ai-services/agents/how-to/tools/file-search?tabs=python&pivots=overview)
-- [File retrieval](/azure/ai-services/agents/how-to/tools/azure-functions?tabs=python&pivots=overview)
+Azure AI Agent Service has a set of knowledge and action tools that you can use to interact with your data sources.
+
+#### Available tools for Azure AI Agents
+
+The following tools are available:
+
+- Knowledge tools:
+  - [Grounding with Bing search](/azure/ai-services/agents/how-to/tools/bing-grounding?tabs=python&pivots=overview)
+  - [File search](/azure/ai-services/agents/how-to/tools/file-search?tabs=python&pivots=overview)
+  - [Azure AI Search](/azure/ai-services/agents/how-to/tools/azure-ai-search?tabs=azurecli%2Cpython&pivots=overview-azure-ai-search)
+  - [Microsoft Fabric](/azure/ai-services/agents/how-to/tools/fabric?tabs=csharp&pivots=overview)
+  - [Use licensed data](/azure/ai-services/agents/how-to/tools/licensed-data)
+
+- Action tools:
+  - [Azure AI Agents function calling](/azure/ai-services/agents/how-to/tools/function-calling?tabs=python&pivots=overview)
 - [Code interpreter](/azure/ai-services/agents/how-to/tools/code-interpreter?tabs=python&pivots=overview)
 - [OpenAPI Specified tools](/azure/ai-services/agents/how-to/tools/openapi-spec?tabs=python&pivots=overview)
+  - [Azure Functions](/azure/ai-services/agents/how-to/tools/azure-functions?tabs=python&pivots=overview)
 
 #### Configure the tools YAML file
 
 The Agent Designer adds tools to an AI Agent via .yaml files.
 
 Create a tool configuration .yaml file using the following steps:
 
-1. Perform any setup steps that might be required. See the article for the tool you’re interested in using. For example, [Grounding with Bing search](/azure/ai-services/agents/how-to/tools/bing-grounding?tabs=python&pivots=overview#setup).
+1. Choose a tool from the [available tools for Azure AI Agents](#available-tools-for-azure-ai-agents). Perform any setup steps that might be required. For example, [Grounding with Bing search](/azure/ai-services/agents/how-to/tools/bing-grounding?tabs=python&pivots=overview#setup).
 
 1. Once you complete the setup, create a yaml code file that specifies the tool’s configuration. For example, this format for Grounding with Bing Search:
 
    ```yml
    type: bing_grounding
-   name: bing_search
-   configuration:
+   options:
      tool_connections:
        - >-
          /subscriptions/<Azure Subscription ID>/resourceGroups/<Azure Resource Group name>/providers/Microsoft.MachineLearningServices/workspaces/<Azure AI Foundry Project name>/connections/<Bing connection name>
````
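For reference, after this change the complete Grounding with Bing Search tool configuration file would look like the following sketch, reconstructed from the diff above (indentation is illustrative; replace the placeholders with your own resource names):

```yml
type: bing_grounding
options:
  tool_connections:
    - >-
      /subscriptions/<Azure Subscription ID>/resourceGroups/<Azure Resource Group name>/providers/Microsoft.MachineLearningServices/workspaces/<Azure AI Foundry Project name>/connections/<Bing connection name>
```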
New file (252 additions, 0 deletions)

@@ -0,0 +1,252 @@

---
title: How to run an evaluation in a GitHub Action
titleSuffix: Azure AI Foundry
description: How to run an evaluation in a GitHub Action to streamline the evaluation process, allowing you to assess model performance and make informed decisions before deploying to production.
manager: scottpolly
ms.service: azure-ai-foundry
ms.topic: how-to
ms.date: 05/08/2025
ms.reviewer: hanch
ms.author: lagayhar
author: lgayhardt
---
# How to run an evaluation in a GitHub Action (preview)

[!INCLUDE [feature-preview](../includes/feature-preview.md)]

This GitHub Action enables offline evaluation of AI models and agents within your CI/CD pipelines. It's designed to streamline the evaluation process, allowing you to assess model performance and make informed decisions before deploying to production.

Offline evaluation involves testing AI models and agents with test datasets to measure their performance on quality and safety metrics such as fluency, coherence, and content safety. After you select a model in the [Azure AI Model Catalog](https://azure.microsoft.com/products/ai-model-catalog?msockid=1f44c87dd9fa6d1e257fdd6dd8406c42) or [GitHub Model marketplace](https://github.com/marketplace/models), offline pre-production evaluation is crucial for validating your AI application during integration testing. This process lets developers identify potential issues and make improvements before deploying the model or application to production, for example when creating or updating agents.

[!INCLUDE [features](../includes/evaluation-github-action-azure-devops-features.md)]

- **Seamless Integration**: Easily integrate with existing GitHub workflows to run evaluations based on rules that you specify in your workflows (for example, when changes are committed to agent versions, prompt templates, or feature flag configurations).
- **Statistical Analysis**: Evaluation results include confidence intervals and tests for statistical significance, so you can determine whether changes are meaningful rather than random variation.
- **Out-of-box operational metrics**: Each evaluation run automatically generates operational metrics: client run duration, server run duration, completion tokens, and prompt tokens.
## Prerequisites

Two GitHub Actions are available for evaluating AI applications: **ai-agent-evals** and **genai-evals**.

- If your application already uses AI Foundry agents, **ai-agent-evals** is well suited: it offers a simplified setup process and direct integration with agent-based workflows.
- **genai-evals** is intended for evaluating generative AI models outside of the agent framework.

> [!NOTE]
> The **ai-agent-evals** interface is more straightforward to configure. In contrast, **genai-evals** requires you to prepare structured evaluation input data. Code samples are provided to help with setup.
## How to set up AI agent evaluations

### AI agent evaluations input

The inputs of ai-agent-evals include:

**Required:**

- `azure-aiproject-connection-string`: The connection string for the Azure AI project. It's used to connect to Azure OpenAI to simulate conversations with each agent, and to the Azure AI Evaluation SDK to perform the evaluation.
- `deployment-name`: The name of the deployed model.
- `data-path`: Path to the input data file containing the conversation starters. Each conversation starter is sent to each agent for a pairwise comparison of evaluation results.
- `evaluators`: Built-in evaluator names.
- `data`: A set of conversation starters or queries.
  - Only a single agent turn is supported.
- `agent-ids`: A comma-separated list of unique agent IDs to evaluate.
  - When only one agent ID is specified, the evaluation results include the absolute values for each metric along with the corresponding confidence intervals.
  - When multiple agent IDs are specified, the results include absolute values for each agent and a statistical comparison against the designated baseline agent ID.

**Optional:**

- `api-version`: The API version of the deployed model.
- `baseline-agent-id`: Agent ID of the baseline agent to compare against. By default, the first agent is used.
- `evaluation-result-view`: Specifies the format of evaluation results. Options are "default" (boolean scores such as passing and defect rates), "all-scores" (all evaluation scores), and "raw-scores-only" (non-boolean scores only). Defaults to "default" if omitted.

These inputs are passed in the action's `with:` block, as the sketch after this list shows.
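Here's a sketch of how the optional inputs can sit alongside the required ones in the `with:` block. The input names come from the lists above; the agent IDs and other placeholder values are hypothetical:

```yaml
      - name: Run Evaluation
        uses: microsoft/ai-agent-evals@v1-beta
        with:
          azure-aiproject-connection-string: "<your-ai-project-conn-str>"
          deployment-name: "<your-deployment-name>"
          data-path: ${{ github.workspace }}/path/to/your/data-file
          # Two hypothetical agent IDs; the second is compared against the baseline.
          agent-ids: "asst_baseline_123,asst_candidate_456"
          baseline-agent-id: "asst_baseline_123"
          # Optional inputs from the list above.
          api-version: "<your-api-version>"
          evaluation-result-view: "all-scores"
```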
Here's a sample of the dataset:

```json
{
    "name": "MyTestData",
    "evaluators": [
        "RelevanceEvaluator",
        "ViolenceEvaluator",
        "HateUnfairnessEvaluator"
    ],
    "data": [
        {
            "query": "Tell me about Tokyo?"
        },
        {
            "query": "Where is Italy?"
        }
    ]
}
```
### AI agent evaluations workflow

To use the GitHub Action, add it to your CI/CD workflows and specify the trigger criteria (for example, on commit) and the file paths that trigger your automated workflows.

> [!TIP]
> To minimize costs, avoid running the evaluation on every commit. A path-filtered trigger, as sketched below, is one way to limit runs to relevant changes.
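This sketch uses standard GitHub Actions syntax to restrict the `push` trigger with a path filter; the paths are hypothetical examples for a repo that keeps agent definitions and evaluation data in dedicated folders:

```yaml
on:
  workflow_dispatch:
  push:
    branches:
      - main
    # Hypothetical paths: run only when agent definitions, prompt
    # templates, or the evaluation dataset change.
    paths:
      - "agents/**"
      - "prompts/**"
      - "eval-data/**"
```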
This example illustrates running an AI agent evaluation to compare different agents by their agent IDs.

```yaml
name: "AI Agent Evaluation"

on:
  workflow_dispatch:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  run-action:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure login using Federated Credentials
        uses: azure/login@v2
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}

      - name: Run Evaluation
        uses: microsoft/ai-agent-evals@v1-beta
        with:
          # Replace placeholders with values for your Azure AI Project
          azure-aiproject-connection-string: "<your-ai-project-conn-str>"
          deployment-name: "<your-deployment-name>"
          agent-ids: "<your-ai-agent-ids>"
          data-path: ${{ github.workspace }}/path/to/your/data-file
```
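Rather than hardcoding the connection string in the workflow file, you can store it as a repository secret. A minimal variation of the evaluation step above, assuming a hypothetical secret named `AZURE_AI_PROJECT_CONNECTION_STRING`:

```yaml
      - name: Run Evaluation
        uses: microsoft/ai-agent-evals@v1-beta
        with:
          # Hypothetical secret name; define it in Settings > Secrets and variables > Actions.
          azure-aiproject-connection-string: ${{ secrets.AZURE_AI_PROJECT_CONNECTION_STRING }}
          deployment-name: "<your-deployment-name>"
          agent-ids: "<your-ai-agent-ids>"
          data-path: ${{ github.workspace }}/path/to/your/data-file
```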
### AI agent evaluations output

Evaluation results are written to the summary section of each AI evaluation GitHub Action run, under **Actions** on GitHub.com.

The result includes two main parts:

- The top section summarizes your AI agent variants. Select an agent ID link to open the agent's settings page in the Azure AI Foundry portal, or select the **Evaluation Results** link to view the individual results in detail in the portal.
- The second section includes the evaluation scores, with a comparison between variants based on statistical significance (for multiple agents) or confidence intervals (for a single agent).

Multi-agent evaluation result:

:::image type="content" source="../media/evaluations/github-action-multi-agent-result.png" alt-text="Screenshot of a multi-agent evaluation result in a GitHub Action." lightbox="../media/evaluations/github-action-multi-agent-result.png":::

Single-agent evaluation result:

:::image type="content" source="../media/evaluations/github-action-single-agent-output.png" alt-text="Screenshot of a single-agent evaluation result in a GitHub Action." lightbox="../media/evaluations/github-action-single-agent-output.png":::
## How to set up genAI evaluations

### GenAI evaluations input

The inputs of genai-evals include the following (some are optional, depending on the evaluators used):

Evaluation configuration file:

- `data`: A set of queries and ground truth. Ground truth is optional and only required for a subset of evaluators. (See which [evaluators require ground truth](./develop/evaluate-sdk.md#data-requirements-for-built-in-evaluators).)

Here's a sample of the dataset:

```json
[
    {
        "query": "Tell me about Tokyo?",
        "ground-truth": "Tokyo is the capital of Japan and the largest city in the country. It is located on the eastern coast of Honshu, the largest of Japan's four main islands. Tokyo is the political, economic, and cultural center of Japan and is one of the world's most populous cities. It is also one of the world's most important financial centers and is home to the Tokyo Stock Exchange."
    },
    {
        "query": "Where is Italy?",
        "ground-truth": "Italy is a country in southern Europe, located on the Italian Peninsula and the two largest islands in the Mediterranean Sea, Sicily and Sardinia. It is a unitary parliamentary republic with its capital in Rome, the largest city in Italy. Other major cities include Milan, Naples, Turin, and Palermo."
    },
    {
        "query": "Where is Papua New Guinea?",
        "ground-truth": "Papua New Guinea is an island country that lies in the south-western Pacific. It includes the eastern half of New Guinea and many small offshore islands. Its neighbours include Indonesia to the west, Australia to the south and Solomon Islands to the south-east."
    }
]
```

- `evaluators`: Built-in evaluator names.
- `ai_model_configuration`: Includes `type`, `azure_endpoint`, `azure_deployment`, and `api_version`.

A sketch of a complete configuration file follows.
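Assembled from the pieces above, a complete evaluation configuration file might look like the following sketch. The evaluator entries mirror the workflow example in the next section; the data path, endpoint, deployment, key, and version values are placeholders:

```json
{
    "data": "path/to/eval-input.jsonl",
    "evaluators": {
        "coherence": "CoherenceEvaluator",
        "fluency": "FluencyEvaluator"
    },
    "ai_model_configuration": {
        "type": "azure_openai",
        "azure_endpoint": "<your-azure-openai-endpoint>",
        "azure_deployment": "<your-deployment-name>",
        "api_key": "<your-api-key>",
        "api_version": "<your-api-version>"
    }
}
```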
### GenAI evaluations workflow

This example illustrates how Azure AI Evaluation can be run when changes are committed to specific files in your repo.

> [!NOTE]
> Update `GENAI_EVALS_DATA_PATH` to point to the correct directory in your repo.
```yml
name: Sample Evaluate Action
on:
  workflow_call:
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

jobs:
  evaluate:
    runs-on: ubuntu-latest
    env:
      GENAI_EVALS_CONFIG_PATH: ${{ github.workspace }}/evaluate-config.json
      GENAI_EVALS_DATA_PATH: ${{ github.workspace }}/.github/.test_files/eval-input.jsonl
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.OIDC_AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.OIDC_AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.OIDC_AZURE_SUBSCRIPTION_ID }}
      - name: Write evaluate config
        run: |
          cat > ${{ env.GENAI_EVALS_CONFIG_PATH }} <<EOF
          {
            "data": "${{ env.GENAI_EVALS_DATA_PATH }}",
            "evaluators": {
              "coherence": "CoherenceEvaluator",
              "fluency": "FluencyEvaluator"
            },
            "ai_model_configuration": {
              "type": "azure_openai",
              "azure_endpoint": "${{ secrets.AZURE_OPENAI_ENDPOINT }}",
              "azure_deployment": "${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT }}",
              "api_key": "${{ secrets.AZURE_OPENAI_API_KEY }}",
              "api_version": "${{ secrets.AZURE_OPENAI_API_VERSION }}"
            }
          }
          EOF
      - name: Run AI Evaluation
        id: run-ai-evaluation
        uses: microsoft/genai-evals@main
        with:
          evaluate-configuration: ${{ env.GENAI_EVALS_CONFIG_PATH }}
```
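One optional hardening step: the example above references `microsoft/genai-evals@main`, which floats with the repository's default branch. Pinning `uses:` to a release tag or commit SHA makes evaluation runs reproducible. A sketch, where `v1` is a hypothetical tag (check the action's repository for refs that actually exist):

```yml
      - name: Run AI Evaluation
        id: run-ai-evaluation
        # Pin to a fixed ref instead of a moving branch for reproducible runs.
        # "v1" is a hypothetical tag; substitute a tag or commit SHA that exists.
        uses: microsoft/genai-evals@v1
        with:
          evaluate-configuration: ${{ env.GENAI_EVALS_CONFIG_PATH }}
```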
### GenAI evaluations output

Evaluation results are written to the summary section of each AI evaluation GitHub Action run, under **Actions** on GitHub.com.

The results include three parts:

- Test variants: a summary of variant names and system prompts.
- Average scores: the average score of each evaluator for each variant.
- Individual test scores: detailed results for each individual test case.

:::image type="content" source="../media/evaluations/github-action-output-results.png" alt-text="Screenshot of result output including test variants, average score, and individual test in GitHub Action." lightbox="../media/evaluations/github-action-output-results.png":::

## Related content

- [How to evaluate generative AI models and applications with Azure AI Foundry](./evaluate-generative-ai-app.md)
- [How to view evaluation results in Azure AI Foundry portal](./evaluate-results.md)
