Commit 158e988

Prepare azure-ai-evaluation release 1.11.0 (#42710)
1 parent 2eb11f6 commit 158e988

1 file changed: +3 -3 lines changed

sdk/evaluation/azure-ai-evaluation/CHANGELOG.md

Lines changed: 3 additions & 3 deletions
@@ -1,12 +1,12 @@
 # Release History

-## 1.11.0 (Unreleased)
-
-### Breaking Changes
+## 1.11.0 (2025-09-02)

 ### Features Added
 - Added support for user-supplied tags in the `evaluate` function. Tags are key-value pairs that can be used for experiment tracking, A/B testing, filtering, and organizing evaluation runs. The function accepts a `tags` parameter.
 - Enhanced `GroundednessEvaluator` to support AI agent evaluation with tool calls. The evaluator now accepts agent response data containing tool calls and can extract context from `file_search` tool results for groundedness assessment. This enables evaluation of AI agents that use tools to retrieve information and generate responses. Note: Agent groundedness evaluation is currently supported only when the `file_search` tool is used.
+- Added `language` parameter to `RedTeam` class for multilingual red team scanning support. The parameter accepts values from `SupportedLanguages` enum including English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, and Simplified Chinese, enabling red team attacks to be generated and conducted in multiple languages.
+- Added support for XPIA and UngroundedAttributes risk categories in `RedTeam` scanning. These new risk categories expand red team capabilities to detect cross-platform indirect attacks and evaluate ungrounded inferences about human attributes including emotional state and protected class information.

 ### Bugs Fixed
 - Fixed issue where evaluation results were not properly aligned with input data, leading to incorrect metrics being reported.
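As a quick illustration of the first entry, the sketch below passes a couple of key-value tags to `evaluate` alongside an evaluation run. The data file, evaluator choice, and model configuration are placeholders, not part of this commit; only the `tags` parameter itself is taken from the changelog.

```python
# Illustrative sketch of the new `tags` parameter on `evaluate` (added in 1.11.0).
# The data file, evaluator, and model configuration below are placeholders.
from azure.ai.evaluation import evaluate, RelevanceEvaluator

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "azure_deployment": "<your-deployment>",
    "api_key": "<your-api-key>",
}

result = evaluate(
    data="eval_data.jsonl",  # JSONL rows with query/response columns
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    tags={
        "experiment": "prompt-v2",  # key-value pairs for experiment tracking,
        "variant": "B",             # A/B testing, and filtering of runs
    },
)
```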
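For the `GroundednessEvaluator` agent support, the sketch below scores an agent turn that retrieved context through a `file_search` tool call. The message and tool-call structure shown is an assumed, simplified shape for illustration only; the exact agent-response schema should be taken from the azure-ai-evaluation documentation.

```python
# Illustrative sketch: groundedness of an agent response that used file_search.
# The message / tool-call structure here is an assumption for illustration only;
# consult the azure-ai-evaluation docs for the exact agent-response schema.
from azure.ai.evaluation import GroundednessEvaluator

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "azure_deployment": "<your-deployment>",
    "api_key": "<your-api-key>",
}
groundedness = GroundednessEvaluator(model_config)

query = [{"role": "user", "content": "What is the refund window?"}]
response = [
    {   # the agent calls the file_search tool
        "role": "assistant",
        "content": [{"type": "tool_call", "tool_call_id": "call_1",
                     "name": "file_search", "arguments": {"query": "refund window"}}],
    },
    {   # tool result: the evaluator can extract grounding context from this
        "role": "tool",
        "tool_call_id": "call_1",
        "content": [{"type": "tool_result",
                     "tool_result": "Refunds are accepted within 30 days of purchase."}],
    },
    {   # final agent answer to be checked against the retrieved context
        "role": "assistant",
        "content": [{"type": "text", "text": "You can request a refund within 30 days."}],
    },
]

result = groundedness(query=query, response=response)
print(result)  # expected to include a groundedness score and reasoning
```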
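The two `RedTeam` entries can be combined into one sketch: a scan configured with the new `language` parameter and the expanded risk categories. The import path, constructor arguments, and enum member names below (including those standing in for XPIA and ungrounded attributes) are assumptions to verify against the released 1.11.0 API.

```python
# Illustrative sketch: multilingual red teaming with the expanded risk categories.
# Import path, constructor arguments, and enum member names are assumptions;
# verify them against the azure-ai-evaluation 1.11.0 reference before use.
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, SupportedLanguages

red_team = RedTeam(
    azure_ai_project="<your-azure-ai-project>",
    credential=DefaultAzureCredential(),
    risk_categories=[
        RiskCategory.IndirectAttack,        # hypothetical member name for XPIA
        RiskCategory.UngroundedAttributes,  # hypothetical member name
    ],
    language=SupportedLanguages.Spanish,    # new in 1.11.0: attacks generated in Spanish
)

# A scan would then be run against a target model or callback, e.g.:
# await red_team.scan(target=my_target_callback)
```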
