
Conversation


@dependabot dependabot bot commented on behalf of github Sep 22, 2025

Bumps azure-ai-evaluation from 1.8.0 to 1.11.1.

Release notes

Sourced from azure-ai-evaluation's releases.

azure-ai-evaluation_1.11.1

1.11.1 (2025-09-17)

Bugs Fixed

  • Pinned the duckdb version to 1.3.2 for the redteam extra to fix the error TypeError: unhashable type: '_duckdb.typing.DuckDBPyType'

azure-ai-evaluation_1.11.0

1.11.0 (2025-09-02)

Features Added

  • Added support for user-supplied tags in the evaluate function via a tags parameter. Tags are key-value pairs that can be used for experiment tracking, A/B testing, filtering, and organizing evaluation runs (see the sketch after this list).
  • Added support for user-supplied TokenCredentials with LLM-based evaluators.
  • Enhanced GroundednessEvaluator to support AI agent evaluation with tool calls. The evaluator now accepts agent response data containing tool calls and can extract context from file_search tool results for groundedness assessment. This enables evaluation of AI agents that use tools to retrieve information and generate responses. Note: Agent groundedness evaluation is currently supported only when the file_search tool is used.
  • Added a language parameter to the RedTeam class for multilingual red team scanning support. The parameter accepts values from the SupportedLanguages enum, including English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, and Simplified Chinese, enabling red team attacks to be generated and conducted in multiple languages (included in the sketch after this list).
  • Added support for IndirectAttack and UngroundedAttributes risk categories in RedTeam scanning. These new risk categories expand red team capabilities to detect cross-platform indirect attacks and evaluate ungrounded inferences about human attributes including emotional state and protected class information.
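
A minimal sketch of the tags and language additions above, assuming an Azure OpenAI model configuration and Azure AI project details; the data path, evaluator choice, and the SupportedLanguages import path are assumptions rather than details taken from the release notes:

```python
# Hedged sketch of the 1.11.0 tags and language additions.
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import evaluate, RelevanceEvaluator
from azure.ai.evaluation.red_team import RedTeam
from azure.ai.evaluation.simulator import SupportedLanguages  # assumed import path

credential = DefaultAzureCredential()
model_config = {  # hypothetical Azure OpenAI model configuration
    "azure_endpoint": "https://<resource>.openai.azure.com",
    "azure_deployment": "<deployment>",
}
project = {  # hypothetical Azure AI project details
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

# Tag an evaluation run so it can be filtered and compared later.
result = evaluate(
    data="eval_data.jsonl",  # hypothetical input file
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    tags={"experiment": "prompt-v2", "cohort": "B"},
)

# Generate and conduct red team attacks in Spanish.
red_team = RedTeam(
    azure_ai_project=project,
    credential=credential,
    language=SupportedLanguages.Spanish,
)
```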

Bugs Fixed

  • Fixed issue where evaluation results were not properly aligned with input data, leading to incorrect metrics being reported.

Other Changes

  • Deprecating AdversarialSimulator in favor of the AI Red Teaming Agent. AdversarialSimulator will be removed in the next minor release.
  • Moved retry configuration constants (MAX_RETRY_ATTEMPTS, MAX_RETRY_WAIT_SECONDS, MIN_RETRY_WAIT_SECONDS) from RedTeam class to new RetryManager class for better code organization and configurability.

azure-ai-evaluation_1.10.0

1.10.0 (2025-07-31)

Breaking Changes

  • Added an evaluate_query parameter to all RAI service evaluators that can be passed as a keyword argument. This parameter controls whether queries are included when evaluating query-response pairs; previously, queries were always included. When set to True, both query and response are evaluated; when set to False (the default), only the response is evaluated. The parameter is available across all RAI service evaluators, including ContentSafetyEvaluator, ViolenceEvaluator, SexualEvaluator, SelfHarmEvaluator, HateUnfairnessEvaluator, ProtectedMaterialEvaluator, IndirectAttackEvaluator, CodeVulnerabilityEvaluator, UngroundedAttributesEvaluator, GroundednessProEvaluator, and EciEvaluator. Existing code that relies on queries being evaluated must explicitly set evaluate_query=True to keep the previous behavior, as sketched below.
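
Since the note says only that evaluate_query is accepted as a keyword argument, the sketch below assumes it is passed at construction time; the credential and project placeholders are likewise hypothetical:

```python
# Hedged sketch: opting back into query evaluation after the 1.10.0 change.
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator

credential = DefaultAzureCredential()
project = {  # hypothetical Azure AI project details
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

# New default (evaluate_query=False): only the response is assessed.
response_only = ViolenceEvaluator(credential=credential, azure_ai_project=project)

# Restore the pre-1.10.0 behavior: query and response are both assessed.
with_query = ViolenceEvaluator(
    credential=credential,
    azure_ai_project=project,
    evaluate_query=True,  # assumed to be a constructor keyword
)
score = with_query(query="How do I do X?", response="You can do X by ...")
```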

Features Added

  • Added support for the Azure OpenAI Python grader via the AzureOpenAIPythonGrader class, which serves as a wrapper around Azure OpenAI Python grader configurations. This new grader object can be supplied to the main evaluate method as if it were a normal callable evaluator.
  • Added an attack_success_thresholds parameter to the RedTeam class for configuring custom thresholds that determine attack success. This lets users set a specific threshold value for each risk category; scores greater than the threshold are considered successful attacks (i.e., a higher threshold means a higher tolerance for harmful responses). See the sketch after this list.
  • Enhanced threshold reporting in RedTeam results to include default threshold values when custom thresholds aren't specified, providing better transparency about the evaluation criteria used.
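
A minimal sketch of custom attack-success thresholds, assuming the RiskCategory enum ships alongside RedTeam in the red_team module; the threshold values and project placeholders are illustrative:

```python
# Hedged sketch of attack_success_thresholds on RedTeam (1.10.0).
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory  # assumed import path

credential = DefaultAzureCredential()
project = {  # hypothetical Azure AI project details
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

red_team = RedTeam(
    azure_ai_project=project,
    credential=credential,
    risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
    # Scores greater than the threshold count as successful attacks, so a
    # higher value means a higher tolerance for harmful responses.
    attack_success_thresholds={
        RiskCategory.Violence: 2,
        RiskCategory.HateUnfairness: 4,
    },
)
```

Per the notes above, categories without a custom entry fall back to default thresholds, which are now reported in the results.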

Bugs Fixed

  • Fixed red team scan output_path issue where individual evaluation results were overwriting each other instead of being preserved as separate files. Individual evaluations now create unique files while the user's output_path is reserved for final aggregated results.
  • Significant improvements to the TaskAdherence evaluator. The new version has less variance, is much faster, and consumes fewer tokens.
  • Significant improvements to the Relevance evaluator. The new version has more concrete rubrics, less variance, is much faster, and consumes fewer tokens.

Other Changes

  • The default engine for evaluation was changed from promptflow (PFClient) to an in-SDK batch client (RunSubmitterClient).
    • Note: We've temporarily kept an escape hatch to fall back to the legacy promptflow implementation by setting _use_pf_client=True when invoking evaluate(), as sketched below. This is due to be removed in a future release.
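
Under the same assumptions about the model configuration as in the earlier sketches, the escape hatch is just an extra keyword on evaluate():

```python
# Hedged sketch: temporarily falling back to the legacy promptflow engine.
from azure.ai.evaluation import evaluate, RelevanceEvaluator

model_config = {  # hypothetical Azure OpenAI model configuration
    "azure_endpoint": "https://<resource>.openai.azure.com",
    "azure_deployment": "<deployment>",
}

result = evaluate(
    data="eval_data.jsonl",  # hypothetical input file
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    _use_pf_client=True,  # legacy PFClient engine; due to be removed
)
```
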
Commits
  • a67562e Chore/support updated oai sdk (#43053)
  • f5da6d0 Update CHANGELOG with hotfix release date
  • c1c296a Update CHANGELOG and version for 1.11.1 hotfix release
  • 44acacc [evaluation] pin duckdb version for redteam (#43028)
  • f8b075a Fix dependency issue with RunStepFunctionToolCall (#42826)
  • 09838cf [AutoRelease] t2-storagediscovery-2025-09-03-35045(can only be merged by SDK ...
  • ee4b0f3 [AutoRelease] t2-sitemanager-2025-08-29-73765(can only be merged by SDK owner...
  • d59f7b2 fix dependencies for pyproject.toml (#42820)
  • da944c6 Enable package mode for mgmt sdk (#42276)
  • 07b029a [evaluation] Fix for IndirectAttack attack objectives mapping (#42816)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [azure-ai-evaluation](https://github.com/Azure/azure-sdk-for-python) from 1.8.0 to 1.11.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-python/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-python/blob/main/doc/esrp_release.md)
- [Commits](Azure/azure-sdk-for-python@azure-ai-evaluation_1.8.0...azure-ai-evaluation_1.11.1)

---
updated-dependencies:
- dependency-name: azure-ai-evaluation
  dependency-version: 1.11.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies and python labels Sep 22, 2025

dependabot bot commented on behalf of github Oct 6, 2025

Superseded by #149.

@dependabot dependabot bot closed this Oct 6, 2025
@dependabot dependabot bot deleted the dependabot/pip/azure-ai-evaluation-1.11.1 branch October 6, 2025 08:11
