
Commit dbb4e58

Merge pull request #6026 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-14 22:06 UTC
2 parents a314d4e + f134e48 commit dbb4e58

21 files changed (+196 / -217 lines changed)

articles/ai-foundry/agents/concepts/model-region-support.md

Lines changed: 1 addition & 42 deletions
@@ -7,7 +7,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: conceptual
-ms.date: 07/10/2025
+ms.date: 07/14/2025
 ms.custom: azure-ai-agents, references_regions
 ---
@@ -46,47 +46,6 @@ Azure AI Foundry Agent Service supports the following Azure OpenAI models in the
 | westus | X | X | X | X | X | | X | | X | |
 | westus3 | | X | X | X | X | | X | | | |
 
-## Non-Microsoft models
-
-The Azure AI Foundry Agent Service also supports the following models from the Azure AI Foundry model catalog.
-
-* Meta-Llama-405B-Instruct
-
-To use these models, you can use [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) to make a deployment, and then reference the deployment name in your agent. For example:
-
-```python
-agent = project_client.agents.create_agent( model="llama-3", name="my-agent", instructions="You are a helpful agent" )
-```
-## Azure AI Foundry models
-
-### Models with tool-calling
-
-To best support agentic scenarios, we recommend using models that support tool-calling. The Azure AI Foundry Agent Service currently supports all agent-compatible models from the Azure AI Foundry model catalog.
-
-To use these models, use the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) to make a model deployment, then reference the deployment name in your agent. For example:
-
-`agent = project_client.agents.create_agent( model="llama-3", name="my-agent", instructions="You are a helpful agent")`
-
-> [!NOTE]
-> This option should only be used for open-source models (for example, Cepstral, Mistral, Llama) and not for OpenAI models, which are natively supported in the service. This option should also only be used for models that support tool-calling.
-
-### Models without tool-calling
-
-Though tool-calling support is a core capability for agentic scenarios, we now provide the ability to use models that don’t support tool-calling in our API and SDK. This option can be helpful when you have specific use-cases that don’t require tool-calling.
-
-The following steps will allow you to utilize any chat-completion model that is available through a [serverless API](/azure/ai-foundry/how-to/model-catalog-overview):
-
-1. Deploy your desired model through serverless API. Model will show up on your **Models + Endpoints** page.
-
-1. Click on model name to see model details, where you'll find your model's target URI and key.
-
-1. Create a new Serverless connection on **Connected Resources** page, using the target URI and key.
-
-The model can now be referenced in your code (`Target URI` + `@` + `Model Name`), for example:
-
-`Model=https://Phi-4-mejco.eastus.models.ai.azure.com/@Phi-4-mejco`
 
 ## Next steps
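For context on the section removed above, here is a minimal sketch of the pattern it described, assuming the preview `azure-ai-projects` SDK; the connection string is a placeholder, and the serverless model reference (`Target URI` + `@` + `Model Name`, using the hypothetical `Phi-4-mejco` deployment) comes from the deleted text:

```python
# Minimal sketch of the pattern described by the removed section, assuming the
# preview azure-ai-projects SDK. The connection string is a placeholder, and
# the model string uses the Phi-4-mejco deployment named in the deleted text.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient.from_connection_string(
    conn_str="<your-project-connection-string>",  # placeholder
    credential=DefaultAzureCredential(),
)

# Serverless connection reference: "Target URI" + "@" + "Model Name"
agent = project_client.agents.create_agent(
    model="https://Phi-4-mejco.eastus.models.ai.azure.com/@Phi-4-mejco",
    name="my-agent",
    instructions="You are a helpful agent",
)
```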

articles/ai-foundry/includes/get-started-fdp.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ms.custom:
 - include file
 - build-aifnd
 - build-2025
-- update-code-3
+- update-code-4
 ---
 
 In this quickstart, you use [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) to:

articles/ai-foundry/openai/how-to/evaluations.md

Lines changed: 9 additions & 9 deletions
@@ -50,7 +50,7 @@ Azure OpenAI evaluation enables developers to create evaluation runs to test aga
 - West US 2
 - West US 3
 
-If your preferred region is missing, refer to [Azure OpenAI regions](https://learn.microsoft.com/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#global-standard-model-availability) and check if it is one of the Azure OpenAI regional availability zones.
+If your preferred region is missing, refer to [Azure OpenAI regions](/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#global-standard-model-availability) and check if it is one of the Azure OpenAI regional availability zones.
 
 ### Supported deployment types
 
@@ -63,7 +63,7 @@ If your preferred region is missing, refer to [Azure OpenAI regions](https://lea
 
 ## Evaluation API (preview)
 
-Evaluation API lets you test model outputs directly through API calls, and programmatically assess model quality and performance. To use Evaluation API, check out the [REST API documentation](https://learn.microsoft.com/azure/ai-services/openai/authoring-reference-preview#evaluation---get-list).
+Evaluation API lets you test model outputs directly through API calls, and programmatically assess model quality and performance. To use Evaluation API, check out the [REST API documentation](/azure/ai-services/openai/authoring-reference-preview#evaluation---get-list).
 
 ## Evaluation pipeline
 
@@ -127,7 +127,7 @@ The deployments available in your list depend on those you created within your A
 
 Testing criteria is used to assess the effectiveness of each output generated by the target model. These tests compare the input data with the output data to ensure consistency. You have the flexibility to configure different criteria to test and measure the quality and relevance of the output at different levels.
 
-:::image type="content" source="../media/how-to/evaluations/eval-testing-criteria.png" alt-text="Screenshot that shows the evaluations testing criteria options." lightbox="../media/how-to/evaluations/eval-testing-criteria.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-testing-criteria.png" alt-text="Screenshot that shows the different testing criteria selections." lightbox="../media/how-to/evaluations/eval-testing-criteria.png":::
 
 When you click into each testing criteria, you will see different types of graders as well as preset schemas that you can modify per your own evaluation dataset and criteria.
 
@@ -146,11 +146,11 @@ When you click into each testing criteria, you will see different types of grade
 
 4. Select your evaluation data which will be in `.jsonl` format. If you already have an existing data, you can select one, or upload a new data.
 
-:::image type="content" source="../media/how-to/evaluations/upload-data-1.png" alt-text="Screenshot of data upload." lightbox="../media/how-to/evaluations/upload-data-1.png":::
+:::image type="content" source="../media/how-to/evaluations/upload-data-1.png" alt-text="Screenshot of data upload options." lightbox="../media/how-to/evaluations/upload-data-1.png":::
 
 When you upload new data, you'll see the first three lines of the file as a preview on the right side:
 
-:::image type="content" source="../media/how-to/evaluations/upload-data-2.png" alt-text="Screenshot of data upload." lightbox="../media/how-to/evaluations/upload-data-2.png":::
+:::image type="content" source="../media/how-to/evaluations/upload-data-2.png" alt-text="Screenshot of data upload with example selection." lightbox="../media/how-to/evaluations/upload-data-2.png":::
 
 If you need a sample test file, you can use this sample `.jsonl` text. This sample contains sentences of various technical content, and we are going to be assessing semantic similarity across these sentences.
 
@@ -169,19 +169,19 @@ When you click into each testing criteria, you will see different types of grade
 
 5. If you would like to create new responses using inputs from your test data, you can select 'Generate new responses'. This will inject the input fields from our evaluation file into individual prompts for a model of your choice to generate output.
 
-:::image type="content" source="../media/how-to/evaluations/eval-generate-1.png" alt-text="Screenshot of the UX for generating model responses." lightbox="../media/how-to/evaluations/eval-generate-1.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-generate-1.png" alt-text="Screenshot of the UX showing selected import test data." lightbox="../media/how-to/evaluations/eval-generate-1.png":::
 
 You will select the model of your choice. If you do not have a model, you can create a new model deployment. The selected model will take the input data and generate its own unique outputs, which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that output later as part of our testing criteria. Alternatively, you could provide your own custom system message and individual message examples manually.
 
 :::image type="content" source="../media/how-to/evaluations/eval-generate-2.png" alt-text="Screenshot of the UX for generating model responses." lightbox="../media/how-to/evaluations/eval-generate-2.png":::
 
 6. For creating a test criteria, select **Add**. For the example file we provided, we are going to be assessing semantic similarity. Select **Model Scorer**, which contains test criteria presets for Semantic Similarity.
 
-:::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-1.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-1.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-1.png" alt-text="Screenshot of the semantic similarity UX config highlighting Model scorer." lightbox="../media/how-to/evaluations/eval-semantic-similarity-1.png":::
 
 Select **Semantic Similarity** at the top. Scroll to the bottom, and in `User` section, specify `{{item.output}}` as `Ground truth`, and specify `{{sample.output_text}}` as `Output`. This will take the original reference output from your evaluation `.jsonl` file (the sample file provided) and compare it against the output that is generated by the model you chose in the previous step.
 
-:::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-2.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-2.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-2.png" alt-text="Screenshot of the semantic similarity UX config with generated output." lightbox="../media/how-to/evaluations/eval-semantic-similarity-2.png":::
 
 :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-3.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-3.png":::
 
@@ -190,7 +190,7 @@ You will select the model of your choice. If you do not have a model, you can cr
 8. You are ready to create your Evaluation. Provide your Evaluation name, review everything looks correct, and **Submit** to create the Evaluation job. You'll be taken to a status page for your evaluation job, which will show the status as "Waiting".
 
 :::image type="content" source="../media/how-to/evaluations/eval-submit-job.png" alt-text="Screenshot of the evaluation job submit UX." lightbox="../media/how-to/evaluations/eval-submit-job.png":::
-:::image type="content" source="../media/how-to/evaluations/eval-submit-job-2.png" alt-text="Screenshot of the evaluation job submit UX." lightbox="../media/how-to/evaluations/eval-submit-job-2.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-submit-job-2.png" alt-text="Screenshot of the evaluation job submit UX, with a status of waiting." lightbox="../media/how-to/evaluations/eval-submit-job-2.png":::
 
 9. Once your evaluation job has created, you can select the job to view the full details of the job:
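Since the walkthrough above hinges on a `.jsonl` evaluation file whose fields are referenced as `{{item.output}}` in the testing criteria, here is a minimal sketch of producing one; the `input`/`output` field names are assumptions inferred from that template reference, not a confirmed schema:

```python
# Minimal sketch: write a sample evaluation .jsonl file. Field names
# ("input", "output") are assumptions inferred from the {{item.output}}
# template reference above, not a confirmed schema.
import json

rows = [
    {"input": "What does TCP stand for?", "output": "Transmission Control Protocol"},
    {"input": "What does DNS resolve?", "output": "Domain names to IP addresses"},
]

with open("eval-sample.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # one JSON object per line
```

Each line then surfaces in the testing criteria as `{{item.<field>}}`, with `{{item.output}}` serving as the ground truth.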

articles/ai-foundry/openai/includes/gpt-v-javascript.md

Lines changed: 3 additions & 0 deletions
@@ -70,6 +70,9 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
 
 Select an image from the [azure-samples/cognitive-services-sample-data-files](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/Images). Use the image URL in the code below or set the `IMAGE_URL` environment variable to the image URL.
 
+> [!IMPORTANT]
+> If you use a SAS URL to an image stored in Azure blob storage, you need to enable Managed Identity and assign the **Storage Blob Reader** role to your Azure OpenAI resource (do this in the Azure portal). This allows the model to access the image in blob storage.
+
 > [!TIP]
 > You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
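The quickstart this include belongs to is JavaScript; for readers following along here, a minimal Python sketch of the same image-URL call with keyless Microsoft Entra ID auth, assuming a vision-capable deployment — the deployment name `gpt-4o` and the API version are placeholders:

```python
# Minimal sketch of the quickstart's call (Python equivalent), assuming a
# vision-capable Azure OpenAI deployment; "gpt-4o" and the api_version are
# placeholders, and IMAGE_URL / AZURE_OPENAI_ENDPOINT come from the environment.
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=token_provider,  # keyless auth with Microsoft Entra ID
    api_version="2024-02-15-preview",  # placeholder
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder deployment name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture:"},
            {"type": "image_url", "image_url": {"url": os.environ["IMAGE_URL"]}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```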

articles/ai-services/content-understanding/tutorial/build-person-directory.md

Lines changed: 3 additions & 3 deletions
@@ -81,7 +81,7 @@ Content-Type: application/json
 {
   "tags": {
     "name": "Alice",
-    "age": "20"
+    "employeeId": "E12345"
   }
 }
 ```
@@ -98,7 +98,7 @@ The API returns a `personId` that uniquely identifies the created person.
   "personId": "4f66b612-e57d-4d17-9ef7-b951aea2cf0f",
   "tags": {
     "name": "Alice",
-    "age": "20"
+    "employeeId": "E12345"
   }
 }
 ```
@@ -195,7 +195,7 @@ The API returns the detected bounding box of the face along with the top person
   "personId": "{personId1}",
   "tags": {
     "name": "Alice",
-    "age": "20"
+    "employeeId": "E12345"
   },
   "confidence": 0.92
 }
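A minimal sketch of sending the "add person" body shown above; the endpoint, path, and key are placeholders (the tutorial itself defines the real request path), so treat this as illustrative only:

```python
# Minimal sketch of the "add person" call above. The endpoint URL, request
# path, and key are placeholders, not confirmed values — see the tutorial
# for the actual Content Understanding person-directory route.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
url = f"{endpoint}/<person-directory-path>"  # placeholder path from the tutorial
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder
    "Content-Type": "application/json",
}
body = {"tags": {"name": "Alice", "employeeId": "E12345"}}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json()["personId"])  # unique ID for the created person
```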

articles/ai-services/language-service/named-entity-recognition/overview.md

Lines changed: 9 additions & 11 deletions
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: overview
-ms.date: 02/15/2025
+ms.date: 07/14/2025
 ms.author: lajanuar
 ms.custom: language-service-ner
 ---
@@ -19,31 +19,29 @@ Named Entity Recognition (NER) is one of the features offered by [Azure AI Langu
 * [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
 * The [**conceptual articles**](concepts/named-entity-categories.md) provide in-depth explanations of the service's functionality and features.
 
-> [!NOTE]
-> [Entity Resolution](concepts/entity-resolutions.md) was upgraded to the [Entity Metadata](concepts/entity-metadata.md) starting in API version 2023-04-15-preview. If you're calling the preview version of the API equal or newer than 2023-04-15-preview, check out the [Entity Metadata](concepts/entity-metadata.md) article to use the resolution feature.
-
 [!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
 
 ## Get started with named entity recognition
 
 [!INCLUDE [development options](./includes/development-options.md)]
 
-[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
+[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
 
-## Responsible AI
+## Responsible AI
 
-An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Read the [transparency note for NER](/azure/ai-foundry/responsible-ai/language-service/transparency-note-named-entity-recognition) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system consists of more than just its core technology. It also includes the people who operate it, the people its use affects, and the broader deployment context.
+All these interconnected elements shape the effectiveness and outcomes of AI. Read the [transparency note for NER](/azure/ai-foundry/responsible-ai/language-service/transparency-note-named-entity-recognition) to learn about responsible AI use and deployment in your systems. For more information, *see* the following articles:
 
 [!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
 
 ## Scenarios
 
-* Enhance search capabilities and search indexing - Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
-* Automate business processes - For example, when reviewing insurance claims, recognized entities like name and location could be highlighted to facilitate the review. Or a support ticket could be generated with a customer's name and company automatically from an email.
-* Customer analysis – Determine the most popular information conveyed by customers in reviews, emails, and calls to determine the most relevant topics that get brought up and determine trends over time.
+* **Enhance search capabilities and search indexing**. Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
+* **Automate business processes** - Insurance claims, recognized entities like name and location can be highlighted to facilitate review. Support tickets can be automatically generated with customer name and company from an email.
+* **In-depth customer analysis**. Determine the most popular information conveyed by customers in reviews, emails, and calls to determine relevant topics and trends over time.
 
 ## Next steps
 
 There are two ways to get started using the Named Entity Recognition (NER) feature:
 * [Azure AI Foundry](../../../ai-foundry/what-is-azure-ai-foundry.md) is a web-based platform that lets you use several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
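As a companion to the scenarios in the diff above, a minimal NER sketch using the `azure-ai-textanalytics` package; the endpoint, key, and sample sentence are placeholders:

```python
# Minimal sketch of the NER scenarios above using azure-ai-textanalytics;
# endpoint, key, and the document text are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

documents = ["Contoso filed an insurance claim in Seattle on May 2."]
result = client.recognize_entities(documents)[0]

# Detected entities (organization, location, date, ...) can serve as search
# tags or prefill fields in an automated business process.
for entity in result.entities:
    print(entity.text, entity.category, entity.confidence_score)
```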

articles/ai-services/language-service/personally-identifiable-information/language-support.md

Lines changed: 18 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
 title: Personally Identifiable Information (PII) detection language support
 titleSuffix: Azure AI services
-description: This article explains which natural languages are supported by the PII detection feature of Azure AI Language.
+description: This article explains which natural languages the PII detection feature supports of Azure AI Language.
 author: laujan
 manager: nitinme
 ms.service: azure-ai-language
@@ -11,9 +11,10 @@ ms.author: lajanuar
 ms.custom: language-service-pii, build-2024
 ---
 
-# Personally Identifiable Information (PII) detection language support
+# Personally Identifiable Information (PII) detection language support
+
+Use this article to learn which natural languages text PII, document PII, and conversation PII features support.
 
-Use this article to learn which natural languages are supported by the text PII, document PII, and conversation PII features of Azure AI Language Service.
 # [Text PII](#tab/text)
 
 ## Text PII language support
@@ -190,7 +191,20 @@ Use this article to learn which natural languages are supported by the text PII,
 
 ## PII language support
 
-The Generally Available Conversational PII service currently supports English. Preview model version `2023-04-15-preview` supports English, German, Spanish, and French.
+PII conversation preview version `2023-04-15-preview` supports the following languages:
+
+* English
+* French
+* German
+* Spanish
+
+PII conversation generally available (GA) version currently supports the following languages:
+
+* English
+* French
+* Spanish
+
 
 ---
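For the text PII path covered by this article, a minimal sketch of passing an explicit language code with the `azure-ai-textanalytics` package; the endpoint, key, and sample sentence are placeholders:

```python
# Minimal sketch: text PII detection with an explicit language code, using
# azure-ai-textanalytics; endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

# language follows the codes in the tables above, for example "es" for Spanish
result = client.recognize_pii_entities(
    ["Mi nombre es Alice y mi teléfono es 555-0100."], language="es"
)[0]

for entity in result.entities:
    print(entity.text, entity.category)
print(result.redacted_text)  # input text with PII spans masked
```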

articles/ai-services/language-service/sentiment-opinion-mining/includes/custom/rest-api/assign-resources.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ Use the following sample JSON as your body.
 
 |Key |Placeholder |Value | Example |
 |---------|---------|----------|--|
-| `azureResourceId` | `{AZURE-RESOURCE-ID}` | The full resource ID path you want to assign. Found in the Azure portal under the **Properties** tab for the resource, in the **Resource ID** field. | `/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ContosoResourceGroup/providers/Microsoft.CognitiveServices/accounts/ContosoResource` |
+| `azureResourceId` | `{AZURE-RESOURCE-ID}` | The full resource ID path you want to assign. Found in the Azure portal under the **Properties** tab for the resource, in the **Resource ID** field. | `/subscriptions/a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1/resourceGroups/ContosoResourceGroup/providers/Microsoft.CognitiveServices/accounts/ContosoResource` |
 | `customDomain` | `{CUSTOM-DOMAIN}` | The custom subdomain of the resource you want to assign. Found in the Azure portal under the **Keys and Endpoint** tab for the resource, as the **Endpoint** field in the URL `https://<your-custom-subdomain>.cognitiveservices.azure.com/` | `contosoresource` |
 | `region` | `{REGION-CODE}` | A region code specifying the region of the resource you want to assign. Found in the Azure portal under the **Keys and Endpoint** tab for the resource, in the **Location/Region** field. |`eastus`|
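A minimal sketch of filling the three placeholders from the table above; the exact nesting of these keys follows the article's sample JSON (not shown in this diff), so treat the flat structure here as illustrative only:

```python
# Minimal sketch: fill the placeholders from the table above. The article's
# sample JSON (not shown in this diff) defines the exact nesting, so this
# flat dict is illustrative only.
import json

values = {
    "azureResourceId": (
        "/subscriptions/a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1"
        "/resourceGroups/ContosoResourceGroup"
        "/providers/Microsoft.CognitiveServices/accounts/ContosoResource"
    ),
    "customDomain": "contosoresource",
    "region": "eastus",
}
print(json.dumps(values, indent=2))
```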
