articles/ai-foundry/agents/concepts/model-region-support.md (1 addition, 42 deletions)
@@ -7,7 +7,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: conceptual
-ms.date: 07/10/2025
+ms.date: 07/14/2025
 ms.custom: azure-ai-agents, references_regions
 ---
@@ -46,47 +46,6 @@ Azure AI Foundry Agent Service supports the following Azure OpenAI models in the
 | westus | X | X | X | X | X || X || X ||
 | westus3 || X | X | X | X || X ||||
-
-## Non-Microsoft models
-
-The Azure AI Foundry Agent Service also supports the following models from the Azure AI Foundry model catalog.
-
-* Meta-Llama-405B-Instruct
-
-To use these models, you can use the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) to make a deployment, and then reference the deployment name in your agent. For example:
-
-```python
-agent = project_client.agents.create_agent(model="llama-3", name="my-agent", instructions="You are a helpful agent")
-```
-
-## Azure AI Foundry models
-
-### Models with tool-calling
-
-To best support agentic scenarios, we recommend using models that support tool-calling. The Azure AI Foundry Agent Service currently supports all agent-compatible models from the Azure AI Foundry model catalog.
-
-To use these models, use the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) to make a model deployment, then reference the deployment name in your agent. For example:
-
-`agent = project_client.agents.create_agent(model="llama-3", name="my-agent", instructions="You are a helpful agent")`
-
-> [!NOTE]
-> This option should only be used for open-source models (for example, Cepstral, Mistral, Llama) and not for OpenAI models, which are natively supported in the service. This option should also only be used for models that support tool-calling.
-
-### Models without tool-calling
-
-Though tool-calling support is a core capability for agentic scenarios, we now provide the ability to use models that don't support tool-calling in our API and SDK. This option can be helpful when you have specific use cases that don't require tool-calling.
-
-The following steps allow you to use any chat-completion model that is available through a [serverless API](/azure/ai-foundry/how-to/model-catalog-overview):
-
-1. Deploy your desired model through the serverless API. The model will show up on your **Models + Endpoints** page.
-1. Select the model name to see model details, where you'll find your model's target URI and key.
-1. Create a new serverless connection on the **Connected Resources** page, using the target URI and key.
-
-The model can now be referenced in your code (`Target URI` + `@` + `Model Name`), for example:
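The removed section ends by describing the `Target URI` + `@` + `Model Name` reference format. As a minimal sketch of that format, assuming the preview azure-ai-projects SDK whose `create_agent` call appears above, and with a hypothetical endpoint host, model name, and connection-string variable:

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Hypothetical values -- replace with the target URI and model name
# from your own serverless deployment and connection.
target_uri = "https://my-llama-serverless.eastus2.models.ai.azure.com"
model_name = "Meta-Llama-3.1-405B-Instruct"

# Assumes the preview azure-ai-projects SDK; client construction may
# differ in newer versions.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Reference the serverless model as `Target URI` + `@` + `Model Name`.
agent = project_client.agents.create_agent(
    model=f"{target_uri}@{model_name}",
    name="my-serverless-agent",
    instructions="You are a helpful agent",
)
```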
articles/ai-foundry/openai/how-to/evaluations.md (9 additions, 9 deletions)
@@ -50,7 +50,7 @@ Azure OpenAI evaluation enables developers to create evaluation runs to test aga
 - West US 2
 - West US 3

-If your preferred region is missing, refer to [Azure OpenAI regions](https://learn.microsoft.com/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#global-standard-model-availability) and check if it is one of the Azure OpenAI regional availability zones.
+If your preferred region is missing, refer to [Azure OpenAI regions](/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#global-standard-model-availability) and check if it is one of the Azure OpenAI regional availability zones.

 ### Supported deployment types
@@ -63,7 +63,7 @@ If your preferred region is missing, refer to [Azure OpenAI regions](https://lea
 ## Evaluation API (preview)

-Evaluation API lets you test model outputs directly through API calls, and programmatically assess model quality and performance. To use Evaluation API, check out the [REST API documentation](https://learn.microsoft.com/azure/ai-services/openai/authoring-reference-preview#evaluation---get-list).
+Evaluation API lets you test model outputs directly through API calls, and programmatically assess model quality and performance. To use Evaluation API, check out the [REST API documentation](/azure/ai-services/openai/authoring-reference-preview#evaluation---get-list).

 ## Evaluation pipeline
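To illustrate the kind of call the Evaluation API paragraph describes, here's a minimal sketch that lists evaluation runs over REST. The resource name, API version, and `/openai/evals` route are assumptions drawn from the preview authoring reference; verify them against the linked documentation before use:

```python
import os

import requests

# Assumed values -- substitute your own resource name and a current
# preview API version from the authoring reference.
resource = "my-aoai-resource"
api_version = "2025-04-01-preview"

response = requests.get(
    f"https://{resource}.openai.azure.com/openai/evals",
    params={"api-version": api_version},
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
)
response.raise_for_status()

# List responses follow the OpenAI-style envelope with a "data" array.
for evaluation in response.json().get("data", []):
    print(evaluation["id"], evaluation.get("status"))
```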
@@ -127,7 +127,7 @@ The deployments available in your list depend on those you created within your A
 Testing criteria is used to assess the effectiveness of each output generated by the target model. These tests compare the input data with the output data to ensure consistency. You have the flexibility to configure different criteria to test and measure the quality and relevance of the output at different levels.

-:::image type="content" source="../media/how-to/evaluations/eval-testing-criteria.png" alt-text="Screenshot that shows the evaluations testing criteria options." lightbox="../media/how-to/evaluations/eval-testing-criteria.png":::
+:::image type="content" source="../media/how-to/evaluations/eval-testing-criteria.png" alt-text="Screenshot that shows the different testing criteria selections." lightbox="../media/how-to/evaluations/eval-testing-criteria.png":::

 When you click into each testing criteria, you will see different types of graders as well as preset schemas that you can modify per your own evaluation dataset and criteria.
@@ -146,11 +146,11 @@ When you click into each testing criteria, you will see different types of grade
 4. Select your evaluation data which will be in `.jsonl` format. If you already have an existing data, you can select one, or upload a new data.

-   :::image type="content" source="../media/how-to/evaluations/upload-data-1.png" alt-text="Screenshot of data upload." lightbox="../media/how-to/evaluations/upload-data-1.png":::
+   :::image type="content" source="../media/how-to/evaluations/upload-data-1.png" alt-text="Screenshot of data upload options." lightbox="../media/how-to/evaluations/upload-data-1.png":::

    When you upload new data, you'll see the first three lines of the file as a preview on the right side:

-   :::image type="content" source="../media/how-to/evaluations/upload-data-2.png" alt-text="Screenshot of data upload." lightbox="../media/how-to/evaluations/upload-data-2.png":::
+   :::image type="content" source="../media/how-to/evaluations/upload-data-2.png" alt-text="Screenshot of data upload with example selection." lightbox="../media/how-to/evaluations/upload-data-2.png":::

    If you need a sample test file, you can use this sample `.jsonl` text. This sample contains sentences of various technical content, and we are going to be assessing semantic similarity across these sentences.
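For readers following along, here's a hedged sketch of producing a small evaluation `.jsonl` file of the shape this step describes. The sentences are illustrative stand-ins, not the article's sample data; graders later reference the fields as `{{item.input}}` and `{{item.output}}`:

```python
import json

# Illustrative rows only -- the article ships its own sample file.
rows = [
    {"input": "What does HTTP stand for?",
     "output": "HTTP stands for Hypertext Transfer Protocol."},
    {"input": "What is a vector database?",
     "output": "A vector database stores embeddings for similarity search."},
]

with open("eval-sample.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        # One JSON object per line; testing criteria reference these
        # fields as {{item.input}} and {{item.output}}.
        f.write(json.dumps(row) + "\n")
```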
@@ -169,19 +169,19 @@ When you click into each testing criteria, you will see different types of grade
 5. If you would like to create new responses using inputs from your test data, you can select 'Generate new responses'. This will inject the input fields from our evaluation file into individual prompts for a model of your choice to generate output.

-   :::image type="content" source="../media/how-to/evaluations/eval-generate-1.png" alt-text="Screenshot of the UX for generating model responses." lightbox="../media/how-to/evaluations/eval-generate-1.png":::
+   :::image type="content" source="../media/how-to/evaluations/eval-generate-1.png" alt-text="Screenshot of the UX showing selected import test data." lightbox="../media/how-to/evaluations/eval-generate-1.png":::

    You will select the model of your choice. If you do not have a model, you can create a new model deployment. The selected model will take the input data and generate its own unique outputs, which in this case will be stored in a variable called `{{sample.output_text}}`. We'll then use that output later as part of our testing criteria. Alternatively, you could provide your own custom system message and individual message examples manually.

    :::image type="content" source="../media/how-to/evaluations/eval-generate-2.png" alt-text="Screenshot of the UX for generating model responses." lightbox="../media/how-to/evaluations/eval-generate-2.png":::

 6. For creating a test criteria, select **Add**. For the example file we provided, we are going to be assessing semantic similarity. Select **Model Scorer**, which contains test criteria presets for Semantic Similarity.

-   :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-1.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-1.png":::
+   :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-1.png" alt-text="Screenshot of the semantic similarity UX config highlighting Model scorer." lightbox="../media/how-to/evaluations/eval-semantic-similarity-1.png":::

    Select **Semantic Similarity** at the top. Scroll to the bottom, and in `User` section, specify `{{item.output}}` as `Ground truth`, and specify `{{sample.output_text}}` as `Output`. This will take the original reference output from your evaluation `.jsonl` file (the sample file provided) and compare it against the output that is generated by the model you chose in the previous step.

-   :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-2.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-2.png":::
+   :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-2.png" alt-text="Screenshot of the semantic similarity UX config with generated output." lightbox="../media/how-to/evaluations/eval-semantic-similarity-2.png":::

    :::image type="content" source="../media/how-to/evaluations/eval-semantic-similarity-3.png" alt-text="Screenshot of the semantic similarity UX config." lightbox="../media/how-to/evaluations/eval-semantic-similarity-3.png":::
@@ -190,7 +190,7 @@ You will select the model of your choice. If you do not have a model, you can cr
 8. You are ready to create your Evaluation. Provide your Evaluation name, review everything looks correct, and **Submit** to create the Evaluation job. You'll be taken to a status page for your evaluation job, which will show the status as "Waiting".

    :::image type="content" source="../media/how-to/evaluations/eval-submit-job.png" alt-text="Screenshot of the evaluation job submit UX." lightbox="../media/how-to/evaluations/eval-submit-job.png":::

-   :::image type="content" source="../media/how-to/evaluations/eval-submit-job-2.png" alt-text="Screenshot of the evaluation job submit UX." lightbox="../media/how-to/evaluations/eval-submit-job-2.png":::
+   :::image type="content" source="../media/how-to/evaluations/eval-submit-job-2.png" alt-text="Screenshot of the evaluation job submit UX, with a status of waiting." lightbox="../media/how-to/evaluations/eval-submit-job-2.png":::

 9. Once your evaluation job has created, you can select the job to view the full details of the job:
articles/ai-foundry/openai/includes/gpt-v-javascript.md (3 additions, 0 deletions)
@@ -70,6 +70,9 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
 Select an image from the [azure-samples/cognitive-services-sample-data-files](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/Images). Use the image URL in the code below or set the `IMAGE_URL` environment variable to the image URL.

+> [!IMPORTANT]
+> If you use a SAS URL to an image stored in Azure blob storage, you need to enable Managed Identity and assign the **Storage Blob Reader** role to your Azure OpenAI resource (do this in the Azure portal). This allows the model to access the image in blob storage.
+
 > [!TIP]
 > You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
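As a companion to the tip about base-64 image data, here's a minimal sketch of that pattern, written in Python to match the other snippets in this changeset even though this include targets JavaScript. The endpoint, deployment name, file path, and API version are placeholder assumptions:

```python
import base64
import os

from openai import AzureOpenAI

# Assumed values -- substitute your own endpoint, key, API version,
# and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://my-aoai-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

with open("sample.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture."},
            # A base-64 data URL replaces the public image URL.
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```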
articles/ai-services/language-service/named-entity-recognition/overview.md (9 additions, 11 deletions)
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: overview
-ms.date: 02/15/2025
+ms.date: 07/14/2025
 ms.author: lajanuar
 ms.custom: language-service-ner
 ---
@@ -19,31 +19,29 @@ Named Entity Recognition (NER) is one of the features offered by [Azure AI Langu
 * [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
 * The [**conceptual articles**](concepts/named-entity-categories.md) provide in-depth explanations of the service's functionality and features.

-> [!NOTE]
-> [Entity Resolution](concepts/entity-resolutions.md) was upgraded to [Entity Metadata](concepts/entity-metadata.md) starting in API version 2023-04-15-preview. If you're calling a preview API version equal to or newer than 2023-04-15-preview, check out the [Entity Metadata](concepts/entity-metadata.md) article to use the resolution feature.
-
 [!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
-An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Read the [transparency note for NER](/azure/ai-foundry/responsible-ai/language-service/transparency-note-named-entity-recognition) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system consists of more than just its core technology. It also includes the people who operate it, the people its use affects, and the broader deployment context.
+All these interconnected elements shape the effectiveness and outcomes of AI. Read the [transparency note for NER](/azure/ai-foundry/responsible-ai/language-service/transparency-note-named-entity-recognition) to learn about responsible AI use and deployment in your systems. For more information, see the following articles:

 [!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]

 ## Scenarios

-* Enhance search capabilities and search indexing - Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
-* Automate business processes - For example, when reviewing insurance claims, recognized entities like name and location could be highlighted to facilitate the review. Or a support ticket could be generated with a customer's name and company automatically from an email.
-* Customer analysis – Determine the most popular information conveyed by customers in reviews, emails, and calls to determine the most relevant topics that get brought up and determine trends over time.
+* **Enhance search capabilities and search indexing**. Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
+* **Automate business processes**. For insurance claims, recognized entities like name and location can be highlighted to facilitate review. Support tickets can be automatically generated with a customer's name and company from an email.
+* **In-depth customer analysis**. Determine the most popular information conveyed by customers in reviews, emails, and calls to determine relevant topics and trends over time.

 ## Next steps

 There are two ways to get started using the Named Entity Recognition (NER) feature:
 * [Azure AI Foundry](../../../ai-foundry/what-is-azure-ai-foundry.md) is a web-based platform that lets you use several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
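To ground the scenarios list, here's a minimal sketch of calling NER with the azure-ai-textanalytics client library; the endpoint and key environment variables are placeholders for your own Language resource:

```python
import os

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder environment variables -- point these at your own
# Language resource endpoint and key.
client = TextAnalyticsClient(
    endpoint=os.environ["LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["LANGUAGE_KEY"]),
)

documents = ["Contoso filed an insurance claim in Seattle on January 3rd."]

result = client.recognize_entities(documents)
for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            # For example: Contoso (Organization), Seattle (Location)
            print(f"{entity.text} ({entity.category}) "
                  f"confidence={entity.confidence_score:.2f}")
```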
articles/ai-services/language-service/personally-identifiable-information/language-support.md (18 additions, 4 deletions)
@@ -1,7 +1,7 @@
 ---
 title: Personally Identifiable Information (PII) detection language support
 titleSuffix: Azure AI services
-description: This article explains which natural languages are supported by the PII detection feature of Azure AI Language.
+description: This article explains which natural languages the PII detection feature of Azure AI Language supports.
 author: laujan
 manager: nitinme
 ms.service: azure-ai-language
@@ -11,9 +11,10 @@ ms.author: lajanuar
 ms.custom: language-service-pii, build-2024
 ---

-# Personally Identifiable Information (PII) detection language support
+# Personally Identifiable Information (PII) detection language support
+
+Use this article to learn which natural languages the text PII, document PII, and conversation PII features support.

-Use this article to learn which natural languages are supported by the text PII, document PII, and conversation PII features of Azure AI Language Service.

 # [Text PII](#tab/text)

 ## Text PII language support
@@ -190,7 +191,20 @@ Use this article to learn which natural languages are supported by the text PII,
 ## PII language support

-The Generally Available Conversational PII service currently supports English. Preview model version `2023-04-15-preview` supports English, German, Spanish, and French.
+PII conversation preview version `2023-04-15-preview` supports the following languages:
+
+* English
+* French
+* German
+* Spanish
+
+The PII conversation generally available (GA) version currently supports the following languages:
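To show where a supported language code lands in code, here's a minimal sketch of text PII detection with an explicit `language` parameter via the azure-ai-textanalytics client library; the endpoint and key variables are placeholders:

```python
import os

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint=os.environ["LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["LANGUAGE_KEY"]),
)

# Pass one of the supported language codes explicitly,
# for example Spanish ("es").
documents = ["Mi nombre es Ana García y mi teléfono es 555-123-4567."]
result = client.recognize_pii_entities(documents, language="es")

for doc in result:
    if not doc.is_error:
        print("Redacted:", doc.redacted_text)
        for entity in doc.entities:
            print(f"{entity.text} ({entity.category})")
```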
articles/ai-services/language-service/sentiment-opinion-mining/includes/custom/rest-api/assign-resources.md (1 addition, 1 deletion)
@@ -50,7 +50,7 @@ Use the following sample JSON as your body.
 |Key |Placeholder |Value | Example |
 |---------|---------|----------|--|
-|`azureResourceId`|`{AZURE-RESOURCE-ID}`| The full resource ID path you want to assign. Found in the Azure portal under the **Properties** tab for the resource, in the **Resource ID** field. |`/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ContosoResourceGroup/providers/Microsoft.CognitiveServices/accounts/ContosoResource`|
+|`azureResourceId`|`{AZURE-RESOURCE-ID}`| The full resource ID path you want to assign. Found in the Azure portal under the **Properties** tab for the resource, in the **Resource ID** field. |`/subscriptions/a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1/resourceGroups/ContosoResourceGroup/providers/Microsoft.CognitiveServices/accounts/ContosoResource`|
 |`customDomain`|`{CUSTOM-DOMAIN}`| The custom subdomain of the resource you want to assign. Found in the Azure portal under the **Keys and Endpoint** tab for the resource, as the **Endpoint** field in the URL `https://<your-custom-subdomain>.cognitiveservices.azure.com/`|`contosoresource`|
 |`region`|`{REGION-CODE}`| A region code specifying the region of the resource you want to assign. Found in the Azure portal under the **Keys and Endpoint** tab for the resource, in the **Location/Region** field. |`eastus`|
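For reference, here's a hedged sketch that assembles the request body from the table's keys using its own example values; the `resourcesMetadata` wrapper is an assumption about the body shape, so confirm it against the REST reference:

```python
import json

# Values taken from the table above; the `resourcesMetadata` wrapper is
# an assumption -- confirm the exact body shape against the REST reference.
body = {
    "resourcesMetadata": [
        {
            "azureResourceId": (
                "/subscriptions/a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1"
                "/resourceGroups/ContosoResourceGroup"
                "/providers/Microsoft.CognitiveServices/accounts/ContosoResource"
            ),
            "customDomain": "contosoresource",
            "region": "eastus",
        }
    ]
}

print(json.dumps(body, indent=2))
```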