
Commit 05b2bb7

Merge pull request #8701 from MicrosoftDocs/main
Auto Publish – main to live - 2025-11-19 23:10 UTC
2 parents 9a6fc51 + 8c52615 commit 05b2bb7

25 files changed: +295 −200 lines

articles/ai-foundry/default/toc-files-foundry/agent-tools/toc.yml

Lines changed: 8 additions & 6 deletions
@@ -25,15 +25,17 @@ items:
   items:
   - name: Azure Language tools and agents
     href: /azure/ai-services/language-service/concepts/foundry-tools-agents?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
-  - name: CLU multi-turn conversations
-    href: /azure/ai-services/language-service/conversational-language-understanding/concepts/multi-turn-conversations?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
-  - name: Redact Personal Identifiable Information (PII) from text
-    href: /azure/ai-services/language-service/personally-identifiable-information/how-to/redact-text-pii?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
+  - name: Try CLU multi-turn conversations
+    href: /azure/ai-services/language-service/conversational-language-understanding/how-to/quickstart-multi-turn-conversations?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
+  - name: Detect Personally Identifiable Information (PII)
+    href: /azure/ai-services/language-service/personally-identifiable-information/quickstart?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
+  - name: Try Azure Language detection
+    href: /azure/ai-services/language-service/language-detection/quickstart?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
 - name: Azure Translator in Foundry Tools
   items:
-  - name: Text translation
+  - name: Azure text translation
     href: /azure/ai-services/translator/text-translation/overview?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
-  - name: Document translation
+  - name: Azure document translation
     href: /azure/ai-services/translator/document-translation/overview?toc=/azure/ai-foundry/default/toc.json&bc=/azure/ai-foundry/breadcrumb/toc.json
 - name: Enable token limits with API gateways
   href: ../../configuration/enable-ai-api-management-gateway-portal.md

articles/ai-foundry/how-to/deploy-models-managed-pay-go.md

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ The following sections list the supported models for managed compute deployment
 | [Embed v4](https://ai.azure.com/explore/models/embed-v-4-0/version/4/registry/azureml-cohere/?cid=learnDocs) | Embeddings |
 | [Rerank v3.5](https://ai.azure.com/explore/models/Cohere-rerank-v3.5/version/2/registry/azureml-cohere/?cid=learnDocs) | Text classification |
 
-### Mercury
+### Inception Labs
 
 | Model | Task |
 |--|--|

articles/ai-foundry/includes/create-content-filter.md

Lines changed: 0 additions & 3 deletions
@@ -15,9 +15,6 @@ ms.custom: include
 
 For any model deployment in [Foundry](https://ai.azure.com/?cid=learnDocs), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
 
-> [!IMPORTANT]
-> The GPT-image-1 model does not support content filtering configuration: only the default content filter is used.
-
 > [!TIP]
 > For guidance with content filters in your Foundry project, you can read more at [Foundry content filtering](/azure/ai-studio/concepts/content-filtering).
 

articles/ai-foundry/openai/concepts/model-router.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ If you select **Auto-update** at the deployment step (see [Manage models](/azure
 
 |Model router version|Underlying models| Underlying model version
 |:---:|:---|:----:|
-|`2025-11-18`| `gpt-4.1` </br> `gpt-4.1-mini` </br> `gpt-4.1-nano` </br> `o4-mini` <br> `gpt-5-nano` <br> `gpt-5-mini` <br> `gpt-5` <br> `gpt-5-chat` <br> `Deepseek-v3.1` <br> `gpt-oss-120b` <br> `llama4-maverick-instruct` <br> `grok-4` <br> `grok-4-fast` <br> `gpt-4o` <br> `gpt-4o-mini` <br> `claude-haiku-4-5` <br> `claude-opus-4-1` <br> `claude-sonnet-4-5` | `2025-04-14` <br> `2025-04-14` <br> `2025-04-14` <br> `2025-04-16` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` <br> N/A <br> N/A <br> N/A <br> N/A <br> N/A <br> `2024-11-20` <br> `2024-07-18` <br> N/A <br> N/A <br> N/A |
+|`2025-11-18`| `gpt-4.1` </br> `gpt-4.1-mini` </br> `gpt-4.1-nano` </br> `o4-mini` <br> `gpt-5-nano` <br> `gpt-5-mini` <br> `gpt-5` <br> `gpt-5-chat` <br> `Deepseek-v3.1` <br> `gpt-oss-120b` <br> `llama4-maverick-instruct` <br> `grok-4` <br> `grok-4-fast` <br> `gpt-4o` <br> `gpt-4o-mini` <br> `claude-haiku-4-5` <br> `claude-opus-4-1` <br> `claude-sonnet-4-5` | `2025-04-14` <br> `2025-04-14` <br> `2025-04-14` <br> `2025-04-16` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` <br> N/A <br> N/A <br> N/A <br> N/A <br> N/A <br> `2024-11-20` <br> `2024-07-18` <br> `2025-10-01` <br> `2025-08-05` <br> `2025-09-29` |
 |`2025-08-07`| `gpt-4.1` </br> `gpt-4.1-mini` </br> `gpt-4.1-nano` </br> `o4-mini` </br> `gpt-5` <br> `gpt-5-mini` <br> `gpt-5-nano` <br> `gpt-5-chat` | `2025-04-14` <br> `2025-04-14` <br> `2025-04-14` <br> `2025-04-16` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` <br> `2025-08-07` |
 |`2025-05-19`| `gpt-4.1` </br>`gpt-4.1-mini` </br>`gpt-4.1-nano` </br>`o4-mini` | `2025-04-14` <br> `2025-04-14` <br> `2025-04-14` <br> `2025-04-16` |
 
articles/ai-foundry/openai/how-to/content-filters.md

Lines changed: 0 additions & 3 deletions
@@ -25,9 +25,6 @@ Prompt shields and protected text and code models are optional and on by default
 > [!NOTE]
 > All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.
 
-> [!IMPORTANT]
-> The GPT-image-1 series models do not support content filtering configuration: only the default content filter is used.
-
 Content filters can be configured at the resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
 
 ## Prerequisites

articles/ai-foundry/openai/how-to/dall-e.md

Lines changed: 8 additions & 5 deletions
@@ -50,14 +50,17 @@ OpenAI's image generation models create images from user-provided text prompts a
 | **Strengths** | Best for **realism**, **instruction following**, and **multimodal context** | Best for **fast prototyping**, **bulk generation**, or **cost-sensitive** use cases | Strong **prompt adherence**, **natural text rendering**, and **stylistic diversity** |
 
 ## Responsible AI and Image Generation
-Azure OpenAI image generation models include built-in Responsible AI (RAI) protections to help ensure safe and compliant use.
-We provide input and output moderation across all image generation models, along with Azure-specific safeguards such as content filtering and abuse monitoring. These systems help detect and prevent the generation or misuse of harmful, unsafe, or policy-violating content.
+Azure OpenAI's image generation models include built-in Responsible AI (RAI) protections to help ensure safe and compliant use.
+
+In addition, Azure provides input and output moderation across all image generation models, along with Azure-specific safeguards such as content filtering and abuse monitoring. These systems help detect and prevent the generation or misuse of harmful, unsafe, or policy-violating content.
+
 Customers can learn more about these safeguards and how to customize them here:
-- Learn more: Explore[content filtering](/azure/ai-foundry/openai/concepts/content-filter)
+- Learn more: Explore [content filtering](/azure/ai-foundry/openai/concepts/content-filter)
 - Request customization: Apply to [opt out of content filtering](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
 
-Certain Enterprise Agreement (EA) customers, those with significant usage volume, or customers with approved use cases may also be eligible to enable photo transformations (i.e. applying image edits) to images containing minors.
-If you're approved, such images will not be automatically blocked by the system.
+### Special considerations for generating images of minors
+
+Photorealistic images of minors are blocked by default. Customers can [request access](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQVFQRDhQRjVPNllLMVZCSVNYVUs4MzhNMyQlQCN0PWcu) to this model capability. Enterprise-tier customers are automatically approved.
 
 
 ## Call the image generation API

articles/ai-foundry/openai/how-to/fine-tune-test.md

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
 
 
 ### Deploy a model with Azure CLI
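The REST deployment flow this table describes can be sketched in Python. This is an illustrative sketch, not the article's own sample: the placeholder IDs are hypothetical, and the `sku` block and overall payload shape are assumptions inferred from the parameter table above.

```python
import json

def build_deploy_request(subscription, resource_group, resource_name,
                         deployment_name, fine_tuned_model, token):
    # Control-plane PUT target for creating the deployment (api-version
    # taken from the article's snippets).
    url = (
        f"https://management.azure.com/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{resource_name}"
        f"/deployments/{deployment_name}?api-version=2024-10-21"
    )
    # deploy_data body: the fine-tuned model name comes from the
    # fine-tuning job results, e.g. gpt-4.1-mini-2025-04-14.ft-<job-id>.
    # The sku values here are assumed for illustration.
    body = {
        "sku": {"name": "standard", "capacity": 1},
        "properties": {
            "model": {
                "format": "OpenAI",
                "name": fine_tuned_model,
                "version": "1",
            }
        },
    }
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_deploy_request(
    "<SUBSCRIPTION>", "<RESOURCE_GROUP>", "<RESOURCE_NAME>",
    "gpt-4.1-mini-ft",
    "gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83",
    "<TOKEN>",
)
# Send with requests.put(url, headers=headers, data=payload) once the
# placeholders are filled in.
```

Building the URL and body separately keeps the placeholder substitution visible, mirroring how the curl example assembles the same request.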

articles/ai-foundry/openai/how-to/fine-tuning-deploy.md

Lines changed: 16 additions & 17 deletions
@@ -48,7 +48,7 @@ token = os.getenv("<TOKEN>")
 subscription = "<YOUR_SUBSCRIPTION_ID>"
 resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
 resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
-model_deployment_name = "gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.
+model_deployment_name = "gpt-4.1-mini-ft" # custom deployment name that you will use to reference the model when making inference calls.
 
 deploy_params = {'api-version': "2024-10-21"}
 deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
@@ -58,7 +58,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
+            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83
             "version": "1"
         }
     }
@@ -84,7 +84,7 @@ print(r.json())
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
 
 ### Cross region deployment
 
@@ -112,7 +112,7 @@ source_resource = "<SOURCE_RESOURCE>"
 
 source = f'/subscriptions/{source_subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.CognitiveServices/accounts/{source_resource}'
 
-model_deployment_name = "gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.
+model_deployment_name = "gpt-4.1-mini-ft" # custom deployment name that you will use to reference the model when making inference calls.
 
 deploy_params = {'api-version': "2024-10-21"}
 deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
@@ -124,7 +124,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0125.ft-0ab3f80e4f2242929258fff45b56a9ce
+            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-4.1-mini-2025-04-14.ft-0ab3f80e4f2242929258fff45b56a9ce
             "version": "1",
             "source": source
         }
@@ -222,7 +222,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
 
 
 ### Cross region deployment
@@ -327,7 +327,7 @@ client = AzureOpenAI(
 )
 
 response = client.chat.completions.create(
-    model="gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
+    model="gpt-4.1-mini-ft", # model = "Custom deployment name you chose for your fine-tuning model"
     messages=[
         {"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
@@ -366,16 +366,14 @@ Azure OpenAI fine-tuning supports the following deployment types.
 
 [Standard deployments](../../foundry-models/concepts/deployment-types.md) provide a pay-per-token billing model with data residency confined to the deployed region.
 
-| Models | East US2 | North Central US | Sweden Central | Switzerland West |
-|--------------------|:--------:|:----------------:|:--------------:|:----------------:|
-|o4-mini || || |
-|GPT-4.1 | ||| |
-|GPT-4.1-mini | ||| |
-|GPT-4.1-nano | ||| |
-|GPT-4o || || |
-|GPT-4o-mini | ||| |
-|GPT-35-Turbo (1106) |||||
-|GPT-35-Turbo (0125) |||||
+| Models | East US2 | North Central US | Sweden Central |
+|--------------------|:--------:|:----------------:|:--------------:|
+|o4-mini || ||
+|GPT-4.1 | |||
+|GPT-4.1-mini | |||
+|GPT-4.1-nano | |||
+|GPT-4o || ||
+|GPT-4o-mini | |||
 
 ### Global Standard
 
@@ -401,6 +399,7 @@ Developer deployments are available from all Azure OpenAI regions for the follow
 * GPT-4.1
 * GPT-4.1-mini
 * GPT-4.1-nano
+* o4-mini
 
 
 ### Provisioned Throughput
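As a hedged sketch of the cross-region payload the hunks above modify, the `source` property is what distinguishes a cross-region deployment from a same-resource one. The subscription and resource names are placeholders, and the `sku` block is an assumption for illustration; only the `model` object mirrors the article's snippet.

```python
import json

# Placeholders for the resource where the fine-tuning job originally ran.
source_subscription = "<SOURCE_SUBSCRIPTION_ID>"
source_resource_group = "<SOURCE_RESOURCE_GROUP>"
source_resource = "<SOURCE_RESOURCE>"

# ARM path of the source resource, as built in the snippet above.
source = (
    f"/subscriptions/{source_subscription}"
    f"/resourceGroups/{source_resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{source_resource}"
)

deploy_data = {
    "sku": {"name": "standard", "capacity": 1},  # assumed values
    "properties": {
        "model": {
            "format": "OpenAI",
            # Example fine-tuned model name format from the article.
            "name": "gpt-4.1-mini-2025-04-14.ft-0ab3f80e4f2242929258fff45b56a9ce",
            "version": "1",
            "source": source,  # present only for cross-region deployments
        }
    },
}
payload = json.dumps(deploy_data)
```

Dropping the `source` key from the `model` object yields the same-resource payload shown earlier in the diff, which is why the two snippets differ by only that one line.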

articles/ai-foundry/openai/includes/fine-tune-models.md

Lines changed: 0 additions & 5 deletions
@@ -14,15 +14,10 @@ ms.custom:
 ---
 
 > [!NOTE]
-> `gpt-35-turbo`: Fine-tuning of this model is limited to a subset of regions, and isn't available in every region the base model is available.
->
 > The supported regions for fine-tuning might vary if you use Azure OpenAI models in a Microsoft Foundry project versus outside a project.
->
 
 | Model ID | Standard training regions | Global training | Max request (tokens) | Training data (up to) | Modality |
 | --- | --- | :---: | :---: | :---: | --- |
-| `gpt-35-turbo` <br> (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | Input: 16,385<br> Output: 4,096 | Sep 2021 | Text to text |
-| `gpt-35-turbo` <br> (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | 16,385 | Sep 2021 | Text to text |
 | `gpt-4o-mini` <br> (2024-07-18) | North Central US <br> Sweden Central || Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | Oct 2023 | Text to text |
 | `gpt-4o` <br> (2024-08-06) | East US2 <br> North Central US <br> Sweden Central || Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | Oct 2023 | Text and vision to text |
 | `gpt-4.1` <br> (2025-04-14) | North Central US <br> Sweden Central || Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | May 2024 | Text and vision to text |

articles/ai-services/document-intelligence/containers/install-run.md

Lines changed: 2 additions & 2 deletions
@@ -283,7 +283,7 @@ services:
     container_name: azure-cognitive-service-read
     image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
     environment:
-     - EULA=accept
+      - EULA=accept
       - billing={FORM_RECOGNIZER_ENDPOINT_URI}
       - apiKey={FORM_RECOGNIZER_KEY}
 ```
@@ -671,7 +671,7 @@ services:
     container_name: azure-cognitive-service-read
     image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
     environment:
-     - EULA=accept
+      - EULA=accept
       - billing={FORM_RECOGNIZER_ENDPOINT_URI}
       - apiKey={FORM_RECOGNIZER_KEY}
 ```
