articles/ai-foundry/includes/create-content-filter.md (0 additions, 3 deletions)

@@ -15,9 +15,6 @@ ms.custom: include
 
 For any model deployment in [Foundry](https://ai.azure.com/?cid=learnDocs), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
 
-> [!IMPORTANT]
-> The GPT-image-1 model does not support content filtering configuration: only the default content filter is used.
-
 > [!TIP]
 > For guidance with content filters in your Foundry project, you can read more at [Foundry content filtering](/azure/ai-studio/concepts/content-filtering).
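The filter behavior described above shows up in API responses in two ways: a blocked prompt comes back as an HTTP 400 error whose code is `content_filter`, while a filtered completion arrives with `finish_reason` set to `content_filter`. A minimal sketch of telling those cases apart, assuming the documented response shapes (the helper function name is ours, not part of any SDK):

```python
# Hedged sketch: classify an Azure OpenAI chat-completions response body by
# content-filter outcome. The shapes assumed here follow the documented
# behavior (error code "content_filter" for blocked prompts, finish_reason
# "content_filter" for suppressed output); verify against current docs.
def classify_filter_outcome(response: dict) -> str:
    error = response.get("error")
    if error is not None and error.get("code") == "content_filter":
        return "prompt_blocked"        # input tripped the filter (HTTP 400 body)
    choice = response["choices"][0]
    if choice.get("finish_reason") == "content_filter":
        return "completion_filtered"   # output was suppressed mid-generation
    return "ok"

sample = {"choices": [{"finish_reason": "content_filter", "message": {"content": ""}}]}
print(classify_filter_outcome(sample))  # -> completion_filtered
```

Checking both places matters because a stricter custom filter can trigger on either the prompt or the generated text.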
articles/ai-foundry/openai/how-to/content-filters.md (0 additions, 3 deletions)

@@ -25,9 +25,6 @@ Prompt shields and protected text and code models are optional and on by default
 > [!NOTE]
 > All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.
 
-> [!IMPORTANT]
-> The GPT-image-1 series models do not support content filtering configuration: only the default content filter is used.
-
 Content filters can be configured at the resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
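The resource-level flow (create a configuration, then attach it to deployments) goes through the Azure management plane. The sketch below only assembles such a request; the `raiPolicies` resource type, the `api-version`, and the payload field names are assumptions drawn from the Cognitive Services management API and should be checked against the current reference before use:

```python
import json

# Hedged sketch: build an ARM PUT request for a custom content filter
# configuration. Endpoint shape, api-version, and payload fields are
# illustrative assumptions, not a definitive API contract.
def build_filter_config_request(subscription, resource_group, account, policy_name):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.CognitiveServices"
        f"/accounts/{account}/raiPolicies/{policy_name}"
        "?api-version=2024-10-01"
    )
    body = {
        "properties": {
            # Start from the default policy, then override per-category
            # severity thresholds for prompts and completions.
            "basePolicyName": "Microsoft.Default",
            "contentFilters": [
                {"name": "Violence", "enabled": True, "blocking": True,
                 "severityThreshold": "Medium", "source": "Prompt"},
                {"name": "Violence", "enabled": True, "blocking": True,
                 "severityThreshold": "Medium", "source": "Completion"},
            ],
        }
    }
    return url, json.dumps(body)

url, payload = build_filter_config_request(
    "<subscription-id>", "<resource-group>", "<account>", "my-strict-filter"
)
```

Once created, the configuration is referenced by name when creating or updating a deployment, which is how one configuration can serve several deployments.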
articles/ai-foundry/openai/how-to/dall-e.md (8 additions, 5 deletions)

@@ -50,14 +50,17 @@ OpenAI's image generation models create images from user-provided text prompts a
 |**Strengths**| Best for **realism**, **instruction following**, and **multimodal context**| Best for **fast prototyping**, **bulk generation**, or **cost-sensitive** use cases | Strong **prompt adherence**, **natural text rendering**, and **stylistic diversity**|
 
 ## Responsible AI and Image Generation
-Azure OpenAI image generation models include built-in Responsible AI (RAI) protections to help ensure safe and compliant use.
-We provide input and output moderation across all image generation models, along with Azure-specific safeguards such as content filtering and abuse monitoring. These systems help detect and prevent the generation or misuse of harmful, unsafe, or policy-violating content.
+
+Azure OpenAI's image generation models include built-in Responsible AI (RAI) protections to help ensure safe and compliant use.
+
+In addition, Azure provides input and output moderation across all image generation models, along with Azure-specific safeguards such as content filtering and abuse monitoring. These systems help detect and prevent the generation or misuse of harmful, unsafe, or policy-violating content.
+
 Customers can learn more about these safeguards and how to customize them here:
 - Request customization: Apply to [opt out of content filtering](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
 
-Certain Enterprise Agreement (EA) customers, those with significant usage volume, or customers with approved use cases may also be eligible to enable photo transformations (i.e. applying image edits) to images containing minors.
-If you're approved, such images will not be automatically blocked by the system.
+### Special considerations for generating images of minors
+
+Photorealistic images of minors are blocked by default. Customers can [request access](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQVFQRDhQRjVPNllLMVZCSVNYVUs4MzhNMyQlQCN0PWcu) to this model capability. Enterprise-tier customers are automatically approved.
articles/ai-foundry/openai/how-to/fine-tune-test.md (1 addition, 1 deletion)

@@ -112,7 +112,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|

-    "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
+    "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83
     "version": "1"
 }
}

@@ -84,7 +84,7 @@ print(r.json())
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|

-    "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0125.ft-0ab3f80e4f2242929258fff45b56a9ce
+    "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-4.1-mini-2025-04-14.ft-0ab3f80e4f2242929258fff45b56a9ce
     "version": "1",
     "source": source
 }

@@ -222,7 +222,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
 
 
 ### Cross region deployment

@@ -327,7 +327,7 @@ client = AzureOpenAI(
 )
 
 response = client.chat.completions.create(
-    model="gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
+    model="gpt-4.1-mini-ft", # model = "Custom deployment name you chose for your fine-tuning model"
     messages=[
         {"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},

@@ -366,16 +366,14 @@ Azure OpenAI fine-tuning supports the following deployment types.
 
 [Standard deployments](../../foundry-models/concepts/deployment-types.md) provide a pay-per-token billing model with data residency confined to the deployed region.
 
-| Models | East US2 | North Central US | Sweden Central | Switzerland West |
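The parameter tables above all feed the same deploy_data json that the article's curl and Python snippets send to the management API. A minimal sketch of assembling that request, assuming the payload shape shown in the article (the `api-version`, sku values, and placeholder names here are illustrative):

```python
import json

# Hedged sketch: assemble the deployment PUT request for a fine-tuned model.
# The payload shape mirrors the article's deploy_data json; verify the
# api-version and sku against the current deployment guide before sending.
def build_deployment_request(subscription, resource_group, resource_name,
                             model_deployment_name, fine_tuned_model):
    url = (
        f"https://management.azure.com/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{resource_name}"
        f"/deployments/{model_deployment_name}"
    )
    deploy_data = {
        "sku": {"name": "standard", "capacity": 1},
        "properties": {
            "model": {
                "format": "OpenAI",
                # From your fine-tuning job results, e.g.
                # gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83
                # (a checkpoint ID like ftchkpt-... also works here)
                "name": fine_tuned_model,
                "version": "1",
            }
        },
    }
    return url, json.dumps(deploy_data)

url, body = build_deployment_request(
    "<SUBSCRIPTION>", "<RESOURCE_GROUP>", "<RESOURCE_NAME>",
    "gpt-4.1-mini-ft",
    "gpt-4.1-mini-2025-04-14.ft-b044a9d3cf9c4228b5d393567f693b83",
)
# Send with, e.g.:
# requests.put(url, params={"api-version": "<api-version>"},
#              headers={"Authorization": f"Bearer {token}"}, data=body)
```

The model_deployment_name passed here is the same custom name you later use as `model=` in chat completion calls.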
articles/ai-foundry/openai/includes/fine-tune-models.md (0 additions, 5 deletions)

@@ -14,15 +14,10 @@ ms.custom:
 ---
 
 > [!NOTE]
-> `gpt-35-turbo`: Fine-tuning of this model is limited to a subset of regions, and isn't available in every region the base model is available.
->
 > The supported regions for fine-tuning might vary if you use Azure OpenAI models in a Microsoft Foundry project versus outside a project.
->
 
 | Model ID | Standard training regions | Global training | Max request (tokens) | Training data (up to) | Modality |
 | --- | --- | :---: | :---: | :---: | --- |
-|`gpt-35-turbo` <br> (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | Input: 16,385<br> Output: 4,096 | Sep 2021 | Text to text |
-|`gpt-35-turbo` <br> (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | 16,385 | Sep 2021 | Text to text |
 |`gpt-4o-mini` <br> (2024-07-18) | North Central US <br> Sweden Central | ✅ | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | Oct 2023 | Text to text |
 |`gpt-4o` <br> (2024-08-06) | East US2 <br> North Central US <br> Sweden Central | ✅ | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | Oct 2023 | Text and vision to text |
 |`gpt-4.1` <br> (2025-04-14) | North Central US <br> Sweden Central | ✅ | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | May 2024 | Text and vision to text |