After you log your custom evaluator to your Azure AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab of your Azure AI project.
### Troubleshooting: Job Stuck in Running State
If your evaluation job remains in the **Running** state for an extended period when using Azure AI Foundry Project or Hub, this may be because the Azure OpenAI model you selected does not have enough capacity.
**Resolution**
1. Cancel the current evaluation job.
2. Increase the model capacity to handle larger input data.
3. Re-run the evaluation.
## Related content
- [Evaluate your generative AI applications locally](./evaluate-sdk.md)
@@ -27,7 +27,7 @@ Previously, Azure OpenAI received monthly updates of new API versions. Taking ad
Starting in August 2025, you can now opt in to our next generation v1 Azure OpenAI APIs which add support for:
- Ongoing access to the latest features with no need to specify a new `api-version` each month.
- Faster API release cycle with new features launching more frequently.
- OpenAI client support with minimal code changes to swap between OpenAI and Azure OpenAI when using key-based authentication (see the sketch after this list).
- OpenAI client support for token based authentication and automatic token refresh without the need to take a dependency on a separate Azure OpenAI client.
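
To illustrate the key-based swap called out above, here's a minimal sketch, assuming the `openai` Python package, an `AZURE_OPENAI_API_KEY` environment variable, and `YOUR-RESOURCE-NAME` as a placeholder; only the `base_url` and key differ between the two clients:

```python
import os

from openai import OpenAI

# Standard OpenAI: reads OPENAI_API_KEY from the environment by default.
openai_client = OpenAI()

# Azure OpenAI v1 API: the same client type, pointed at the /openai/v1 path.
azure_client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = azure_client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="This is a test.",
)
print(response.model_dump_json(indent=2))
```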
@@ -43,29 +43,13 @@ For the initial v1 Generally Available (GA) API launch we're only supporting a s
```python
client = OpenAI()
```
**Microsoft Entra ID**:
> [!IMPORTANT]
> Automatic token refresh was previously handled through use of the `AzureOpenAI()` client. The v1 API removes this dependency by adding automatic token refresh support to the `OpenAI()` client.
- `base_url` passes the Azure OpenAI endpoint, and `/openai/v1` is appended to the endpoint address.
- The `api_key` parameter is set to `token_provider`, enabling automatic retrieval and refresh of an authentication token instead of using a static API key, as sketched below.
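
A minimal sketch of that pattern, assuming the `openai` and `azure-identity` packages and a placeholder resource name; the token provider is a callable that the client invokes to fetch and refresh tokens:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import OpenAI

# Fetches Microsoft Entra ID tokens for the Cognitive Services scope and
# refreshes them automatically as they expire.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider,  # a callable token provider, not a static key
)
```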
# [C#](#tab/dotnet)
### v1 API
[C# v1 examples](./supported-languages.md)
**API Key**:
`articles/ai-foundry/openai/azure-government.md` (21 additions, 0 deletions)
@@ -56,6 +56,27 @@ To request quota increases for these models, submit a request at [https://aka.ms
<br>
### Model Retirements
In some cases, models are retired in Azure Government ahead of dates in the commercial cloud. General information on model retirement policies, dates, and other details can be found at [Azure OpenAI in Azure AI Foundry model deprecations and retirements](/azure/ai-foundry/openai/concepts/model-retirements). The following table shows model retirement differences in Azure Government.
| Model | Version | Azure Government Status | Public Retirement date |
|---|---|---|---|
| `gpt-35-turbo` | 1106 | Retired | November 11, 2025 |
| `gpt-4` | turbo-2024-04-09 | Retired | November 11, 2025 |
66
+
67
+
<br>
### Default Model Versions
In some cases, new model versions are designated as default in Azure Government ahead of dates in the commercial cloud. General information on model upgrades can be found at [Working with Azure OpenAI models](/azure/ai-foundry/openai/how-to/working-with-models?tabs=powershell&branch=main#model-deployment-upgrade-configuration).
The following table shows default model differences in Azure Government.
| Model | Azure Government Default Version | Public Default Version | Default upgrade date |
|---|---|---|---|
`articles/ai-foundry/openai/how-to/spillover-traffic-management.md` (25 additions, 4 deletions)
@@ -6,15 +6,15 @@ ms.author: mopeakande
ms.service: azure-ai-foundry
ms.subservice: azure-ai-foundry-openai
ms.topic: how-to
ms.date: 10/02/2025
---
# Manage traffic with spillover for provisioned deployments
Spillover manages traffic fluctuations on provisioned deployments by routing overage traffic to a corresponding standard deployment. Spillover is an optional capability that can be set for all requests on a given deployment or can be managed on a per-request basis. When spillover is enabled, Azure OpenAI in Azure AI Foundry Models sends any overage traffic from your provisioned deployment to a standard deployment for processing.
> [!NOTE]
> Spillover is currently not available for the [responses API](./responses.md).
## Prerequisites
- You need to have a provisioned managed deployment and a standard deployment.
@@ -27,7 +27,28 @@ Spillover manages traffic fluctuations on provisioned deployments by routing ove
To maximize the utilization of your provisioned deployment, you can enable spillover for all global and data zone provisioned deployments. With spillover, bursts or fluctuations in traffic can be automatically managed by the service. This capability reduces the risk of experiencing disruptions when a provisioned deployment is fully utilized. Alternatively, spillover is configurable per-request to provide flexibility across different scenarios and workloads. Spillover can also now be used for the [Azure AI Foundry Agent Service](../../agents/overview.md).
## When does spillover come into effect?
When you enable spillover for a deployment or configure it for a given inference request, spillover initiates when a request receives a specific non-`200` response code as a result of one of these scenarios:
- Provisioned throughput units (PTU) are completely used, resulting in a `429` response code.
- You send a request with a long context, resulting in a `400` error code. For example, when using `gpt-4.1` series models, PTU supports only context lengths less than 128k and returns HTTP `400`.
- The server encounters errors when processing your request, resulting in a `500` or `503` error code.
When a request results in one of these non-`200` response codes, Azure OpenAI automatically sends the request from your provisioned deployment to your standard deployment to be processed.
> [!NOTE]
> Even if a subset of requests is routed to the standard deployment, the service prioritizes sending requests to the provisioned deployment before sending any overage requests to the standard deployment, which might incur additional latency.
## How to know a request spilled over
The following HTTP response headers indicate that a specific request spilled over:
- `x-ms-spillover-from-<deployment-name>`. This header contains the PTU deployment name. The presence of this header indicates that the request was a spillover request.
- `x-ms-<deployment-name>`. This header contains the name of the deployment that served the request. If the request spilled over, the deployment name is the name of the standard deployment.
For a request that spilled over, if the standard deployment request failed for any reason, the original PTU response is used in the response to the customer. The customer sees a header `x-ms-spillover-error` that contains the response code of the spillover request (such as `429` or `500`) so that they know the reason for the failed spillover.
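
As a sketch of how a client might detect spillover from these headers, assuming the `openai` Python package (whose `with_raw_response` wrapper exposes HTTP headers) and placeholder resource and deployment names:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

# with_raw_response returns the HTTP response, headers included,
# alongside the parsed body.
raw = client.chat.completions.with_raw_response.create(
    model="YOUR-PTU-DEPLOYMENT",  # placeholder provisioned deployment name
    messages=[{"role": "user", "content": "This is a test."}],
)

# A request spilled over if an x-ms-spillover-from-* header is present.
spilled_over = any(
    name.lower().startswith("x-ms-spillover-from") for name in raw.headers
)
print("Spilled over:", spilled_over)

completion = raw.parse()  # the usual ChatCompletion object
```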
## How does spillover affect cost?
Since spillover uses a combination of provisioned and standard deployments to manage traffic fluctuations, billing for spillover involves two components:
@@ -124,4 +145,4 @@ Applying the `IsSpillover` split lets you view the requests to your deployment t
## See also
* [What is provisioned throughput](../concepts/provisioned-throughput.md)
* [Onboarding to provisioned throughput](./provisioned-throughput-onboarding.md)