This documentation contains the following types of articles:
* The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
@@ -58,7 +60,7 @@ The following are common use cases for the Face service:
See the [customer checkin management](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/Scenario-CustomerCheckinManagement) and [face photo tagging](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/Scenario-FacePhotoTagging) scenarios on GitHub for working examples of facial recognition technology.
> [!WARNING]
- > On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
+ > On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
## Face detection and analysis
@@ -101,7 +103,6 @@ Face liveness SDK reference docs:
Modern enterprises and apps can use the Face recognition technologies, including Face verification ("one-to-one" matching) and Face identification ("one-to-many" matching) to confirm that a user is who they claim to be.
articles/ai-services/computer-vision/whats-new.md (1 addition, 1 deletion)
@@ -34,7 +34,7 @@ These Image Analysis 4.0 preview APIs will be retired on March 31, 2025:
-`2023-07-01-preview`
-`v4.0-preview.1`
- These features will no longer be available with the retirement of the preview API versions:
+ The following features will no longer be available upon retirement of the preview API versions, and they are removed from the Studio experience as of January 10, 2025:
articles/ai-services/openai/concepts/model-retirements.md (1 addition, 0 deletions)
@@ -107,6 +107,7 @@ These models are currently available for use in Azure OpenAI Service.
|`gpt-4`| vision-preview | To be upgraded to `gpt-4` version: `turbo-2024-04-09`, starting no sooner than January 27, 2025 **<sup>1</sup>**|`gpt-4o`|
|`gpt-4o`| 2024-05-13 | No earlier than May 20, 2025 <br><br>Deployments set to [**Auto-update to default**](/azure/ai-services/openai/how-to/working-with-models?tabs=powershell#auto-update-to-default) will be automatically upgraded to version: `2024-08-06`, starting on February 13, 2025. ||
|`gpt-4o-mini`| 2024-07-18 | No earlier than July 18, 2025 ||
+ |`gpt-4o-realtime-preview`| 2024-10-01 | No earlier than September 30, 2025 |`gpt-4o-realtime-preview` (version 2024-12-17) |
|`gpt-3.5-turbo-instruct`| 0914 | No earlier than April 1, 2025 ||
|`o1`| 2024-12-17 | No earlier than December 17, 2025 ||
|`text-embedding-ada-002`| 2 | No earlier than October 3, 2025 |`text-embedding-3-small` or `text-embedding-3-large`|
articles/ai-services/openai/concepts/models.md (5 additions, 4 deletions)
@@ -58,17 +58,18 @@ To learn more about the advanced `o1` series models see, [getting started with o
## GPT-4o-Realtime-Preview
- The `gpt-4o-realtime-preview` model is part of the GPT-4o model family and supports low-latency, "speech in, speech out" conversational interactions. GPT-4o audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user.
+ The GPT-4o audio models are part of the GPT-4o model family and support low-latency, "speech in, speech out" conversational interactions. GPT-4o audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user.
GPT-4o audio is available in the East US 2 (`eastus2`) and Sweden Central (`swedencentral`) regions. To use GPT-4o audio, you need to [create](../how-to/create-resource.md) or use an existing resource in one of the supported regions.
- When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o audio model. If you are performing a programmatic deployment, the **model** name is `gpt-4o-realtime-preview`. For more information on how to use GPT-4o audio, see the [GPT-4o audio documentation](../realtime-audio-quickstart.md).
+ When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o audio model. For more information on how to use GPT-4o audio, see the [GPT-4o audio quickstart](../realtime-audio-quickstart.md) and [how to use GPT-4o audio](../how-to/realtime-audio.md).
Details about maximum request tokens and training data are available in the following table.
| Model ID | Description | Max Request (tokens) | Training Data (up to) |
- | --- | :--- |:--- |:---: |
- |`gpt-4o-realtime-preview` (2024-10-01-preview) <br> **GPT-4o audio**|**Audio model** for real-time audio processing |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+ |---|---|---|---|
+ |`gpt-4o-realtime-preview` (2024-10-01) <br> **GPT-4o audio**|**Audio model** for real-time audio processing |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+ |`gpt-4o-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
articles/ai-services/openai/concepts/provisioned-throughput.md (40 additions, 23 deletions)
@@ -30,35 +30,34 @@ An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
| Topic | Provisioned|
|---|---|
- | What is it? |Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments have a consistent max latency for a given model-version. |
+ | What is it? |Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments have a consistent max latency for a given model-version. |
| Who is it for? | Customers who want guaranteed throughput with minimal latency variance. |
| Quota |Provisioned Managed Throughput Unit, Global Provisioned Managed Throughput Unit, or Data Zone Provisioned Managed Throughput Unit assigned per region. Quota can be used across any available Azure OpenAI model.|
| Latency | Max latency constrained from the model. Overall latency is a factor of call shape. |
- |Estimating size |Provided calculator in Azure AI Foundry & benchmarking script. |
+ |Estimating size |Provided sizing calculator in Azure AI Foundry.|
|Prompt caching | For supported models, we discount up to 100% of cached input tokens. |
## How much throughput per PTU you get for each model
- The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens and so the more output tokens generated the lower your overall TPM. The service dynamically balances the input & output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.
+ The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens. For the models specified in the table below, 1 output token counts as 3 input tokens towards your TPM per PTU limit. The service dynamically balances the input & output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.
- To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models which represent the max TPM assuming all traffic is either input or output. To understand how different ratios of input and output tokens impact your Max TPM per PTU, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
+ To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the specified models. To understand the impact of output tokens on the TPM per PTU limit, use the 3 input token to 1 output token ratio. For a detailed understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
|Latency Target Value |25 Tokens Per Second|33 Tokens Per Second|
For a full list see the [Azure OpenAI Service in Azure AI Foundry portal calculator](https://oai.azure.com/portal/calculator).
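To make the 3 input : 1 output weighting concrete, here is a minimal sketch of how a traffic mix counts against the TPM per PTU limit. The `TPM_PER_PTU` and workload figures are placeholders, not published values; take the real per-model number from the table above or from the capacity calculator.

```python
# Illustrative only: output tokens count 3x toward the TPM-per-PTU limit,
# per the 3 input : 1 output ratio described above. All numbers are placeholders.
TPM_PER_PTU = 2_500              # substitute the published value for your model
DEPLOYED_PTUS = 100

input_tokens_per_min = 150_000   # hypothetical prompt traffic
output_tokens_per_min = 30_000   # hypothetical generation traffic

effective_tpm = input_tokens_per_min + 3 * output_tokens_per_min
capacity_tpm = TPM_PER_PTU * DEPLOYED_PTUS

print(f"Effective TPM consumed: {effective_tpm:,}")
print(f"Deployment capacity:    {capacity_tpm:,}")
print(f"Utilization:            {effective_tpm / capacity_tpm:.0%}")
```

The capacity calculator remains the authoritative sizing tool; this only illustrates how the output-token weighting changes the arithmetic.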
> [!NOTE]
- > Global provisioned deployments are only supported for gpt-4o, 2024-08-06 and gpt-4o-mini, 2024-07-18 models at this time. Data zone provisioned deployments are only supported for gpt-4o, 2024-08-06, gpt-4o, 2024-05-13, and gpt-4o-mini, 2024-07-18 models at this time. For more information on model availability, review the [models documentation](./models.md).
+ > Global provisioned and data zone provisioned deployments are only supported for gpt-4o and gpt-4o-mini models at this time. For more information on model availability, review the [models documentation](./models.md).
## Key concepts
@@ -73,11 +72,11 @@ az cognitiveservices account deployment create \
--name <myResourceName> \
--resource-group <myResourceGroupName> \
--deployment-name MyDeployment \
- --model-name gpt-4 \
- --model-version 0613 \
+ --model-name gpt-4o \
+ --model-version 2024-08-06 \
--model-format OpenAI \
- --sku-capacity 100 \
- --sku-name ProvisionedManaged
+ --sku-capacity 15 \
+ --sku-name GlobalProvisionedManaged
```
### Quota
@@ -132,7 +131,7 @@ If an acceptable region isn't available to support the desire model, version and
### Determining the number of PTUs needed for a workload
- PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from call shape characteristics (prompt size, generation size and call rate) to PTUs is complex and nonlinear. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
+ PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from throughput needs to PTUs can be approximated using historical token usage data or call shape estimations (input tokens, output tokens, and requests per minute) as outlined in our [performance and latency](../how-to/latency.md) documentation. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
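As a rough companion to that guidance, the sketch below converts a hypothetical call shape (requests per minute, input tokens, output tokens) into an approximate PTU count, reusing the 3:1 output-token weighting from the throughput section. The `tpm_per_ptu` figure is an assumed placeholder; the capacity calculator and your own historical usage data should drive any real sizing decision.

```python
import math

# Hypothetical workload shape -- replace with historical token usage data.
requests_per_minute = 60
avg_input_tokens = 1_800      # prompt tokens per request
avg_output_tokens = 300       # generated tokens per request

# Placeholder throughput figure; look up the real TPM-per-PTU for your model.
tpm_per_ptu = 2_500

# Output tokens are weighted 3x toward the TPM-per-PTU limit (see the throughput section).
weighted_tokens_per_request = avg_input_tokens + 3 * avg_output_tokens
required_tpm = requests_per_minute * weighted_tokens_per_request

estimated_ptus = math.ceil(required_tpm / tpm_per_ptu)
print(f"Required TPM:   {required_tpm:,}")
print(f"Estimated PTUs: {estimated_ptus}")
```

A real deployment also has to respect the minimum deployment size and PTU increments for the chosen model and deployment type, which the calculator accounts for.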
A few high-level considerations:
- Generations require more capacity than prompts
@@ -165,16 +164,16 @@ For provisioned deployments, we use a variation of the leaky bucket algorithm to
1. When a request is made:
a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%
+ b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining the prompt tokens, less any cached tokens, and the specified `max_tokens` in the call. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
+ 1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:
- b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. For requests that include at least 1024 cached tokens, the cached tokens are subtracted from the prompt token value. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
- 1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:
- a. If the actual > estimated, then the difference is added to the deployment's utilization.
- b. If the actual < estimated, then the difference is subtracted.
- 1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
+ a. If the actual > estimated, then the difference is added to the deployment's utilization.
+ b. If the actual < estimated, then the difference is subtracted.
+ 1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
> [!NOTE]
> Calls are accepted until utilization reaches 100%. Bursts just over 100% may be permitted in short periods, but over time, your traffic is capped at 100% utilization.
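A minimal sketch of the utilization accounting described in the steps above follows: estimate the cost when a request arrives (prompt tokens less cached tokens, plus `max_tokens`), correct it when the true cost is known, and drain utilization continuously against the deployed capacity. This illustrates the bookkeeping only, not the service's actual implementation; the capacity figure and request values are placeholders.

```python
import time


class UtilizationTracker:
    """Illustrative leaky-bucket-style accounting; not the actual service implementation."""

    def __init__(self, capacity_tokens_per_minute: float):
        self.capacity_per_min = capacity_tokens_per_minute  # scales with the number of deployed PTUs
        self.utilization = 0.0                              # fraction of capacity; 1.0 == 100%
        self.last_update = time.monotonic()

    def _decay(self) -> None:
        # Utilization is decremented at a continuous rate tied to the deployed capacity.
        now = time.monotonic()
        elapsed_minutes = (now - self.last_update) / 60.0
        self.utilization = max(0.0, self.utilization - elapsed_minutes)
        self.last_update = now

    def try_admit(self, prompt_tokens: int, cached_tokens: int, max_tokens: int):
        """Return the estimated token cost if admitted, or None to signal a 429."""
        self._decay()
        if self.utilization >= 1.0:
            return None  # caller retries after utilization drains below 100%
        # Estimate: prompt tokens, less any cached tokens, plus the specified max_tokens.
        estimated = max(prompt_tokens - cached_tokens, 0) + max_tokens
        self.utilization += estimated / self.capacity_per_min
        return estimated

    def settle(self, estimated: int, actual: int) -> None:
        """When the request finishes, correct utilization by the actual-versus-estimated difference."""
        self._decay()
        self.utilization += (actual - estimated) / self.capacity_per_min


# Placeholder capacity; a real deployment's throughput depends on the model and PTU count.
tracker = UtilizationTracker(capacity_tokens_per_minute=250_000)
est = tracker.try_admit(prompt_tokens=4_000, cached_tokens=1_024, max_tokens=800)
if est is not None:
    tracker.settle(estimated=est, actual=3_400)  # generation came in under max_tokens
print(f"Current utilization: {tracker.utilization:.1%}")
```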
@@ -184,12 +183,30 @@ For provisioned deployments, we use a variation of the leaky bucket algorithm to
#### How many concurrent calls can I have on my deployment?
- The number of concurrent calls you can achieve depends on each call's shape (prompt size, max_token parameter, etc.). The service continues to accept calls until the utilization reach 100%. To determine the approximate number of concurrent calls, you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If the system generates less than the number of samplings tokens like max_token, it will accept more requests.
+ The number of concurrent calls you can achieve depends on each call's shape (prompt size, `max_tokens` parameter, etc.). The service continues to accept calls until the utilization reaches 100%. To determine the approximate number of concurrent calls, you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If the system generates less than the number of output tokens set for the `max_tokens` parameter, then the provisioned deployment will accept more requests.
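As a back-of-the-envelope complement to the calculator, steady-state concurrency can be approximated as the sustainable request rate multiplied by the average end-to-end call latency. The sketch below uses hypothetical numbers for both; model your actual call shape in the capacity calculator for real figures.

```python
# Rough steady-state estimate: concurrency ~= requests per second * seconds per request.
max_requests_per_minute = 300    # hypothetical rate modeled for your call shape
avg_call_latency_seconds = 8.0   # hypothetical end-to-end latency for that shape

approx_concurrent_calls = (max_requests_per_minute / 60) * avg_call_latency_seconds
print(f"Approximate concurrent calls: {approx_concurrent_calls:.0f}")
```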
## What models and regions are available for provisioned throughput?