> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `GlobalStandard`
Global deployments are available in the same Azure OpenAI resources as non-global deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with the best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources.
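The SKU name is what you pass to the control plane when you create the deployment. The sketch below builds the management-plane request body for a global standard deployment; the model name, version, and capacity are placeholder assumptions for illustration, not values from this article, and the exact payload shape should be confirmed against the management API reference.

```python
import json

# Hypothetical helper: assemble the ARM-style request body for an
# Azure OpenAI deployment. The SKU name selects the deployment type;
# capacity is the quota allocated to the deployment.
def build_deployment_body(sku_name: str, model_name: str,
                          model_version: str, capacity: int) -> str:
    body = {
        "sku": {"name": sku_name, "capacity": capacity},
        "properties": {
            "model": {
                "format": "OpenAI",
                "name": model_name,        # placeholder model
                "version": model_version,  # placeholder version
            }
        },
    }
    return json.dumps(body, indent=2)

print(build_deployment_body("GlobalStandard", "gpt-4o", "2024-08-06", 50))
```

Swapping the SKU name (for example to `DataZoneStandard` or `ProvisionedManaged`) is what changes the deployment type; the rest of the payload stays the same.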
> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `GlobalProvisionedManaged`
Global deployments are available in the same Azure OpenAI resources as non-global deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with the best availability for each request. Global provisioned deployments provide reserved model processing capacity for high and predictable throughput using Azure global infrastructure.
[Global batch](./batch.md) is designed to handle large-scale and high-volume processing tasks efficiently. Process asynchronous groups of requests with separate quota, with a 24-hour target turnaround, at [50% less cost than global standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global batch requests have a separate enqueued token quota, avoiding any disruption of your online workloads.
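Batch input is submitted as a JSONL file with one self-contained request per line. A minimal sketch of producing such a file follows; the `custom_id` values, prompts, and deployment name are placeholders, and the exact field set should be checked against the batch documentation.

```python
import json

# Each line of a batch input file is one request; custom_id is your
# correlation key for matching results back to inputs.
requests = [
    {
        "custom_id": f"task-{i}",
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": "my-batch-deployment",  # placeholder deployment name
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Summarize document A.", "Summarize document B."])
]

with open("batch_input.jsonl", "w") as f:
    for r in requests:
        f.write(json.dumps(r) + "\n")
```

The whole file is then uploaded and submitted as a single batch job, which is what lets the enqueued tokens draw on the separate batch quota rather than your online quota.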
**SKU name in code:** `GlobalBatch`
Key use cases include:
> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location within the Microsoft specified data zone. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `DataZoneStandard`
Data zone standard deployments are available in the same Azure OpenAI resource as all other Azure OpenAI deployment types but allow you to leverage Azure global infrastructure to dynamically route traffic to the data center within the Microsoft defined data zone with the best availability for each request. Data zone standard provides higher default quotas than our Azure geography-based deployment types.
> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location within the Microsoft specified data zone. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `DataZoneProvisionedManaged`
Data zone provisioned deployments are available in the same Azure OpenAI resource as all other Azure OpenAI deployment types but allow you to leverage Azure global infrastructure to dynamically route traffic to the data center within the Microsoft specified data zone with the best availability for each request. Data zone provisioned deployments provide reserved model processing capacity for high and predictable throughput using Azure infrastructure within the Microsoft specified data zone.
> [!IMPORTANT]
> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location within the Microsoft specified data zone. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

**SKU name in code:** `DataZoneBatch`
Data zone batch deployments provide all the same functionality as [global batch deployments](./batch.md) while allowing you to leverage Azure global infrastructure to dynamically route traffic to only data centers within the Microsoft defined data zone with the best availability for each request.
## Standard
**SKU name in code:** `Standard`
Standard deployments provide a pay-per-call billing model for the chosen model. This is the fastest way to get started, as you only pay for what you consume. Model availability in each region, as well as throughput, may be limited.
Standard deployments are optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability.
## Provisioned
**SKU name in code:** `ProvisionedManaged`
Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU), which are a normalized way of representing the throughput for your deployment. Each model-version pair requires a different amount of PTU to deploy and provides a different amount of throughput per PTU. Learn more from our [Provisioned throughput concepts article](../concepts/provisioned-throughput.md).
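Because throughput per PTU varies by model version, sizing a provisioned deployment amounts to dividing your target throughput by that model's per-PTU rate, then rounding up to the model's minimum size and purchase increment. The sketch below uses entirely hypothetical figures; real per-PTU throughput, minimums, and increments come from the provisioned throughput documentation for each model.

```python
import math

# Hypothetical sizing inputs -- NOT real service figures.
TOKENS_PER_MIN_PER_PTU = 2500   # assumed throughput one PTU delivers
MIN_PTU = 15                    # assumed minimum deployment size
PTU_INCREMENT = 5               # assumed purchase increment

def required_ptu(target_tokens_per_min: int) -> int:
    """Smallest valid PTU count meeting the target throughput."""
    raw = target_tokens_per_min / TOKENS_PER_MIN_PER_PTU
    ptu = max(MIN_PTU, math.ceil(raw))
    # Round up to the next valid increment.
    return math.ceil(ptu / PTU_INCREMENT) * PTU_INCREMENT

print(required_ptu(100_000))  # 100k tokens/min -> 40 PTU under these assumptions
```

Note how a small target still lands on the minimum deployment size; the per-model minimum, not the raw division, often dominates for light workloads.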