Commit 8be7d87

Merge pull request #276888 from mrbullwinkle/mrb_05_31_2024_quota_freshness
[Azure OpenAI] Freshness update
2 parents a2ac012 + d4c4493 commit 8be7d87

1 file changed (2 additions, 6 deletions)
articles/ai-services/openai/how-to/quota.md

Lines changed: 2 additions & 6 deletions
@@ -7,7 +7,7 @@ author: mrbullwinkle
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 08/01/2023
+ms.date: 05/31/2024
 ms.author: mbullwin
 ---
 
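For reference, this is what the article's YAML front matter reads as after the commit, reconstructed from the hunk above (only `ms.date` changes; the surrounding keys are unmodified context lines):

```yaml
author: mrbullwinkle
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
ms.date: 05/31/2024      # updated from 08/01/2023 by this commit
ms.author: mbullwin
```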
@@ -57,11 +57,7 @@ Post deployment you can adjust your TPM allocation by selecting **Edit deploymen
 
 ## Model specific settings
 
-Different model deployments, also called model classes have unique max TPM values that you're now able to control. **This represents the maximum amount of TPM that can be allocated to that type of model deployment in a given region.** While each model type represents its own unique model class, the max TPM value is currently only different for certain model classes:
-
-- GPT-4
-- GPT-4-32K
-- Text-Davinci-003
+Different model deployments, also called model classes have unique max TPM values that you're now able to control. **This represents the maximum amount of TPM that can be allocated to that type of model deployment in a given region.**
 
 All other model classes have a common max TPM value.
 