
Commit b8485f9

added quota instructions
1 parent aa4a3a6 commit b8485f9

2 files changed: +19 −0 lines changed

README.md

Lines changed: 19 additions & 0 deletions
@@ -70,6 +70,25 @@ Please check the link [Azure Products by Region](https://azure.microsoft.com/en-

- Embedding model capacity

### Quota Recommendations
- For optimal performance, we recommend provisioning at least **30,000 tokens per minute (TPM)** per deployment.
- Consider higher quotas for applications with frequent or complex queries.
- Plan for potential increases in demand and adjust quotas accordingly.

# Check Quota for GPT-4, GPT-4o, and GPT-4o Mini

## Overview
This guide explains how to check the usage quota for different Azure OpenAI models, including GPT-4, GPT-4o, and GPT-4o Mini.

## 1. Check via the Azure OpenAI Dashboard
1. Go to the [Azure AI Foundry | Azure OpenAI Service](https://oai.azure.com/).
2. Log in with your Azure account.
3. View your usage, quota, and limits.

![image](./docs/Images/ReadMe/quotaImage.png)

### **Options**
Pick from the options below to see step-by-step instructions for: GitHub Codespaces, VS Code Dev Containers, Local Environments, and Bicep deployments.
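
The numbered steps in the hunk above cover the portal path. For reference, the same quota information can also be read programmatically from the Azure Resource Manager "usages" endpoint for Cognitive Services. The sketch below is illustrative only and is not part of this commit: it assumes the `azure-identity` and `requests` packages are installed, uses hypothetical `AZURE_SUBSCRIPTION_ID` / `AZURE_LOCATION` environment variable names, and assumes the `2023-05-01` api-version and the response field names still match the management API.

```python
"""Minimal sketch: list Azure OpenAI quota usage for one subscription/region."""
import os

import requests
from azure.identity import DefaultAzureCredential

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]   # hypothetical env var name
location = os.environ.get("AZURE_LOCATION", "eastus")   # region whose quota to inspect

# Acquire an ARM token with whatever credential is available (az login, managed identity, ...).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.CognitiveServices/locations/{location}/usages"
)
resp = requests.get(
    url,
    params={"api-version": "2023-05-01"},          # assumed current api-version
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Each entry is one quota bucket; Azure OpenAI buckets typically look like "OpenAI.Standard.gpt-4o".
for usage in resp.json().get("value", []):
    name = usage.get("name", {}).get("value", "")
    if "OpenAI" in name and "gpt-4" in name.lower():
        print(f"{name}: {usage.get('currentValue')} of {usage.get('limit')} ({usage.get('unit')})")
```

Running it prints one line per GPT-4-family quota bucket, which makes it easy to confirm there is enough headroom (for example, the recommended 30,000 TPM) before deploying.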

docs/Images/ReadMe/quotaImage.png

210 KB

0 commit comments
