
Commit b0d0470

updated readme
1 parent 5900731 commit b0d0470

1 file changed (+2, -3)


README.md

Lines changed: 2 additions & 3 deletions
@@ -71,16 +71,15 @@ Please check the link [Azure Products by Region](https://azure.microsoft.com/en-
 - Embedding model capacity
 
 ### Quota Recommendations
-- For optimal performance, we recommend provisioning at least **30,000 tokens** per deployment.
-- Consider higher quotas for applications with frequent or complex queries.
+- For optimal performance, we recommend provisioning at least **30k tokens** per deployment.
 - Plan for potential increases in demand and adjust quotas accordingly.
 
 ### Check Quota for GPT-4, GPT-4o, and GPT-4o Mini
 
 ## Overview
 This guide explains how to check the usage quota for different OpenAI models, including GPT-4, GPT-4o, and GPT-4o Mini.
 
-## 1. Check via OpenAI Dashboard
+### Check via OpenAI Dashboard
 1. Go to the [Azure OpenAI Service](https://oai.azure.com/).
 2. Log in with your OpenAI account.
 3. View your usage, quota, and limits.
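
The README hunk above only describes the manual dashboard check. As a complement, the sketch below shows one way the same quota numbers could be read programmatically, by calling the Cognitive Services `usages` endpoint of the Azure management REST API with `azure-identity` and `requests`. This is not part of the README: the endpoint path and `api-version` follow the publicly documented ARM pattern but should be verified against current docs, and the subscription ID and region are placeholders.

```python
# Minimal sketch (not from the README): list Cognitive Services quota usage
# for a region via the Azure management REST API.
# Assumes `azure-identity` and `requests` are installed and that you are
# already signed in (for example via `az login`).
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
LOCATION = "eastus"                         # region whose quota you want to inspect
API_VERSION = "2023-05-01"                  # assumed api-version; check the current one


def list_quota_usage() -> None:
    # Acquire an ARM token with whatever credential is available locally.
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default"
    ).token

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/providers/Microsoft.CognitiveServices/locations/{LOCATION}/usages"
    )
    resp = requests.get(
        url,
        params={"api-version": API_VERSION},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()

    # Each entry follows the standard ARM usage shape: name, currentValue, limit.
    for usage in resp.json().get("value", []):
        name = usage.get("name", {}).get("value", "")
        print(f"{name}: {usage.get('currentValue')} / {usage.get('limit')}")


if __name__ == "__main__":
    list_quota_usage()
```

Run after signing in with the Azure CLI; it prints each quota name with its current value and limit, mirroring what the dashboard's quota page shows.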
