articles/ai-services/openai/includes/assistants-ai-studio.md (1 addition, 1 deletion)
@@ -40,7 +40,7 @@ author: mrbullwinkle
|**Deployment**| This is where you set which model deployment to use with your assistant. |
|**Functions**| Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications. Not used in this quickstart. |
|**Code interpreter**| Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
-|**Files**| You can upload up to 20 files, with a max file size of 512 MB to use with tools. Not used in this quickstart. |
+|**Files**| You can upload up to 10,000 files, with a max file size of 512 MB to use with tools. Not used in this quickstart. |
:::image type="content" source="../media/quickstarts/assistants-ai-studio-playground.png" alt-text="Screenshot of the Assistant configuration screen without all the values filled in." lightbox="../media/quickstarts/assistants-ai-studio-playground.png":::
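The hunk above raises the Assistants file limit to 10,000 files at 512 MB each. Those limits can be enforced client-side before any upload call; the following is a minimal sketch where the limit constants come from the table, but the helper function itself is hypothetical, not part of any SDK:

```python
import os

# Limits from the updated Files row: up to 10,000 files, 512 MB each.
MAX_FILES = 10_000
MAX_FILE_BYTES = 512 * 1024 * 1024  # 512 MB

def validate_upload_batch(paths: list[str]) -> list[str]:
    """Return a list of problems that violate the documented file limits."""
    problems = []
    if len(paths) > MAX_FILES:
        problems.append(f"too many files: {len(paths)} > {MAX_FILES}")
    for p in paths:
        if os.path.getsize(p) > MAX_FILE_BYTES:
            problems.append(f"{p}: exceeds 512 MB")
    return problems
```

An empty return value means the batch is within the documented limits; anything else should be surfaced to the user before attempting the upload.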
articles/ai-services/openai/includes/assistants-studio.md (1 addition, 1 deletion)
@@ -44,7 +44,7 @@ Use the **Assistant setup** pane to create a new AI assistant or to select an ex
|**Deployment**| This is where you set which model deployment to use with your assistant. |
|**Functions**| Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications |
|**Code interpreter**| Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code. |
-|**Files**| You can upload up to 20 files, with a max file size of 512 MB to use with tools. |
+|**Files**| You can upload up to 20 files, with a max file size of 512 MB to use with tools. You can upload up to 10,000 files using [AI Studio](../assistants-quickstart.md?pivots=programming-language-ai-studio). |
| GPT-4o max images per request (# of images in the messages array/conversation history) | 10 |
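The table row above caps GPT-4o at 10 images per request, counted across the entire `messages` array. That cap can be checked before sending a request; this is an illustrative sketch assuming the standard multimodal chat message shape, with a counter function that is hypothetical rather than an SDK call:

```python
GPT4O_MAX_IMAGES = 10  # per request, counted across the messages array

def count_images(messages: list[dict]) -> int:
    """Count image content parts across all messages in a request."""
    total = 0
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):  # multimodal messages use a list of parts
            total += sum(1 for part in content
                         if part.get("type") == "image_url")
    return total

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "Compare these:"},
        {"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
        {"type": "image_url", "image_url": {"url": "https://example.com/b.png"}},
    ]},
]
```

Rejecting a request when `count_images(messages) > GPT4O_MAX_IMAGES` avoids a round trip that the service would refuse anyway.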
@@ -77,7 +77,7 @@ M = million | K = thousand
#### Usage tiers
-Global Standard deployments use Azure's global infrastructure, dynamically routing customer traffic to the data center with best availability for the customer’s inference requests. This enables more consistent latency for customers with low to medium levels of traffic. Customers with high sustained levels of usage may see more variability in response latency.
+Global Standard deployments use Azure's global infrastructure, dynamically routing customer traffic to the data center with best availability for the customer’s inference requests. This enables more consistent latency for customers with low to medium levels of traffic. Customers with high sustained levels of usage might see more variability in response latency.
The Usage Limit determines the level of usage above which customers might see larger variability in response latency. A customer’s usage is defined per model and is the total tokens consumed across all deployments in all subscriptions in all regions for a given tenant.
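The usage definition above is per model and tenant-wide: total tokens consumed across all deployments, all subscriptions, and all regions. That aggregation can be sketched as a simple fold over per-deployment consumption records; the record field names here are hypothetical and do not correspond to any Azure API:

```python
from collections import defaultdict

def tenant_usage_by_model(records: list[dict]) -> dict[str, int]:
    """Sum token consumption per model across every deployment,
    subscription, and region for a tenant (per the usage-tier definition)."""
    totals: dict[str, int] = defaultdict(int)
    for r in records:  # one record per deployment's consumption
        totals[r["model"]] += r["tokens"]
    return dict(totals)

records = [
    {"model": "gpt-4o", "subscription": "sub-a", "region": "eastus", "tokens": 1_200_000},
    {"model": "gpt-4o", "subscription": "sub-b", "region": "westeurope", "tokens": 800_000},
    {"model": "gpt-35-turbo", "subscription": "sub-a", "region": "eastus", "tokens": 500_000},
]
usage = tenant_usage_by_model(records)
```

Comparing each model's total against its Usage Limit indicates whether a tenant is in the traffic range where latency variability can increase.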
@@ -113,9 +113,9 @@ To minimize issues related to rate limits, it's a good idea to use the following
### How to request increases to the default quotas and limits
-Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request may be denied if this condition isn't met.
+Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request might be denied if this condition isn't met.
-For other rate limits, please [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).
+For other rate limits, [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).