pages/public_cloud/ai_machine_learning/endpoints_guide_02_capabilities/guide.en-gb.md
17 additions & 6 deletions
@@ -28,7 +28,7 @@ This page provides the technical features, capabilities and limitations of [AI E
| Large Selection of Models | AI Endpoints offers a diverse range of pre-trained AI models, covering categories such as Assistant (LLMs, Code Assistants), Audio Analysis, Embeddings, Natural Language Processing, Translation, Image Generation, and Computer Vision. For a full list of models, please visit the [AI Endpoints Catalog Page](https://endpoints.ai.cloud.ovh.net/catalog). |
| Model Metrics | Users can access various metrics in the [OVHcloud Control Panel](/links/manager), such as the number of calls made per model, input and output tokens for large language models (LLMs), and other usage data. These insights can help you manage costs and gain a better understanding of how your applications are using AI capabilities. |
| Data Privacy and Sovereignty | OVHcloud prioritizes data privacy and sovereignty, ensuring that AI models accessed via AI Endpoints are fully compliant with strict European regulations. Our infrastructure, located in Gravelines, France, adheres to European data protection regulations. Data is not stored or shared during or after model use, providing users with peace of mind that their data is secure and protected. |
- | Access with Personalized Tokens | To ensure secure and authenticated access to model APIs, users need to provide a token for each request. Access tokens can easily be created through the [AI Endpoints](https://endpoints.ai.cloud.ovh.net) page, providing the flexibility to manage multiple tokens for various projects or teams. Additionally, each token comes with adjustable validity periods, allowing users to tailor their access to specific needs.
+ | Access with Personalized Access Keys | To ensure secure and authenticated access to model APIs, users need to provide an API access key in each request. Access keys can be easily created by following the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide. API keys are linked to a Public Cloud project. We provide the flexibility to manage multiple keys for various projects or teams. Additionally, each access key comes with adjustable validity periods, allowing users to tailor their access to specific needs.
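As an illustration of the access-key flow described in the row above, here is a minimal sketch of an authenticated call. The endpoint URL, model name, and environment variable are placeholders, and it assumes the access key is accepted as a Bearer token in the `Authorization` header, as is usual for OpenAI-compatible APIs.

```python
# Minimal sketch (not from the guide): calling an AI Endpoints model with an API access key.
# The endpoint URL, model name, and environment variable below are placeholders.
import os

import requests

ACCESS_KEY = os.environ["AI_ENDPOINTS_ACCESS_KEY"]  # hypothetical variable holding your key
ENDPOINT_URL = "https://<your-model-endpoint>/v1/chat/completions"  # placeholder URL

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {ACCESS_KEY}",  # access key sent with each request
        "Content-Type": "application/json",
    },
    json={
        "model": "<model-name>",  # placeholder model identifier
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```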
### Planned features
@@ -44,17 +44,28 @@ This page provides the technical features, capabilities and limitations of [AI E
AI Endpoints is designed to be compatible with the OpenAI API, making it easy to integrate with existing applications and workflows. This compatibility means that you can take advantage of AI capabilities without having to make major changes to your existing technology stack, allowing you to get up and running quickly and easily.
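To make the compatibility claim above concrete, here is a minimal sketch that points the official `openai` Python client at an AI Endpoints URL. The base URL, model name, and environment variable are placeholders to be replaced with the values for your project.

```python
# Minimal sketch (not from the guide): reusing the OpenAI Python SDK against AI Endpoints.
# The base URL, model name, and environment variable are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://<ai-endpoints-base-url>/v1",   # placeholder base URL
    api_key=os.environ["AI_ENDPOINTS_ACCESS_KEY"],   # your AI Endpoints access key
)

completion = client.chat.completions.create(
    model="<model-name>",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarise what AI Endpoints offers."}],
)
print(completion.choices[0].message.content)
```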
- ### Flexibile Usage
+ ### Flexible usage
AI Endpoints' APIs are language-agnostic. This enables developers to use any programming language or technology of their choice when working with our APIs, providing them with the freedom to build and integrate AI capabilities according to their requirements and preferences.
- ## Limitations for the beta phase
+ ## Limitations
- ### No Token Limit
+ ### Model rate limit
- As of now, the AI Endpoints platform does not impose a token limit for API requests.
+ When using AI Endpoints, the **following rate limits apply**:
- In the future, we plan to introduce a token limit feature. This feature will allow you to set a limit on the number of tokens used for each API request, providing better control and management over token consumption.
+ - **Anonymous**: 2 requests per minute, per IP and per model.
+ - **Authenticated with an API access key**: 400 requests per minute, per PCI project and per model.
+
+ If you exceed this limit, a **429 error code** will be returned.
+
+ If you require higher usage, please **[get in touch with us](https://help.ovhcloud.com/csm?id=csm_get_help)** to discuss increasing your rate limits.
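Since a **429 error code** signals that one of the rate limits above has been hit, a client can back off and retry before escalating. The sketch below is a generic retry pattern under those assumptions, not an official client; the URL, headers, and payload are whatever you already use for your calls.

```python
# Minimal sketch (not from the guide): retrying when AI Endpoints returns HTTP 429.
import time

import requests

def post_with_retry(url, headers, payload, max_retries=5):
    """POST a request, backing off exponentially while the response is 429."""
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload, timeout=30)
        if response.status_code != 429:
            return response
        # Use the Retry-After header if the server provides one, otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    return response  # still 429 after all retries; the caller decides what to do next
```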
+
+ ### No usage limit
+
+ As of now, the AI Endpoints platform does not impose any usage limits for API requests, apart from the rate limits described above.
+
+ However, we are considering introducing a usage limit feature in the future. This feature will allow you to set a limit on the number of tokens, characters, or seconds of audio consumed, depending on your usage, providing better control and management over AI Endpoints consumption.