
Commit 91ba7a3

Authored by julien-c, Wauplin, and Vaibhavs10
quickfix re. pricing system (#1597)
* quickfix re. pricing system
* rename doc page
* Update docs/api-inference/pricing.md
  Co-authored-by: Lucain <[email protected]>
* Update docs/api-inference/pricing.md
  Co-authored-by: vb <[email protected]>
* Update docs/api-inference/index.md
  Co-authored-by: vb <[email protected]>
---------
Co-authored-by: Lucain <[email protected]>
Co-authored-by: vb <[email protected]>
1 parent 087da13 commit 91ba7a3

File tree

7 files changed: 28 additions & 20 deletions


docs/api-inference/_redirects.yml

Lines changed: 1 addition & 0 deletions
@@ -3,3 +3,4 @@ detailed_parameters: parameters
 parallelism: getting_started
 usage: getting_started
 faq: index
+rate-limits: pricing

docs/api-inference/_toctree.yml

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@
       title: Getting Started
     - local: supported-models
       title: Supported Models
-    - local: rate-limits
-      title: Rate Limits
+    - local: pricing
+      title: Pricing and Rate limits
     - local: security
       title: Security
   title: Getting Started

docs/api-inference/getting-started.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Getting Started
 
-The Serverless Inference API allows you to easily do inference on a wide range of models and tasks. You can do requests with your favorite tools (Python, cURL, etc). We also provide a Python SDK (`huggingface_hub`) to make it even easier.
+The Serverless Inference API allows you to easily do inference on a wide range of models and tasks. You can do requests with your favorite tools (Python, cURL, etc). We also provide a Python SDK (`huggingface_hub`) and JavaScript SDK (`huggingface.js`) to make it even easier.
 
 We'll do a minimal example using a [sentiment classification model](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). Please visit task-specific parameters and further documentation in our [API Reference](./parameters).
 
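The paragraph changed above points to a minimal sentiment-classification example with the `huggingface_hub` SDK. A minimal sketch of what such a request could look like (the call below is illustrative and not the docs page's own snippet; it assumes a Hugging Face access token is available as `HF_TOKEN`):

```python
# Minimal sketch (illustrative, not the docs' own snippet): calling the
# Serverless Inference API with the `huggingface_hub` Python SDK.
import os

from huggingface_hub import InferenceClient

# Assumes a Hugging Face access token is exported as HF_TOKEN.
client = InferenceClient(
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
    token=os.environ["HF_TOKEN"],
)

# Classify the sentiment of a short text; the API returns labels with scores.
print(client.text_classification("Today is a great day"))
```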
docs/api-inference/index.md

Lines changed: 2 additions & 2 deletions
@@ -8,14 +8,14 @@ Explore the most popular models for text, image, speech, and more — all with a
 
 ## Why use the Inference API?
 
-The Serverless Inference API offers a fast and free way to explore thousands of models for a variety of tasks. Whether you're prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:
+The Serverless Inference API offers a fast and simple way to explore thousands of models for a variety of tasks. Whether you're prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:
 
 * **Text Generation:** Including large language models and tool-calling prompts, generate and experiment with high-quality responses.
 * **Image Generation:** Easily create customized images, including LoRAs for your own styles.
 * **Document Embeddings:** Build search and retrieval systems with SOTA embeddings.
 * **Classical AI Tasks:** Ready-to-use models for text classification, image classification, speech recognition, and more.
 
-**Fast and Free to Get Started**: The Inference API is free with higher rate limits for PRO users. For production needs, explore [Inference Endpoints](https://ui.endpoints.huggingface.co/) for dedicated resources, autoscaling, advanced security features, and more.
+**Fast and Free to Get Started**: The Inference API is free to try out and comes with additional included credits for PRO users. For production needs, explore [Inference Endpoints](https://ui.endpoints.huggingface.co/) for dedicated resources, autoscaling, advanced security features, and more.
 
 ---

docs/api-inference/pricing.md

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+# Pricing and Rate limits
+
+As a HF user, you get monthly credits to run the HF Inference API. The amount of credits you get depends on your type of account (Free or PRO or Enterprise Hub), see table below.
+You get charged for every inference request, based on the compute time x price of the underlying hardware.
+
+For instance, a request to [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) that takes 10 seconds to complete on a GPU machine that costs $0.00012 per second to run, will be billed $0.0012.
+
+When your monthly included credits are depleted:
+- if you're a Free user, you won't be able to query the Inference API anymore,
+- if you're a PRO or Enterprise Hub user, you will get charged for the requests on top of your subscription. You can monitor your spending on your billing page.
+
+Note that serverless API is not meant to be used for heavy production applications. If you need to handle large numbers of requests, consider [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to have dedicated resources.
+
+You need to be authenticated (passing a token or through your browser) to use the Inference API.
+
+
+| User Tier                 | Included monthly credits           |
+|---------------------------|------------------------------------|
+| Free Users                | subject to change, less than $0.10 |
+| PRO and Enterprise Users  | $2.00                              |
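The new pricing page bills each request as compute time multiplied by the hardware's per-second price, so the arithmetic can be sketched directly. The helper below is illustrative; only the FLUX.1-dev figures and the $2.00 credit amount come from the page itself:

```python
# Minimal sketch of the billing rule described in pricing.md:
# cost = compute time (seconds) x hardware price (USD per second).
# The function is illustrative; only the numbers below appear in the doc.

def request_cost(compute_seconds: float, usd_per_second: float) -> float:
    """Billed cost in USD for a single inference request."""
    return compute_seconds * usd_per_second

# Example from the page: 10 s on a GPU billed at $0.00012/s -> $0.0012.
flux_cost = request_cost(10, 0.00012)
print(f"FLUX.1-dev request: ${flux_cost:.4f}")

# $2.00 of included monthly credits (the PRO / Enterprise tier in the table)
# covers roughly this many requests of that size before overage billing.
print(f"Requests covered by $2.00: {int(2.00 / flux_cost)}")
```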

docs/api-inference/rate-limits.md

Lines changed: 0 additions & 13 deletions
This file was deleted.

docs/api-inference/supported-models.md

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ You can find:
 
 ## What do I get with a PRO subscription?
 
-In addition to thousands of public models available in the Hub, PRO and Enterprise users get higher [rate limits](./rate-limits) and free access to the following models:
+In addition to thousands of public models available in the Hub, PRO and Enterprise users get higher [included credits](./pricing) and access to the following models:
 
 <!-- Manually maintained hard-coded list based on https://github.com/huggingface-internal/api-inference/blob/main/master-rs/custom_config.yml -->
 
@@ -27,4 +27,4 @@ This list is not exhaustive and might be updated in the future.
 
 ## Running Private Models
 
-The free Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to deploy it.
+The Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to deploy it.

0 commit comments
