Merged
1 change: 1 addition & 0 deletions docs/api-inference/_redirects.yml
Original file line number Diff line number Diff line change
@@ -3,3 +3,4 @@ detailed_parameters: parameters
parallelism: getting_started
usage: getting_started
faq: index
rate-limits: pricing
4 changes: 2 additions & 2 deletions docs/api-inference/_toctree.yml
@@ -5,8 +5,8 @@
title: Getting Started
- local: supported-models
title: Supported Models
- local: rate-limits
title: Rate Limits
- local: pricing
title: Pricing and Rate Limits
- local: security
title: Security
title: Getting Started
2 changes: 1 addition & 1 deletion docs/api-inference/getting-started.md
@@ -1,6 +1,6 @@
# Getting Started

The Serverless Inference API allows you to easily do inference on a wide range of models and tasks. You can do requests with your favorite tools (Python, cURL, etc). We also provide a Python SDK (`huggingface_hub`) to make it even easier.
The Serverless Inference API allows you to easily do inference on a wide range of models and tasks. You can do requests with your favorite tools (Python, cURL, etc). We also provide a Python SDK (`huggingface_hub`) and JavaScript SDK (`huggingface.js`) to make it even easier.

We'll do a minimal example using a [sentiment classification model](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). Please visit task-specific parameters and further documentation in our [API Reference](./parameters).

4 changes: 2 additions & 2 deletions docs/api-inference/index.md
@@ -8,14 +8,14 @@ Explore the most popular models for text, image, speech, and more — all with a

## Why use the Inference API?

The Serverless Inference API offers a fast and free way to explore thousands of models for a variety of tasks. Whether you're prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:
The Serverless Inference API offers a fast and simple way to explore thousands of models for a variety of tasks. Whether you're prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:

* **Text Generation:** Including large language models and tool-calling prompts, generate and experiment with high-quality responses.
* **Image Generation:** Easily create customized images, including LoRAs for your own styles.
* **Document Embeddings:** Build search and retrieval systems with SOTA embeddings.
* **Classical AI Tasks:** Ready-to-use models for text classification, image classification, speech recognition, and more.

⚡ **Fast and Free to Get Started**: The Inference API is free with higher rate limits for PRO users. For production needs, explore [Inference Endpoints](https://ui.endpoints.huggingface.co/) for dedicated resources, autoscaling, advanced security features, and more.
⚡ **Fast and Free to Get Started**: The Inference API is free to try out and comes with additional included credits for PRO users. For production needs, explore [Inference Endpoints](https://ui.endpoints.huggingface.co/) for dedicated resources, autoscaling, advanced security features, and more.

---

20 changes: 20 additions & 0 deletions docs/api-inference/pricing.md
@@ -0,0 +1,20 @@
# Pricing and Rate Limits

As an HF user, you get monthly credits to run the HF Inference API. The amount of credits you receive depends on your account type (Free, PRO, or Enterprise Hub); see the table below.
You are charged for every inference request, based on the compute time multiplied by the price of the underlying hardware.

For instance, a request to [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) that takes 10 seconds to complete on a GPU machine that costs $0.00012 per second to run will be billed $0.0012.
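The billing rule above can be sketched as a tiny helper, assuming simple per-second hardware pricing (the rate and duration are the illustrative figures from the example, not a published price list):

```python
def inference_cost(compute_seconds: float, price_per_second: float) -> float:
    """Cost of one request: compute time multiplied by the hardware's per-second price."""
    return compute_seconds * price_per_second

# The FLUX.1-dev example from the text: 10 s on hardware billed at $0.00012/s.
cost = inference_cost(10, 0.00012)
print(f"${cost:.4f}")  # → $0.0012
```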

When your monthly included credits are depleted:
- if you're a Free user, you won't be able to query the Inference API anymore,
- if you're a PRO or Enterprise Hub user, you will get charged for the requests on top of your subscription. You can monitor your spending on your billing page.

Note that the serverless API is not meant to be used for heavy production applications. If you need to handle a large number of requests, consider [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) for dedicated resources.

You need to be authenticated (passing a token or through your browser) to use the Inference API.
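As a minimal sketch of what authenticating a request looks like, the token is sent as a Bearer token in the `Authorization` header. The snippet below only *prepares* a request rather than sending it, and the token value is a hypothetical placeholder:

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/cardiffnlp/twitter-roberta-base-sentiment-latest"
token = "hf_xxx"  # hypothetical placeholder; substitute your own HF token

# The token travels as a Bearer token in the Authorization header.
req = urllib.request.Request(
    API_URL,
    data=json.dumps({"inputs": "Today is a great day"}).encode(),
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    method="POST",
)

print(req.get_header("Authorization"))  # Bearer hf_xxx
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would require a valid token and network access; unauthenticated requests are rejected by the API.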


| User Tier | Included monthly credits |
|---------------------------|------------------------------------|
| Free Users | subject to change, less than $0.10 |
| PRO and Enterprise Users | $2.00 |
13 changes: 0 additions & 13 deletions docs/api-inference/rate-limits.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/api-inference/supported-models.md
@@ -10,7 +10,7 @@ You can find:

## What do I get with a PRO subscription?

In addition to thousands of public models available in the Hub, PRO and Enterprise users get higher [rate limits](./rate-limits) and free access to the following models:
In addition to thousands of public models available in the Hub, PRO and Enterprise users get higher [included credits](./pricing) and access to the following models:

<!-- Manually maintained hard-coded list based on https://github.com/huggingface-internal/api-inference/blob/main/master-rs/custom_config.yml -->

@@ -27,4 +27,4 @@ This list is not exhaustive and might be updated in the future.

## Running Private Models

The free Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to deploy it.
The Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to deploy it.