-# Inference Providers API
+# Inference Providers

-The Hugging Face Inference Providers API revolutionizes how developers access and run machine learning models by offering a unified, flexible interface to multiple serverless inference providers. This new approach extends our previous Serverless Inference API, providing more models, increased performances and better reliability thanks to our awesome partners.
+Hugging Face Inference Providers revolutionize how developers access and run machine learning models by offering a unified, flexible interface to multiple serverless inference providers. This new approach extends our previous Serverless Inference API, providing more models, improved performance, and better reliability thanks to our awesome partners.

-To learn more about the launch of the Inference Providers API, check out our [announcement blog post](https://huggingface.co/blog/inference-providers).
+To learn more about the launch of Inference Providers, check out our [announcement blog post](https://huggingface.co/blog/inference-providers).

## Why use Inference Providers?

@@ -33,13 +33,13 @@ To get started quickly with [Chat Completion models](http://huggingface.co/model

## Get Started

-You can call the Inference Providers API with your preferred tools, such as Python, JavaScript, or cURL. To simplify integration, we offer both a Python SDK (`huggingface_hub`) and a JavaScript SDK (`huggingface.js`).
+You can call Inference Providers with your preferred tools, such as Python, JavaScript, or cURL. To simplify integration, we offer both a Python SDK (`huggingface_hub`) and a JavaScript SDK (`huggingface.js`).

In this section, we will demonstrate a simple example using [deepseek-ai/DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), a conversational Large Language Model. For the example, we will use [Novita AI](https://novita.ai/) as the Inference Provider, with routed requests. You will learn what that means in the next chapters.

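The routed request described above can be sketched with the Python SDK. This is a minimal example, not taken from the commit itself: it assumes `huggingface_hub` (v0.28 or later) is installed and that an `HF_TOKEN` environment variable holds your Hugging Face user token.

```python
import os

from huggingface_hub import InferenceClient


def chat(prompt: str) -> str:
    """Send one chat message to DeepSeek-V3-0324, routed through Novita AI."""
    client = InferenceClient(
        provider="novita",               # the Inference Provider handling the request
        api_key=os.environ["HF_TOKEN"],  # assumed: your HF user token in the environment
    )
    completion = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-V3-0324",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

Because the request is routed, billing and authentication stay on the Hugging Face side even though Novita AI serves the model.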
### Authentication

-The Inference Providers API requires passing a user token in the request headers. You can generate a token by signing up on the Hugging Face website and going to the [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). We recommend creating a `fine-grained` token with the scope to `Make calls to Inference Providers`.
+Inference Providers requires passing a user token in the request headers. You can generate a token by signing up on the Hugging Face website and going to the [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). We recommend creating a `fine-grained` token with the `Make calls to Inference Providers` scope.
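Concretely, the user token travels as a Bearer credential in the `Authorization` header. Below is a small stdlib-only sketch of the request shape; the endpoint URL and `build_request` helper are illustrative assumptions, not taken from the docs.

```python
import json

# Assumed endpoint for illustration only -- check the provider docs for the real URL.
API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"


def build_request(token: str, prompt: str) -> tuple[dict, str]:
    """Build the headers and JSON body for a chat-completion request."""
    headers = {
        "Authorization": f"Bearer {token}",  # user token passed in the request headers
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "deepseek-ai/DeepSeek-V3-0324",
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body
```

Any HTTP client (cURL, `requests`, `fetch`) can then POST this body with these headers; the SDKs simply do this for you.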

For more details about user tokens, check out [this guide](https://huggingface.co/docs/hub/en/security-tokens).