
Commit 61b3717

Wauplin and Pierrci authored
Apply suggestions from code review
Co-authored-by: Pierric Cistac <[email protected]>
1 parent 95f0171 · commit 61b3717

File tree

4 files changed, +15 -15 lines changed


docs/api-inference/hub-api.md

Lines changed: 2 additions & 2 deletions

@@ -94,7 +94,7 @@ Inference status is either "warm" or undefined:
 <curl>
 
 ```sh
-# Get inference status (not warm)
+# Get inference status (no inference)
 ~ curl -s https://huggingface.co/api/models/manycore-research/SpatialLM-Llama-1B?expand[]=inference
 {
     "_id": "67d3b141d8b6e20c6d009c8b",
@@ -112,7 +112,7 @@ In the `huggingface_hub`, use `model_info` with the expand parameter:
 >>> from huggingface_hub import model_info
 
 >>> info = model_info("manycore-research/SpatialLM-Llama-1B", expand="inference")
->>> info.inference_provider_mapping
+>>> info.inference
 None
 ```
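As a rough illustration of the behavior both hunks document, the `inference` field is either `"warm"` or absent. The sketch below uses hypothetical stand-in payloads, not live API responses:

```python
import json

# Hedged sketch: interpret the Hub API's `expand[]=inference` response.
# Both payloads below are hypothetical stand-ins for real responses.
warm = json.loads('{"id": "some/warm-model", "inference": "warm"}')
cold = json.loads(
    '{"_id": "67d3b141d8b6e20c6d009c8b", "id": "manycore-research/SpatialLM-Llama-1B"}'
)

def inference_status(payload: dict) -> str:
    """Inference status is either "warm" or undefined (no inference)."""
    return payload.get("inference") or "no inference"

print(inference_status(warm))  # warm
print(inference_status(cold))  # no inference
```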

docs/api-inference/hub-integration.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # Hub Integration
 
-The Inference Providers is tightly integrated with the Hugging Face Hub. No matter which provider you use, the usage and billing will be centralized in your Hugging Face account.
+Inference Providers is tightly integrated with the Hugging Face Hub. No matter which provider you use, the usage and billing will be centralized in your Hugging Face account.
 
 ## Model search
 
@@ -46,7 +46,7 @@ Several Hugging Face features utilize Inference Providers and count towards your
 ## User Settings
 
 In your user account settings, you are able to:
-- set your own API keys for the providers you’ve signed up with. Otherwise, you can still use them – your requests will be billed on your HF account. More details in the [billing section](./pricing#routed-requests-vs-direct-calls).
+- set your own API keys for the providers you’ve signed up with. If you don't, your requests will be billed on your HF account. More details in the [billing section](./pricing#routed-requests-vs-direct-calls).
 
 <div class="flex justify-center">
 <img class="block light:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/set-custom-key-light.png"/>

docs/api-inference/index.md

Lines changed: 10 additions & 10 deletions

@@ -1,19 +1,19 @@
 # Inference Providers
 
-The Hugging Face Inference Providers revolutionizes how developers access and run machine learning models by offering a unified, flexible interface to multiple serverless inference providers. This new approach extends our previous Serverless Inference API, providing more models, increased performances and better reliability thanks to our awesome partners.
+Hugging Face Inference Providers revolutionizes how developers access and run machine learning models by offering a unified, flexible interface to multiple serverless inference providers. This new approach extends our previous Serverless Inference API, providing more models, increased performances and better reliability thanks to our awesome partners.
 
-To learn more about the launch of the Inference Providers, check out our [announcement blog post](https://huggingface.co/blog/inference-providers).
+To learn more about the launch of Inference Providers, check out our [announcement blog post](https://huggingface.co/blog/inference-providers).
 
-## Why use the Inference Providers?
+## Why use Inference Providers?
 
-The Inference Providers offers a fast and simple way to explore thousands of models for a variety of tasks. Whether you're experimenting with ML capabilities or building a new application, this API gives you instant access to high-performing models across multiple domains:
+Inference Providers offers a fast and simple way to explore thousands of models for a variety of tasks. Whether you're experimenting with ML capabilities or building a new application, this API gives you instant access to high-performing models across multiple domains:
 
 * **Text Generation:** Including large language models and tool-calling prompts, generate and experiment with high-quality responses.
 * **Image and Video Generation:** Easily create customized images, including LoRAs for your own styles.
 * **Document Embeddings:** Build search and retrieval systems with SOTA embeddings.
 * **Classical AI Tasks:** Ready-to-use models for text classification, image classification, speech recognition, and more.
 
-**Fast and Free to Get Started**: The Inference Providers comes with a free-tier and additional included credits for [PRO users](https://hf.co/subscribe/pro).
+**Fast and Free to Get Started**: Inference Providers comes with a free-tier and additional included credits for [PRO users](https://hf.co/subscribe/pro), as well as [Enterprise Hub organizations](https://huggingface.co/enterprise).
 
 ## Key Features
 

@@ -33,13 +33,13 @@ To get started quickly with [Chat Completion models](http://huggingface.co/model
 
 ## Get Started
 
-You can call the Inference Providers with your preferred tools, such as Python, JavaScript, or cURL. To simplify integration, we offer both a Python SDK (`huggingface_hub`) and a JavaScript SDK (`huggingface.js`).
+You can use Inference Providers with your preferred tools, such as Python, JavaScript, or cURL. To simplify integration, we offer both a Python SDK (`huggingface_hub`) and a JavaScript SDK (`huggingface.js`).
 
 In this section, we will demonstrate a simple example using [deepseek-ai/DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), a conversational Large Language Model. For the example, we will use [Novita AI](https://novita.ai/) as Inference Provider.
 
 ### Authentication
 
-The Inference Providers requires passing a user token in the request headers. You can generate a token by signing up on the Hugging Face website and going to the [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). We recommend creating a `fine-grained` token with the scope to `Make calls to Inference Providers`.
+Inference Providers requires passing a user token in the request headers. You can generate a token by signing up on the Hugging Face website and going to the [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). We recommend creating a `fine-grained` token with the scope to `Make calls to Inference Providers`.
 
 For more details about user tokens, check out [this guide](https://huggingface.co/docs/hub/en/security-tokens).
 
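The token-in-headers requirement described in the hunk above can be sketched minimally as follows (`HF_TOKEN` and the `hf_xxx` placeholder are assumptions for illustration, not part of this diff):

```python
import os

# Hedged sketch: build the Authorization header Inference Providers expects.
# "hf_xxx" is a placeholder; a real fine-grained token comes from the
# Hugging Face settings page linked in the docs.
token = os.environ.get("HF_TOKEN", "hf_xxx")
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])
```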
@@ -140,7 +140,7 @@ const response = await fetch(
 console.log(await response.json());
 ```
 
-For convenience, the JS library `@huggingface/inference` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient) that handles inference for you. Make sure to install it with `npm install @huggingface/inference`.
+For convenience, the JS library `@huggingface/inference` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient) that handles inference for you. You can install it with `npm install @huggingface/inference`.
 
 
 ```js
@@ -166,7 +166,7 @@ console.log(chatCompletion.choices[0].message);
 
 In this introduction, we've covered the basics of Inference Providers. To learn more about this service, check out our guides and API Reference:
 - [Pricing and Billing](./pricing): everything you need to know about billing
-- [Hub integration](./hub-integration): how Inference Providers is integrated with the Hub?
+- [Hub integration](./hub-integration): how is Inference Providers integrated with the Hub?
 - [External Providers](./providers): everything about providers and how to become an official partner
-- [Hub API](./hub-api): high level API for inference providers
+- [Hub API](./hub-api): high-level API for Inference Providers
 - [API Reference](./tasks/index): learn more about the parameters and task-specific settings.
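The quickstart edited in this file calls DeepSeek-V3-0324 through Novita. As a hedged sketch of what both SDKs assemble under the hood, this is the OpenAI-compatible chat-completion request body (built with the stdlib only; the routed URL is not shown in this diff, so it is omitted here):

```python
import json

# Hedged sketch: the chat-completion body the Python and JS clients build.
# Model name mirrors the quickstart; "stream": False is an assumption
# for illustration, not something stated in this commit.
body = {
    "model": "deepseek-ai/DeepSeek-V3-0324",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}
print(json.dumps(body, indent=2))
```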

docs/api-inference/pricing.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Pricing and Billing
 
-Inference Providers is a production-ready service involving external partners and is therefore a paid-product. However, as a Hugging Face user you get monthly credits to run experiments. The amount of credits you get depends on your type of account:
+Inference Providers is a production-ready service involving external partners and is therefore a paid product. However, as a Hugging Face user, you get monthly credits to run experiments. The amount of credits you get depends on your type of account:
 
 | User Tier                | Included monthly credits           |
 | ------------------------ | ---------------------------------- |
