
Commit eb1a7d8

changes from code review
1 parent 7da66c5 commit eb1a7d8

File tree: 1 file changed, +22 −9 lines changed

docs/inference-providers/index.md

@@ -32,13 +32,13 @@ Our platform integrates with leading AI infrastructure providers, giving you acc

 ## Why Choose Inference Providers?

-If you're building AI-powered applications, you've likely experienced the pain points of managing multiple provider APIs, comparing model performance, and dealing with varying reliability. Inference Providers solves these challenges by offering:
+When you build AI applications, it's tough to manage multiple provider APIs, compare model performance, and deal with varying reliability. Inference Providers solves these challenges by offering:

 **Instant Access to Cutting-Edge Models**: Go beyond mainstream providers to access thousands of specialized models across multiple AI tasks. Whether you need the latest language models, state-of-the-art image generators, or domain-specific embeddings, you'll find them here.

 **Zero Vendor Lock-in**: Unlike being tied to a single provider's model catalog, you get access to models from Cerebras, Groq, Together AI, Replicate, and more — all through one consistent interface.

-**Production-Ready Performance**: Built for enterprise workloads with automatic failover, intelligent routing, and the reliability your applications demand.
+**Production-Ready Performance**: Built for enterprise workloads with automatic failover (i.e., near-zero downtime), intelligent routing, and the reliability your applications demand.

 Here's what you can build:

@@ -68,7 +68,7 @@ We'll walk through a practical example using [deepseek-ai/DeepSeek-V3-0324](http

 Before diving into integration, explore models interactively with our [Inference Playground](https://huggingface.co/playground). Test different [chat completion models](http://huggingface.co/models?inference_provider=all&sort=trending&other=conversational) with your prompts and compare responses to find the perfect fit for your use case.

-<a href="https://huggingface.co/playground" target="blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" style="max-width: 550px; width: 100%;"/></a>
+<a href="https://huggingface.co/playground" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" alt="Inference Playground thumbnail" style="max-width: 550px; width: 100%;"/></a>

 ### Authentication

@@ -92,7 +92,14 @@ Here are three ways to integrate Inference Providers into your Python applicatio

 For convenience, the `huggingface_hub` library provides an [`InferenceClient`](https://huggingface.co/docs/huggingface_hub/guides/inference) that automatically handles provider selection and request routing.

-Install with `pip install huggingface_hub`:
+In your terminal, install the Hugging Face Hub Python client and log in:
+
+```shell
+pip install huggingface_hub
+huggingface-cli login # get a read token from hf.co/settings/tokens
+```
+
+You can now use the client with a Python interpreter:

 ```python
 import os
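
For reference, the rest of this snippet is elided from the hunk; below is a minimal sketch of the `InferenceClient` call it leads into, assuming the model and prompt used elsewhere in this guide (they are not confirmed by this diff):

```python
# Minimal sketch (not part of the diff): chat completion via huggingface_hub.
# Assumes HF_TOKEN is set, e.g. by the `huggingface-cli login` step above.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(api_key=os.environ["HF_TOKEN"])

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",  # example model from this guide
    messages=[{"role": "user", "content": "Hello, how are you?"}],  # illustrative prompt
)
print(completion.choices[0].message)
```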
@@ -119,7 +126,7 @@ print(completion.choices[0].message)

 <hfoption id="openai">

-**Drop-in OpenAI Replacement**: Already using OpenAI's Python client? Just change the base URL to instantly access hundreds of additional open-weights models through our provider network.
+If you're already using OpenAI's Python client, Inference Providers works as a **drop-in OpenAI replacement**: just swap out the base URL to instantly access hundreds of additional open-weights models through our provider network.

 Our system automatically routes your request to the optimal provider for the specified model:
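
For reference, a minimal sketch of the base-URL swap this paragraph describes; the router endpoint `https://router.huggingface.co/v1` is an assumption, not confirmed by this hunk:

```python
# Minimal sketch (not part of the diff): OpenAI's Python client pointed at the
# Hugging Face router; only base_url changes versus stock OpenAI usage.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(completion.choices[0].message)
```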

@@ -142,14 +149,14 @@ completion = client.chat.completions.create(
     ],
 )

 print(completion.choices[0].message)
 ```

 </hfoption>

 <hfoption id="requests">

-**Direct HTTP Integration**: For maximum control or integration with custom frameworks, use our OpenAI-compatible REST API directly.
+For maximum control and interoperability with custom frameworks, use our OpenAI-compatible REST API directly.

 Our routing system automatically selects the best available provider for your chosen model:
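
For reference, the HTTP example itself is elided from this hunk; below is a minimal `requests` sketch of the REST call the new sentence describes, assuming an OpenAI-style chat-completions endpoint on the router (an assumption, not confirmed by this diff):

```python
# Minimal sketch (not part of the diff): direct POST to the assumed
# OpenAI-compatible chat-completions endpoint on the Hugging Face router.
import os
import requests

response = requests.post(
    "https://router.huggingface.co/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    json={
        "model": "deepseek-ai/DeepSeek-V3-0324",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"])
```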

@@ -187,7 +194,13 @@ Integrate Inference Providers into your JavaScript applications with these flexi

 Our JavaScript SDK provides a convenient interface with automatic provider selection and TypeScript support.

-Install with `npm install @huggingface/inference`:
+Install with npm:
+
+```shell
+npm install @huggingface/inference
+```
+
+Then use the client with JavaScript:

 ```js
 import { InferenceClient } from "@huggingface/inference";
@@ -211,7 +224,7 @@ console.log(chatCompletion.choices[0].message);

 <hfoption id="openai">

-**OpenAI JavaScript Client Compatible**: Migrate your existing OpenAI integration seamlessly by updating just the base URL:
+If you're already using OpenAI's JavaScript client, Inference Providers works as a **drop-in OpenAI replacement**: just swap out the base URL to instantly access hundreds of additional open-weights models through our provider network.

 ```javascript
 import OpenAI from "openai";
