Commit 27ffea3

restructure inference providers index page (#1802)
* restructure index page
* move playground back down
* move partners table back to the top
* use shared title for quick start
1 parent: 81f1afc

File tree

1 file changed (+17, -6 lines)


docs/inference-providers/index.md

Lines changed: 17 additions & 6 deletions
````diff
@@ -53,7 +53,6 @@ Inference Providers offers a fast and simple way to explore thousands of models
 
 To get started quickly with [Chat Completion models](http://huggingface.co/models?inference_provider=all&sort=trending&other=conversational), use the [Inference Playground](https://huggingface.co/playground) to easily test and compare models with your prompts.
 
-
 <a href="https://huggingface.co/playground" target="blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" style="max-width: 550px; width: 100%;"/></a>
 
 ## Get Started
````
````diff
@@ -72,7 +71,12 @@ Inference Providers requires passing a user token in the request headers. You ca
 
 For more details about user tokens, check out [this guide](https://huggingface.co/docs/hub/en/security-tokens).
 
-### cURL
+### Quick Start
+
+<hfoptions id="inference-providers-examples">
+<hfoption id="curl">
+
+**cURL**
 
 Let's start with a cURL command highlighting the raw HTTP request. You can adapt this request to be run with the tool of your choice.
 
````
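The cURL hunk above highlights the raw HTTP request: a POST to the router's OpenAI-compatible endpoint with a Bearer user token in the headers. As a rough guide, the same request can be sketched in Python with only the standard library; the model id and token value below are placeholders, not taken from this commit.

```python
import json
import os
import urllib.request

# Endpoint from the diff's cURL example; the model id is a placeholder.
API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"
token = os.environ.get("HF_TOKEN", "hf_xxx")  # your Hugging Face user token

payload = {
    "model": "deepseek/deepseek-v3-0324",  # placeholder, pick any supported model
    "messages": [{"role": "user", "content": "How many G in huggingface?"}],
}

# Build (but do not send) the request, mirroring the cURL flags:
# -H "Authorization: Bearer ..." -H "Content-Type: application/json" -d '{...}'
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)

# To actually send it (requires a valid token and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"])
```

This only prepares the request object, so it can be inspected without hitting the network.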
````diff
@@ -92,7 +96,10 @@ curl https://router.huggingface.co/novita/v3/openai/chat/completions \
 }'
 ```
 
-### Python
+</hfoption>
+<hfoption id="python">
+
+**Python**
 
 In Python, you can use the `requests` library to make raw requests to the API:
 
````
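Since the router path ends in `openai/chat/completions`, responses follow the OpenAI chat-completion schema, which is why the snippets in this diff read the reply from `choices[0].message`. A minimal sketch of extracting that field, using an illustrative response body rather than one captured from a real call:

```python
import json

# Illustrative response body in the OpenAI chat-completion shape;
# not an actual API response from the commit.
raw = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "There are two Gs."}}
    ]
})

# The assistant reply lives under choices[0].message.
reply = json.loads(raw)["choices"][0]["message"]
print(reply["content"])
```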
````diff
@@ -140,10 +147,12 @@ completion = client.chat.completions.create(
 print(completion.choices[0].message)
 ```
 
-### JavaScript
+</hfoption>
+<hfoption id="javascript">
 
-In JS, you can use the `fetch` library to make raw requests to the API:
+**JavaScript**
 
+In JS, you can use the `fetch` library to make raw requests to the API:
 
 ```js
 import fetch from "node-fetch";
````
````diff
@@ -173,7 +182,6 @@ console.log(await response.json());
 
 For convenience, the JS library `@huggingface/inference` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient) that handles inference for you. You can install it with `npm install @huggingface/inference`.
 
-
 ```js
 import { InferenceClient } from "@huggingface/inference";
 
````
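The `InferenceClient` referenced here is the JS client; the `huggingface_hub` Python package ships a client of the same name. A hedged sketch of the rough Python equivalent — the model id is a placeholder, and the exact constructor arguments (`provider`, `api_key`) are assumptions about recent `huggingface_hub` releases, not details from this commit:

```python
import os

messages = [{"role": "user", "content": "How many G in huggingface?"}]

# huggingface_hub is optional here; the call below only runs when the
# package is installed and an HF_TOKEN is configured in the environment.
try:
    from huggingface_hub import InferenceClient
except ImportError:
    InferenceClient = None

token = os.environ.get("HF_TOKEN")
if InferenceClient and token:
    client = InferenceClient(provider="novita", api_key=token)
    completion = client.chat_completion(
        model="deepseek/deepseek-v3-0324",  # placeholder model id
        messages=messages,
    )
    print(completion.choices[0].message)
```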
````diff
@@ -193,6 +201,9 @@ const chatCompletion = await client.chatCompletion({
 console.log(chatCompletion.choices[0].message);
 ```
 
+</hfoption>
+</hfoptions>
+
 ## Next Steps
 
 In this introduction, we've covered the basics of Inference Providers. To learn more about this service, check out our guides and API Reference:
````
