15 changes: 12 additions & 3 deletions docs/inference-providers/index.md
Inference Providers offers a fast and simple way to explore thousands of models.

To get started quickly with [Chat Completion models](http://huggingface.co/models?inference_provider=all&sort=trending&other=conversational), use the [Inference Playground](https://huggingface.co/playground) to easily test and compare models with your prompts.

<a href="https://huggingface.co/playground" target="blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" style="max-width: 550px; width: 100%;"/></a>

## Get Started
Inference Providers requires passing a user token in the request headers.

For more details about user tokens, check out [this guide](https://huggingface.co/docs/hub/en/security-tokens).
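The examples below assume your token is available in the `HF_TOKEN` environment variable. As a minimal sketch of that convention in Python (the `hf_xxx` fallback is a hypothetical placeholder, not a real token):

```python
import os

# Read the user token from the environment; "hf_xxx" is a hypothetical
# placeholder fallback used here only for illustration
token = os.environ.get("HF_TOKEN", "hf_xxx")

# Every Inference Providers request carries the token as a Bearer credential
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"][:7])
```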

<hfoptions id="inference-providers-examples">
<hfoption id="curl">

### cURL

Let's start with a cURL command highlighting the raw HTTP request. You can adapt this request to be run with the tool of your choice.
```bash
curl https://router.huggingface.co/novita/v3/openai/chat/completions \
    -H "Authorization: Bearer $HF_TOKEN" \
    -H 'Content-Type: application/json' \
    -d '{
        "messages": [
            {
                "role": "user",
                "content": "How many G in huggingface?"
            }
        ],
        "model": "deepseek/deepseek-v3-0324",
        "stream": false
    }'
```

</hfoption>
<hfoption id="python">

### Python

In Python, you can use the `requests` library to make raw requests to the API:
```python
import os

import requests

API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
payload = {
    "messages": [
        {
            "role": "user",
            "content": "How many G in huggingface?"
        }
    ],
    "model": "deepseek/deepseek-v3-0324",
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json()["choices"][0]["message"])
```

For convenience, the Python library `huggingface_hub` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client) that handles inference for you. You can install it with `pip install huggingface_hub`.

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="novita",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",
    messages=[
        {
            "role": "user",
            "content": "How many G in huggingface?"
        }
    ],
)

print(completion.choices[0].message)
```
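Whichever client you use, the generated text comes back nested under `choices[0].message`. A minimal sketch of pulling it out of a response body (the sample JSON here is illustrative, not a real API response):

```python
import json

# An illustrative response shaped like an OpenAI-style chat completion
# (sample content only, not output from a real request)
raw = """
{
  "choices": [
    {"message": {"role": "assistant", "content": "There are two Gs in huggingface."}}
  ]
}
"""

data = json.loads(raw)
# The generated reply lives under choices[0].message
message = data["choices"][0]["message"]
print(message["content"])
```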

</hfoption>
<hfoption id="javascript">

### JavaScript

In JavaScript, you can use `fetch` to make raw requests to the API (Node.js 18+ ships `fetch` natively; on older versions, the `node-fetch` package provides it):

```js
import fetch from "node-fetch";

const response = await fetch(
    "https://router.huggingface.co/novita/v3/openai/chat/completions",
    {
        method: "POST",
        headers: {
            Authorization: `Bearer ${process.env.HF_TOKEN}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            messages: [
                {
                    role: "user",
                    content: "How many G in huggingface?",
                },
            ],
            model: "deepseek/deepseek-v3-0324",
        }),
    }
);

console.log(await response.json());
```

For convenience, the JS library `@huggingface/inference` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient) that handles inference for you. You can install it with `npm install @huggingface/inference`.

```js
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const chatCompletion = await client.chatCompletion({
    provider: "novita",
    model: "deepseek-ai/DeepSeek-V3-0324",
    messages: [
        {
            role: "user",
            content: "How many G in huggingface?",
        },
    ],
});

console.log(chatCompletion.choices[0].message);
```

</hfoption>
</hfoptions>

## Next Steps

In this introduction, we've covered the basics of Inference Providers. To learn more about this service, check out our guides and API Reference: