docs/source/en/guides/inference.md (+30 -30)
@@ -248,36 +248,36 @@ You might wonder why using [`InferenceClient`] instead of OpenAI's client? There
[`InferenceClient`]'s goal is to provide the easiest interface to run inference on Hugging Face models, on any provider. It has a simple API that supports the most common tasks. Here is a table showing which providers support which tasks:
-| Domain | Task | HF Inference | fal-ai | Fireworks AI | Hyperbolic | Novita AI | Replicate | Sambanova | Together |
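The paragraph above this table promises that once a provider supports a task, the same [`InferenceClient`] call works against it unchanged. A minimal sketch of that idea, assuming a valid Hugging Face token in the environment and that the chosen provider serves a text-to-image model (the prompt and output path are illustrative):

```python
from huggingface_hub import InferenceClient

# Sketch only: "fal-ai" is one of the providers listed in the table above.
# Swapping the provider string should not require changing the call itself.
client = InferenceClient(provider="fal-ai")

# text_to_image returns a PIL.Image object.
image = client.text_to_image("An astronaut riding a horse on the moon")
image.save("astronaut.png")
```
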
src/huggingface_hub/inference/_client.py (+1 -1)
@@ -132,7 +132,7 @@ class InferenceClient:
path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api)
documentation for details). When passing a URL as `model`, the client will not append any suffix path to it.
provider (`str`, *optional*):
-Name of the provider to use for inference. Can be `"fal-ai"`, `"fireworks-ai"`, `"hf-inference"`, `"hyperbolic"`, `"novita"`, `"replicate"`, `"sambanova"` or `"together"`.
+Name of the provider to use for inference. Can be `"fal-ai"`, `"fireworks-ai"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"replicate"`, `"sambanova"` or `"together"`.
defaults to hf-inference (Hugging Face Serverless Inference API).
If model is a URL or `base_url` is passed, then `provider` is not used.
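
A hedged sketch of the `provider` parameter this hunk documents, assuming a huggingface_hub release with provider routing and a valid token; the model id is illustrative and `"hf_xxx"` is a placeholder:

```python
from huggingface_hub import InferenceClient

# "nebius" is the value this change adds to the accepted provider list.
client = InferenceClient(provider="nebius", api_key="hf_xxx")  # placeholder token

completion = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello"}],
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
)
print(completion.choices[0].message.content)
```

Omitting `provider` falls back to `"hf-inference"`, and as the docstring notes, passing a URL as `model` (or setting `base_url`) bypasses provider routing entirely.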
src/huggingface_hub/inference/_generated/_async_client.py (+1 -1)
@@ -120,7 +120,7 @@ class AsyncInferenceClient:
path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api)
documentation for details). When passing a URL as `model`, the client will not append any suffix path to it.
provider (`str`, *optional*):
-Name of the provider to use for inference. Can be `"fal-ai"`, `"fireworks-ai"`, `"hf-inference"`, `"hyperbolic"`, `"novita"`, `"replicate"`, `"sambanova"` or `"together"`.
+Name of the provider to use for inference. Can be `"fal-ai"`, `"fireworks-ai"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"replicate"`, `"sambanova"` or `"together"`.
defaults to hf-inference (Hugging Face Serverless Inference API).
If model is a URL or `base_url` is passed, then `provider` is not used.
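
`AsyncInferenceClient` takes the same `provider` argument as the sync client, just awaited inside an event loop. A minimal sketch under the same assumptions as above (illustrative model id, token resolved from the environment):

```python
import asyncio

from huggingface_hub import AsyncInferenceClient

async def main() -> None:
    # Defaults to "hf-inference" when no provider is given.
    client = AsyncInferenceClient(provider="hf-inference")
    text = await client.text_generation(
        "The capital of France is",
        model="HuggingFaceH4/zephyr-7b-beta",  # illustrative model id
        max_new_tokens=10,
    )
    print(text)

asyncio.run(main())
```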