Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
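One way to do that, as a minimal sketch: run a small server that keeps the token in an environment variable and exposes its own route to the browser. Everything below that is not from this page is an assumption for illustration (the Express framework, the `/api/generate-image` route, the `HF_TOKEN` variable name, and the model used).

```ts
import express from "express";
import { HfInference } from "@huggingface/inference";

const app = express();
app.use(express.json());

// The token is read from the environment on the server; it is never shipped to the browser.
const client = new HfInference(process.env.HF_TOKEN);

// Illustrative route name: the front end posts { "prompt": "..." } here.
app.post("/api/generate-image", async (req, res) => {
  try {
    const image = await client.textToImage({
      model: "black-forest-labs/FLUX.1-schnell", // illustrative model
      inputs: req.body.prompt,
    });
    res.set("Content-Type", image.type || "image/png");
    res.send(Buffer.from(await image.arrayBuffer()));
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000);
```

The front end then calls `POST /api/generate-image` and only ever sees the generated image, not the access token.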
### Third-party inference providers
You can send inference requests to third-party providers with the inference client.
Currently, we support the following providers: [Fal.ai](https://fal.ai), [Replicate](https://replicate.com), [Together](https://together.xyz) and [Sambanova](https://sambanova.ai).
To send requests to a third-party provider, you have to pass the `provider` parameter to the inference function. Make sure your request is authenticated with an access token.
```ts
import { HfInference } from "@huggingface/inference";

const accessToken = "hf_..."; // Either a HF access token, or an API key from the third-party provider (Replicate in this example)

const client = new HfInference(accessToken);
await client.textToImage({
  provider: "replicate",
  model: "black-forest-labs/FLUX.1-schnell", // illustrative model and prompt
  inputs: "a picture of a green bird",
});
```
When authenticated with a Hugging Face access token, the request is routed through https://huggingface.co.
When authenticated with a third-party provider key, the request is made directly against that provider's inference API.
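As a sketch of the difference, only the key passed to the client changes; the `provider` parameter and the call itself stay the same. The placeholder key formats below are illustrative.

```ts
import { HfInference } from "@huggingface/inference";

// Authenticated with a Hugging Face token: routed and billed through huggingface.co.
const routedClient = new HfInference("hf_...");

// Authenticated with the provider's own API key: sent directly to Replicate's API.
const directClient = new HfInference("r8_..."); // illustrative Replicate key placeholder
```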
Only a subset of models is supported when sending requests to third-party providers. You can check the list of supported models per pipeline task here: