docs/api-inference/register-as-a-provider.md
55 additions & 4 deletions
@@ -56,9 +56,57 @@ For example, you can find the expected schema for Text to Speech here: [https://
## 2. JS Client Integration
Before proceeding with the next steps, ensure you've implemented the necessary code to integrate with the JS client and thoroughly tested your implementation. Here are the steps to follow:
### 1. Implement the provider helper
Create a new file under `packages/inference/src/providers/{provider_name}.ts` and copy-paste the following snippet.
```ts
// ... (beginning of the snippet elided here) ...
        throw new Error("Needs to be implemented");
    }
}
```
Implement the methods that require custom handling. Check the base implementation to see the default behavior. If you don't need to override a method, just remove it. You must define at least `makeRoute`, `preparePayload`, and `getResponse`.
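As a rough illustration, a task helper for a hypothetical provider might look like the sketch below. The constructor arguments, parameter types, and method signatures are simplified assumptions for illustration only; follow the actual definitions in `providerHelper.ts` rather than this example.

```ts
import { TaskProviderHelper } from "./providerHelper";

// Minimal sketch of a task helper for a hypothetical provider.
// Signatures are simplified; mirror the real base-class API.
export class MyNewProviderTextToImageTask extends TaskProviderHelper {
    constructor() {
        // Provider name and base URL are placeholder values.
        super("my-new-provider", "https://api.my-new-provider.com");
    }

    // Route appended to the base URL for this task.
    makeRoute(): string {
        return "/v1/images/generations";
    }

    // Build the JSON payload sent to the provider from the user's arguments.
    preparePayload(params: { args: Record<string, unknown>; model: string }): Record<string, unknown> {
        return { ...params.args, model: params.model };
    }

    // Map the provider's raw response to the output expected by the client.
    getResponse(response: unknown): unknown {
        return response;
    }
}
```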
If the provider supports multiple tasks that require different implementations, create dedicated subclasses for each task, following the pattern used in the existing provider implementations, e.g. the [Together AI provider implementation](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/providers/together.ts).
For text-generation and conversational tasks, you can simply inherit from `BaseTextGenerationTask` and `BaseConversationalTask` respectively (defined in [providerHelper.ts](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/providers/providerHelper.ts)) and override the methods if needed. Examples can be found in the [Cerebras](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/providers/cerebras.ts) and [Fireworks](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/providers/fireworks.ts) provider implementations.
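For example, a chat-completion-only provider can often be expressed in a few lines. The snippet below is a sketch; the constructor arguments are assumptions modeled on the existing provider implementations.

```ts
import { BaseConversationalTask } from "./providerHelper";

// Sketch of a conversational provider that relies on the base behavior.
// Provider name and base URL are placeholders.
export class MyNewProviderConversationalTask extends BaseConversationalTask {
    constructor() {
        super("my-new-provider", "https://api.my-new-provider.com");
    }
    // No overrides needed if the provider exposes an OpenAI-compatible
    // chat completions endpoint handled by the base class.
}
```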
### 2. Register the provider
Go to [packages/inference/src/lib/getProviderHelper.ts](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts) and add your provider to `PROVIDERS`. Please try to respect alphabetical order.
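The registration itself is just a new entry in that mapping. The excerpt below is a sketch; the exact shape of `PROVIDERS` (tasks per provider, helper instances) may differ, so mirror the entries already present in the file.

```ts
// Hypothetical excerpt from getProviderHelper.ts; surrounding entries elided.
import * as MyNewProvider from "../providers/my-new-provider";

export const PROVIDERS = {
    // ... other providers, in alphabetical order ...
    "my-new-provider": {
        conversational: new MyNewProvider.MyNewProviderConversationalTask(),
        "text-to-image": new MyNewProvider.MyNewProviderTextToImageTask(),
    },
    // ...
};
```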
### 3. Add tests
Go to [packages/inference/test/InferenceClient.spec.ts](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/test/InferenceClient.spec.ts) and add new tests for each task supported by your provider.
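A typical test instantiates the client with your provider and asserts on the response for each supported task. The sketch below assumes a Vitest-style spec and uses an illustrative model id; adapt it to the structure already used in `InferenceClient.spec.ts`.

```ts
import { describe, expect, it } from "vitest";
import { InferenceClient } from "../src";

// Sketch of a provider test; model id and assertion are illustrative.
describe("my-new-provider", () => {
    it("chatCompletion", async () => {
        const client = new InferenceClient(process.env.HF_TOKEN);
        const res = await client.chatCompletion({
            model: "meta-llama/Llama-3.1-8B-Instruct",
            provider: "my-new-provider",
            messages: [{ role: "user", content: "Say 'this is a test'" }],
        });
        expect(res.choices[0]?.message?.content).toBeTruthy();
    });
});
```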
## 3. Model Mapping API
@@ -268,7 +316,7 @@ Before adding a new provider to the `huggingface_hub` Python library, make sure
### 1. Implement the provider helper
Create a new file under `src/huggingface_hub/inference/_providers/{provider_name}.py` and copy-paste the following snippet.
Implement the methods that require custom handling. Check the base implementation to see the default behavior. If you don't need to override a method, just remove it. At least one of `_prepare_payload_as_dict` or `_prepare_payload_as_bytes` must be overridden.
If the provider supports multiple tasks that require different implementations, create dedicated subclasses for each task, following the pattern shown in `fal_ai.py`.
@@ -337,6 +385,7 @@ class MyNewProviderTaskProviderHelper(TaskProviderHelper):
- Go to [tests/test_inference_providers.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_providers.py) and add static tests for overridden methods.
- Go to [tests/test_inference_client.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_client.py) and add VCR tests:
a. Add an entry to `_RECOMMENDED_MODELS_FOR_VCR` at the top of the test module. This is a mapping from task to test model; the model id must be the Hugging Face model id.
```python
_RECOMMENDED_MODELS_FOR_VCR = {
    # ... task -> HF model id entries (elided here) ...
}
```
@@ -362,7 +411,7 @@ class MyNewProviderTaskProviderHelper(TaskProviderHelper):
d. Commit the generated VCR cassettes with your PR.
@@ -371,3 +420,5 @@ class MyNewProviderTaskProviderHelper(TaskProviderHelper):
**Question:** By default, in which order do we list providers on the settings page?
**Answer:** The default sort is by total number of requests routed by HF over the last 7 days. This order determines which provider is used first by the widget on the model page (but the user's own order takes precedence).