# How to be registered as an inference provider on the Hub?

If you'd like to be an inference provider on the Hub, you must follow the steps outlined in this guide.

## 1. Requirements

<Tip>

If you provide inference only for LLMs and VLMs following the OpenAI API, you
can probably skip most of this section and just open a PR on
https://github.com/huggingface/huggingface.js/tree/main/packages/inference to add yourself as a provider.

</Tip>

The first step to understanding the integration is to take a look at the JS inference client that lives
inside the huggingface.js repo:
https://github.com/huggingface/huggingface.js/tree/main/packages/inference

This is the client that powers our Inference widgets on model pages, and is the blueprint
implementation for other downstream SDKs like Python's `huggingface_hub` and other tools.
### What is a Task

You will see that inference methods (`textToImage`, `chatCompletion`, etc.) have names that closely
mirror the task names. A task, also known as `pipeline_tag` in the HF ecosystem, is the type of
model (basically which types of inputs and outputs the model has), for instance "text-generation"
or "text-to-image". It is indicated prominently on model pages, here:

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/pipeline-tag-on-model-page-light.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/pipeline-tag-on-model-page-dark.png"/>
</div>

The list of all possible tasks can be found at https://huggingface.co/tasks and the list of JS method names is documented in the README at https://github.com/huggingface/huggingface.js/tree/main/packages/inference.

<Tip>

Note that `chatCompletion` is an exception as it is not a pipeline_tag, per se. Instead, it
includes models with either `pipeline_tag="text-generation"` or `pipeline_tag="image-text-to-text"`
which are tagged as "conversational".

</Tip>
### Task API schema

For each task type, we enforce an API schema to make it easier for end users to use different
models interchangeably. To be compatible, your third-party API must adhere to the "standard" API shape we expect on HF model pages for each pipeline task type.

This is not an issue for LLMs as everyone has converged on the OpenAI API anyway, but it can be
trickier for other tasks like "text-to-image" or "automatic-speech-recognition" for which no
standard API exists.

For example, you can find the expected schema for Text to Speech here: [https://github.com/huggingface/huggingface.js/packages/tasks/src/tasks/text-to-speech/spec/input.json#L4](https://github.com/huggingface/huggingface.js/blob/0a690a14d52041a872dc103846225603599f4a33/packages/tasks/src/tasks/text-to-speech/spec/input.json#L4), and similarly for the other supported tasks. If your API for a given task differs from HF's, that is not an issue: you can tweak the code in `huggingface.js` so that it can call your models, i.e., provide some kind of "translation" of parameter names and output names. However, API specs should not be model-specific, only task-specific. Run the JS code and add some [tests](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/test/HfInference.spec.ts) to make sure it works well. We can help with this step!
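
To make the idea of parameter "translation" concrete, here is a minimal, purely illustrative Python sketch. It assumes a hypothetical provider whose text-to-image endpoint expects fields named `prompt_text`, `cfg` and `steps` and returns an `image_base64` field; the HF-side names (`inputs`, `guidance_scale`, `num_inference_steps`) come from the text-to-image task schema. In practice this translation lives in the provider-specific code inside `huggingface.js`, not in user code.

```python
import base64
from typing import Any, Dict


def hf_to_provider_payload(inputs: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
    """Translate HF task-schema names into the (hypothetical) provider's names."""
    return {
        "prompt_text": inputs,  # HF "inputs" -> provider "prompt_text"
        "cfg": parameters.get("guidance_scale"),  # HF "guidance_scale" -> provider "cfg"
        "steps": parameters.get("num_inference_steps"),  # HF "num_inference_steps" -> provider "steps"
    }


def provider_to_hf_output(provider_response: Dict[str, Any]) -> bytes:
    """Translate the provider's response back into what the HF task expects (raw image bytes)."""
    return base64.b64decode(provider_response["image_base64"])  # "image_base64" is a made-up field name
```

The constraint described above still holds: such a mapping is defined per task, never per model.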
## 2. JS Client Integration

Before proceeding with the next steps, ensure you've implemented the necessary code to integrate with the JS client and thoroughly tested your implementation.

TODO
## 3. Model Mapping API

Once you've verified the huggingface.js/inference client can call your models successfully, you
can use our Model Mapping API.

This API lets a Partner "register" that they support models X, Y or Z on HF.

This enables:
- The inference widget on corresponding model pages
- Inference compatibility throughout the HF ecosystem (the Python and JS client SDKs, for instance), and any downstream tool or library (see the sketch below)
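
As an illustration of what this enables for end users, here is a minimal sketch of calling a mapped model through the `huggingface_hub` Python client (see section 5 below). The provider name `"your-provider"` is a placeholder, and the model id is taken from the example mapping later in this section.

```python
from huggingface_hub import InferenceClient

# "your-provider" is a placeholder for your provider name as registered on the Hub.
client = InferenceClient(provider="your-provider", api_key="hf_***")

# The request is routed to the provider thanks to the registered model mapping:
# the user passes the HF model id, not the provider-side id.
response = client.chat_completion(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```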
### Register a mapping item

```http
POST /api/partners/{provider}/models
```

Create a new mapping item, with the following body (JSON-encoded):

```json
{
  "task": "WidgetType", // required
  "hfModel": "string", // required: the name of the model on HF: namespace/model-name
  "providerModel": "string", // required: the partner's "model id" i.e. id on your side
  "status": "live" | "staging" // Optional: defaults to "staging". "staging" models are only available to members of the partner's org, then you switch them to "live" when they're ready to go live
}
```

- `task`, also known as `pipeline_tag` in the HF ecosystem, is the type of model / type of API
(examples: "text-to-image", "text-generation", but you should use "conversational" for chat models)
- `hfModel` is the model id on the Hub's side.
- `providerModel` is the model id on your side (can be the same or different).
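
For illustration, here is a minimal Python sketch of registering a mapping item with the `requests` library. It assumes the endpoint is served from https://huggingface.co and accepts a standard `Authorization: Bearer` header carrying an HF token with Write permission in the provider organization (see Authentication below); the provider name and model ids are placeholders.

```python
import requests

HF_TOKEN = "hf_***"  # token of an org member with Write permission (see Authentication below)
PROVIDER = "your-provider"  # placeholder provider name

resp = requests.post(
    f"https://huggingface.co/api/partners/{PROVIDER}/models",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={
        "task": "text-to-image",
        "hfModel": "black-forest-labs/FLUX.1-dev",
        "providerModel": "your-provider/flux-dev",  # your internal model id
        "status": "staging",  # switch to "live" once everything works
    },
)
resp.raise_for_status()
print(resp.json())
```

The delete and status-update endpoints below follow the same pattern.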
In the future, we will add support for a new parameter (ping us if it's important to you now):

```json
{
  "hfFilter": ["string"] // (`hfModel` and `hfFilter` can't both be defined at the same time)
  // ^Power user move: register a "tag" slice of HF in one go.
  // Example: tag == "base_model:adapter:black-forest-labs/FLUX.1-dev" for all Flux-dev LoRAs
}
```
#### Authentication

You need to be in the _provider_ Hub organization (e.g. https://huggingface.co/togethercomputer
for TogetherAI) with **Write** permissions to be able to access this endpoint.

#### Validation

The endpoint validates that:
- `hfModel` indeed has `pipeline_tag == task`, OR `task` is "conversational" and the model is
compatible (i.e. the `pipeline_tag` is either "text-generation" or "image-text-to-text" AND the model is tagged as "conversational").
- (in the future) we auto-test that the Partner's API successfully responds to a
huggingface.js/inference call of the corresponding task, i.e. that the API specs are valid.
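
As a rough client-side approximation of the first check (handy for sanity-checking a mapping before submitting it), here is a sketch based on `huggingface_hub`'s `model_info`; the actual server-side logic may differ.

```python
from huggingface_hub import model_info


def is_valid_mapping(hf_model: str, task: str) -> bool:
    """Approximate the Hub-side validation of a mapping item (client-side sanity check only)."""
    info = model_info(hf_model)
    tags = info.tags or []
    if task == "conversational":
        # "conversational" accepts text-generation or image-text-to-text models
        # that are additionally tagged as conversational.
        return info.pipeline_tag in ("text-generation", "image-text-to-text") and "conversational" in tags
    return info.pipeline_tag == task


print(is_valid_mapping("deepseek-ai/DeepSeek-R1", "conversational"))
```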
### Delete a mapping item

```http
DELETE /api/partners/{provider}/models?hfModel=namespace/model-name
```
### Update a mapping item's status

Call this HTTP PUT endpoint:

```http
PUT /api/partners/{provider}/models/status
```

With the following body (JSON-encoded):

```json
{
  "hfModel": "namespace/model-name", // The name of the model on HF
  "status": "live" | "staging" // The new status, one of "staging" or "live"
}
```
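
Continuing the earlier sketch (same placeholder provider name and Bearer-token assumption), promoting a model from "staging" to "live" could look like this:

```python
import requests

resp = requests.put(
    "https://huggingface.co/api/partners/your-provider/models/status",
    headers={"Authorization": "Bearer hf_***"},
    json={
        "hfModel": "black-forest-labs/FLUX.1-dev",
        "status": "live",  # promote the mapping item from "staging" to "live"
    },
)
resp.raise_for_status()
```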
### List the whole mapping

```http
GET /api/partners/{provider}/models?status=staging|live
```

This gets all mapping items from the DB. For clarity/DX, the output is grouped by task.

<Tip warning={true}>
This endpoint is publicly accessible. Being transparent by default is useful, and it helps debug client SDKs, etc.
</Tip>

Here is an example response:
```json
{
  "text-to-image": {
    "black-forest-labs/FLUX.1-Canny-dev": {
      "providerId": "black-forest-labs/FLUX.1-canny",
      "status": "live"
    },
    "black-forest-labs/FLUX.1-Depth-dev": {
      "providerId": "black-forest-labs/FLUX.1-depth",
      "status": "live"
    }
  },
  "conversational": {
    "deepseek-ai/DeepSeek-R1": {
      "providerId": "deepseek-ai/DeepSeek-R1",
      "status": "live"
    }
  },
  "text-generation": {
    "meta-llama/Llama-2-70b-hf": {
      "providerId": "meta-llama/Llama-2-70b-hf",
      "status": "live"
    },
    "mistralai/Mixtral-8x7B-v0.1": {
      "providerId": "mistralai/Mixtral-8x7B-v0.1",
      "status": "live"
    }
  }
}
```
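
Because the endpoint is public, inspecting a provider's current mapping is straightforward; a minimal sketch with the `requests` library (the provider name below is a placeholder):

```python
import requests

mapping = requests.get(
    "https://huggingface.co/api/partners/your-provider/models",
    params={"status": "live"},
).json()

# Print each registered model grouped by task, mirroring the structure above.
for task, models in mapping.items():
    for hf_model, item in models.items():
        print(f"{task}: {hf_model} -> {item['providerId']} ({item['status']})")
```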
## 4. Billing

For routed requests (see figure below), i.e. when users authenticate via HF, our intent is that
our users only pay the standard provider API rates. There's no additional markup from us; we
just pass through the provider costs directly.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/types_of_billing.png"/>
</div>

For LLM providers, a workaround some people use is to extract the number of input and output
tokens from the responses and multiply by a hardcoded pricing table. This is quite brittle, so we
are reluctant to do it.
We propose an easier way to figure out this cost and charge it to our users: we ask you to
provide the cost for each request via an HTTP API you host on your end.
### HTTP API Specs

We ask that you expose an API that supports an HTTP POST request.
The body of the request is a JSON-encoded object containing a list of request IDs for which we
request the cost.

```http
POST {your URL here}
Content-Type: application/json

{
  "requestIds": [
    "deadbeef0",
    "deadbeef1",
    "deadbeef2",
    "deadbeef3"
  ]
}
```
The response is also JSON-encoded and contains an array of objects specifying each
request's ID and its cost in nano-USD (10^-9 USD).

```http
HTTP/1.1 200 OK
Content-Type: application/json

{
  "requests": [
    { "requestId": "deadbeef0", "costNanoUsd": 100 },
    { "requestId": "deadbeef1", "costNanoUsd": 100 },
    { "requestId": "deadbeef2", "costNanoUsd": 100 },
    { "requestId": "deadbeef3", "costNanoUsd": 100 }
  ]
}
```
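
For illustration, here is a minimal sketch of what this endpoint could look like on the provider side. FastAPI, the route path, and the in-memory cost table are assumptions for the example only; any stack that accepts the request body above and returns the response shape above works.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Illustrative only: in practice, costs would be looked up in your billing database,
# keyed by the request ID you returned in the response header (see below).
COST_LEDGER: dict[str, int] = {"deadbeef0": 100, "deadbeef1": 100}


class CostQuery(BaseModel):
    requestIds: list[str]


@app.post("/billing/costs")  # the route path is up to you; you share the full URL with HF
def get_costs(query: CostQuery) -> dict:
    return {
        "requests": [
            {"requestId": rid, "costNanoUsd": COST_LEDGER.get(rid, 0)}
            for rid in query.requestIds
        ]
    }
```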
### Price Unit

We require the price to be an **integer** number of **nano-USD** (10^-9 USD).
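
For example, converting a dollar amount to this unit is a single multiplication (illustrative numbers):

```python
# A request that costs $0.0025 must be reported as 2_500_000 nano-USD (an integer).
cost_usd = 0.0025
cost_nano_usd = round(cost_usd * 1_000_000_000)
assert cost_nano_usd == 2_500_000
```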
### How to define the request ID

For each request/generation you serve, you should define a unique request (or response) ID,
and provide it as a response Header. We will use this ID as the request ID for the billing API
above.

As part of those requirements, please let us know the name of this header. If you don't already have one, we suggest the name `Inference-Id` for instance; its value should be a UUID string.

**Example**: Defining an `Inference-Id` header in your inference response.
```http
POST /v1/chat/completions
Content-Type: application/json
[request headers]
[request body]
------
HTTP/1.1 200 OK
Content-Type: application/json
[other response headers]
Inference-Id: unique-id-00131
[response body]
```
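
On the provider side, a minimal sketch of generating such an ID and recording the cost it will later be billed under (names and in-memory storage are illustrative; in practice you would persist this in your billing database, keyed by the same ID your billing API looks up):

```python
import uuid

# Illustrative in-memory ledger; in production this would be your billing database.
COST_LEDGER: dict[str, int] = {}


def finalize_response(cost_nano_usd: int) -> dict[str, str]:
    """Create a unique inference ID, record its cost, and return the header to attach."""
    inference_id = str(uuid.uuid4())
    COST_LEDGER[inference_id] = cost_nano_usd
    return {"Inference-Id": inference_id}


headers = finalize_response(cost_nano_usd=2_500_000)
# e.g. {"Inference-Id": "1f0f6e0e-..."} -- attach this header to the HTTP response.
```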
## 5. Python client integration

<Tip>

Before adding a new provider to the `huggingface_hub` Python library, make sure that all the previous steps have been completed and everything is working on the Hub. Support in the Python library comes as a second step.

</Tip>

### 1. Implement the provider helper
Create a new file under `src/huggingface_hub/inference/_providers/{provider_name}.py` and copy-paste the following snippet.

Implement the methods that require custom handling. Check out the base implementation to see the default behavior. If you don't need to override a method, just remove it. At least one of `_prepare_payload_as_dict` or `_prepare_payload_as_bytes` must be overridden.

If the provider supports multiple tasks that require different implementations, create dedicated subclasses for each task, following the pattern shown in `fal_ai.py`.

For text-generation and conversational tasks, one can just inherit from `BaseTextGenerationTask` and `BaseConversationalTask` respectively (defined in `_common.py`) and override the methods if needed. Examples can be found in `fireworks_ai.py` and `together.py`.
```python
from typing import Any, Dict, Optional, Union

from huggingface_hub.inference._common import RequestParameters

from ._common import TaskProviderHelper


class MyNewProviderTaskProviderHelper(TaskProviderHelper):
    def __init__(self):
        """Define high-level parameters."""
        super().__init__(provider=..., base_url=..., task=...)

    def get_response(
        self,
        response: Union[bytes, Dict],
        request_params: Optional[RequestParameters] = None,
    ) -> Any:
        """
        Return the response in the expected format.

        Override this method in subclasses for customized response handling."""
        return super().get_response(response)

    def _prepare_headers(self, headers: Dict, api_key: str) -> Dict:
        """Return the headers to use for the request.

        Override this method in subclasses for customized headers.
        """
        return super()._prepare_headers(headers, api_key)

    def _prepare_route(self, mapped_model: str, api_key: str) -> str:
        """Return the route to use for the request.

        Override this method in subclasses for customized routes.
        """
        return super()._prepare_route(mapped_model, api_key)

    def _prepare_payload_as_dict(self, inputs: Any, parameters: Dict, mapped_model: str) -> Optional[Dict]:
        """Return the payload to use for the request, as a dict.

        Override this method in subclasses for customized payloads.
        Only one of `_prepare_payload_as_dict` and `_prepare_payload_as_bytes` should return a value.
        """
        return super()._prepare_payload_as_dict(inputs, parameters, mapped_model)

    def _prepare_payload_as_bytes(
        self, inputs: Any, parameters: Dict, mapped_model: str, extra_payload: Optional[Dict]
    ) -> Optional[bytes]:
        """Return the body to use for the request, as bytes.

        Override this method in subclasses for customized body data.
        Only one of `_prepare_payload_as_dict` and `_prepare_payload_as_bytes` should return a value.
        """
        return super()._prepare_payload_as_bytes(inputs, parameters, mapped_model, extra_payload)
```
### 2. Register the Provider
- Go to [src/huggingface_hub/inference/_providers/__init__.py](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_providers/__init__.py) and add your provider to `PROVIDER_T` and `PROVIDERS`. Please try to respect alphabetical order.
- Go to [src/huggingface_hub/inference/_client.py](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py) and update the docstring in `InferenceClient.__init__` to document your provider.
### 3. Add tests
- Go to [tests/test_inference_providers.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_providers.py) and add static tests for overridden methods.
- Go to [tests/test_inference_client.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_client.py) and add VCR tests:
a. Add an entry to `_RECOMMENDED_MODELS_FOR_VCR` at the top of the test module. This contains a mapping from task to test model; the model id must be the HF model id.
```python
_RECOMMENDED_MODELS_FOR_VCR = {
    "your-provider": {
        "task": "model-id",
        ...
    },
    ...
}
```
b. Set up authentication: to record VCR cassettes, you'll need authentication.
If you are a member of the provider organization (e.g., the Replicate organization: https://huggingface.co/replicate), you can set the `HF_INFERENCE_TEST_TOKEN` environment variable with your HF token:

```bash
export HF_INFERENCE_TEST_TOKEN="your-hf-token"
```

If you're not a member but the provider is officially released on the Hub, you can set the `HF_INFERENCE_TEST_TOKEN` environment variable as above. If you don't have enough inference credits, we can help you record the VCR cassettes.

c. Record and commit tests
Run the tests for your provider:
```bash
pytest tests/test_inference_client.py -k <provider>
```
d. Commit the generated VCR cassettes with your PR.
## FAQ

**Question:** By default, in which order do we list providers on the settings page?

**Answer:** The default sort is by the total number of requests routed by HF over the last 7 days. This order defines which provider is used first by the widget on the model page (but the user's own order takes precedence).
