docs/inference-providers/register-as-a-provider.md
Create a new mapping item, with the following body (JSON-encoded):
- `hfModel` is the model id on the Hub's side.
- `providerModel` is the model id on your side (can be the same or different).
The output of this route is a mapping ID that you can later use to update the mapping's status or delete it.
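For illustration, a direct mapping body might look like the following sketch (the model ids shown are hypothetical placeholders, not real mappings):

```json
{
  "hfModel": "black-forest-labs/FLUX.1-dev",
  "providerModel": "your-org/flux-dev"
}
```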
### Using a tag-filter to map several HF models to a single inference endpoint
We also support mapping HF models based on their `tags`. Using tag filters, you can automatically map multiple HF models to a single inference endpoint on your side.
For example, any model tagged with both `lora` and `base_model:adapter:black-forest-labs/FLUX.1-dev` can be mapped to your Flux-dev LoRA inference endpoint.
<Tip>
Important: Make sure that the JS client library can handle LoRA weights for your provider. Check out [fal's implementation](https://github.com/huggingface/huggingface.js/blob/904964c9f8cd10ed67114ccb88b9028e89fd6cad/packages/inference/src/providers/fal-ai.ts#L78-L124) for more details.
</Tip>
Create a new mapping item, with the following body (JSON-encoded):
- `task`, also known as `pipeline_tag` in the HF ecosystem, is the type of model / type of API (examples: "text-to-image", "text-generation", but you should use "conversational" for chat models)
- `tags` is the set of model tags to match. For example, to match all LoRAs of Flux, you can use: `["lora", "base_model:adapter:black-forest-labs/FLUX.1-dev"]`
- `providerModel` is the model ID on your side (can be the same or different from the HF model ID).
- `adapterType` is a literal value that helps client libraries interpret how to call your API. The only supported value at the moment is `"lora"`.
The output of this route is a mapping ID that you can later use to update the mapping's status or delete it.
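Putting the fields above together, a tag-filter mapping body for the Flux LoRA example might look like this sketch (the `providerModel` value is a hypothetical placeholder):

```json
{
  "task": "text-to-image",
  "tags": ["lora", "base_model:adapter:black-forest-labs/FLUX.1-dev"],
  "providerModel": "your-org/flux-dev-lora",
  "adapterType": "lora"
}
```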