**`docs/inference-providers/register-as-a-provider.md`** (22 additions, 0 deletions)
@@ -457,6 +457,28 @@ class MyNewProviderTaskProviderHelper(TaskProviderHelper):
- Go to [tests/test_inference_providers.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_providers.py) and add static tests for overridden methods.
## 6. Add provider documentation
Create a dedicated documentation page for your provider within the Hugging Face documentation. This page should contain a concise description of your provider's services, highlight the benefits for users, set expectations regarding performance or features, and include any relevant details such as pricing models or data retention policies. In short, provide any information that would be valuable to end users.
Here's how to add your documentation page:
- Provide Your Logo: You can send your logo files (separate light and dark mode versions) directly to us; this is often the simplest way. Alternatively, you can open a PR in the [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images/tree/main/inference-providers/logos) repository. If you choose to open a PR:
* Logos must be in `.png` format.
* Name them `{provider-name}-light.png` and `{provider-name}-dark.png`.
* Please ping `@Wauplin` and `@celinah` on the PR.
- Create the Documentation File:
* Use an existing provider page as a template. For example, check out the template for [Fal AI](https://github.com/huggingface/hub-docs/blob/main/scripts/inference-providers/templates/providers/fal-ai.handlebars).
* The file should be located under `scripts/inference-providers/templates/providers/{your-provider-name}.handlebars`.
- Submit the Documentation PR:
* Add your new `{provider-name}.handlebars` file.
* Update the [partners table](./index#partners) to include your company or product.
* Update the `_toctree.yml` file in the `docs/inference-providers/` directory to include your new documentation page in the "Providers" section, maintaining alphabetical order.
* Update the `scripts/inference-providers/scripts/generate.ts` file to include your provider in the `PROVIDERS_HUB_ORGS` and `PROVIDERS_URLS` constants, maintaining alphabetical order (see the sketch after this list).
* Run `pnpm install` (if you haven't already) and then `pnpm run generate` at the root of the `scripts/inference-providers` directory to generate the documentation.
* Commit all your changes, including the manually edited files (provider page, `_toctree.yml`, partners table) and the files generated by the script.
* When you open the PR, please ping @Wauplin, @SBrandeis, @julien-c, and @hanouticelina for a review. If you need any assistance with these steps, please reach out – we're here to help you!
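For reference, here is a minimal sketch of the `generate.ts` change. The exact types and surrounding code are assumptions (the real file may shape these constants differently); `your-provider-name`, `your-hub-org`, and the URL are placeholders to replace with your own values.

```ts
// Sketch only: the real scripts/inference-providers/scripts/generate.ts may differ.
// Both constants are assumed here to be plain provider-name -> value records.

// Maps each provider name to its organization on the Hugging Face Hub.
export const PROVIDERS_HUB_ORGS: Record<string, string> = {
  // ...existing providers, kept in alphabetical order...
  "your-provider-name": "your-hub-org", // placeholder: your Hub organization
};

// Maps each provider name to the public URL linked from the generated docs.
export const PROVIDERS_URLS: Record<string, string> = {
  // ...existing providers, kept in alphabetical order...
  "your-provider-name": "https://example.com", // placeholder: your landing page
};
```

Once both constants include your provider, `pnpm run generate` will pick up the new entries when regenerating the provider pages.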
## FAQ
**Question:** By default, in which order do we list providers in the settings page?
**`docs/inference-providers/tasks/audio-classification.md`** (5 additions, 1 deletion)
@@ -29,13 +29,17 @@ For more details about the `audio-classification` task, check out its [dedicated
### Recommended models
- [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition): An emotion recognition model.
Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).
### Using the API
There are currently no snippet examples for the **audio-classification** task, as no providers support it yet.
**`docs/inference-providers/tasks/chat-completion.md`** (8 additions, 2 deletions)
@@ -24,6 +24,7 @@ This is a subtask of [`text-generation`](https://huggingface.co/docs/inference-p
- [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it): A text-generation model trained to follow instructions.
- [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
- [microsoft/phi-4](https://huggingface.co/microsoft/phi-4): Powerful text generation model by Microsoft.
- [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M): Strong conversational model that supports very long instructions.
- [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct): Text generation model used to write code.
- [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1): Powerful reasoning based open large language model.
| **name** | _string_ | Optional name identifier for the schema |
| **schema*** | _unknown_ | The actual JSON schema definition |
| **seed** | _integer_ | |
| **stop** | _string[]_ | Up to 4 sequences where the API will stop generating further tokens. |
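To show how the parameters above fit into a request, here is a minimal TypeScript sketch of a structured-output chat completion call. It assumes the OpenAI-compatible `https://router.huggingface.co/v1/chat/completions` route and that `name` and `schema` nest under `response_format.json_schema`; the model id, prompt, and schema are illustrative placeholders, so double-check the full parameter table before relying on this exact shape.

```ts
// Sketch: structured output using the fields documented above (name, schema, seed, stop).
// Assumes the OpenAI-compatible chat completions route; verify the exact request
// shape against the full chat-completion parameter table.
const response = await fetch("https://router.huggingface.co/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.HF_TOKEN}`, // Hugging Face access token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages: [{ role: "user", content: "Name a city and its country." }],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "city_info", // optional name identifier for the schema
        schema: {          // the actual JSON schema definition (required)
          type: "object",
          properties: { city: { type: "string" }, country: { type: "string" } },
          required: ["city", "country"],
        },
      },
    },
    seed: 42,       // fixed seed for more reproducible sampling
    stop: ["\n\n"], // up to 4 stop sequences
  }),
});
console.log((await response.json()).choices[0].message.content);
```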
**`docs/inference-providers/tasks/feature-extraction.md`** (2 additions, 1 deletion)
@@ -29,6 +29,7 @@ For more details about the `feature-extraction` task, check out its [dedicated p
### Recommended models
- [thenlper/gte-large](https://huggingface.co/thenlper/gte-large): A powerful feature extraction model for natural language processing tasks.
Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=feature-extraction&sort=trending).
@@ -37,7 +38,7 @@ Explore all available models and find the one that suits you best [here](https:/
**`docs/inference-providers/tasks/fill-mask.md`** (5 additions, 1 deletion)
@@ -24,13 +24,17 @@ For more details about the `fill-mask` task, check out its [dedicated page](http
### Recommended models
- [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base): A multilingual model trained on 100 languages.
Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=fill-mask&sort=trending).
### Using the API
There are currently no snippet examples for the **fill-mask** task, as no providers support it yet.