
Commit c9d7b6f

Merge branch 'main' into benank/main

2 parents a5c80ff + 0fa2338

File tree

15 files changed: +390 -346 lines changed


docs/hub/datasets-dask.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -93,7 +93,7 @@ the `meta` argument to know the type of the new column in the meantime.
 
 When reading Parquet data from Hugging Face, Dask automatically leverages the metadata in Parquet files to skip entire files or row groups if they are not needed. For example if you apply a filter (predicate) on a Hugging Face Dataset in Parquet format or if you select a subset of the columns (projection), Dask will read the metadata of the Parquet files to discard the parts that are not needed without downloading them.
 
-This is possible thanks to a [reimplmentation of the Dask DataFrame API](https://docs.coiled.io/blog/dask-dataframe-is-fast.html?utm_source=hf-docs) to support query optimization, which makes Dask faster and more robust.
+This is possible thanks to a [reimplementation of the Dask DataFrame API](https://docs.coiled.io/blog/dask-dataframe-is-fast.html?utm_source=hf-docs) to support query optimization, which makes Dask faster and more robust.
 
 For example this subset of FineWeb-Edu contains many Parquet files. If you can filter the dataset to keep the text from recent CC dumps, Dask will skip most of the files and only download the data that match the filter:
 
```

docs/hub/fastai.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -14,7 +14,7 @@ All models on the Hub come up with the following features:
 
 ## Using existing models
 
-The `huggingface_hub` library is a lightweight Python client with utlity functions to download models from the Hub.
+The `huggingface_hub` library is a lightweight Python client with utility functions to download models from the Hub.
 
 ```bash
 pip install huggingface_hub["fastai"]
````

docs/hub/security-protectai.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ Interested in joining our security partnership / providing scanning information
 
 We partnered with Protect AI to provide scanning in order to make the Hub safer. The same way files are scanned by our internal scanning system, public repositories' files are scanned by Guardian.
 
-Our frontend has been redesigned specifically for this purpose, in order to accomodate for new scanners:
+Our frontend has been redesigned specifically for this purpose, in order to accommodate for new scanners:
 
 <img class="block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/third-party-scans-list.png"/>
 
```

docs/hub/spaces-sdks-docker-tabby.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -6,7 +6,7 @@ In this guide, you will learn how to deploy your own Tabby instance and use it f
 
 ## Your first Tabby Space
 
-In this section, you will learn how to deploy a Tabby Space and use it for yourself or your orgnization.
+In this section, you will learn how to deploy a Tabby Space and use it for yourself or your organization.
 
 ### Deploy Tabby on Spaces
 
```

docs/inference-providers/_toctree.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -19,6 +19,8 @@
     title: Cohere
   - local: providers/fal-ai
     title: Fal AI
+  - local: providers/featherless-ai
+    title: Featherless AI
   - local: providers/fireworks-ai
     title: Fireworks
   - local: providers/groq
```

docs/inference-providers/index.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -18,6 +18,7 @@ Here is the complete list of partners integrated with Inference Providers, and t
 | [Cerebras](./providers/cerebras) || | | | |
 | [Cohere](./providers/cohere) ||| | | |
 | [Fal AI](./providers/fal-ai) | | | |||
+| [Featherless AI](./providers/featherless-ai) || | | | |
 | [Fireworks](./providers/fireworks-ai) ||| | | |
 | [Groq](./providers/groq) || | | | |
 | [HF Inference](./providers/hf-inference) ||||| |
```
docs/inference-providers/providers/featherless-ai.md

Lines changed: 71 additions & 0 deletions

```diff
@@ -0,0 +1,71 @@
+<!---
+WARNING
+
+This markdown file has been generated from a script. Please do not edit it directly.
+
+### Template
+
+If you want to update the content related to featherless-ai's description, please edit the template file under `https://github.com/huggingface/hub-docs/tree/main/scripts/inference-providers/templates/providers/featherless-ai.handlebars`.
+
+### Logos
+
+If you want to update featherless-ai's logo, upload a file by opening a PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/inference-providers/logos. Ping @wauplin and @celinah on the PR to let them know you uploaded a new logo.
+Logos must be in .png format and be named `featherless-ai-light.png` and `featherless-ai-dark.png`. Visit https://huggingface.co/settings/theme to switch between light and dark mode and check that the logos are displayed correctly.
+
+### Generation script
+
+For more details, check out the `generate.ts` script: https://github.com/huggingface/hub-docs/blob/main/scripts/inference-providers/scripts/generate.ts.
+--->
+
+# Featherless AI
+
+<div class="flex justify-center">
+    <a href="https://featherless.ai/" target="_blank">
+        <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/logos/featherless-ai-light.png"/>
+        <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/logos/featherless-ai-dark.png"/>
+    </a>
+</div>
+
+<div class="flex">
+    <a href="https://huggingface.co/featherless-ai" target="_blank">
+        <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg.svg"/>
+        <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg-dark.svg"/>
+    </a>
+</div>
+
+[Featherless AI](https://featherless.ai) is a serverless AI inference platform that offers access to thousands of open-source models.
+
+Our goal is to make all AI models available for serverless inference. We provide inference via API to a continually expanding library of open-weight models.
+
+## Supported tasks
+
+
+### Chat Completion (LLM)
+
+Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).
+
+<InferenceSnippet
+    pipeline=text-generation
+    providersMapping={ {"featherless-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"} } }
+conversational />
+
+
+### Chat Completion (VLM)
+
+Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).
+
+<InferenceSnippet
+    pipeline=image-text-to-text
+    providersMapping={ {"featherless-ai":{"modelId":"allura-org/Gemma-3-Glitter-27B","providerModelId":"allura-org/Gemma-3-Glitter-27B"} } }
+conversational />
+
+
+### Text Generation
+
+Find out more about Text Generation [here](../tasks/text_generation).
+
+<InferenceSnippet
+    pipeline=text-generation
+    providersMapping={ {"featherless-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"} } }
+/>
```

docs/inference-providers/providers/hf-inference.md

Lines changed: 32 additions & 49 deletions

```diff
@@ -38,163 +38,146 @@ If you are interested in deploying models to a dedicated and autoscaling infrast
 
 ## Supported tasks
 
-
 ### Automatic Speech Recognition
 
 Find out more about Automatic Speech Recognition [here](../tasks/automatic_speech_recognition).
 
 <InferenceSnippet
-    pipeline=automatic-speech-recognition
-    providersMapping={ {"hf-inference":{"modelId":"openai/whisper-large-v3","providerModelId":"openai/whisper-large-v3"} } }
+    pipeline=automatic-speech-recognition
+    providersMapping={ {"hf-inference":{"modelId":"openai/whisper-large-v3","providerModelId":"openai/whisper-large-v3"} } }
 />
 
-
 ### Chat Completion (LLM)
 
 Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).
 
 <InferenceSnippet
-    pipeline=text-generation
-    providersMapping={ {"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"} } }
+    pipeline=text-generation
+    providersMapping={ {"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"} } }
 conversational />
 
-
 ### Chat Completion (VLM)
 
 Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).
 
 <InferenceSnippet
-    pipeline=image-text-to-text
-    providersMapping={ {"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"} } }
+    pipeline=image-text-to-text
+    providersMapping={ {"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"} } }
 conversational />
 
-
 ### Feature Extraction
 
 Find out more about Feature Extraction [here](../tasks/feature_extraction).
 
 <InferenceSnippet
-    pipeline=feature-extraction
-    providersMapping={ {"hf-inference":{"modelId":"intfloat/multilingual-e5-large-instruct","providerModelId":"intfloat/multilingual-e5-large-instruct"} } }
+    pipeline=feature-extraction
+    providersMapping={ {"hf-inference":{"modelId":"intfloat/multilingual-e5-large-instruct","providerModelId":"intfloat/multilingual-e5-large-instruct"} } }
 />
 
-
 ### Fill Mask
 
 Find out more about Fill Mask [here](../tasks/fill_mask).
 
 <InferenceSnippet
-    pipeline=fill-mask
-    providersMapping={ {"hf-inference":{"modelId":"google-bert/bert-base-uncased","providerModelId":"google-bert/bert-base-uncased"} } }
+    pipeline=fill-mask
+    providersMapping={ {"hf-inference":{"modelId":"google-bert/bert-base-uncased","providerModelId":"google-bert/bert-base-uncased"} } }
 />
 
-
 ### Image Classification
 
 Find out more about Image Classification [here](../tasks/image_classification).
 
 <InferenceSnippet
-    pipeline=image-classification
-    providersMapping={ {"hf-inference":{"modelId":"Falconsai/nsfw_image_detection","providerModelId":"Falconsai/nsfw_image_detection"} } }
+    pipeline=image-classification
+    providersMapping={ {"hf-inference":{"modelId":"Falconsai/nsfw_image_detection","providerModelId":"Falconsai/nsfw_image_detection"} } }
 />
 
-
 ### Image Segmentation
 
 Find out more about Image Segmentation [here](../tasks/image_segmentation).
 
 <InferenceSnippet
-    pipeline=image-segmentation
-    providersMapping={ {"hf-inference":{"modelId":"mattmdjaga/segformer_b2_clothes","providerModelId":"mattmdjaga/segformer_b2_clothes"} } }
+    pipeline=image-segmentation
+    providersMapping={ {"hf-inference":{"modelId":"mattmdjaga/segformer_b2_clothes","providerModelId":"mattmdjaga/segformer_b2_clothes"} } }
 />
 
-
 ### Object Detection
 
 Find out more about Object Detection [here](../tasks/object_detection).
 
 <InferenceSnippet
-    pipeline=object-detection
-    providersMapping={ {"hf-inference":{"modelId":"facebook/detr-resnet-50","providerModelId":"facebook/detr-resnet-50"} } }
+    pipeline=object-detection
+    providersMapping={ {"hf-inference":{"modelId":"facebook/detr-resnet-50","providerModelId":"facebook/detr-resnet-50"} } }
 />
 
-
 ### Question Answering
 
 Find out more about Question Answering [here](../tasks/question_answering).
 
 <InferenceSnippet
-    pipeline=question-answering
-    providersMapping={ {"hf-inference":{"modelId":"deepset/roberta-base-squad2","providerModelId":"deepset/roberta-base-squad2"} } }
+    pipeline=question-answering
+    providersMapping={ {"hf-inference":{"modelId":"deepset/roberta-base-squad2","providerModelId":"deepset/roberta-base-squad2"} } }
 />
 
-
 ### Summarization
 
 Find out more about Summarization [here](../tasks/summarization).
 
 <InferenceSnippet
-    pipeline=summarization
-    providersMapping={ {"hf-inference":{"modelId":"facebook/bart-large-cnn","providerModelId":"facebook/bart-large-cnn"} } }
+    pipeline=summarization
+    providersMapping={ {"hf-inference":{"modelId":"facebook/bart-large-cnn","providerModelId":"facebook/bart-large-cnn"} } }
 />
 
-
 ### Table Question Answering
 
 Find out more about Table Question Answering [here](../tasks/table_question_answering).
 
 <InferenceSnippet
-    pipeline=table-question-answering
-    providersMapping={ {"hf-inference":{"modelId":"google/tapas-base-finetuned-wtq","providerModelId":"google/tapas-base-finetuned-wtq"} } }
+    pipeline=table-question-answering
+    providersMapping={ {"hf-inference":{"modelId":"google/tapas-base-finetuned-wtq","providerModelId":"google/tapas-base-finetuned-wtq"} } }
 />
 
-
 ### Text Classification
 
 Find out more about Text Classification [here](../tasks/text_classification).
 
 <InferenceSnippet
-    pipeline=text-classification
-    providersMapping={ {"hf-inference":{"modelId":"distilbert/distilbert-base-uncased-finetuned-sst-2-english","providerModelId":"distilbert/distilbert-base-uncased-finetuned-sst-2-english"} } }
+    pipeline=text-classification
+    providersMapping={ {"hf-inference":{"modelId":"tabularisai/multilingual-sentiment-analysis","providerModelId":"tabularisai/multilingual-sentiment-analysis"} } }
 />
 
-
 ### Text Generation
 
 Find out more about Text Generation [here](../tasks/text_generation).
 
 <InferenceSnippet
-    pipeline=text-generation
-    providersMapping={ {"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"} } }
+    pipeline=text-generation
+    providersMapping={ {"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"} } }
 />
 
-
 ### Text To Image
 
 Find out more about Text To Image [here](../tasks/text_to_image).
 
 <InferenceSnippet
-    pipeline=text-to-image
-    providersMapping={ {"hf-inference":{"modelId":"black-forest-labs/FLUX.1-dev","providerModelId":"black-forest-labs/FLUX.1-dev"} } }
+    pipeline=text-to-image
+    providersMapping={ {"hf-inference":{"modelId":"black-forest-labs/FLUX.1-dev","providerModelId":"black-forest-labs/FLUX.1-dev"} } }
 />
 
-
 ### Token Classification
 
 Find out more about Token Classification [here](../tasks/token_classification).
 
 <InferenceSnippet
-    pipeline=token-classification
-    providersMapping={ {"hf-inference":{"modelId":"dslim/bert-base-NER","providerModelId":"dslim/bert-base-NER"} } }
+    pipeline=token-classification
+    providersMapping={ {"hf-inference":{"modelId":"dslim/bert-base-NER","providerModelId":"dslim/bert-base-NER"} } }
 />
 
-
 ### Translation
 
 Find out more about Translation [here](../tasks/translation).
 
 <InferenceSnippet
-    pipeline=translation
-    providersMapping={ {"hf-inference":{"modelId":"google-t5/t5-base","providerModelId":"google-t5/t5-base"} } }
+    pipeline=translation
+    providersMapping={ {"hf-inference":{"modelId":"google-t5/t5-base","providerModelId":"google-t5/t5-base"} } }
 />
-
```
