
Commit 5707b56

committed: update blog
1 parent 9593c5e commit 5707b56

File tree

1 file changed: +12 -5 lines changed

inference-providers-featherless-groq.md

Lines changed: 12 additions & 5 deletions
@@ -23,16 +23,21 @@ authors:
 
 # Groq & Featherless AI on Hugging Face Inference Providers 🔥
 
-We're thrilled to share that **Featherless AI** is now a supported Inference Provider on the Hugging Face Hub!
-Featherless AI joins our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. Inference Providers are also seamlessly integrated into our client SDKs (for both JS and Python), making it super easy to use a wide variety of models with your preferred providers.
+We're thrilled to share that **Featherless AI** and **Groq** are now supported Inference Providers on the Hugging Face Hub!
+Featherless AI and Groq join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. Inference Providers are also seamlessly integrated into our client SDKs (for both JS and Python), making it super easy to use a wide variety of models with your preferred providers.
 
 [Featherless AI](https://featherless.ai) supports a wide variety of text and conversational models, including the latest open-source models from DeepSeek, Meta, Google, Qwen, and much more.
 
 Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers often offer either low-cost access to a limited set of models, or an unlimited range of models with users managing servers and the associated operating costs. Featherless provides the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the [models page](https://huggingface.co/models?inference_provider=featherless-ai&sort=trending).
 
-We're quite excited to see what you'll build with this new provider!
+[Groq](https://groq.com) offers a fast Inference API powered by the LPU (Language Processing Unit), their own AI hardware processor built for instant speed, scalability, and low latency. By optimizing compute density, memory bandwidth, and scalability, LPUs overcome performance bottlenecks and deliver ultra-low-latency inference, unlocking a new class of use cases.
+
+Take advantage of Groq for fast AI inference on leading openly available models from providers like Meta, DeepSeek, Qwen, Mistral, Google, OpenAI, and more.
+
+We're quite excited to see what you'll build with these new providers!
 
 Read more about how to use Featherless as an Inference Provider on its dedicated [documentation page](https://huggingface.co/docs/inference-providers/providers/featherless-ai).
+Read more about how to use Groq as an Inference Provider on its dedicated [documentation page](https://huggingface.co/docs/inference-providers/providers/groq).
 
 ## How it works
 
@@ -92,20 +97,22 @@ print(completion.choices[0].message)
 
 #### from JS using @huggingface/inference
 
+The following example shows how to run Qwen QwQ-32B with Groq as the inference provider. You can use a [Hugging Face token](https://huggingface.co/settings/tokens) for automatic routing through Hugging Face, or your own Groq Cloud API key if you have one.
+
 ```js
 import { HfInference } from "@huggingface/inference";
 
 const client = new HfInference("xxxxxxxxxxxxxxxxxxxxxxxx");
 
 const chatCompletion = await client.chatCompletion({
-    model: "deepseek-ai/DeepSeek-R1-0528",
+    model: "Qwen/QwQ-32B",
     messages: [
         {
             role: "user",
             content: "What is the capital of France?"
         }
     ],
-    provider: "featherless-ai",
+    provider: "groq",
 });
 
 console.log(chatCompletion.choices[0].message);
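
The commit only touches the JS snippet; the matching Python example, whose last line `print(completion.choices[0].message)` appears in the hunk header above, is not shown in this diff. As a rough sketch of what that call looks like with the `huggingface_hub` client, assuming the same model and provider as the JS example:

```python
# Minimal sketch (not the exact snippet from the blog post): the Python
# counterpart of the JS example above, using huggingface_hub's InferenceClient.
from huggingface_hub import InferenceClient

# provider="groq" (or "featherless-ai") selects the Inference Provider;
# pass a Hugging Face token for automatic routing, or your own provider key.
client = InferenceClient(
    provider="groq",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx",
)

completion = client.chat.completions.create(
    model="Qwen/QwQ-32B",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)

print(completion.choices[0].message)
```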
