
Commit 6b3e471

benank and SBrandeis authored
Add Groq to provider docs (#1777)
* Add Groq to provider docs
* update supported tasks
* update md file
* last update

---------

Co-authored-by: SBrandeis <[email protected]>
1 parent 0fa2338 commit 6b3e471

14 files changed, +297 -233 lines changed

docs/inference-providers/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -23,6 +23,8 @@
     title: Featherless AI
   - local: providers/fireworks-ai
     title: Fireworks
+  - local: providers/groq
+    title: Groq
   - local: providers/hyperbolic
     title: Hyperbolic
   - local: providers/hf-inference

docs/inference-providers/index.md

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ Here is the complete list of partners integrated with Inference Providers, and t
 | [Fal AI](./providers/fal-ai) | | | |||
 | [Featherless AI](./providers/featherless-ai) || | | | |
 | [Fireworks](./providers/fireworks-ai) ||| | | |
+| [Groq](./providers/groq) || | | | |
 | [HF Inference](./providers/hf-inference) ||||| |
 | [Hyperbolic](./providers/hyperbolic) ||| | | |
 | [Nebius](./providers/nebius) ||||| |

docs/inference-providers/providers/cohere.md

Lines changed: 1 addition & 1 deletion
@@ -56,6 +56,6 @@ Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).

 <InferenceSnippet
     pipeline=image-text-to-text
-    providersMapping={ {"cohere":{"modelId":"CohereLabs/aya-vision-32b","providerModelId":"c4ai-aya-vision-32b"} } }
+    providersMapping={ {"cohere":{"modelId":"CohereLabs/aya-vision-8b","providerModelId":"c4ai-aya-vision-8b"} } }
 conversational />

docs/inference-providers/providers/groq.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
<!---
WARNING

This markdown file has been generated from a script. Please do not edit it directly.

### Template

If you want to update the content related to groq's description, please edit the template file under `https://github.com/huggingface/hub-docs/tree/main/scripts/inference-providers/templates/providers/groq.handlebars`.

### Logos

If you want to update groq's logo, upload a file by opening a PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/inference-providers/logos. Ping @wauplin and @celinah on the PR to let them know you uploaded a new logo.
Logos must be in .png format and be named `groq-light.png` and `groq-dark.png`. Visit https://huggingface.co/settings/theme to switch between light and dark mode and check that the logos are displayed correctly.

### Generation script

For more details, check out the `generate.ts` script: https://github.com/huggingface/hub-docs/blob/main/scripts/inference-providers/scripts/generate.ts.
--->

# Groq

<div class="flex justify-center">
    <a href="https://groq.com/" target="_blank">
        <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/logos/groq-light.png"/>
        <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/logos/groq-dark.png"/>
    </a>
</div>

<div class="flex">
    <a href="https://huggingface.co/groq" target="_blank">
        <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg.svg"/>
        <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg-dark.svg"/>
    </a>
</div>

Groq is fast AI inference. Their groundbreaking LPU technology delivers record-setting performance and efficiency for GenAI models. With custom chips specifically designed for AI inference workloads and a deterministic, software-first approach, Groq eliminates the bottlenecks of conventional hardware to enable real-time AI applications with predictable latency and exceptional throughput so developers can build fast.

For latest pricing, visit our [pricing page](https://groq.com/pricing/).

## Resources
- **Website**: https://groq.com/
- **Documentation**: https://console.groq.com/docs
- **Community Forum**: https://community.groq.com/
- **X**: [@GroqInc](https://x.com/GroqInc)
- **LinkedIn**: [Groq](https://www.linkedin.com/company/groq/)
- **YouTube**: [Groq](https://www.youtube.com/@GroqInc)

## Supported tasks

### Chat Completion (LLM)

Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).

<InferenceSnippet
    pipeline=text-generation
    providersMapping={ {"groq":{"modelId":"Qwen/Qwen3-32B","providerModelId":"qwen/qwen3-32b"} } }
conversational />
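On the rendered docs page the snippet above expands to ready-to-run client code. As a rough, non-authoritative sketch, the equivalent call through the `huggingface_hub` Python client could look like this (it assumes a recent `huggingface_hub` release with `provider="groq"` support and an `HF_TOKEN` with Inference Providers access):

```python
import os

from huggingface_hub import InferenceClient

# Route the request through Inference Providers to Groq; the Hub model ID
# "Qwen/Qwen3-32B" is mapped to Groq's "qwen/qwen3-32b" on the server side.
client = InferenceClient(provider="groq", api_key=os.environ["HF_TOKEN"])

completion = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "In one sentence, what is an LPU?"}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```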
### Chat Completion (VLM)

Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).

<InferenceSnippet
    pipeline=image-text-to-text
    providersMapping={ {"groq":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"} } }
conversational />
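For the VLM snippet, the same client call can carry an image alongside the text prompt using an OpenAI-style multimodal message; a minimal sketch, with a placeholder image URL:

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq", api_key=os.environ["HF_TOKEN"])

# One text part plus one image_url part in a single user message.
completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                # Placeholder URL: replace with a real, publicly reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```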

docs/inference-providers/providers/hf-inference.md

Lines changed: 32 additions & 49 deletions
@@ -38,163 +38,146 @@ If you are interested in deploying models to a dedicated and autoscaling infrast

Formatting-only changes to the "Supported tasks" section: the extra blank line before each task heading was removed, the indentation of the <InferenceSnippet> attributes was normalized, and a trailing line at the end of the file was dropped. The snippet contents are unchanged; for reference, the affected sections and their hf-inference model mappings are:

- Automatic Speech Recognition ([task page](../tasks/automatic_speech_recognition)): pipeline=automatic-speech-recognition, model openai/whisper-large-v3
- Chat Completion (LLM) ([task page](../tasks/chat-completion)): pipeline=text-generation, model sarvamai/sarvam-m (conversational)
- Chat Completion (VLM) ([task page](../tasks/chat-completion)): pipeline=image-text-to-text, model meta-llama/Llama-3.2-11B-Vision-Instruct (conversational)
- Feature Extraction ([task page](../tasks/feature_extraction)): pipeline=feature-extraction, model intfloat/multilingual-e5-large-instruct
- Fill Mask ([task page](../tasks/fill_mask)): pipeline=fill-mask, model google-bert/bert-base-uncased
- Image Classification ([task page](../tasks/image_classification)): pipeline=image-classification, model Falconsai/nsfw_image_detection
- Image Segmentation ([task page](../tasks/image_segmentation)): pipeline=image-segmentation, model mattmdjaga/segformer_b2_clothes
- Object Detection ([task page](../tasks/object_detection)): pipeline=object-detection, model facebook/detr-resnet-50
- Question Answering ([task page](../tasks/question_answering)): pipeline=question-answering, model deepset/roberta-base-squad2
- Summarization ([task page](../tasks/summarization)): pipeline=summarization, model facebook/bart-large-cnn
- Table Question Answering ([task page](../tasks/table_question_answering)): pipeline=table-question-answering, model google/tapas-base-finetuned-wtq
- Text Classification ([task page](../tasks/text_classification)): pipeline=text-classification, model tabularisai/multilingual-sentiment-analysis
- Text Generation ([task page](../tasks/text_generation)): pipeline=text-generation, model sarvamai/sarvam-m
- Text To Image ([task page](../tasks/text_to_image)): pipeline=text-to-image, model black-forest-labs/FLUX.1-dev
- Token Classification ([task page](../tasks/token_classification)): pipeline=token-classification, model dslim/bert-base-NER
- Translation ([task page](../tasks/translation)): pipeline=translation, model google-t5/t5-base
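Each of these task sections corresponds to a helper method on `InferenceClient`. For example, the Automatic Speech Recognition entry maps roughly onto the sketch below (placeholder file path, recent `huggingface_hub` assumed):

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

# Transcribe a local audio file with Whisper; "sample.flac" is a placeholder path.
result = client.automatic_speech_recognition("sample.flac", model="openai/whisper-large-v3")
print(result.text)
```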

docs/inference-providers/providers/nscale.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).

 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"nscale":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"} } }
+    providersMapping={ {"nscale":{"modelId":"meta-llama/Llama-3.1-8B-Instruct","providerModelId":"meta-llama/Llama-3.1-8B-Instruct"} } }
 conversational />

docs/inference-providers/providers/replicate.md

Lines changed: 1 addition & 1 deletion
@@ -54,6 +54,6 @@ Find out more about Text To Video [here](../tasks/text_to_video).

 <InferenceSnippet
     pipeline=text-to-video
-    providersMapping={ {"replicate":{"modelId":"Wan-AI/Wan2.1-T2V-14B","providerModelId":"wavespeedai/wan-2.1-t2v-480p"} } }
+    providersMapping={ {"replicate":{"modelId":"Lightricks/LTX-Video","providerModelId":"lightricks/ltx-video:8c47da666861d081eeb4d1261853087de23923a268a69b63febdf5dc1dee08e4"} } }
 />
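For context, the updated text-to-video snippet corresponds roughly to a call like the sketch below (assuming `InferenceClient.text_to_video` returns raw video bytes, and using a made-up prompt):

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="replicate", api_key=os.environ["HF_TOKEN"])

# The Hub ID "Lightricks/LTX-Video" is resolved to Replicate's versioned
# "lightricks/ltx-video:<hash>" model ID by the provider mapping.
video = client.text_to_video(
    "A red fox trotting through fresh snow at dusk",  # made-up prompt
    model="Lightricks/LTX-Video",
)

with open("ltx_video_sample.mp4", "wb") as f:
    f.write(video)
```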

docs/inference-providers/providers/together.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).

 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"together":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1"} } }
+    providersMapping={ {"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"} } }
 conversational />

@@ -64,7 +64,7 @@ Find out more about Text Generation [here](../tasks/text_generation).

 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"together":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1"} } }
+    providersMapping={ {"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"} } }
 />
docs/inference-providers/tasks/chat-completion.md

Lines changed: 3 additions & 2 deletions
@@ -25,6 +25,7 @@ This is a subtask of [`text-generation`](https://huggingface.co/docs/inference-p
 - [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B): Smaller variant of one of the most powerful models.
 - [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
 - [microsoft/phi-4](https://huggingface.co/microsoft/phi-4): Powerful text generation model by Microsoft.
+- [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M): Strong conversational model that supports very long instructions.
 - [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct): Text generation model used to write code.
 - [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1): Powerful reasoning based open large language model.

@@ -60,7 +61,7 @@ The API supports:

 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"cerebras":{"modelId":"Qwen/Qwen3-32B","providerModelId":"qwen-3-32b"},"cohere":{"modelId":"CohereLabs/c4ai-command-r-plus","providerModelId":"command-r-plus-04-2024"},"featherless-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"fireworks-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"accounts/fireworks/models/deepseek-r1-0528"},"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"},"hyperbolic":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"nebius":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"novita":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek/deepseek-r1-0528"},"nscale":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"},"sambanova":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"DeepSeek-R1-0528"},"together":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
+    providersMapping={ {"cerebras":{"modelId":"Qwen/Qwen3-32B","providerModelId":"qwen-3-32b"},"cohere":{"modelId":"CohereLabs/c4ai-command-r-plus","providerModelId":"command-r-plus-04-2024"},"featherless-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"fireworks-ai":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"accounts/fireworks/models/deepseek-r1-0528"},"groq":{"modelId":"Qwen/Qwen3-32B","providerModelId":"qwen/qwen3-32b"},"hf-inference":{"modelId":"sarvamai/sarvam-m","providerModelId":"sarvamai/sarvam-m"},"hyperbolic":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"nebius":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek-ai/DeepSeek-R1-0528"},"novita":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"deepseek/deepseek-r1-0528"},"nscale":{"modelId":"meta-llama/Llama-3.1-8B-Instruct","providerModelId":"meta-llama/Llama-3.1-8B-Instruct"},"sambanova":{"modelId":"deepseek-ai/DeepSeek-R1-0528","providerModelId":"DeepSeek-R1-0528"},"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
 conversational />

@@ -70,7 +71,7 @@ conversational />

 <InferenceSnippet
     pipeline=image-text-to-text
-    providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"llama-4-scout-17b-16e-instruct"},"cohere":{"modelId":"CohereLabs/aya-vision-32b","providerModelId":"c4ai-aya-vision-32b"},"featherless-ai":{"modelId":"allura-org/Gemma-3-Glitter-27B","providerModelId":"allura-org/Gemma-3-Glitter-27B"},"fireworks-ai":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"accounts/fireworks/models/llama4-scout-instruct-basic"},"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"nscale":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"},"sambanova":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"Llama-4-Scout-17B-16E-Instruct"},"together":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"}} }
+    providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"llama-4-scout-17b-16e-instruct"},"cohere":{"modelId":"CohereLabs/aya-vision-8b","providerModelId":"c4ai-aya-vision-8b"},"featherless-ai":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it"},"fireworks-ai":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"accounts/fireworks/models/llama4-scout-instruct-basic"},"groq":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"nscale":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"},"sambanova":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"Llama-4-Scout-17B-16E-Instruct"},"together":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"}} }
 conversational />