@@ -91,6 +91,7 @@ google/gemma-3-27b-it:bf16
 ```
 | Attribute | Value |
 | -----------| -------|
+| Supports parallel tool calling | No |
 | Supported image formats | PNG, JPEG, WEBP, and non-animated GIFs |
 | Maximum image resolution (pixels) | 896x896 |
 | Token dimension (pixels)| 56x56 |
@@ -103,6 +104,7 @@ This model was optimized to have a dense knowledge and faster tokens throughput

 | Attribute | Value |
 | -----------| -------|
+| Supports parallel tool calling | Yes |
 | Supported images formats | PNG, JPEG, WEBP, and non-animated GIFs |
 | Maximum image resolution (pixels) | 1540x1540 |
 | Token dimension (pixels)| 28x28 |
@@ -123,6 +125,7 @@ It can analyze images and offer insights from visual content alongside text.

 | Attribute | Value |
 | -----------| -------|
+| Supports parallel tool calling | Yes |
 | Supported images formats | PNG, JPEG, WEBP, and non-animated GIFs |
 | Maximum image resolution (pixels) | 1024x1024 |
 | Token dimension (pixels)| 16x16 |
@@ -148,6 +151,10 @@ allenai/molmo-72b-0924:fp8
 Released December 6, 2024, Meta’s Llama 3.3 70b is a fine-tune of the [Llama 3.1 70b](/managed-inference/reference-content/model-catalog/#llama-31-70b-instruct) model.
 This model is still text-only (text in/text out). However, Llama 3.3 was designed to approach the performance of Llama 3.1 405B on some applications.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | Yes |
+
 #### Model name
 ```
 meta/llama-3.3-70b-instruct:fp8
@@ -158,6 +165,10 @@ meta/llama-3.3-70b-instruct:bf16
 Released July 23, 2024, Meta’s Llama 3.1 is an iteration of the open-access Llama family.
 Llama 3.1 was designed to match the best proprietary models and outperform many of the available open source on common industry benchmarks.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | Yes |
+
 #### Model names
 ```
 meta/llama-3.1-70b-instruct:fp8
@@ -168,6 +179,10 @@ meta/llama-3.1-70b-instruct:bf16
 Released July 23, 2024, Meta’s Llama 3.1 is an iteration of the open-access Llama family.
 Llama 3.1 was designed to match the best proprietary models and outperform many of the available open source on common industry benchmarks.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | Yes |
+
 #### Model names
 ```
 meta/llama-3.1-8b-instruct:fp8
@@ -197,6 +212,10 @@ nvidia/llama-3.1-nemotron-70b-instruct:fp8
 Released January 21, 2025, Deepseek’s R1 Distilled Llama 70B is a distilled version of the Llama model family based on Deepseek R1.
 DeepSeek R1 Distill Llama 70B is designed to improve the performance of Llama models on reasoning use cases such as mathematics and coding tasks.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | No |
+
 #### Model name
 ```
 deepseek/deepseek-r1-distill-llama-70b:fp8
@@ -247,6 +266,10 @@ Mistral Nemo is a state-of-the-art transformer model of 12B parameters, built by
 This model is open-weight and distributed under the Apache 2.0 license.
 It was trained on a large proportion of multilingual and code data.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | Yes |
+
 #### Model name
 ```
 mistral/mistral-nemo-instruct-2407:fp8
@@ -302,6 +325,10 @@ kyutai/moshika-0.1-8b:fp8
 Qwen2.5-coder is your intelligent programming assistant familiar with more than 40 programming languages.
 With Qwen2.5-coder deployed at Scaleway, your company can benefit from code generation, AI-assisted code repair, and code reasoning.

+| Attribute | Value |
+| -----------| -------|
+| Supports parallel tool calling | No |
+
 #### Model name
 ```
 qwen/qwen2.5-coder-32b-instruct:int8
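The "Supports parallel tool calling" rows added throughout this diff indicate whether a model can return several tool calls in a single assistant turn, rather than one call per round trip. As a minimal sketch, assuming an OpenAI-compatible chat-completions schema: the `response` payload below is hand-written for illustration (not a real API reply), and `get_weather`/`get_time` are hypothetical tools.

```python
import json

def run_tool(name, arguments):
    # Hypothetical local tools; a real client would dispatch to the
    # functions it registered with the model via the `tools` parameter.
    tools = {
        "get_weather": lambda city: f"sunny in {city}",
        "get_time": lambda city: f"12:00 in {city}",
    }
    return tools[name](**json.loads(arguments))

# Hand-written stand-in for an OpenAI-compatible chat completion in
# which the model requested two tools at once (parallel tool calling).
response = {
    "choices": [{
        "message": {
            "tool_calls": [
                {"id": "call_1", "function": {"name": "get_weather",
                                              "arguments": '{"city": "Paris"}'}},
                {"id": "call_2", "function": {"name": "get_time",
                                              "arguments": '{"city": "Paris"}'}},
            ]
        }
    }]
}

# With parallel tool calling, every call arrives in one message and can
# be resolved in a single pass before sending the results back.
results = {
    call["id"]: run_tool(call["function"]["name"], call["function"]["arguments"])
    for call in response["choices"][0]["message"]["tool_calls"]
}
print(results)
```

For a model whose table says "No", the client should expect at most one entry in `tool_calls` per assistant message and loop one call at a time instead.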