diff --git a/pages/generative-apis/reference-content/supported-models.mdx b/pages/generative-apis/reference-content/supported-models.mdx
index 7c203134f5..c3034f9452 100644
--- a/pages/generative-apis/reference-content/supported-models.mdx
+++ b/pages/generative-apis/reference-content/supported-models.mdx
@@ -14,22 +14,23 @@ Our API supports the most popular models for [Chat](/generative-apis/how-to/quer
 | Provider | Model string | Context window (Tokens) | Maximum output (Tokens)| License | Model card |
 |-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
 | Google (Preview) | `gemma-3-27b-it` | 40k | 8192 | [Gemma](https://ai.google.dev/gemma/terms) | [HF](https://huggingface.co/google/gemma-3-27b-it) |
-| Mistral | `mistral-small-3.1-24b-instruct-2503` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) |
+| Mistral | `mistral-small-3.2-24b-instruct-2506` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) |

 ## Chat models

 | Provider | Model string | Context window (Tokens) | Maximum output (Tokens)| License | Model card |
 |-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
+| OpenAI | `gpt-oss-120b` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/openai/gpt-oss-120b) |
 | Mistral | `devstral-small-2505` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Devstral-Small-2505) |
 | Meta | `llama-3.3-70b-instruct` | 100k | 4096 | [Llama 3.3 Community](https://www.llama.com/llama3_3/license/) | [HF](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) |
 | Meta | `llama-3.1-8b-instruct` | 128k | 16384 | [Llama 3.1 Community](https://llama.meta.com/llama3_1/license/) | [HF](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) |
 | Mistral | `mistral-nemo-instruct-2407` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) |
 | Qwen | `qwen3-235b-a22b-instruct-2507` | 40k | 4096 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
-| Qwen | `qwen2.5-coder-32b-instruct` | 32k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) |
+| Qwen | `qwen3-coder-30b-a3b-instruct` | 128k | 8192 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) |
 | DeepSeek | `deepseek-r1-distill-llama-70b` | 32k | 4096 | [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md) | [HF](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |

- If you are unsure which chat model to use, we currently recommend Mistral Small 3.1 24B Instruct (`mistral-small-3.1-24b-instruct-2503`) to get started.
+ If you are unsure which chat model to use, we currently recommend Mistral Small 3.2 24B Instruct (`mistral-small-3.2-24b-instruct-2506`) to get started.
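+
+ As an illustrative sketch only (not an official quickstart), a minimal Chat Completions request to the recommended model could look as follows; the `https://api.scaleway.ai/v1` base URL and the `SCW_SECRET_KEY` environment variable are assumptions to adapt to your own setup:
+
+ ```python
+ import os
+ from openai import OpenAI
+
+ # Generative APIs expose an OpenAI-compatible endpoint (base URL assumed here).
+ client = OpenAI(
+     base_url="https://api.scaleway.ai/v1",
+     api_key=os.environ["SCW_SECRET_KEY"],  # assumed variable holding your IAM API key
+ )
+
+ response = client.chat.completions.create(
+     model="mistral-small-3.2-24b-instruct-2506",
+     messages=[{"role": "user", "content": "Summarize the benefits of open-weight models in two sentences."}],
+ )
+ print(response.choices[0].message.content)
+ ```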

 ## Vision models

@@ -61,7 +62,8 @@ Deprecated models should not be queried anymore.
 We recommend to use newer models.
 | Provider | Model string | End of Life (EOL) date
 |-----------------|-----------------|-----------------|
-| Meta | `llama-3.1-70b-instruct` | 25th May, 2025 |
+| Mistral | `mistral-small-3.1-24b-instruct-2503` | 14th November, 2025 |
+| Qwen | `qwen2.5-coder-32b-instruct` | 14th November, 2025 |

-Llama 3.1 70B is now deprecated. The new Llama 3.3 70B is available with similar or better performance in most use cases.
+Mistral Small 3.1 24B and Qwen2.5 Coder 32B are now deprecated. The newer Mistral Small 3.2 24B (`mistral-small-3.2-24b-instruct-2506`) and Qwen3 Coder 30B A3B (`qwen3-coder-30b-a3b-instruct`) are available with similar or better performance in most use cases.

@@ -74,4 +76,5 @@ These models are not accessible anymore from Generative APIs. They can still how
 | Provider | Model string | EOL date
 |-----------------|-----------------|-----------------|
+| Meta | `llama-3.1-70b-instruct` | 25th May, 2025 |
 | SBERT | `sentence-t5-xxl` | 26 February, 2025 |
diff --git a/pages/managed-inference/reference-content/model-catalog.mdx b/pages/managed-inference/reference-content/model-catalog.mdx
index c985489aac..8d086516d9 100644
--- a/pages/managed-inference/reference-content/model-catalog.mdx
+++ b/pages/managed-inference/reference-content/model-catalog.mdx
@@ -16,6 +17,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | Model name | Provider | Maximum Context length (tokens) | Modalities | Compatible Instances (Max Context in tokens\*) | License |
 |------------|----------|--------------|------------|-----------|---------|
+| [`gpt-oss-120b`](#gpt-oss-120b) | OpenAI | 128k | Text | H100 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`qwen3-235b-a22b-instruct-2507`](#qwen3-235b-a22b-instruct-2507) | Qwen | 40k | Text | H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`gemma-3-27b-it`](#gemma-3-27b-it) | Google | 40k | Text, Vision | H100, H100-2 | [Gemma](https://ai.google.dev/gemma/terms) |
 | [`llama-3.3-70b-instruct`](#llama-33-70b-instruct) | Meta | 128k | Text | H100 (15k), H100-2 | [Llama 3.3 Community](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) |
@@ -26,6 +27,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | [`deepseek-r1-distill-70b`](#deepseek-r1-distill-llama-70b) | Deepseek | 128k | Text | H100 (13k), H100-2 | [MIT](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/blob/main/LICENSE) and [Llama 3.3 Community](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE) |
 | [`deepseek-r1-distill-8b`](#deepseek-r1-distill-llama-8b) | Deepseek | 128k | Text | L4 (90k), L40S, H100, H100-2 | [MIT](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/blob/main/LICENSE) and [Llama 3.1 Community](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE) |
 | [`mistral-7b-instruct-v0.3`](#mistral-7b-instruct-v03) | Mistral | 32k | Text | L4, L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`mistral-small-3.2-24b-instruct-2506`](#mistral-small-32-24b-instruct-2506) | Mistral | 128k | Text, Vision | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`mistral-small-3.1-24b-instruct-2503`](#mistral-small-31-24b-instruct-2503) | Mistral | 128k | Text, Vision | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`mistral-small-24b-instruct-2501`](#mistral-small-24b-instruct-2501) | Mistral | 32k | Text | L40S (20k), H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`mistral-nemo-instruct-2407`](#mistral-nemo-instruct-2407) | Mistral | 128k | Text | L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
@@ -36,6 +38,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | [`moshika-0.1-8b`](#moshika-01-8b) | Kyutai | 4k | Audio to Audio| L4, H100 | [CC-BY-4.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/cc-by-4.0.md) |
 | [`pixtral-12b-2409`](#pixtral-12b-2409) | Mistral | 128k | Text, Vision | L40S (50k), H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`molmo-72b-0924`](#molmo-72b-0924) | Allen AI | 50k | Text, Vision | H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) and [Twonyi Qianwen license](https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE)|
+| [`qwen3-coder-30b-a3b-instruct`](#qwen3-coder-30b-a3b-instruct) | Qwen | 128k | Code | L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`qwen2.5-coder-32b-instruct`](#qwen25-coder-32b-instruct) | Qwen | 32k | Code | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | [`bge-multilingual-gemma2`](#bge-multilingual-gemma2) | BAAI | 4k | Embeddings | L4, L40S, H100, H100-2 | [Gemma](https://ai.google.dev/gemma/terms) |
 | [`sentence-t5-xxl`](#sentence-t5-xxl) | Sentence transformers | 512 | Embeddings | L4 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
@@ -45,6 +48,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 ## Models feature summary
 | Model name | Structured output supported | Function calling | Supported languages |
 | --- | --- | --- | --- |
+| `gpt-oss-120b` | Yes | Yes | English |
 | `qwen3-235b-a22b-instruct-2507` | Yes | Yes | English, French, German, Chinese, Japanese, Korean and 113 additional languages and dialects |
 | `gemma-3-27b-it` | Yes | Partial | English, Chinese, Japanese, Korean and 31 additional languages |
 | `llama-3.3-70b-instruct` | Yes | Yes | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
@@ -55,6 +59,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | `deepseek-r1-distill-llama-70B` | Yes | Yes | English, Chinese |
 | `deepseek-r1-distill-llama-8B` | Yes | Yes | English, Chinese |
 | `mistral-7b-instruct-v0.3` | Yes | Yes | English |
+| `mistral-small-3.2-24b-instruct-2506` | Yes | Yes | English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi |
 | `mistral-small-3.1-24b-instruct-2503` | Yes | Yes | English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi |
 | `mistral-small-24b-instruct-2501` | Yes | Yes | English, French, German, Dutch, Spanish, Italian, Polish, Portuguese, Chinese, Japanese, Korean |
 | `mistral-nemo-instruct-2407` | Yes | Yes | English, French, German, Spanish, Italian, Portuguese, Russian, Chinese, Japanese |
@@ -65,6 +70,7 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | `moshika-0.1-8b` | No | No | English |
 | `pixtral-12b-2409` | Yes | Yes | English |
 | `molmo-72b-0924` | Yes | No | English |
+| `qwen3-coder-30b-a3b-instruct` | Yes | Yes | English, French, German, Chinese, Japanese, Korean and 113 additional languages and dialects |
 | `qwen2.5-coder-32b-instruct` | Yes | Yes | English, French, Spanish, Portuguese, German, Italian, Russian, Chinese, Japanese, Korean, Vietnamese, Thai, Arabic and 16 additional languages. |
 | `bge-multilingual-gemma2` | No | No | English, French, Chinese, Japanese, Korean |
 | `sentence-t5-xxl` | No | No | English |
@@ -98,6 +104,22 @@ google/gemma-3-27b-it:bf16
 - Pan & Scan is not yet supported for Gemma 3 images. This means that high resolution images are currently resized to 896x896 resolution that may generate artifacts and lead to a lower accuracy.

+### Mistral-small-3.2-24b-instruct-2506
+Mistral-small-3.2-24b-instruct-2506 is an improved version of Mistral-small-3.1 that performs better at tool calling.
+This model was optimized for dense knowledge and faster token throughput relative to its size.
+
+| Attribute | Value |
+|-----------|-------|
+| Supports parallel tool calling | Yes |
+| Supported image formats | PNG, JPEG, WEBP, and non-animated GIFs |
+| Maximum image resolution (pixels) | 1540x1540 |
+| Token dimension (pixels) | 28x28 |
+
+#### Model names
+```
+mistral/mistral-small-3.2-24b-instruct-2506:fp8
+```
+
 ### Mistral-small-3.1-24b-instruct-2503
 Mistral-small-3.1-24b-instruct-2503 is a model developed by Mistral to perform text processing and image analysis on many languages.
 This model was optimized to have a dense knowledge and faster tokens throughput compared to its size.
@@ -112,6 +134,7 @@ This model was optimized to have a dense knowledge and faster tokens throughput
 #### Model names
 ```
 mistral/mistral-small-3.1-24b-instruct-2503:bf16
+mistral/mistral-small-3.1-24b-instruct-2503:fp8
 ```

 - Bitmap (or raster) image formats, meaning storing images as grids of individual pixels, are supported. Vector image formats (SVG, PSD) are not supported, neither PDFs nor videos.
@@ -147,16 +170,31 @@ allenai/molmo-72b-0924:fp8

 ## Text models

 ### Qwen3-235b-a22b-instruct-2507
 Released July 23, 2025, Qwen 3 235B A22B is an open-weight model, competitive in multiple benchmarks (such as [LM Arena for text use cases](https://lmarena.ai/leaderboard)) compared to Gemini 2.5 Pro and GPT4.5.

 | Attribute | Value |
 |-----------|-------|
 | Supports parallel tool calling | Yes |
+
 #### Model name
 ```
 qwen/qwen3-235b-a22b-instruct-2507:awq
 ```
+
+### Gpt-oss-120b
+Released August 5, 2025, GPT OSS 120B is an open-weight model providing high throughput and strong reasoning capabilities.
+Currently, this model should be queried through the Responses API, as the Chat Completions API does not yet support tool calling for this model.
+
+| Attribute | Value |
+|-----------|-------|
+| Supports parallel tool calling | Yes |
+
+#### Model name
+```
+openai/gpt-oss-120b:fp4
+```
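+
+As an illustrative sketch only (not an official example), a Responses API call with a function tool could look as follows; the deployment endpoint placeholder and the `get_weather` tool are assumptions:
+
+```python
+from openai import OpenAI
+
+# The base URL below is a placeholder for your Managed Inference deployment endpoint.
+client = OpenAI(
+    base_url="https://<your-deployment-endpoint>/v1",
+    api_key="<your-iam-api-key>",
+)
+
+response = client.responses.create(
+    model="gpt-oss-120b",
+    input="What is the weather like in Paris today?",
+    tools=[
+        {
+            "type": "function",
+            "name": "get_weather",  # hypothetical tool, shown for illustration
+            "description": "Get the current weather for a city.",
+            "parameters": {
+                "type": "object",
+                "properties": {"city": {"type": "string"}},
+                "required": ["city"],
+            },
+        }
+    ],
+)
+print(response.output)
+```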

 ### Llama-3.3-70b-instruct
@@ -333,6 +371,19 @@ kyutai/moshika-0.1-8b:fp8

 ## Code models

+### Qwen3-coder-30b-a3b-instruct
+Qwen3-coder is an improved version of Qwen2.5-coder with better accuracy and throughput.
+Thanks to its A3B architecture, only a subset of its weights is activated for a given generation, leading to much faster input and output token processing, which makes it ideal for code completion.
+
+| Attribute | Value |
+|-----------|-------|
+| Supports parallel tool calling | Yes |
+
+#### Model name
+```
+qwen/qwen3-coder-30b-a3b-instruct:fp8
+```
+
 ### Qwen2.5-coder-32b-instruct
 Qwen2.5-coder is your intelligent programming assistant familiar with more than 40 programming languages.
 With Qwen2.5-coder deployed at Scaleway, your company can benefit from code generation, AI-assisted code repair, and code reasoning.
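+
+For illustration, a minimal code-generation request to a Managed Inference deployment of this model could look like the sketch below; the endpoint placeholder and API key handling are assumptions to adapt to your deployment:
+
+```python
+from openai import OpenAI
+
+# Placeholder for your Managed Inference deployment endpoint.
+client = OpenAI(
+    base_url="https://<your-deployment-endpoint>/v1",
+    api_key="<your-iam-api-key>",
+)
+
+completion = client.chat.completions.create(
+    model="qwen2.5-coder-32b-instruct",
+    messages=[
+        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
+    ],
+)
+print(completion.choices[0].message.content)
+```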