
Commit ccda4e5

fix(inference): update licenses
1 parent 2dfb5fe commit ccda4e5

File tree

1 file changed (+26 -21 lines)


pages/managed-inference/reference-content/model-catalog.mdx

Lines changed: 26 additions & 21 deletions
@@ -16,31 +16,33 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 
 ## Models technical summary
 
-| Model name | Provider | Context length | Modalities | Instances | License |
+| Model name | Provider | Context length (tokens) | Modalities | Instances | License |
 |------------|----------|--------------|------------|-----------|---------|
-| [`mixtral-8x7b-instruct-v0.1`](#mixtral-8x7b-instruct-v01) | Mistral | 32k tokens | Text | H100 | Apache 2.0 |
-| [`llama-3.1-70b-instruct`](#llama-31-70b-instruct) | Meta | up to 128k tokens | Text | H100, H100-2 | Llama 3 community |
-| [`llama-3.1-8b-instruct`](#llama-31-8b-instruct) | Meta | up to 128k tokens | Text | L4, L40S, H100, H100-2 | Llama 3 community |
-| [`llama-3-70b-instruct`](#llama-3-70b-instruct) | Meta | 8k tokens | Text | H100 | Llama 3 community |
-| [`llama-3.3-70b-instruct`](#llama-33-70b-instruct) | Meta | up to 131k tokens | Text | H100, H100-2 | Llama 3 community |
-| [`llama-3-nemotron-70b`](#llama-31-nemotron-70b-instruct) | Nvidia | up to 128k tokens | Text | H100, H100-2 | Lllama 3.3 community |
-| [`deepseek-r1-distill-70b`](#deepseek-r1-distill-llama-70b) | Deepseek | up to 131k tokens | Text | H100, H100-2 | MIT |
-| [`deepseek-r1-distill-8b`](#deepseek-r1-distill-llama-8b) | Deepseek | up to 131k tokens | Text | L4, L40S, H100 | Apache 2.0 |
-| [`mistral-7b-instruct-v0.3`](#mistral-7b-instruct-v03) | Mistral | 32k tokens | Text | L4, L40S, H100, H100-1 | Apache 2.0 |
-| [`mistral-small-24b-instruct-2501`](#mistral-small-24b-base-2501) | Mistral | 32k tokens | Text | L40S, H100, H100-2 | Apache 2.0 |
-| [`mistral-nemo-instruct-2407`](#mistral-nemo-instruct-2407) | Mistral | 128k | Text | L40S, H100, H100-2 | Apache 2.0 |
-| [`moshiko-0.1-8b`](#moshiko-01-8b) | Kyutai | 4,096 tokens | Text | L4, H100 | Apache 2.0 |
-| [`moshika-0.1-8b`](#moshika-01-8b) | Kyutai | 4,096 tokens | Text | L4, H100 | Apache 2.0 |
-| [`wizardlm-70b-v1.0`](#wizardlm-70b-v10) | WizardLM | 4,096 tokens | Text | H100, H100-2 | Lllama 2 community |
-| [`pixtral-12b-2409`](#pixtral-12b-2409) | Mistral | 128k tokens | Multimodal | L40S, H100, H100-2 | Apache 2.0 |
-| [`molmo-72b-0924`](#molmo-72b-0924) | Allen AI | 50k | Multimodal | H100-2 | Apache 2.0 |
-| [`qwen2.5-coder-32b-instruct`](#qwen25-coder-32b-instruct) | Qwen | up to 32k | Code | H100, H100-2 | Apache 2.0 |
-| [`sentence-t5-xxl`](#sentence-t5-xxl) | Sentence transformers | 512 tokens | Embeddings | L4 | Apache 2.0 |
+| [`gemma-3-27b-it`](#gemma-3-27b-it) | Google | 32k | Text | H100, H100-2 | [Gemma](https://ai.google.dev/gemma/terms) |
+| [`llama-3.1-70b-instruct`](#llama-31-70b-instruct) | Meta | up to 128k tokens | Text | H100, H100-2 | [Llama 3.1 community](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE) |
+| [`llama-3.1-8b-instruct`](#llama-31-8b-instruct) | Meta | up to 128k tokens | Text | L4, L40S, H100, H100-2 | [Llama 3.1 community](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE) |
+| [`llama-3-70b-instruct`](#llama-3-70b-instruct) | Meta | 8k tokens | Text | H100 | [Llama 3 community](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE) |
+| [`llama-3.3-70b-instruct`](#llama-33-70b-instruct) | Meta | up to 131k tokens | Text | H100, H100-2 | [Llama 3.3 community](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) |
+| [`llama-3-nemotron-70b`](#llama-31-nemotron-70b-instruct) | Nvidia | up to 128k tokens | Text | H100, H100-2 | [Llama 3.1 community](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE) |
+| [`deepseek-r1-distill-70b`](#deepseek-r1-distill-llama-70b) | Deepseek | up to 131k tokens | Text | H100, H100-2 | [MIT](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/blob/main/LICENSE) and [Llama 3.3 Community](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE) |
+| [`deepseek-r1-distill-8b`](#deepseek-r1-distill-llama-8b) | Deepseek | up to 131k tokens | Text | L4, L40S, H100 | [MIT](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B/blob/main/LICENSE) and [Llama 3.1 Community](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE) |
+| [`mistral-7b-instruct-v0.3`](#mistral-7b-instruct-v03) | Mistral | 32k tokens | Text | L4, L40S, H100, H100-1 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`mistral-small-24b-instruct-2501`](#mistral-small-24b-base-2501) | Mistral | 32k tokens | Text | L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`mistral-nemo-instruct-2407`](#mistral-nemo-instruct-2407) | Mistral | 128k | Text | L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`mixtral-8x7b-instruct-v0.1`](#mixtral-8x7b-instruct-v01) | Mistral | 32k | Text | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`moshiko-0.1-8b`](#moshiko-01-8b) | Kyutai | 4,096 tokens | Text | L4, H100 | [CC-BY-4.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/cc-by-4.0.md) |
+| [`moshika-0.1-8b`](#moshika-01-8b) | Kyutai | 4,096 tokens | Text | L4, H100 | [CC-BY-4.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/cc-by-4.0.md) |
+| [`wizardlm-70b-v1.0`](#wizardlm-70b-v10) | WizardLM | 4,096 tokens | Text | H100, H100-2 | [Llama 2 community](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/LICENSE.txt) |
+| [`pixtral-12b-2409`](#pixtral-12b-2409) | Mistral | 128k tokens | Multimodal | L40S, H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`molmo-72b-0924`](#molmo-72b-0924) | Allen AI | 50k | Multimodal | H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`qwen2.5-coder-32b-instruct`](#qwen25-coder-32b-instruct) | Qwen | up to 32k | Code | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
+| [`bge-multilingual-gemma2`](#bge-multilingual-gemma2) | No | No | [Gemma](https://ai.google.dev/gemma/terms) |
+| [`sentence-t5-xxl`](#sentence-t5-xxl) | Sentence transformers | 512 tokens | Embeddings | L4 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 
 ## Models feature summary
 | Model name | Structured output supported | Function calling | Supported languages |
 | --- | --- | --- | --- |
-| `mixtral-8x7b-instruct-v0.1` | Yes | No | English, French, German, Italian, Spanish |
+| `gemma-3-27b-it` | Yes | Partial | English, Chinese, Japanese, Korean and 31 additional languages |
 | `llama-3.3-70b-instruct` | Yes | Yes | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
 | `llama-3.1-70b-instruct` | Yes | Yes | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
 | `llama-3.1-8b-instruct` | Yes | Yes | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
@@ -51,14 +53,17 @@ A quick overview of available models in Scaleway's catalog and their core attrib
 | `mistral-7b-instruct-v0.3` | Yes | Yes | English |
 | `mistral-small-3.1-24b-instruct-2503` | Yes | Yes | English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi |
 | `mistral-nemo-instruct-2407` | Yes | Yes | English, French, German, Spanish, Italian, Portuguese, Russian, Chinese, Japanese |
+| `mixtral-8x7b-instruct-v0.1` | Yes | No | English, French, German, Italian, Spanish |
 | `moshiko-0.1-8b` | No | No | English |
 | `moshika-0.1-8b` | No | No | English |
 | `wizardLM-70b-v1.0` | Yes | No | English |
 | `pixtral-12b-2409` | Yes | No | English, French, German, Spanish (to be verified) |
 | `molmo-72b-0924` | Yes | No | English |
-| `qwen2.5-coder-32b-instruct` | Yes | Yes | Over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic |
+| `qwen2.5-coder-32b-instruct` | Yes | Yes | English, French, Spanish, Portuguese, German, Italian, Russian, Chinese, Japanese, Korean, Vietnamese, Thai, Arabic and 16 additional languages. |
+| `bge-multilingual-gemma2` | No | No | English, French, Chinese, Japanese, Korean |
 | `sentence-t5-xxl` | No | No | English |
 
+
 ## Model details
 <Message type="note">
 Despite efforts for accuracy, the possibility of generated text containing inaccuracies or [hallucinations](/managed-inference/concepts/#hallucinations) exists. Always verify the content generated independently.
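Since this commit's main change is turning plain license labels into markdown links, a small script can sanity-check the updated table by extracting each model's license label from the last cell of its row. A minimal sketch, assuming the table rows follow the `| [model](#anchor) | ... | [License](url) |` shape shown in the diff; the `licenses` helper is hypothetical and not part of the docs tooling (the `TABLE` constant below is abridged from the added rows):

```python
import re

# Abridged rows from the updated "Models technical summary" table.
TABLE = """\
| [`gemma-3-27b-it`](#gemma-3-27b-it) | Google | 32k | Text | H100, H100-2 | [Gemma](https://ai.google.dev/gemma/terms) |
| [`mixtral-8x7b-instruct-v0.1`](#mixtral-8x7b-instruct-v01) | Mistral | 32k | Text | H100, H100-2 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
| [`moshiko-0.1-8b`](#moshiko-01-8b) | Kyutai | 4,096 tokens | Text | L4, H100 | [CC-BY-4.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/cc-by-4.0.md) |
"""

# Matches a markdown link: [label](target).
LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def licenses(table: str) -> dict[str, str]:
    """Map each model name to its license label, taken from the row's last cell."""
    out = {}
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        model = LINK.search(cells[0]).group(1).strip("`")   # first cell: linked model name
        out[model] = LINK.search(cells[-1]).group(1)        # last cell: linked license
    return out

print(licenses(TABLE))
```

A check like this would also have flagged the malformed `bge-multilingual-gemma2` row in the diff, which has only four cells instead of six.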
