16 changes: 15 additions & 1 deletion pages/managed-inference/reference-content/supported-models.mdx
@@ -34,6 +34,7 @@ You can find a complete list of all models available in Scaleway's catalog on th
We recommend starting with a variation of a supported model from the Scaleway catalog.
For example, you can deploy a [quantized (4-bit) version of Llama 3.3](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit).
If deploying a fine-tuned version of Llama 3.3, make sure your file structure matches the example linked above.
Models whose compatibility has been verified are listed in [Known compatible models](#known-compatible-models).
</Message>
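
As a rough sanity check before uploading a fine-tuned model, you can compare its file layout against the reference repository linked above. The snippet below is a minimal sketch only, assuming the `huggingface_hub` Python library is installed; `your-org/your-finetuned-llama-3.3` is a hypothetical repository ID to replace with your own:

```python
# Rough pre-deployment check (sketch, not part of the official workflow):
# compare the file layout of a fine-tuned repository against the reference
# quantized repository linked above.
from huggingface_hub import list_repo_files

REFERENCE_REPO = "unsloth/Llama-3.3-70B-Instruct-bnb-4bit"
CUSTOM_REPO = "your-org/your-finetuned-llama-3.3"  # hypothetical placeholder


def layout(repo_id: str) -> set[str]:
    """Return the repository's file names, ignoring weight shards,
    whose names and count legitimately differ between models."""
    return {f for f in list_repo_files(repo_id) if not f.endswith(".safetensors")}


missing = layout(REFERENCE_REPO) - layout(CUSTOM_REPO)
print("Files present in the reference layout but missing here:", sorted(missing))
```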

To deploy a custom model via Hugging Face, ensure the following:
@@ -232,4 +233,17 @@ Custom models must conform to one of the architectures listed below. Click to ex
* `EAGLEModel`
* `MedusaModel`
* `MLPSpeculatorPreTrainedModel`
</Concept>
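
A quick way to see which architecture a candidate model declares is to read the `architectures` field of its `config.json` on Hugging Face and compare it against the supported architectures listed above. The snippet below is a minimal sketch, assuming the `huggingface_hub` Python library; `Qwen/Qwen3-32B` is used only as an example repository:

```python
# Minimal sketch: read the `architectures` field a model declares in config.json,
# so it can be compared against the architectures supported by Managed Inference.
import json
from huggingface_hub import hf_hub_download

REPO_ID = "Qwen/Qwen3-32B"  # example repository; replace with your own

config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json")
with open(config_path) as f:
    config = json.load(f)

# Transformers-style configs declare the model class here, e.g. ["Qwen3ForCausalLM"].
print(config.get("architectures"))
```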

## Known compatible models

Several models have already been verified to work as custom models on Managed Inference. This list is not exhaustive and is updated gradually. Click to expand full list.

<Concept>
## Models verified for compatibility
The following models have been verified as compatible:
* `ibm-granite/granite-vision-3.2-2b`
* `ibm-granite/granite-3.3-2b-instruct`
* `microsoft/phi-4`
* `Qwen/Qwen3-32B`
</Concept>