Commit 4de5d0f

docs: update vlm coverage (#961)

Authored by Alexandros Koumparoulis
Signed-off-by: Alexandros Koumparoulis <[email protected]>

1 parent: ab56f2f

File tree: 1 file changed (+3, −1 lines)


docs/model-coverage/vlm.md

Lines changed: 3 additions & 1 deletion
@@ -30,7 +30,9 @@ NeMo Automodel supports <a href=https://huggingface.co/docs/transformers/main/mo
 | Qwen2-VL-2B-Instruct & Qwen2.5-VL-3B-Instruct | cord-v2 | Supported | Supported | [qwen2_5_vl_3b_rdr.yaml](../../examples/vlm_finetune/qwen2_5/qwen2_5_vl_3b_rdr.yaml) |
 | Qwen3-VL-MoE | cord-v2 | Supported | Supported | [qwen3_vl_moe_30b_te_deepep.yaml](../../examples/vlm_finetune/qwen3/qwen3_vl_moe_30b_te_deepep.yaml) |
 | Qwen3-Omni-30BA3B | cord-v2 | Supported | Supported | [qwen3_omni_moe_30b_te_deepep.yaml](../../examples/vlm_finetune/qwen3/qwen3_omni_moe_30b_te_deepep.yaml) |
-| InternVL2-4B | cord-v2 | Supported | Supported | [internvl_3_5_4b.yaml](../../examples/vlm_finetune/internvl/internvl_3_5_4b.yaml) |
+| InternVL3.5-4B | cord-v2 | Supported | Supported | [internvl_3_5_4b.yaml](../../examples/vlm_finetune/internvl/internvl_3_5_4b.yaml) |
+| Ministral3-{3B,8B,14B} | MedPix-VQA | Supported | Supported | [ministral3_3b_medpix.yaml](../../examples/vlm_finetune/mistral/ministral3_3b_medpix.yaml), [ministral3_8b_medpix.yaml](../../examples/vlm_finetune/mistral/ministral3_8b_medpix.yaml), [ministral3_14b_medpix.yaml](../../examples/vlm_finetune/mistral/ministral3_14b_medpix.yaml) |
+| Phi-4-multimodal-instruct | commonvoice_17_tr_fixed | Supported | Supported | [phi4_mm_cv17.yaml](../../examples/vlm_finetune/phi4/phi4_mm_cv17.yaml) |
 
 For detailed instructions on fine-tuning these models using both SFT and PEFT approaches, please refer to the [Gemma 3 and Gemma 3n Fine-Tuning Guide](../guides/omni/gemma3-3n.md). The guide covers dataset preparation, configuration, and running both full fine-tuning and LoRA-based parameter efficient fine-tuning.
 
