Unsupported qwen3_vl #2998

@guweixin

Description

I tried quantizing Qwen/Qwen3-VL-4B-Instruct with the following command:

optimum-cli export openvino --model Qwen/Qwen3-VL-4B-Instruct Qwen3-VL-4B-Instruct-int4-sym_group-1 --task image-text-to-text --weight-format int4 --trust-remote-code --sym --backup-precision int8_sym --group-size -1

I then tried running inference on the quantized model with the openvino.genai VLM example and got the following error:

Unsupported 'qwen3_vl' VLM model type
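For reference, a minimal Python sketch of the inference step that triggers the error. This is my reconstruction, not the exact example from the report: the model directory name is the export output above, and it assumes the openvino-genai package with its VLMPipeline class. Constructing the pipeline is where openvino.genai inspects the exported model's architecture and rejects types it does not yet support, producing the message above.

```python
import sys

def build_vlm_pipeline(model_dir: str, device: str = "CPU"):
    """Load an exported OpenVINO VLM; raises if the model type is unsupported."""
    # Imported inside the function so the sketch can be read/tested
    # without openvino-genai installed.
    import openvino_genai as ov_genai
    # For an exported qwen3_vl model, this constructor is expected to fail
    # with: Unsupported 'qwen3_vl' VLM model type
    return ov_genai.VLMPipeline(model_dir, device)

if __name__ == "__main__":
    # e.g. python repro.py Qwen3-VL-4B-Instruct-int4-sym_group-1
    build_vlm_pipeline(sys.argv[1])
```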
