Does it support FastVLM-0.5B-4bit version? #317

Description

@Lornatang

I quantized the official FastVLM-0.5B to 4-bit with the mlx-vlm tool and adapted the model following your example, but I ran into this error:

MLXNN/Module.swift:570: Fatal error: 'try!' expression unexpectedly raised an error: MLXNN.UpdateError.needModuleInfo("Unable to get @ModuleInfo for FastVLMMultiModalProjector.layers -- must be wrapped to receive updates")

It would be great if you could take a look at this. Thank you for your great work.
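For context, the crash message points at how MLXNN discovers submodules: a `Module` property can only receive weight updates (including quantized weights) if it is declared through the `@ModuleInfo` property wrapper. A minimal sketch of the pattern, assuming a hypothetical shape for `FastVLMMultiModalProjector` (the class name comes from the error; the layer sizes and field names here are illustrative, not the actual implementation):

```swift
import MLX
import MLXNN

class FastVLMMultiModalProjector: Module, UnaryLayer {
    // Declaring `layers` with @ModuleInfo registers the submodules with
    // MLXNN's update machinery. Without the wrapper, loading (quantized)
    // weights fails with UpdateError.needModuleInfo, as in the crash above.
    @ModuleInfo var layers: [Linear]

    init(inputDimensions: Int, hiddenDimensions: Int) {
        self._layers.wrappedValue = [
            Linear(inputDimensions, hiddenDimensions),
            Linear(hiddenDimensions, hiddenDimensions),
        ]
    }

    func callAsFunction(_ x: MLXArray) -> MLXArray {
        var x = x
        for layer in layers { x = layer(x) }
        return x
    }
}
```

If the projector in the example code stores its layers in a plain property (or a container MLXNN cannot traverse), wrapping them with `@ModuleInfo` as sketched here is the usual fix for `needModuleInfo` when loading a 4-bit checkpoint.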
