
Where can one find gemma-3 quantized weights? #82

@fac2003

Description


I would like to run quantized gemma-3 for inference (the 4b, 12b, and 27b model variants). The project appears to support this (the --quant option in the run_multimodal script, for instance), but the weights do not seem to be available in PyTorch format on Kaggle:


Could quantized weights be made available that work directly with run_multimodal --quant, or am I missing some way to convert the bfloat16 weights?
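For context, a minimal sketch of what such a conversion might look like, assuming the quantized path expects per-channel symmetric int8 weights with bfloat16 scales; the function below is a hypothetical illustration, not this project's actual conversion API:

```python
import torch

def quantize_int8(weight: torch.Tensor):
    """Per-output-channel symmetric int8 quantization (illustrative sketch).

    Returns the int8 weight and a per-row scale such that
    weight ≈ q.float() * scale.
    """
    w = weight.float()
    # One scale per output channel (row), mapping the max magnitude to 127.
    scale = w.abs().amax(dim=-1, keepdim=True) / 127.0
    scale = scale.clamp(min=1e-8)  # avoid division by zero for all-zero rows
    q = (w / scale).round().clamp(-128, 127).to(torch.int8)
    return q, scale

# Example: quantize a small bfloat16 tensor and dequantize it back.
w = torch.randn(4, 8, dtype=torch.bfloat16)
q, s = quantize_int8(w)
w_hat = q.float() * s
```

One would then apply this to each weight tensor in the bfloat16 checkpoint's state dict and save the result, but the exact tensor names and on-disk layout the --quant path expects would need to be checked against the loading code.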
