Is the 72B instruct model supported? #4

@mxzgn

Description

Currently, exporting the 72B model aborts at this call:

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

ov_model = OVModelForCausalLM.from_pretrained(
    model_path,
    export=True,
    compile=False,
    quantization_config=OVWeightQuantizationConfig(bits=4, **compression_configs),
)

How can this be resolved?
