Freeze LLM set to True #6

@AdityaKulshrestha

Description

In this code, enabling LoRA requires freeze_llm to be set to False. Shouldn't it be the opposite, as in your other repositories, including QwenVL-Finetune?

assert not (training_args.lora_enable and training_args.freeze_llm), 'When using LoRA, the LLM should not be frozen. If you want to freeze the LLM, please disable LoRA.'

By contrast, the QwenVL repo raises:

raise ValueError("If `lora_enable` is True, `freeze_llm` must also be True.")
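For reference, the QwenVL-Finetune convention matches how LoRA is usually applied: the base LLM weights stay frozen and only the adapter weights are trained. A minimal sketch of that check (the `TrainingArguments` fields here are hypothetical stand-ins mirroring the flags quoted above, not the repository's actual class):

```python
from dataclasses import dataclass


@dataclass
class TrainingArguments:
    # Hypothetical flags mirroring the ones discussed in this issue.
    lora_enable: bool = False
    freeze_llm: bool = False


def validate(args: TrainingArguments) -> None:
    # QwenVL-Finetune-style check: enabling LoRA implies the base LLM
    # is frozen, since only the adapter weights should receive updates.
    if args.lora_enable and not args.freeze_llm:
        raise ValueError("If `lora_enable` is True, `freeze_llm` must also be True.")


# Consistent configuration: LoRA on, base LLM frozen.
validate(TrainingArguments(lora_enable=True, freeze_llm=True))
```

Under the assertion quoted from this repository, the same configuration would fail instead, which is the inconsistency being asked about.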
