Allow using PEFT (e.g. LoRA) in mask decoder with frozen image encoder #1089

@YonghaoZhao722

Description

Hi, thank you for the great work on micro-sam!

Currently, the function get_trainable_sam_model raises an error when PEFT is requested together with freeze_image_encoder=True:

raise ValueError("You cannot use PEFT & freeze the image encoder at the same time.")

However, in many fine-tuning scenarios (especially with limited data or compute), it's common to freeze the encoder and apply LoRA or other PEFT methods only to the decoder.
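To illustrate the requested workflow, here is a minimal, self-contained PyTorch sketch of freezing an image encoder while applying LoRA only to a mask decoder. The model and module names (TinySAM, image_encoder, mask_decoder) are illustrative stand-ins, not micro-sam's actual classes, and LoRALinear is a hand-rolled helper, not a micro-sam or peft API:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Stand-in model; names are hypothetical, not micro-sam's modules.
class TinySAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_encoder = nn.Linear(16, 16)
        self.mask_decoder = nn.Linear(16, 4)

    def forward(self, x):
        return self.mask_decoder(self.image_encoder(x))


model = TinySAM()
# 1) Freeze the image encoder entirely.
for p in model.image_encoder.parameters():
    p.requires_grad = False
# 2) Apply LoRA only to the mask decoder.
model.mask_decoder = LoRALinear(model.mask_decoder, rank=2)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

After this setup, only the two LoRA projection matrices remain trainable, which is exactly the low-data / low-compute regime described above.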

Would it be possible to:

  • either remove this restriction, or
  • allow it with a warning, leaving the responsibility to the user?

Thanks again for developing this tool!
