Hi, thank you for the great work on micro-sam!
Currently, the function `get_trainable_sam_model` raises an error when trying to use PEFT together with `freeze_image_encoder=True`:

```python
raise ValueError("You cannot use PEFT & freeze the image encoder at the same time.")
```
However, in many fine-tuning scenarios (especially with limited data or compute), it is common to freeze the encoder and apply LoRA or other PEFT methods to the decoder only.
Would it be possible to:
- either remove this restriction, or
- allow it with a warning, leaving the responsibility to the user?
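
For the second option, here is a minimal sketch of what the check could look like. The helper name and parameter names (`check_peft_config`, `use_peft`, `freeze_image_encoder`) are illustrative only and not micro-sam's actual API:

```python
import warnings


def check_peft_config(use_peft: bool, freeze_image_encoder: bool) -> None:
    # Hypothetical helper: instead of raising a ValueError, emit a warning
    # when PEFT is combined with a frozen image encoder, so the user can
    # still fine-tune decoder-only PEFT layers at their own discretion.
    if use_peft and freeze_image_encoder:
        warnings.warn(
            "Using PEFT with a frozen image encoder: PEFT layers in the "
            "encoder will have no effect; only decoder-side PEFT parameters "
            "will be trained.",
            UserWarning,
        )


check_peft_config(use_peft=True, freeze_image_encoder=True)  # emits a warning
```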
Thanks again for developing this tool!