
Question about finetune_ignore commandline argument #103

@niki-amini-naieni

Hi, thank you for your very useful repository! I have a couple of questions.

(1) My first question is about the finetune_ignore command-line argument here. From what I understand, it lets you load a checkpoint while ignoring some of its weights. For example, if I wanted to load a set of pretrained GroundingDINO weights but skip transformer.tgt_embed.weight, I could specify --finetune_ignore 'transformer.tgt_embed.weight'. Is my understanding correct?
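For context, a key-filtering mechanism like the one described is usually implemented by dropping any checkpoint entry whose key matches an ignore pattern, then loading the remainder non-strictly. The sketch below is illustrative only (the function name and the plain-list "weights" are placeholders, not the repository's actual code):

```python
def filter_checkpoint(state_dict, ignore_patterns):
    """Return a copy of state_dict without keys matching any ignore pattern.

    Hypothetical sketch of a --finetune_ignore style filter; real
    implementations may match on prefixes or regular expressions instead.
    """
    kept = {}
    for key, value in state_dict.items():
        if any(pattern in key for pattern in ignore_patterns):
            continue  # skip ignored weights, e.g. transformer.tgt_embed.weight
        kept[key] = value
    return kept

# Example with placeholder entries (plain lists standing in for tensors):
ckpt = {
    "transformer.tgt_embed.weight": [0.0],
    "backbone.conv1.weight": [1.0],
}
filtered = filter_checkpoint(ckpt, ["transformer.tgt_embed.weight"])
# "transformer.tgt_embed.weight" is dropped; in PyTorch, the remainder
# would then be loaded with model.load_state_dict(filtered, strict=False).
```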

(2) The Grounding DINO decoder is a transformer, so why can I not simply increase the number of decoder queries at inference time while using the pretrained model with 900 queries? When I try to do this, I get errors about transformer.tgt_embed.weight. The shape of this parameter seems to depend on the number of queries, so loading the pretrained checkpoint fails with a size mismatch.
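The mismatch arises because the query embedding is a learned table with one row per query, so its weight shape is tied to the query count the checkpoint was trained with. One conceivable (hypothetical, untested for this repository) workaround is to keep the pretrained rows and append freshly initialized rows for the extra queries; the sketch below uses plain lists rather than tensors, and the function name is invented for illustration:

```python
import random

def resize_query_embedding(old_weight, new_num_queries):
    """Hypothetical sketch: keep pretrained query rows, append random rows.

    old_weight is a (num_queries x hidden_dim) table represented as a
    list of lists; rows beyond the pretrained count are initialized with
    small Gaussian noise, mimicking a fresh embedding initialization.
    """
    hidden_dim = len(old_weight[0])
    extra_rows = [
        [random.gauss(0.0, 0.02) for _ in range(hidden_dim)]
        for _ in range(new_num_queries - len(old_weight))
    ]
    return old_weight + extra_rows

# Toy example: grow a 3-query table to 5 queries.
pretrained = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
resized = resize_query_embedding(pretrained, 5)
# The first 3 rows are unchanged; rows 4-5 are newly initialized and
# would need fine-tuning before they contribute useful queries.
```

Whether such resizing actually works well for Grounding DINO is an empirical question; the alternative implied by question (1) is to ignore the key entirely and train a new embedding from scratch.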

Thank you very much for any input you can provide!
