Make it easier to know which datasets were used #268

@set-soft

Description


Hi!
The availability of the various pre-trained weights is very helpful, thanks!
You also include detailed information about the datasets in the "model zoo" section. Thanks again.
But I am confused about the relation between the models available on Google Drive and the models uploaded to Hugging Face.
Could you please add a link to Hugging Face alongside each Google Drive link?

BTW: the model cards on Hugging Face contain a lot of copied-and-pasted text. I suggest putting all the repetitive instructions in just one repo (https://huggingface.co/ZhengPeng7/BiRefNet), and in the other repos only the information relevant to the weights found there, with links to the main HF repo, the paper, GitHub, etc.

I wrote some ComfyUI nodes that support all the pre-trained models you uploaded to Hugging Face, and I want to make a table indicating which dataset was used in each case.

P.S. I downloaded the General HR weights (epoch 230) thinking they were the same as https://huggingface.co/ZhengPeng7/BiRefNet_HR/resolve/main/model.safetensors, but that isn't the case. I don't mean the file format: the SHA256 of the bb.layers.0.blocks.0.attn.proj.bias tensor differs between the two. This is why I'm asking for more detailed information; you might also mention which epoch the Hugging Face checkpoint corresponds to, so we can tell whether the model from another of your sources is newer.
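For reference, this is roughly how I compared the checkpoints: hash the raw bytes of one tensor from each .safetensors file and compare the digests. A minimal stdlib-only sketch is below; it parses the safetensors layout directly (8-byte little-endian header length, JSON header with per-tensor `data_offsets` relative to the data section). The file paths are placeholders, and the tensor name is the one mentioned above.

```python
import hashlib
import json
import struct

def tensor_sha256(path: str, name: str) -> str:
    """Return the SHA-256 hex digest of one tensor's raw bytes
    inside a .safetensors file, using only the stdlib."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 length of the JSON header.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
        # Offsets are relative to the start of the data section,
        # which begins right after the header.
        begin, end = header[name]["data_offsets"]
        f.seek(8 + header_len + begin)
        return hashlib.sha256(f.read(end - begin)).hexdigest()

# Placeholder paths: substitute your two downloaded checkpoints.
# a = tensor_sha256("BiRefNet_HR_drive_epoch230.safetensors",
#                   "bb.layers.0.blocks.0.attn.proj.bias")
# b = tensor_sha256("BiRefNet_HR_huggingface.safetensors",
#                   "bb.layers.0.blocks.0.attn.proj.bias")
# print(a == b)
```

If the digests differ for the same tensor name, the two files hold different weights regardless of how they were serialized.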


Labels: TODO, documentation
