Release TreeQ quantized DiT models on Hugging Face #1
Hi @racoonykc 🤗
I'm Niels and work as part of the open-source team at Hugging Face. I discovered your work on arXiv, "TreeQ: Pushing the Quantization Boundary of Diffusion Transformer via Tree-Structured Mixed-Precision Search", and see from your GitHub repository (https://github.com/racoonykc/TreeQ) that you're planning to release the code and quantized models.
The Hugging Face paper page (https://huggingface.co/papers/2512.06353) lets people discuss your paper and find artifacts related to it (your models, for instance). You can also claim the paper as yours, so it shows up on your public Hugging Face profile, and add GitHub and project page URLs.
Would you be interested in hosting your TreeQ quantized DiT models on https://huggingface.co/models once they are ready for release?
Hosting on Hugging Face will give you more visibility and enable better discoverability. We can add pipeline tags (e.g., "unconditional-image-generation") in the model cards so that people can find the models more easily, and link them directly to the paper page.
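As a concrete illustration, the pipeline tag goes in the YAML front matter of the model card's README.md; the tags below are a sketch (the quantization tags are illustrative, not prescribed):

```yaml
---
pipeline_tag: unconditional-image-generation
library_name: pytorch
tags:
  - quantization
  - diffusion-transformer
---
```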
If you're interested, here's a guide for uploading models: https://huggingface.co/docs/hub/models-uploading. If it's a custom PyTorch model, you might find the PyTorchModelHubMixin class useful, as it adds from_pretrained and push_to_hub functionalities.
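A minimal sketch of how the mixin works, assuming a toy module ("TinyDiTBlock" and its sizes are illustrative, not the TreeQ architecture):

```python
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin


# Inheriting from PyTorchModelHubMixin adds from_pretrained / push_to_hub
# to any nn.Module; the __init__ kwargs are serialized to config.json.
class TinyDiTBlock(nn.Module, PyTorchModelHubMixin):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.attn_proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.attn_proj(x))


model = TinyDiTBlock(dim=64)
out = model(torch.randn(2, 64))

# With a Hub token configured (`huggingface-cli login`), publishing and
# reloading are one call each; the repo id below is hypothetical.
# model.push_to_hub("your-username/treeq-dit-w4a8")
# restored = TinyDiTBlock.from_pretrained("your-username/treeq-dit-w4a8")
```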
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats work per checkpoint. After uploading, we can also link the models to your paper page so people can discover them.
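The one-repo-per-checkpoint layout can be scripted with HfApi; this is a sketch with hypothetical repo names and folders (the network calls are commented out so it runs offline):

```python
from huggingface_hub import HfApi

# Hypothetical mapping: one repository per quantization setting,
# so download stats are tracked per checkpoint.
checkpoints = {
    "treeq-dit-w4a8": "checkpoints/w4a8",
    "treeq-dit-w3a8": "checkpoints/w3a8",
}

api = HfApi()
for repo_name, folder in checkpoints.items():
    # Requires `huggingface-cli login`; uncomment to actually upload.
    # api.create_repo(f"your-username/{repo_name}", exist_ok=True)
    # api.upload_folder(folder_path=folder, repo_id=f"your-username/{repo_name}")
    print(f"would upload {folder} -> your-username/{repo_name}")
```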
You can also build a demo for your models on Spaces, and we can provide you with a ZeroGPU grant, which gives you free access to A100 GPUs.
Let me know if you're interested or need any guidance as you prepare for the release!
Kind regards,
Niels