Hi @mmaaz60 🤗
Niels here from the open-source team at Hugging Face. I discovered your work on Arxiv (https://huggingface.co/papers/2511.23477) and saw the exciting announcement on your GitHub repository (https://github.com/mbzuai-oryx/Video-CoM) that the code, dataset, and model for Video-CoM will be released soon!
The Hugging Face paper page lets people discuss your paper and find related artifacts (like your model and dataset). You can also claim the paper as yours to feature it on your public profile, and add GitHub and project page URLs.
It'd be great to make the Video-CoM model checkpoints and the Video-CoM-Instruct dataset available on the 🤗 hub once they are released, to improve their discoverability and visibility. We can add tags so that people can easily find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models
See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, we could leverage the PyTorchModelHubMixin class which adds from_pretrained and push_to_hub to any custom nn.Module. Alternatively, one can leverage the hf_hub_download one-liner to download a checkpoint from the hub. The Video-CoM model, being a multimodal language model for video reasoning, would typically fall under the video-text-to-text pipeline tag.
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.
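To make the mixin approach concrete, here is a minimal sketch. The module and repo ID (`your-hf-org/video-com`) are placeholders, not Video-CoM's actual architecture; the point is that inheriting from `PyTorchModelHubMixin` gives any `nn.Module` working `push_to_hub` and `from_pretrained` methods:

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical toy module for illustration only; the real Video-CoM
# architecture will differ.
class ToyVideoModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.proj(x)

model = ToyVideoModel(hidden_size=16)

# Once logged in (`huggingface-cli login`), uploading and reloading is:
# model.push_to_hub("your-hf-org/video-com")
# reloaded = ToyVideoModel.from_pretrained("your-hf-org/video-com")

# Alternatively, fetch a single checkpoint file from any repo:
# from huggingface_hub import hf_hub_download
# ckpt_path = hf_hub_download(repo_id="your-hf-org/video-com",
#                             filename="pytorch_model.bin")
```

The mixin also serializes the `__init__` kwargs (here `hidden_size`) to a config on the Hub, so `from_pretrained` can rebuild the module without extra code.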
Uploading the dataset
It would be awesome to make the Video-CoM-Instruct dataset available on 🤗, so that people can do:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
```

See here for a guide: https://huggingface.co/docs/datasets/loading. The Video-CoM-Instruct dataset, curated for multi-step manipulation reasoning on videos, would also align with the video-text-to-text task category.
Besides that, there's the dataset viewer which allows people to quickly explore the first few rows of the data in the browser.
Let me know if you're interested in hosting your artifacts on the Hub, or need any guidance when the release is ready!
Cheers,
Niels
ML Engineer @ HF 🤗