Releases: huggingface/huggingface_hub
v0.30.2: Fix text-generation task in InferenceClient
Fixing some InferenceClient-related bugs:
- [Inference Providers] Fix text-generation when using an external provider #2982 by @hanouticelina
 - Fix HfInference conversational #2985 by @Wauplin
 
Full Changelog: v0.30.1...v0.30.2
v0.30.1: fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction'
Patch release to fix #2967.
Full Changelog: v0.30.0...v0.30.1
Xet is here! (+ many cool Inference-related things!)
🚀 Ready. Xet. Go!
This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates at the file level, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.
You can start using Xet today by installing the optional dependency:
pip install -U huggingface_hub[hf_xet]

With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
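Downloading works exactly as before; a minimal sketch, assuming a Xet-enabled repo (the repo id and filename below are placeholders):

from huggingface_hub import hf_hub_download

# Transparently uses Xet chunk-based transfer when the repo is Xet-enabled,
# and falls back to the regular download path otherwise.
local_path = hf_hub_download(repo_id="username/xet-enabled-model", filename="model.safetensors")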
Blog post: Xet on the Hub
Docs: Storage backends → Xet
Tip
Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so hf_xet uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!
This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!
- Xet download workflow by @hanouticelina in #2875
 - Add ability to enable/disable xet storage on a repo by @hanouticelina in #2893
 - Xet upload workflow by @hanouticelina in #2887
 - Xet Docs for huggingface_hub by @rajatarya in #2899
 - Adding Token Refresh Xet Tests by @rajatarya in #2932
 - Using a two stage download path for xet files. by @bpronan in #2920
- add xetEnabled as an expand property by @hanouticelina in #2907
 - Xet integration by @Wauplin in #2958
 
⚡ Enhanced InferenceClient
The InferenceClient has received significant updates and improvements in this release, making it more robust and easier to work with.
We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models. A minimal usage sketch follows the PR list below.
- Add Cohere as an Inference Provider by @alexrs-cohere in #2888
 - Add Cerebras provider by @Wauplin in #2901
 - remove cohere from testing and fix quality by @hanouticelina in #2902
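Usage follows the same pattern as any other provider; a minimal sketch (the model id is an assumption, any model served by the provider works):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")
completion = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed example model
    messages=[{"role": "user", "content": "Why is open-source AI important?"}],
)
print(completion.choices[0].message.content)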
 
Novita is now the third provider to support the text-to-video task, after Fal.ai and Replicate:
from huggingface_hub import InferenceClient
client = InferenceClient(provider="novita")
video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)

- [Inference Providers] Add text-to-video support for Novita by @hanouticelina in #2922
 
It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budgets and set limits at a team level. The organization must be subscribed to Enterprise Hub.
from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")Handling long-running inference tasks just got easier! To prevent request timeouts, we’ve introduced asynchronous calls for text-to-video inference. We are expecting more providers to leverage the same structure soon, ensuring better robustness and developer-experience.
- [Inference Providers] Async calls for fal.ai by @hanouticelina in #2927
 - update polling interval by @hanouticelina in #2937
 - [Inference Providers] Fix status and response URLs when polling text-to-video results with fal-ai by @hanouticelina in #2943
 
Miscellaneous improvements:
- [Bot] Update inference types by @HuggingFaceInfra in #2832
 - Update InferenceClient docstring to reflect that token=False is no longer accepted by @abidlabs in #2853
 - [Inference providers] Root-only base URLs by @Wauplin in #2918
 - Add prompt in image_to_image type by @Wauplin in #2956
 - [Inference Providers] fold OpenAI support into provider parameter by @hanouticelina in #2949
 - clean up some inference stuff by @Wauplin in #2941
 - regenerate cassettes by @hanouticelina in #2925
 - Fix payload model name when model id is a URL by @hanouticelina in #2911
 - [InferenceClient] Fix token initialization and add more tests by @hanouticelina in #2921
 - [Inference Providers] check inference provider mapping for HF Inference API by @hanouticelina in #2948
 
✨ New Features and Improvements
This release also includes several other notable features and improvements.
It's now possible to pass a wildcard pattern to the upload command instead of passing the --include=... option:
huggingface-cli upload my-cool-model *.safetensors
- Added support for Wildcards in huggingface-cli upload by @devesh-2002 in #2868
 
Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.
from huggingface_hub import create_inference_endpoint_from_catalog
endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()
endpoint.client.chat_completion(...)

The ModelHubMixin got two small updates:
- authors can provide a paper URL that will be added to all model cards pushed by the library.
 - dataclasses are now supported for any init arg (previously only config); see the sketch after the PR list below.
- Add paper URL to hub mixin by @NielsRogge in #2917
 - [HubMixin] handle dataclasses in all args, not only 'config' by @Wauplin in #2928
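A minimal sketch of both updates, assuming the paper_url class kwarg from #2917 and a hypothetical dataclass argument:

from dataclasses import dataclass

from huggingface_hub import ModelHubMixin

@dataclass
class TrainingArgs:  # hypothetical dataclass; now (de)serialized like `config`
    learning_rate: float = 1e-3
    num_epochs: int = 3

class MyModel(
    ModelHubMixin,
    paper_url="https://arxiv.org/abs/...",  # added to model cards pushed by the library
):
    def __init__(self, args: TrainingArgs):
        self.args = args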
 
You can now sort by name, size, last updated, and last used when using the delete-cache command:
huggingface-cli delete-cache --sort=size

- feat: add --sort arg to delete-cache to sort by size by @AlpinDale in #2815
Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")
# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))
# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)

Warning
This is a power-user tool to be used carefully. Deleting LFS files from a repo is an irreversible action.
💔 Breaking Changes
The labels parameter has been removed from InferenceClient.zero_shot_classification and InferenceClient.zero_shot_image_classification in favor of candidate_labels. This change had been announced with a proper deprecation warning.
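Migrating is a keyword rename; a minimal sketch:

from huggingface_hub import InferenceClient

client = InferenceClient()
# Previously: client.zero_shot_classification(text, labels=[...])
result = client.zero_shot_classification(
    "I really enjoyed this release!",
    candidate_labels=["positive", "negative"],
)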
🛠️ Small Fixes and Maintenance
🐛 Bug and Typo Fixes
- Fix revision bug in _upload_large_folder.py by @yuantuo666 in #2879
 - bug fix in inference_endpoint wait function for proper waiting on update by @Ajinkya-25 in #2867
 - Update SpaceHardware enum by @Wauplin in #2891
 - Fix: Restore sys.stdout in notebook_login after error by @LEEMINJOO in #2896
 - Remove link to unmaintained model card app Space by @davanstrien in #2897
 - Fixing a typo in chat_completion example by @Wauplin in #2910
 - chore: Link to Authentication by @FL33TW00D in #2905
 - Handle file-like objects in curlify by @hanouticelina in #2912
 - Fix typos by @omahs in #2951
 - Add expanduser and expandvars to path envvars by @FredHaa in #2945
 
🏗️ Internal
Building on work previously introduced by the diffusers team, we've published a GitHub Action that runs code style tooling on demand on Pull Requests, making the lives of contributors and reviewers easier.
- add style bot GitHub action by @hanouticelina in #2898
 - fix style bot GH action by @hanouticelina in #2906
 - Fix bot style GH action (again) by @hanouticelina in #2909
 
Other minor updates:
- Fix prerelease CI by @Wauplin in #2877
 - Update update-inference-types.yaml by @Wauplin in #2926
 - [Internal] Fix check parameters script by @hanouticelina in #2957
 
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @Ajinkya-25
- bug fix in inference_endpoint wait function for proper waiting on update (#2867)
 
 - @abidlabs
- Update InferenceClient docstring to reflect that token=False is no longer accepted (#2853)
 - @devesh-2002
- Added support for Wildcards in huggingface-cli upload (#2868)
 
 - @alexrs-cohere
- Add Cohere as an Inference Provider (#2888)
 
 - @NielsRogge
- Add paper URL to hub mixin (#2917)
 
 - @AlpinDale
- feat: add --sort arg to delete-cache to sort by size (#2815)
 - @FredHaa
- Add expanduser and expandvars to path envvars (#2945)
 
 - @omahs
- Fix typos (#2951)
 
 
[v0.29.3]: Adding 2 new Inference Providers: Cerebras and Cohere 🔥
Added client-side support for the Cerebras and Cohere providers ahead of their official launch on the Hub.
Cerebras: #2901.
Cohere: #2888.
Full Changelog: v0.29.2...v0.29.3
[v0.29.2] Fix payload model name when model id is a URL & Restore `sys.stdout` in `notebook_login()` after error
This patch release includes two fixes:
- Fix payload model name when model id is a URL #2911
 - Fix: Restore sys.stdout in notebook_login after error #2896
 
Full Changelog: v0.29.1...v0.29.2
[v0.29.1] Fix revision URL encoding in `upload_large_folder` & Fix endpoint update state handling in `InferenceEndpoint.wait()`
This patch release includes two fixes:
- Fix revision bug in _upload_large_folder.py #2879
 - bug fix in inference_endpoint wait function for proper waiting on update #2867
 
Full Changelog: v0.29.0...v0.29.1
[v0.29.0]: Introducing 4 new Inference Providers: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita 🔥
We’re thrilled to announce the addition of four more outstanding serverless Inference Providers to the Hugging Face Hub: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita. These providers join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. This release adds official support for these 4 providers, making it super easy to use a wide variety of models with your preferred providers.
See our announcement blog for more details: https://huggingface.co/blog/new-inference-providers.
- Add Fireworks AI provider + instructions for new provider by @Wauplin in #2848
 - Add Hyperbolic provider by @hanouticelina in #2863
 - Add Novita provider by @hanouticelina in #2865
 - Nebius AI Studio provider added by @Aktsvigun in #2866
 - Add Black Forest Labs provider by @hanouticelina in #2864
 
Note that Black Forest Labs is not yet supported on the Hub. Once we announce it, huggingface_hub 0.29.0 will automatically support it.
⚡ Other Inference updates
- Default to base_url if provided by @Wauplin in #2805
 - update supported models by @hanouticelina in #2813
 - [InferenceClient] Better handling of task parameters by @hanouticelina in #2812
 - Add YuE (music gen) from fal.ai by @Wauplin in #2801
 - [InferenceClient] Renaming extra_parameters to extra_body by @hanouticelina in #2821 (see the sketch after this list)
 - fix automatic-speech-recognition output parsing by @hanouticelina in #2826
 - [Bot] Update inference types by @HuggingFaceInfra in #2791
 - Support inferenceProviderMapping as expand property by @Wauplin in #2841
 - Handle extra fields in inference types by @Wauplin in #2839
 - [InferenceClient] Add dynamic inference providers mapping by @hanouticelina in #2836
 - (misc) Deprecate some hf-inference specific features (wait-for-model header, can't override model's task, get_model_status, list_deployed_models) by @Wauplin in #2851
 - Partial revert #2851: allow task override on sentence-similarity by @Wauplin in #2861
 - Fix Inference Client VCR tests by @hanouticelina in #2858
 - update new provider doc by @hanouticelina in #2870
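As referenced in the list above, provider-specific parameters are now passed via extra_body; a minimal sketch (the extra field shown is an assumption and depends on the provider):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
image = client.text_to_image(
    "A lighthouse at dawn",
    model="black-forest-labs/FLUX.1-schnell",
    extra_body={"num_inference_steps": 4},  # assumed provider-specific field
)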
 
💔 Breaking changes
None.
🛠️ Small fixes and maintenance
😌 QoL improvements
- dev(narugo): add resume for ranged headers of http_get function by @narugo1992 in #2823
 
🐛 Bug and typo fixes
- [Docs] Fix broken link in CLI guide documentation by @hanouticelina in #2799
 - Replace urljoin for HF_ENDPOINT paths by @anael-l in #2806
 - InferenceClient some minor docstrings thingies by @julien-c in #2810
 - Do not send staging token to production by @Wauplin in #2811
 - Add HF_DEBUG environment variable for debugging/reproducibility by @Wauplin in #2819
 - Fix curlify by @Wauplin in #2828
 - Improve whoami() error messages by specifying token source by @aniketqw in #2814
 - Fix error message if invalid token on file download by @Wauplin in #2847
 - Fix test_dataset_info (missing dummy dataset) by @Wauplin in #2850
 - Fix is_jsonable if integer key in dict by @Wauplin in #2857
 
🏗️ internal
- another test by @Wauplin (direct commit on main)
 - feat(ci): ignore unverified trufflehog results by @Wauplin in #2837
 - Add datasets and diffusers to prerelease tests by @Wauplin in #2834
 - Always proxy hf-inference calls + update tests by @Wauplin in #2798
 - Skip list_models(inference=...) tests in CI by @Wauplin in #2852
 - Deterministic test_export_folder (dduf tests) by @Wauplin in #2854
 - [cleanup] Unique constants in tests + env variable for inference tests by @Wauplin in #2855
 - feat: Adds a new environment variable HF_HUB_USER_AGENT_ORIGIN to set origin of calls in user-agent by @Hugoch in #2869
 
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @narugo1992
- dev(narugo): add resume for ranged headers of http_get function (#2823)
 
 - @Aktsvigun
- Nebius AI Studio provider added (#2866)
 
 
v0.28.1: FIX path in `HF_ENDPOINT` discarded
Release 0.28.0 introduced a bug making it impossible to set the HF_ENDPOINT env variable to a value containing a subpath. This has been fixed in #2807.
Full Changelog: v0.28.0...v0.28.1
[v0.28.0]: Third-party Inference Providers on the Hub & multiple quality of life improvements and bug fixes
⚡️Unified Inference Across Multiple Inference Providers
The InferenceClient now supports third-party providers, offering a unified interface to run inference across multiple services while leveraging models from the Hugging Face Hub. This update enables developers to:
- 🌐 Switch providers seamlessly - Transition between inference providers with a single interface.
 - 🔗 Unified model IDs - Always reference Hugging Face Hub model IDs, even when using external providers.
 - 🔑 Simplified billing and access management - You can use your Hugging Face Token for routing to third-party providers (billed through your HF account).
 
A list of supported third-party providers can be found here.
Example of text-to-image inference with Replicate:
>>> from huggingface_hub import InferenceClient
>>> replicate_client = InferenceClient(
...    provider="replicate",
...    api_key="my_replicate_api_key", # Using your personal Replicate key
)
>>> image = replicate_client.text_to_image(
...    "A cyberpunk cat hacking neural networks",
...    model="black-forest-labs/FLUX.1-schnell"
)
>>> image.save("cybercat.png")

Another example of chat completion with Together AI:
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> client.chat_completion(
...     model="deepseek-ai/DeepSeek-R1",
...     messages=[{"role": "user", "content": "How many r's are there in strawberry?"}],
... )

When using external providers, you can choose between two access modes: either use the provider's native API key, as shown in the examples above, or route calls through Hugging Face infrastructure (billed to your HF account):
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...    provider="fal-ai",
...    token="hf_****"  # Your Hugging Face token
)
🔜 New providers/models/tasks will be added iteratively in the future.
👉 You can find a list of supported tasks per provider and more details here.
- [InferenceClient] Add third-party providers support by @hanouticelina in #2757
 - Unified prepare_request method + class-based providers by @Wauplin in #2777
 - [InferenceClient] Support proxy calls for 3rd party providers by @hanouticelina in #2781
 - [InferenceClient] Add text-to-video task and update supported tasks and models by @hanouticelina in #2786
 - Add type hints for providers by @Wauplin in #2788
 - [InferenceClient] Update inference documentation by @hanouticelina in #2776
 - Add text-to-video to supported tasks by @Wauplin in #2790
 
✨ HfApi
The following change aligns the client with server-side updates by adding new repositories properties: usedStorage and resourceGroup.
[HfApi] update list of repository properties following server side updates by @hanouticelina in #2728
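These properties can be requested via the existing expand parameter; a minimal sketch (the repo id is a placeholder):

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.model_info("username/my-cool-model", expand=["usedStorage", "resourceGroup"])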
Extends empty commit prevention to file copy operations, preserving clean version histories when no changes are made.
[HfApi] prevent empty commits when copying files by @hanouticelina in #2730
🌐 📚 Documentation
Thanks to @WizKnight, the Hindi translation is much better!
Improved Hindi Translation in Documentation📝 by @WizKnight in #2697
💔 Breaking changes
The like endpoint has been removed to prevent misuse. You can still remove existing likes using the unlike endpoint (see the sketch below).
[HfApi] remove like endpoint by @hanouticelina in #2739
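A minimal sketch of removing an existing like (the repo id is a placeholder):

>>> from huggingface_hub import HfApi
>>> HfApi().unlike("username/my-cool-model")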
🛠️ Small fixes and maintenance
😌 QoL improvements
- [InferenceClient] flag chat_completion()'s logit_bias as UNUSED by @hanouticelina in #2724
 - Remove unused parameters from method's docstring by @hanouticelina in #2738
 - Add optional rejection_reason when rejecting a user access token by @Wauplin in #2758 (see the sketch after this list)
 - Add py.typed to be compliant with PEP-561 again by @hanouticelina in #2752
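A minimal sketch of the new rejection_reason option, assuming a gated repo with a pending access request (repo and user names are placeholders):

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.reject_access_request(
...     "username/my-gated-model",
...     user="some-user",
...     rejection_reason="Please request access with your work email.",
... )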
🐛 Bug and typo fixes
- Fix super_squash_history revision not urlencoded by @Wauplin in #2795
 - Replace model repo with repo in docstrings by @albertvillanova in #2715
 - [BUG] Fix 404 NOT FOUND issue caused by endpoint tail slash by @Mingqi2 in #2721
 - Fix typing.get_type_hints call on a ModelHubMixin by @aliberts in #2729
 - fix typo by @qwertyforce in #2762
 - rejection reason docstring by @Wauplin in #2764
 - Add timeout to WeakFileLock by @Wauplin in #2751
 - Fix CardData.get() to respect default values when None by @hanouticelina in #2770
 - Fix RepoCard.load when passing a repo_id that is also a dir path by @Wauplin in #2771
 - Fix filename too long when downloading to local folder by @Wauplin in #2789
 
🏗️ internal
- Migrate to new Ruff "2025 style guide" formatter by @hanouticelina in #2749
 - remove org tokens tests by @hanouticelina in #2759
 - Fix RepoCard test on Windows by @hanouticelina in #2774
 - [Bot] Update inference types by @HuggingFaceInfra in #2712
 
[v0.27.1]: Fix `typing.get_type_hints` call on a `ModelHubMixin`
Full Changelog: v0.27.0...v0.27.1
See #2729 for more details.