
Releases: huggingface/huggingface_hub

v0.30.2: Fix text-generation task in InferenceClient

08 Apr 08:34
9255af9


This patch release fixes some InferenceClient-related bugs.

Full Changelog: v0.30.1...v0.30.2

v0.30.1: fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction'

31 Mar 15:03
c9f9ad2


Xet is here! (+ many cool Inference-related things!)

28 Mar 14:44
fefa7cc


🚀 Ready. Xet. Go!

This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates at the file level, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.

You can start using Xet today by installing the optional dependency:

pip install -U huggingface_hub[hf_xet]

With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
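For example, once hf_xet is installed, the existing download APIs work unchanged (the repo id and filename below are illustrative):

from huggingface_hub import hf_hub_download

# With hf_xet installed, files from Xet-enabled repos are fetched through the
# Xet protocol under the hood; the Python API surface is identical.
path = hf_hub_download(
    repo_id="username/my-xet-enabled-model",  # illustrative repo id
    filename="model.safetensors",             # illustrative filename
)
print(path)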

Blog post: Xet on the Hub
Docs: Storage backends → Xet

Tip

Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so hf_xet uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!

This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!

⚡ Enhanced InferenceClient

The InferenceClient has received significant updates and improvements in this release, making it more robust and easier to work with.

We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.
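As a quick sketch, the new providers plug into the familiar InferenceClient interface (the model id below is illustrative, not an officially documented pairing):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")

completion = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)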

Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)

It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budgets and set limits at the team level. The organization must be subscribed to Enterprise Hub.

from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")

Handling long-running inference tasks just got easier! To prevent request timeouts, we’ve introduced asynchronous calls for text-to-video inference. We expect more providers to adopt the same structure soon, ensuring better robustness and developer experience.

Several miscellaneous improvements were also made.

✨ New Features and Improvements

This release also includes several other notable features and improvements.

It's now possible to pass a wildcard path to the upload command instead of using the --include=... option:

huggingface-cli upload my-cool-model *.safetensors

Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.

from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)
  • Support deploy Inference Endpoint from model catalog by @Wauplin in #2892

The ModelHubMixin got two small updates:

  • authors can provide a paper URL that will be added to all model cards pushed by the library.
  • dataclasses are now supported for any init arg (previously, only config was supported); see the sketch below.
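A minimal sketch combining both updates (the paper_url class kwarg follows the mixin's existing class-kwarg pattern, e.g. repo_url/docs_url, and should be treated as an assumption here):

from dataclasses import dataclass

from huggingface_hub import ModelHubMixin

@dataclass
class TrainingConfig:
    learning_rate: float = 1e-3
    num_epochs: int = 10

class MyModel(
    ModelHubMixin,
    paper_url="https://arxiv.org/abs/0000.00000",  # assumed kwarg name, illustrative URL
):
    # `training_config` is a dataclass init arg: it is now (de)serialized
    # to/from config.json automatically, not just an arg named `config`.
    def __init__(self, training_config: TrainingConfig):
        self.training_config = training_config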

You can now sort by name, size, last updated, and last used when using the delete-cache command:

huggingface-cli delete-cache --sort=size
  • feat: add --sort arg to delete-cache to sort by size by @AlpinDale in #2815

Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter the files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)

Warning

This is a power-user tool, to be used carefully. Deleting LFS files from a repo is an irreversible action.

💔 Breaking Changes

labels has been removed from the InferenceClient.zero_shot_classification and InferenceClient.zero_shot_image_classification tasks in favor of candidate_labels. A proper deprecation warning had been in place beforehand.
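For reference, a migrated call looks like this (text and labels are illustrative):

from huggingface_hub import InferenceClient

client = InferenceClient()
# `candidate_labels` replaces the removed `labels` argument
result = client.zero_shot_classification(
    "I really enjoyed this movie!",
    candidate_labels=["positive", "negative", "neutral"],
)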

🛠️ Small Fixes and Maintenance

🐛 Bug and Typo Fixes

🏗️ Internal

Thanks to the work previously introduced by the diffusers team, we've published a GitHub Action that runs code-style tooling on demand on pull requests, making life easier for contributors and reviewers.

Other minor updates:

Significant community contributions

The following contributors have made significant changes to the library over the last release:

[v0.29.3]: Adding 2 new Inference Providers: Cerebras and Cohere 🔥

11 Mar 10:53
5ea7077


Added client-side support for the Cerebras and Cohere providers ahead of their upcoming official launch on the Hub.

Cerebras: #2901.
Cohere: #2888.

Full Changelog: v0.29.2...v0.29.3

[v0.29.2] Fix payload model name when model id is a URL & Restore `sys.stdout` in `notebook_login()` after error

05 Mar 13:24


This patch release includes two fixes:

  • Fix payload model name when model id is a URL #2911
  • Fix: Restore sys.stdout in notebook_login after error #2896

Full Changelog: v0.29.1...v0.29.2

[v0.29.1] Fix revision URL encoding in `upload_large_folder` & Fix endpoint update state handling in `InferenceEndpoint.wait()`

20 Feb 09:34


This patch release includes two fixes:

  • Fix revision bug in _upload_large_folder.py #2879
  • bug fix in inference_endpoint wait function for proper waiting on update #2867

Full Changelog: v0.29.0...v0.29.1

[v0.29.0]: Introducing 4 new Inference Providers: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita 🔥

18 Feb 17:46
1589113


We’re thrilled to announce the addition of four more outstanding serverless Inference Providers to the Hugging Face Hub: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita. These providers join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. This release adds official support for these four providers, making it super easy to use a wide variety of models with your preferred providers.

See our announcement blog for more details: https://huggingface.co/blog/new-inference-providers.
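All four plug into the same InferenceClient interface; a minimal sketch (the model id is illustrative, and the alternative provider ids in the comment are assumptions based on the Hub's naming):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")  # assumed alternatives: "fireworks-ai", "hyperbolic", "nebius"
completion = client.chat_completion(
    model="deepseek-ai/DeepSeek-R1",  # illustrative model id
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)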

Note that Black Forest Labs is not yet supported on the Hub. Once we announce it, huggingface_hub 0.29.0 will automatically support it.

⚡ Other Inference updates

💔 Breaking changes

None.

🛠️ Small fixes and maintenance

😌 QoL improvements

  • dev(narugo): add resume for ranged headers of http_get function by @narugo1992 in #2823

🐛 Bug and typo fixes

🏗️ internal

  • another test by @Wauplin (direct commit on main)
  • feat(ci): ignore unverified trufflehog results by @Wauplin in #2837
  • Add datasets and diffusers to prerelease tests by @Wauplin in #2834
  • Always proxy hf-inference calls + update tests by @Wauplin in #2798
  • Skip list_models(inference=...) tests in CI by @Wauplin in #2852
  • Deterministic test_export_folder (dduf tests) by @Wauplin in #2854
  • [cleanup] Unique constants in tests + env variable for inference tests by @Wauplin in #2855
  • feat: Adds a new environment variable HF_HUB_USER_AGENT_ORIGIN to set origin of calls in user-agent by @Hugoch in #2869

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.28.1: FIX path in `HF_ENDPOINT` discarded

30 Jan 13:46
dea8d04


Release 0.28.0 introduced a bug that made it impossible to set the HF_ENDPOINT env variable to a value containing a subpath. This has been fixed in #2807.

Full Changelog: v0.28.0...v0.28.1

[v0.28.0]: Third-party Inference Providers on the Hub & multiple quality of life improvements and bug fixes

28 Jan 11:26


⚡️Unified Inference Across Multiple Inference Providers


The InferenceClient now supports third-party providers, offering a unified interface to run inference across multiple services while leveraging models from the Hugging Face Hub. This update enables developers to:

  • 🌐 Switch providers seamlessly - Transition between inference providers with a single interface.
  • 🔗 Unified model IDs - Always reference Hugging Face Hub model IDs, even when using external providers.
  • 🔑 Simplified billing and access management - You can use your Hugging Face Token for routing to third-party providers (billed through your HF account).

A list of supported third-party providers can be found here.

Example of text-to-image inference with Replicate:

>>> from huggingface_hub import InferenceClient

>>> replicate_client = InferenceClient(
...     provider="replicate",
...     api_key="my_replicate_api_key",  # Using your personal Replicate key
... )
>>> image = replicate_client.text_to_image(
...     "A cyberpunk cat hacking neural networks",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("cybercat.png")

Another example of chat completion with Together AI:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> client.chat_completion(
...     model="deepseek-ai/DeepSeek-R1",
...     messages=[{"role": "user", "content": "How many r's are there in strawberry?"}],
... )

When using external providers, you can choose between two access modes: either use the provider's native API key, as shown in the examples above, or route calls through Hugging Face infrastructure (billed to your HF account):

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",
...     token="hf_****",  # Your Hugging Face token
... )

⚠️ Parameters availability may vary between providers - check provider documentation.
🔜 New providers/models/tasks will be added iteratively in the future.
👉 You can find a list of supported tasks per provider and more details here.

✨ HfApi

The following change aligns the client with server-side updates by adding new repository properties: usedStorage and resourceGroup.

[HfApi] update list of repository properties following server side updates by @hanouticelina in #2728
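A short sketch of reading the new properties through the expand parameter (the snake_case attribute names are assumptions mirroring the server-side camelCase ones):

from huggingface_hub import HfApi

api = HfApi()
# Explicitly request the new server-side properties
info = api.model_info("username/my-cool-model", expand=["usedStorage", "resourceGroup"])
print(info.used_storage)  # assumed attribute name for `usedStorage`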

Extends empty commit prevention to file copy operations, preserving clean version histories when no changes are made.

[HfApi] prevent empty commits when copying files by @hanouticelina in #2730

🌐 📚 Documentation

Thanks to @WizKnight, the Hindi translation is much better!

Improved Hindi Translation in Documentation📝 by @WizKnight in #2697

💔 Breaking changes

The like endpoint has been removed to prevent misuse. You can still remove existing likes using the unlike endpoint.

[HfApi] remove like endpoint by @hanouticelina in #2739
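Removing an existing like programmatically is still a one-liner:

from huggingface_hub import HfApi

api = HfApi()
# Liking via the client is no longer possible; unliking still is
api.unlike("username/my-cool-model")  # illustrative repo id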

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

[v0.27.1]: Fix `typing.get_type_hints` call on a `ModelHubMixin`

06 Jan 12:07
