Releases: huggingface/huggingface_hub
v1.1.0: Faster Downloads, new CLI features and more!
🚀 Optimized Download Experience
⚡ This release significantly improves the file download experience by making it faster and cleaning up the terminal output.
snapshot_download is now always multi-threaded, leading to significant performance gains. We removed a previous limitation, since Xet's internal resource management ensures downloads can be parallelized safely without resource contention. A sample benchmark showed a substantial speed-up!
Additionally, the output of snapshot_download and the hf download CLI is now much less verbose. Per-file logs are hidden by default, and all individual progress bars are combined into a single one, resulting in much cleaner output.
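As a quick illustration, here is a minimal snippet exercising the improved path through the public snapshot_download API (the repo id is just an example):
from huggingface_hub import snapshot_download

# In v1.1.0, files are always fetched in parallel and progress is reported
# in one combined bar instead of one bar per file.
local_dir = snapshot_download(repo_id="gpt2")
print(local_dir)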
- Multi-threaded snapshot download by @Wauplin in #3522
- Compact output in `snapshot_download` and `hf download` by @Wauplin in #3523
Inference Providers
🆕 WaveSpeedAI is now an official Inference Provider on Hugging Face! 🎉 WaveSpeedAI provides fast, scalable, and cost-effective model serving for creative AI applications, supporting text-to-image, image-to-image, text-to-video, and image-to-video tasks. 🎨
import os
from huggingface_hub import InferenceClient
client = InferenceClient(
provider="wavespeed",
api_key=os.environ["HF_TOKEN"],
)
video = client.text_to_video(
"A cat riding a bike",
model="Wan-AI/Wan2.2-TI2V-5B",
)

More snippet examples in the provider documentation 👉 here.
We also added support for image-segmentation task for fal, enabling state-of-the-art background removal with RMBG v2.0.
import os
from huggingface_hub import InferenceClient
client = InferenceClient(
provider="fal-ai",
api_key=os.environ["HF_TOKEN"],
)
output = client.image_segmentation("cats.jpg", model="briaai/RMBG-2.0")

- [inference provider] Add wavespeed.ai as an inference provider by @arabot777 in #3474
- [Inference Providers] implement `image-segmentation` for fal by @hanouticelina in #3521
🦾 CLI continues to get even better!
Following the complete revamp of the Hugging Face CLI in v1.0, this release builds on that foundation by adding powerful new features and improving accessibility.
New hf PyPI Package
To make the CLI even easier to access, we've published a new, minimal PyPI package: hf. It installs the hf CLI tool and is perfect for quick, isolated execution with modern tools like uvx.
# Run the CLI without installing it
> uvx hf auth whoami

Note: running `import hf` in a Python script will correctly raise an ImportError.
A big thank you to @thorwhalen for generously transferring the hf package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗
Manage Inference Endpoints
A new command group, hf endpoints, has been added to deploy and manage your Inference Endpoints directly from the terminal.
This provides "one-liners" for deploying, deleting, updating, and monitoring endpoints. The CLI offers two clear paths for deployment: hf endpoints deploy for standard Hub models and hf endpoints catalog deploy for optimized Model Catalog configurations.
> hf endpoints --help
Usage: hf endpoints [OPTIONS] COMMAND [ARGS]...
Manage Hugging Face Inference Endpoints.
Options:
--help Show this message and exit.
Commands:
catalog Interact with the Inference Endpoints catalog.
delete Delete an Inference Endpoint permanently.
deploy Deploy an Inference Endpoint from a Hub repository.
describe Get information about an existing endpoint.
ls Lists all Inference Endpoints for the given namespace.
pause Pause an Inference Endpoint.
resume Resume an Inference Endpoint.
scale-to-zero Scale an Inference Endpoint to zero.
update Update an existing endpoint.

- [CLI] Add Inference Endpoints Commands by @hanouticelina in #3428
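If you prefer to script deployments in Python, the same operations are exposed through HfApi helpers. Here is a hedged sketch using create_inference_endpoint; all names and hardware values below are illustrative and must match what your account actually offers:
from huggingface_hub import create_inference_endpoint

# Illustrative values: endpoint name, vendor, region and instance types
# depend on your Inference Endpoints account and quota.
endpoint = create_inference_endpoint(
    "my-endpoint",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x2",
    instance_type="intel-icl",
)
endpoint.wait()  # block until the endpoint is running
print(endpoint.url)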
Verify Cache Integrity
A new command, hf cache verify, has been added to check your cached files against their checksums on the Hub. This is a great tool to ensure your local cache is not corrupted and is in sync with the remote repository.
> hf cache verify --help
Usage: hf cache verify [OPTIONS] REPO_ID
Verify checksums for a single repo revision from cache or a local directory.
Examples:
- Verify main revision in cache: `hf cache verify gpt2`
- Verify specific revision: `hf cache verify gpt2 --revision refs/pr/1`
- Verify dataset: `hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset`
- Verify local dir: `hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo`
Arguments:
REPO_ID The ID of the repo (e.g. `username/repo-name`). [required]
Options:
--repo-type [model|dataset|space]
The type of repository (model, dataset, or
space). [default: model]
--revision TEXT Git revision id which can be a branch name,
a tag, or a commit hash.
--cache-dir TEXT Cache directory to use when verifying files
from cache (defaults to Hugging Face cache).
--local-dir TEXT If set, verify files under this directory
instead of the cache.
--fail-on-missing-files Fail if some files exist on the remote but
are missing locally.
--fail-on-extra-files Fail if some files exist locally but are not
present on the remote revision.
--token TEXT A User Access Token generated from
https://huggingface.co/settings/tokens.
--help Show this message and exit.

- [CLI] Add `hf cache verify` by @hanouticelina in #3461
Cache Sorting and Limiting
Managing your local cache is now easier. The hf cache ls command has been enhanced with two new options:
- `--sort`: Sort your cache by `accessed`, `modified`, `name`, or `size`. You can also specify the order (e.g., `modified:asc` to find the oldest files).
- `--limit`: Get just the top N results after sorting (e.g., `--limit 10`).
# List top 10 most recently accessed repos
> hf cache ls --sort accessed --limit 10
# Find the 5 largest repos you haven't used in over a year
> hf cache ls --filter "accessed>1y" --sort size --limit 5

Finally, we've patched the CLI installer script to fix a bug for zsh users. The installer now works correctly across all common shells.
- Use hf installer with bash by @Wauplin in #3498
- make installer work for zsh by @hanouticelina in #3513
🔧 Other
We've fixed a bug in HfFileSystem where the instance cache would break when using multiprocessing with the "fork" start method.
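For context, the pattern below is the kind of workflow that could previously fail and now behaves correctly. A minimal sketch, assuming a POSIX system (the "fork" start method is unavailable on Windows, and the repo path is just an example):
import multiprocessing as mp

from huggingface_hub import HfFileSystem

fs = HfFileSystem()

def list_files(path):
    # Forked workers inherit the parent's HfFileSystem instance cache,
    # which used to break with the "fork" start method.
    return fs.ls(path, detail=False)

if __name__ == "__main__":
    with mp.get_context("fork").Pool(2) as pool:  # POSIX-only
        print(pool.map(list_files, ["datasets/nyu-mll/glue"] * 2))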
🌍 Documentation
Thanks to @BastienGimbert for translating the README to French 🇫🇷 🤗
- i18n: add French README translation by @BastienGimbert in #3490
And thanks to @didier-durand for fixing multiple typos across the library! 🤗
- [Doc]: fix various typos in different files by @didier-durand in #3499
- [Doc]: fix various typos in different files by @didier-durand in #3509
- [Doc]: fix various typos in different files by @didier-durand in #3514
- [Doc]: fix various typos in different files by @didier-durand in #3517
- [Doc]: fix various typos in different files by @didier-durand in #3497
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
🏗️ internal
- Remove aiohttp dependency by @Wauplin in #3488
- Prepare for 1.1.0 by @Wauplin in #3489
- Fix type annotations in inference codegen by @Wauplin in #3496
- Add CI + official support for Python 3.14 by @Wauplin in #3483
- [Internal] Fix quality issue generated from `update-inference-types` workflow by @hanouticelina in #3516
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @arabot777
- [inference provider] Add wavespeed.ai as an inference provider (#3474)
- @BastienGimbert
- i18n: add French README translation (#3490)
- @didier-durand
  - [Doc]: fix various typos in different files (#3497, #3499, #3509, #3514, #3517)
[v1.0.1] Remove `aiohttp` from extra dependencies
In the huggingface_hub v1.0 release, we removed our dependency on aiohttp in favor of httpx, but forgot to remove it from the huggingface_hub[inference] extra dependencies in setup.py. This patch release removes it, which removes the inference extra as well.
The internal method _import_aiohttp was unused and has also been removed.
Full Changelog: v1.0.0...v1.0.1
v1.0: Building for the Next Decade
Check out our blog post announcement!
🚀 HTTPx migration
The huggingface_hub library now uses httpx instead of requests for HTTP requests. This change was made to improve performance and to support both synchronous and asynchronous requests the same way. We therefore dropped both requests and aiohttp dependencies.
The get_session and hf_raise_for_status utilities still exist: get_session now returns an httpx.Client, and hf_raise_for_status processes an httpx.Response object. An additional get_async_client utility has been added for async logic.
The exhaustive list of breaking changes can be found here.
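As a rough sketch of the updated pattern (assuming these utilities are importable from huggingface_hub.utils, as in previous versions):
from huggingface_hub.utils import get_session, hf_raise_for_status

# get_session() now returns an httpx.Client instead of a requests.Session.
session = get_session()
response = session.get("https://huggingface.co/api/models/gpt2")
hf_raise_for_status(response)  # now processes an httpx.Response
print(response.json()["id"])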
- [1.0] Httpx migration by @Wauplin in #3328
- Fix async client hook by @Wauplin in #3433
- Fix `hf_raise_for_status` on async stream + tests by @Wauplin in #3442
- [v1.0] Update "HTTP backend" docs + `git_vs_http` guide by @Wauplin in #3357
🪄 CLI revamp
huggingface_hub 1.0 marks a complete transformation of our command-line experience. We've reimagined the CLI from the ground up, creating a tool that feels native to modern ML workflows while maintaining the simplicity the community loves.
One CLI to Rule Them All: Goodbye huggingface-cli
This release marks the end of an era with the complete removal of the huggingface-cli command. The new hf command (introduced in v0.34.0) takes its place with a cleaner, more intuitive design that follows a logical "resource-action" pattern. This breaking change simplifies the user experience and aligns with modern CLI conventions - no more typing those extra 11 characters!
hf CLI Revamp
The new CLI introduces a comprehensive set of commands for repository and file management that expose powerful HfApi functionality directly from the terminal:
> hf repo --help
Usage: hf repo [OPTIONS] COMMAND [ARGS]...
Manage repos on the Hub.
Options:
--help Show this message and exit.
Commands:
branch Manage branches for a repo on the Hub.
create Create a new repo on the Hub.
delete Delete a repo from the Hub.
move Move a repository from a namespace to another namespace.
settings Update the settings of a repository.
tag Manage tags for a repo on the Hub.

A dry run mode has been added to hf download, which lets you preview exactly what will be downloaded before committing to the transfer, showing file sizes, what's already cached, and total bandwidth requirements in a clean table format:
> hf download gpt2 --dry-run
[dry-run] Fetching 26 files: 100%|██████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 50.66it/s]
[dry-run] Will download 26 files (out of 26) totalling 5.6G.
File Bytes to download
--------------------------------- -----------------
.gitattributes 445.0
64-8bits.tflite 125.2M
64-fp16.tflite 248.3M
64.tflite 495.8M
README.md 8.1K
config.json 665.0
flax_model.msgpack 497.8M
generation_config.json 124.0
merges.txt 456.3K
model.safetensors 548.1M
onnx/config.json 879.0
onnx/decoder_model.onnx 653.7M
onnx/decoder_model_merged.onnx 655.2M
...

The CLI now provides intelligent shell auto-completion that suggests available commands, subcommands, options, and arguments as you type, making command discovery effortless and reducing the need to constantly check --help.
The CLI now also checks for updates in the background, ensuring you never miss important improvements or security fixes. Once every 24 hours, the CLI silently checks PyPI for newer versions and notifies you when an update is available - with personalized upgrade instructions based on your installation method.
The cache management CLI has been completely revamped with the removal of `hf cache scan` and `hf cache delete` in favor of Docker-inspired commands that are more intuitive. The new hf cache ls provides rich filtering capabilities, hf cache rm enables targeted deletion, and hf cache prune cleans up detached revisions.
# List cached repos
>>> hf cache ls
ID SIZE LAST_ACCESSED LAST_MODIFIED REFS
--------------------------- -------- ------------- ------------- -----------
dataset/nyu-mll/glue 157.4M 2 days ago 2 days ago main script
model/LiquidAI/LFM2-VL-1.6B 3.2G 4 days ago 4 days ago main
model/microsoft/UserLM-8b 32.1G 4 days ago 4 days ago main
Found 3 repo(s) for a total of 5 revision(s) and 35.5G on disk.
# List cached repos with filters
>>> hf cache ls --filter "type=model" --filter "size>3G" --filter "accessed>7d"
# Output in different format
>>> hf cache ls --format json
>>> hf cache ls --revisions # Replaces the old --verbose flag
# Cache removal
>>> hf cache rm model/meta-llama/Llama-2-70b-hf
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q) # Remove old items
# Clean up detached revisions
hf cache prune # Removes all unreferenced revisions

Under the hood, this transformation is powered by Typer, significantly reducing boilerplate and making the CLI easier to maintain and extend with new features.
- Refactor CLI implementation using Typer by @hanouticelina in #3365
- Add new HF commands by @hanouticelina in #3384
- Document new HF commands by @hanouticelina in #3393
- Implement dry run mode in download CLI by @Wauplin in #3407
- [hf CLI] check for updates and notify user by @Wauplin in #3418
- Print version only in CLI by @Wauplin (direct commit on v1.0-release)
- Disable rich in CLI by @Wauplin in #3427
- [CLI] Revamp `hf cache` by @hanouticelina in #3439
- [CLI] Update cache CLI docs and migration guide by @hanouticelina in #3450
CLI Installation: Zero-Friction Setup
The new cross-platform installers simplify CLI installation by creating isolated sandboxed environments without interfering with your existing Python setup or project dependencies. The installers work seamlessly across macOS, Linux, and Windows, automatically handling dependencies and PATH configuration.
# On macOS and Linux
>>> curl -LsSf https://hf.co/cli/install.sh | sh
# On Windows
>>> powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"

Finally, the [cli] extra has been removed: the CLI now ships with the core huggingface_hub package.
- Add cross-platform CLI Installers by @hanouticelina in #3378
- update installers paths by @hanouticelina in #3400
- [CLI] Remove `[cli]` extra by @hanouticelina in #3451
💔 Breaking changes
The v1.0 release is a major milestone for the huggingface_hub library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use. A migration guide has been written to reduce friction as much as possible: https://huggingface.co/docs/huggingface_hub/concepts/migration.
- [v1.0] feat: add migration guide for v1.0 by @google-labs-jules[bot] in #3360
We'll list all breaking changes below:
- Minimum Python version is now 3.9 (instead of 3.8).
- HTTP backend migrated from `requests` to `httpx`. Expect some breaking changes on advanced features and errors. The exhaustive list can be found here.
- The deprecated `huggingface-cli` has been removed; `hf` (introduced in v0.34) replaces it with a clearer resource-action CLI.
- The `[cli]` extra has been removed: the CLI now ships with the core `huggingface_hub` package.
  - [CLI] Remove `[cli]` extra by @hanouticelina in #3451
- Long-deprecated classes like `HfFolder`, `InferenceAPI`, and `Repository` have been removed.
- `constants.hf_cache_home` has been removed. Use `constants.HF_HOME` instead.
- `use_auth_token` is not supported anymore. Use `token` instead. Previously, using `use_auth_token` automatically redirected to `token` with a warning.
- Removed `get_token_permission`. It became useless when fine-grained tokens arrived.
- Removed `update_repo_visibility`. Use `update_repo_settings` instead.
- Removed `is_write_action` in all `build_hf_headers` methods. Not relevant since fine-grained tokens arrived.
- Removed the `write_permission` arg from login methods. Not relevant anymore.
- Renamed `login(new_session)` to `login(skip_if_logged_in)` in login methods. Not announced, but hopefully very little friction. Only some notebooks to update on the Hub (will do it once released).
- Removed the `resume_download`/`force_filename`/`local_dir_use_symlinks` parameters from `hf_hub_download`/`snapshot_download`...
[v0.36.0] Last Stop Before 1.0
This is the final minor release before v1.0.0. This release focuses on performance optimizations to HfFileSystem and adds a new get_organization_overview API endpoint.
We'll continue to release security patches as needed, but v0.37 will not happen. The next release will be 1.0.0. We’re also deeply grateful to the entire Hugging Face community for their feedback, bug reports, and suggestions that have shaped this library.
Full Changelog: v0.35.0...v0.36.0
📁 HfFileSystem
Major optimizations have been implemented in HfFileSystem:
- Cache is kept when pickling an `HfFileSystem` instance. This is particularly useful when streaming datasets in a distributed training environment: each worker won't have to rebuild its cache anymore.
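A minimal sketch of what this enables (the dataset path is just an example):
import pickle

from huggingface_hub import HfFileSystem

fs = HfFileSystem()
fs.ls("datasets/nyu-mll/glue", detail=False)  # populates the instance cache

# Pickling, e.g. when fsspec instances are shipped to dataloader workers,
# no longer drops the cache: the restored instance reuses it.
fs2 = pickle.loads(pickle.dumps(fs))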
Listing files with .glob() has been greatly optimized:
from huggingface_hub import HfFileSystem
HfFileSystem().glob("datasets/HuggingFaceFW/fineweb-edu/data/*/*")
# Before: ~100 /tree calls (one per subdirectory)
# Now: 1 /tree call

Minor updates:
- add block_size in init by @lhoestq in #3425
- hffs minor fix by @lhoestq in #3449
- HTTP backoff: Retry on ChunkedEncodingError by @lhoestq in #3437
🌍 HfApi
It is now possible to get high-level information about an organization, just as is already possible for users:
>>> from huggingface_hub import get_organization_overview
>>> get_organization_overview("huggingface")
Organization(
avatar_url='https://cdn-avatars.huggingface.co/v1/production/uploads/1583856921041-5dd96eb166059660ed1ee413.png',
name='huggingface',
fullname='Hugging Face',
details='The AI community building the future.',
is_verified=True,
is_following=True,
num_users=198,
num_models=164, num_spaces=96,
num_datasets=1043,
num_followers=64814
)

- Add client support for the organization overview endpoint by @BastienGimbert in #3436
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- Add quotes for better shell compatibility by @aopstudio in #3369
- update the `sentence_similarity` docstring by @tolgaakar in #3374
- Do not retry on 429 (only on 5xx) by @Wauplin in #3377
- Use git xet transfer to check if xet is enabled by @hanouticelina in #3381
- Replace pkgx install instruction with uv by @gary149 in #3420
- The error message as previously displayed... by @goldnode in #3405
- Use all tools unless explicit allowed_tools by @Mithil467 in #3397
- [type validation] skip unresolved forward ref by @zucchini-nlp in #3376
- document job stage possible values by @hanouticelina in #3431
- update token parameter docstring by @hanouticelina in #3447
🏗️ internal
- bump to 0.36.0.dev0 by @Wauplin (direct commit on main)
- [Workflow] security fix by @glegendre01 in #3383
- migrate tip blocks by @hanouticelina in #3392
- [Internal] Fix `ty` quality by @hanouticelina in #3441
- backward compatible cli tracking (v0.x) by @Wauplin in #3460
Community contributions
The following contributors have made changes to the library over the last release. Thank you!
- @aopstudio
  - Add quotes for better shell compatibility (#3369)
- @tolgaakar
  - update the `sentence_similarity` docstring (#3374) (#3375)
- @Mithil467
  - Use all tools unless explicit allowed_tools (#3397)
- @goldnode
  - The error message as previously displayed... (#3405)
- @BastienGimbert
  - Add client support for the organization overview endpoint (#3436)
[v0.35.3] Fix `image-to-image` target size parameter mapping & tiny agents allow tools list bug
This release includes two bug fixes:
- [Inference] Fix target size mapping for fal-ai's image-to-image in #3399 by @hanouticelina flagged by @iam-tsr
- [Tiny-Agents] Use all tools unless allowed_tools is set explicitly in #3397 by @Mithil467
Full Changelog: v0.35.2...v0.35.3
[v0.35.2] Welcoming Z.ai as Inference Providers!
Full Changelog: v0.35.1...v0.35.2
New inference provider! 🔥
Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.
from huggingface_hub import InferenceClient
client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
model="zai-org/GLM-4.5",
messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print("\nThinking:")
print(completion.choices[0].message.reasoning_content)
print("\nOutput:")
print(completion.choices[0].message.content)

Thinking:
Okay, the user is asking about the capital of France. That's a pretty straightforward geography question.
Hmm, I wonder if this is just a casual inquiry or if they need it for something specific like homework or travel planning. The question is very basic though, so probably just general knowledge.
Paris is definitely the correct answer here. It's been the capital for centuries, since the Capetian dynasty made it the seat of power. Should I mention any historical context? Nah, the user didn't ask for details - just the capital.
I recall Paris is also France's largest city and major cultural hub. But again, extra info might be overkill unless they follow up. Better keep it simple and accurate.
The answer should be clear and direct: "Paris". No need to overcomplicate a simple fact. If they want more, they'll ask.
Output:
The capital of France is **Paris**.
Paris has been the political and cultural center of France for centuries, serving as the seat of government, the residence of the President (Élysée Palace), and home to iconic landmarks like the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. It is also France's largest city and a global hub for art, fashion, gastronomy, and history.
Misc:
[v0.35.1] Do not retry on 429 and skip forward ref in strict dataclass
Full Changelog: v0.35.0...v0.35.1
[v0.35.0] Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!
Scheduled Jobs
In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this release, we are announcing Scheduled Jobs to run Jobs on a regular basis. Think "cron jobs running on GPU".
This comes with a fully-fledged CLI:
hf jobs scheduled run @hourly ubuntu echo hello world
hf jobs scheduled run "0 * * * *" ubuntu echo hello world
hf jobs scheduled ps -a
hf jobs scheduled inspect <id>
hf jobs scheduled delete <id>
hf jobs scheduled suspend <id>
hf jobs scheduled resume <id>
hf jobs scheduled uv run @weekly train.py
It is now possible to run a command with uv run:
hf jobs uv run --with lighteval -s HF_TOKEN lighteval endpoint inference-providers "model_name=openai/gpt-oss-20b,provider=groq" "lighteval|gsm8k|0|0"
Some other improvements have been added to the existing Jobs API for a better UX.
- [Jobs] Use current or stored token in a Job secrets by @lhoestq in #3272
- update uv image by @lhoestq in #3270
And finally, Jobs documentation has been updated with new examples (and some fixes):
- Fix bash history expansion in hf jobs example by @nyuuzyou in #3277
- Add timeout info to Jobs guide docs by @davanstrien in #3281
- Update jobs.md by @tre3x in #3297
- docs: Add link to uv-scripts organization in Jobs guide by @davanstrien in #3326
- docs: Add Docker images section for UV scripts in Jobs guide by @davanstrien in #3327
- docs: add link to TRL jobs training documentation by @davanstrien in #3330
CLI updates
In addition to the Scheduled Jobs, some improvements have been added to the hf CLI.
- [CLI] print help if no command provided by @Wauplin in #3262
- update hf auth whoami output by @hanouticelina in #3274
- Add 'user:' prefix to whoami command output for consistency by @gary149 in #3267
- Whoami: custom message only on unauthorized by @Wauplin in #3288
Inference Providers
Welcome Scaleway and PublicAI!
Two new partners have been integrated into Inference Providers: Scaleway and PublicAI (as part of releases 0.34.5 and 0.34.6)!
- feat: add scaleway inference provider by @Gnoale in #3356
- Add PublicAI provider by @Wauplin in #3367
Image-to-video
Image to video is now supported in the InferenceClient:
from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai")
video = client.image_to_video(
"cat.png",
prompt="The cat starts to dance",
model="Wan-AI/Wan2.2-I2V-A14B",
)

- [Inference] Support image to video task by @hanouticelina in #3289
Miscellaneous
The content-type header is now correctly set when sending an image or audio request (e.g. for the image-to-image task). It is inferred either from the filename or the URL provided by the user. If the user passes raw bytes directly, the content-type header has to be set manually.
A .reasoning field has been added to the Chat Completion output. This is used by some providers to return reasoning tokens separated from the .content stream of tokens.
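A hedged sketch of reading the new field (provider and model are illustrative; some providers expose the same information under a provider-specific name such as reasoning_content, as in the Z.ai example earlier on this page):
from huggingface_hub import InferenceClient

client = InferenceClient()  # pick a provider/model that emits reasoning tokens
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)
# Reasoning tokens, when returned, are kept separate from the final answer.
print(completion.choices[0].message.reasoning)
print(completion.choices[0].message.content)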
MCP & tiny-agents updates
tiny-agents now handles the AGENTS.md instruction file (see https://agents.md/).
- allow use of AGENTS.md as well as PROMPT.md by @evalstate in #3317
Tools filtering has also been improved to avoid loading irrelevant tools from an MCP server:
- [MCP] Handle Ollama's deviation from the OpenAI tool streaming spec by @hanouticelina in #3140
- [Tiny Agents] Add tools to config by @NielsRogge in #3242
- fix allowed tools by @Wauplin (direct commit on main)
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- Fix bad total size after resuming download by @DKingAlpha in #3234
- bug fix: only extend path on window sys by @vealocia in #3265
- [Update] HF Jobs Documentation by @ariG23498 in #3268
- Improve Git Credential Helper Detection for Linux (GCM & libsecret support) by @danchev in #3264
- Make requests decode content by @rasmusfaber in #3271
- Add validation warnings for repository limits in upload_large_folder by @davanstrien in #3280
- Include `HF_HUB_DISABLE_XET` in the environment dump by @hanouticelina in #3290
- Add type to job owner by @drbh in #3291
- Update to use only summary bars for uploads when in notebooks by @hoytak in #3243
- Deprecate library/tags/task/... filtering in list_models by @Wauplin in #3318
- Added `apps` as a parameter to `HfApi.list_models` by @anirbanbasu in #3322
- Update error message to improve shell compatibility by @aopstudio in #3333
- docs: minor typo fix in /en/guides/manage-cache by @Manith-Ratnayake in #3353
🏗️ internal
- Prepare for v0.35 by @Wauplin in #3261
- fix-ish CI by @Wauplin (direct commit on main)
- Fix lfs test in CI by @Wauplin in #3275
- [Internal] Use `ty` type checker by @hanouticelina in #3294
- [Internal] fix `ty` check quality by @hanouticelina in #3320
- Return early in `is_jsonable` if circular reference by @Wauplin in #3348
Community contributions
The following contributors have made changes to the library over the last release. Thank you!
- @DKingAlpha
  - Fix bad total size after resuming download (#3234)
- @vealocia
- bug fix: only extend path on window sys (#3265)
- @danchev
- Improve Git Credential Helper Detection for Linux (GCM & libsecret support) (#3264)
- @rasmusfaber
- Make requests decode content (#3271)
- @nyuuzyou
- Fix bash history expansion in hf jobs example (#3277)
- @tre3x
- Update jobs.md (#3297)
- @hoytak
- Update to use only summary bars for uploads when in notebooks (#3243)
- @anirbanbasu
  - Added `apps` as a parameter to `HfApi.list_models` (#3322)
- @aopstudio
- Update error message to improve shell compatibility (#3333)
- @Manith-Ratnayake
- docs: minor typo fix in /en/guides/manage-cache (#3353)
- @Gnoale
- feat: add scaleway inference provider (#3356)
[v0.34.6]: Welcoming PublicAI as Inference Providers!
Full Changelog: v0.34.5...v0.34.6
⚡ New provider: PublicAI
Tip
All supported PublicAI models can be found here.
Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.
from huggingface_hub import InferenceClient
client = InferenceClient(provider="publicai")
completion = client.chat.completions.create(
model="swiss-ai/Apertus-70B-Instruct-2509",
messages=[{"role": "user", "content": "What is the capital of Switzerland?"}],
)
print(completion.choices[0].message.content)

[v0.34.5]: Welcoming Scaleway as Inference Providers!
Full Changelog: v0.34.4...v0.34.5
⚡ New provider: Scaleway
Tip
All supported Scaleway models can be found here. For more details, check out its documentation page.
Scaleway is a European cloud provider, serving the latest LLM models through its Generative APIs alongside a complete cloud ecosystem.
from huggingface_hub import InferenceClient
client = InferenceClient(provider="scaleway")
completion = client.chat.completions.create(
model="Qwen/Qwen3-235B-A22B-Instruct-2507",
messages=[
{
"role": "user",
"content": "What is the capital of France?"
}
],
)
print(completion.choices[0].message)