
Update dependency huggingface_hub to v1 #22

Open

renovate[bot] wants to merge 1 commit into dev from renovate/huggingface_hub-1.x

Conversation


renovate bot commented Jan 9, 2026

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package: huggingface_hub
Change: ==0.36.0 → ==1.5.0

Release Notes

huggingface/huggingface_hub (huggingface_hub)

v1.5.0: [v1.5.0]: Buckets API, Agent-first CLI, Spaces Hot-Reload and more

Compare Source

This release introduces major new features including Buckets (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.

🪣 Buckets: S3-like Object Storage on the Hub

Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.

# Create a bucket
hf buckets create my-bucket --private

# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket

# Download from bucket
hf buckets sync hf://buckets/username/my-bucket ./data

# List files
hf buckets list username/my-bucket -R --tree

The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.
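The deduplication model can be sketched in a few lines: in a content-addressable store, a blob's key is a digest of its bytes, so uploading identical content twice stores it only once. This is an illustrative toy using whole-file SHA-256, not the Xet backend's actual chunk-level algorithm:

```python
import hashlib

# Toy content-addressable store: blobs are keyed by their SHA-256 digest,
# so duplicate content is stored only once. (Illustrative only -- the real
# Xet backend deduplicates at chunk level, not whole files.)
store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)  # no-op if this content already exists
    return key

k1 = put(b"checkpoint-epoch-1")
k2 = put(b"checkpoint-epoch-1")  # duplicate upload
k3 = put(b"checkpoint-epoch-2")

assert k1 == k2          # same content, same address
assert len(store) == 2   # deduplicated: only two unique blobs stored
```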

📚 Documentation: Buckets guide

🤖 AI Agent Support

This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):

  • Centralized CLI error handling: Clean user-facing messages without tracebacks (set HF_DEBUG=1 for full traces) by @​hanouticelina in #​3754
  • Token-efficient skill: The hf skills add command now installs a compact skill (~1.2k tokens vs ~12k before) by @​hanouticelina in #​3802
  • Agent-friendly hf jobs logs: Prints available logs and exits by default; use -f to stream by @​davanstrien in #​3783
  • Add AGENTS.md: Dev setup and codebase guide for AI agents by @​Wauplin in #​3789

# Install the hf-cli skill for Claude
hf skills add --claude

# Install at the project level
hf skills add --project

🔥 Space Hot-Reload (Experimental)

Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.

# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py

# Take local version and patch remote
hf spaces hot-reload username/repo-name -f app.py

🖥️ CLI Improvements

New Commands
  • Add hf papers ls to list daily papers on the Hub by @​julien-c in #​3723
  • Add hf collections commands (ls, info, create, update, delete, add-item, update-item, delete-item) by @​Wauplin in #​3767
CLI Extensions

Introduce an extension mechanism to the hf CLI. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands. Inspired by gh extension.

# Install an extension (defaults to huggingface org)
hf extensions install hf-claude

# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude

# Run an extension
hf claude

# List installed extensions
hf extensions list
Output Format Options

Usability

Jobs CLI

List available hardware:

> hf jobs hardware
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR       COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ----------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A               $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A               $0.0005  $0.03     
cpu-performance CPU Performance        32 vCPU  256 GB  N/A               $0.3117  $18.70    
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A               $0.0167  $1.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)     $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)     $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)   $0.0167  $1.00  
...

Many other fixes and small quality-of-life improvements are also included.

🤖 Inference

🔧 Other QoL Improvements

💔 Breaking Changes

  • hf jobs ps removes old Go-template --format '{{.id}}' syntax. Use -q for IDs or --format json | jq for custom extraction by @​davanstrien in #​3799
  • Migrate to hf repos instead of hf repo (old command still works but shows deprecation warning) by @​Wauplin in #​3848
  • Migrate hf repo-files delete to hf repo delete-files (old command hidden from help, shows deprecation warning) by @​Wauplin in #​3821

🐛 Bug and typo fixes

📖 Documentation

🏗️ Internal

v1.4.1: [v1.4.1] Fix file corruption when server ignores Range header on download retry

Compare Source

Fix file corruption when server ignores Range header on download retry.
Full details in #​3778 by @​XciD.

Full Changelog: huggingface/huggingface_hub@v1.4.0...v1.4.1

v1.4.0: [v1.4.0] Building the HF CLI for You and your AI Agents

Compare Source

🧠 hf skills add CLI Command

A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.

> hf skills add --help
Usage: hf skills add [OPTIONS]

  Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from agent-specific directories (e.g., .claude/skills/hf-cli → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.
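The relative-symlink layout described above can be reproduced in a few lines of pathlib/os. This is a sketch of the resulting directory structure, not the installer's actual code:

```python
import os
from pathlib import Path

root = Path("demo-project")
central = root / ".agents" / "skills" / "hf-cli"   # single source of truth
agent_dir = root / ".claude" / "skills"            # agent-specific view

central.mkdir(parents=True, exist_ok=True)
(central / "SKILL.md").write_text("# hf-cli skill\n")
agent_dir.mkdir(parents=True, exist_ok=True)

# Relative symlink: .claude/skills/hf-cli -> ../../.agents/skills/hf-cli
link = agent_dir / "hf-cli"
if not link.is_symlink():
    link.symlink_to(os.path.relpath(central, start=agent_dir))

print((link / "SKILL.md").read_text())  # resolves through the symlink
```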

🖥️ Improved CLI Help Output

The CLI help output has been reorganized to be more informative and agent-friendly:

  • Commands are now grouped into Main commands and Help commands
  • Examples section showing common usage patterns
  • Learn more section with links to documentation
> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...

  Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

📊 Evaluation Results Module

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.

We added helpers in huggingface_hub to work with this format:

  • EvalResultEntry dataclass representing evaluation scores
  • eval_result_entries_to_yaml() to serialize entries to YAML format
  • parse_eval_result_entries() to parse YAML data back into EvalResultEntry objects

import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)
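Based on the dataclass fields above (dataset_id, task_id, value), a .eval_results/*.yaml file plausibly looks something like the following. Treat the exact key layout as an assumption and consult the Evaluation Results documentation for the authoritative schema:

```yaml
# Hypothetical .eval_results/results.yaml -- field names taken from the
# EvalResultEntry example above; see the official docs for the real schema.
- dataset_id: cais/hle
  task_id: default
  value: 20.9
- dataset_id: Idavidrein/gpqa
  task_id: gpqa_diamond
  value: 0.412
```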

🖥️ Other CLI Improvements

New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.

hf papers ls                       # List most recent daily papers
hf papers ls --sort=trending       # List trending papers
hf papers ls --date=2025-01-23     # List papers from a specific date
hf papers ls --date=today          # List today's papers

New hf collections commands for managing collections from the CLI:

# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll

Other CLI-related improvements:

📊 Jobs

Multi-GPU training commands are now supported with torchrun and accelerate launch:

> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py

You can also pass local config files alongside your scripts:

> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml

New hf jobs hardware command to list available hardware options:

> hf jobs hardware
NAME         PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
------------ ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic    CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade  CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
t4-small     Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium    Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small   Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large   Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2 2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4 4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large   Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4       4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8       8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1         1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4         4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1       1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4       4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8       8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50  

Better filtering with label support and negation:

> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
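As a sketch of how that filter grammar decomposes (not the CLI's actual parser): `!=` is checked before `=`, and everything after the first separator is kept as the value, which is what lets nested forms like label=model=... pass through intact.

```python
def parse_filter(expr: str) -> tuple[str, str, str]:
    """Split a `--filter` expression into (key, op, value).
    Illustrative sketch of the grammar shown above, not the real parser."""
    if "!=" in expr:
        key, value = expr.split("!=", 1)
        return key, "!=", value
    key, value = expr.split("=", 1)
    return key, "=", value

assert parse_filter("status!=error") == ("status", "!=", "error")
assert parse_filter("label=fine-tuning") == ("label", "=", "fine-tuning")
assert parse_filter("label=model=Qwen3-06B") == ("label", "=", "model=Qwen3-06B")
```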

⚡️ Inference

🔧 QoL Improvements

📖 Documentation

🐛 Bug and typo fixes

🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.3.7: [v1.3.7] Log 'x-amz-cf-id' on http error if no request id

Compare Source

Log 'x-amz-cf-id' on http error (if no request id) (#​3759)

Full Changelog: huggingface/huggingface_hub@v1.3.5...v1.3.7

v1.3.5: [v1.3.5] Configurable default timeout for HTTP calls

Compare Source

The default timeout for HTTP calls is now 10 seconds. This works in most use cases but can trigger errors in CI environments that make many requests to the Hub. In those cases, set HF_HUB_DOWNLOAD_TIMEOUT=60 as an environment variable.
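To apply the same workaround from Python rather than the shell, set the variable before huggingface_hub is imported, since the library reads it when its constants are first loaded:

```python
import os

# Must run before `import huggingface_hub`: the library reads
# HF_HUB_DOWNLOAD_TIMEOUT when its constants module is first imported.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"  # seconds; the default is 10
```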

Full Changelog: huggingface/huggingface_hub@v1.3.4...v1.3.5

v1.3.4: [v1.3.4] Fix CommitUrl._endpoint default to None

Compare Source

  • Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @​tomaarsen in #​3737

Full Changelog: huggingface/huggingface_hub@v1.3.3...v1.3.4

v1.3.3: [v1.3.3] List Jobs Hardware & Bug Fixes

Compare Source

⚙️ List Jobs Hardware

You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.

From the CLI:

hf jobs hardware                           
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
cpu-performance CPU Performance        8 vCPU   32 GB   N/A              $0.0000  $0.00     
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A              $0.0000  $0.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large      Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2    2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4    4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large      Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4          4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8          8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1            1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4            4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1          1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4          4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50 

Programmatically:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'

🐛 Bug Fixes

✨ Various Improvements

📚 Documentation

v1.3.2: [v1.3.2] Zai provider support for text-to-image and fix custom endpoint not forwarded

Compare Source

Full Changelog: huggingface/huggingface_hub@v1.3.1...v1.3.2

v1.3.1: [v1.3.1] Add dimensions & encoding_format parameters to feature extraction (embeddings) task

Compare Source

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size #​3671 by @​mishig25

Full Changelog: huggingface/huggingface_hub@v1.3.0...v1.3.1

v1.3.0: [v1.3.0] New CLI Commands for Hub Discovery, Jobs Monitoring and more!

Compare Source

🖥️ CLI: hf models, hf datasets, hf spaces Commands

The CLI has been reorganized with dedicated commands for Hub discovery, while hf repo stays focused on managing your own repositories.

New commands:

# Models
hf models ls --author=Qwen --limit=10
hf models info Qwen/Qwen-Image-2512

# Datasets
hf datasets ls --filter "format:parquet" --sort=downloads
hf datasets info HuggingFaceFW/fineweb

# Spaces
hf spaces ls --search "3d"
hf spaces info enzostvs/deepsite

This organization mirrors the Python API (list_models, model_info, etc.), keeps the hf <resource> <action> pattern, and is extensible for future commands like hf papers or hf collections.

🔧 Transformers CLI Installer

You can now install the transformers CLI alongside the huggingface_hub CLI using the standalone installer scripts.

# Install hf CLI only (default)
curl -LsSf https://hf.co/cli/install.sh | bash -s

# Install both hf and transformers CLIs
curl -LsSf https://hf.co/cli/install.sh | bash -s -- --with-transformers

# Install hf CLI only (default)
powershell -c "irm https://hf.co/cli/install.ps1 | iex"

# Install both hf and transformers CLIs
powershell -c "irm https://hf.co/cli/install.ps1 | iex" -WithTransformers

Once installed, you can use the transformers CLI directly:

transformers serve
transformers chat openai/gpt-oss-120b

📊 Jobs Monitoring

New hf jobs stats command to monitor your running jobs in real-time, similar to docker stats. It displays a live table with CPU, memory, network, and GPU usage.

> hf jobs stats
JOB ID                   CPU % NUM CPU MEM % MEM USAGE      NET I/O         GPU UTIL % GPU MEM % GPU MEM USAGE
------------------------ ----- ------- ----- -------------- --------------- ---------- --------- ---------------
6953ff6274100871415c13fd 0%    3.5     0.01% 1.3MB / 15.0GB 0.0bps / 0.0bps 0%         0.0%      0.0B / 22.8GB

A new HfApi.fetch_job_metrics() method is also available:

>>> for metrics in fetch_job_metrics(job_id="6953ff6274100871415c13fd"):
...     print(metrics)
{
    "cpu_usage_pct": 0,
    "cpu_millicores": 3500,
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "rx_bps": 0,
    "tx_bps": 0,
    "gpus": {
        "882fa930": {
            "utilization": 0,
            "memory_used_bytes": 0,
            "memory_total_bytes": 22836000000
        }
    },
    "replica": "57vr7"
}

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


coderabbitai bot commented Jan 9, 2026

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

  • 🔍 Trigger a full review

Comment @coderabbitai help to get the list of available commands and usage tips.

@renovate renovate bot force-pushed the renovate/huggingface_hub-1.x branch 3 times, most recently from d966ca3 to 0f9d723 Compare January 14, 2026 14:30
@renovate renovate bot force-pushed the renovate/huggingface_hub-1.x branch 3 times, most recently from 257dfbe to 29d0522 Compare January 29, 2026 10:35
@renovate renovate bot force-pushed the renovate/huggingface_hub-1.x branch 3 times, most recently from 60d9578 to 8d8266b Compare February 6, 2026 10:00
@renovate renovate bot force-pushed the renovate/huggingface_hub-1.x branch from 8d8266b to 0d86a6d Compare February 26, 2026 18:51
