Releases: huggingface/huggingface_hub

[v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid

11 Jun 08:18
f498b42

[v0.32.5] [Tiny-Agents] inject environment variables in headers

10 Jun 16:04
8dfb199

  • Inject env var in headers + better type annotations #3142

Full Changelog: v0.32.4...v0.32.5

[v0.32.4]: Bug fixes in `tiny-agents`, and fix input handling for question-answering task.

03 Jun 10:04

Full Changelog: v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering.

[v0.32.3]: Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool calls arguments

30 May 08:29

Full Changelog: v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #3129
  • [Fix] tiny-agents cli exit issues #3125
  • Improve Handling of MCP Tool Call Arguments #3127

[v0.32.2]: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos

27 May 09:24
6dd0164

Full Changelog: v0.32.1...v0.32.2

  • [MCP] Add local/remote endpoint inference support #3121
  • Fix snapshot_download on very large repo (>50k files) #3122

[v0.32.1]: hot-fix: Fix tiny agents on Windows

26 May 09:53

[v0.32.0]: MCP Client, Tiny Agents CLI and more!

22 May 21:38

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U huggingface_hub[mcp]

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:

import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")
        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]
        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")

            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational agents by managing the chat loop and state: an Agent is essentially a simple while loop built right on top of an MCPClient, acting as a user-friendly wrapper around it.
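
As a rough Python-level sketch (the Agent constructor arguments, server spec shape, and run() interface shown here are assumptions based on the description above; consult the MCP client documentation for the exact API):

import asyncio
import os

from huggingface_hub import Agent

async def main():
    # Assumed interface: Agent wraps an MCPClient, manages the chat loop and
    # exposes an async `run()` generator; the server spec shape is illustrative.
    async with Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        servers=[{"type": "sse", "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}],
    ) as agent:
        await agent.load_tools()
        async for chunk in agent.run("Generate a picture of a cat on the moon"):
            print(chunk)

if __name__ == "__main__":
    asyncio.run(main())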

You can run these Agents directly from the command line:

> tiny-agents run --help
                                                                                                                                                                                     
 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...                                                                                                                           
                                                                                                                                                                                     
 Run the Agent in the CLI                                                                                                                                                            
                                                                                                                                                                                     
                                                                                                                                                                                     
╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                         │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)                                                                                                     │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.

This is an early version of the MCPClient, and community contributions are welcome 🤗

⚡ Inference Providers

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!

  • [Inference Providers] Add feature extraction task for Nebius by @diadorer in #3057

We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥

  • 🗿 adding support for Nscale inference provider by @nbarr07 in #3068

We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API specification for structured outputs.

  • [Inference Providers] Fix structured output schema in chat completion by @hanouticelina in #3082
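
As a hedged illustration of the OpenAI-style structured output format (the provider, model, and schema below are arbitrary examples, not part of the release notes):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")  # any provider with structured output support

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Describe a spider as JSON."}],
    # OpenAI-spec structured output payload (illustrative schema)
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "Animal",
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}, "legs": {"type": "integer"}},
                "required": ["name", "legs"],
            },
            "strict": True,
        },
    },
)
print(completion.choices[0].message.content)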

💾 Serialization

We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:

from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")


@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError

This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. See the huggingface_hub documentation for details.
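
For instance, continuing the Config example above, type hints are enforced as well, both at initialization and on assignment (a small illustrative snippet, assuming assignment-time checks behave as described):

config = Config(model_type="bert")
config.vocab_size = "64"  # Raises StrictDataclassFieldValidationError ("64" is a str, not an int)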

  • New @strict decorator for dataclass validation by @Wauplin in #2895

This release also brings support for DTensor in the _get_unique_id / get_torch_storage_size helpers, allowing transformers to seamlessly use save_pretrained with DTensor.
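
As a quick illustration of what these helpers compute (shown with a regular tensor, since constructing a DTensor requires a torch.distributed setup):

import torch
from huggingface_hub import get_torch_storage_size

# Storage size in bytes: 4 x 4 float16 values -> 32 bytes
print(get_torch_storage_size(torch.zeros(4, 4, dtype=torch.float16)))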

✨ HF API

When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.

  • Dont set scale to zero as default when creating an Endpoint by @tomaarsen in #3062
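
To opt back in to scale-to-zero, pass the timeout explicitly when creating the Endpoint. A minimal sketch (repository, hardware, and timeout values are illustrative):

from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    scale_to_zero_timeout=15,  # minutes of inactivity before scaling to zero
)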

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.

  • Add helpers to handle OAuth in a FastAPI app by @Wauplin in #2684
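
A minimal sketch of what this looks like in a FastAPI app, assuming the experimental attach_huggingface_oauth / parse_huggingface_oauth helpers from that PR (the route and response are illustrative):

from fastapi import FastAPI, Request
from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth

app = FastAPI()
attach_huggingface_oauth(app)  # registers the OAuth login/logout/callback routes

@app.get("/")
def greet(request: Request):
    oauth_info = parse_huggingface_oauth(request)  # None if the user is not logged in
    if oauth_info is None:
        return {"msg": "Not logged in!"}
    return {"msg": f"Hello, {oauth_info.user_info.preferred_username}!"}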

📚 Documentation

We now have much more detailed documentation for Inference! This includes clearer explanations and examples showing that the InferenceClient can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).

  • [Inference] Mention local endpoints inference + remove separate HF Inference API mentions by @hanouticelina in #3085
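
For example, the client can point at any OpenAI-compatible local server; a small sketch (the URL and model name are placeholders):

from huggingface_hub import InferenceClient

# Local server started separately, e.g. with llama.cpp, vLLM or TGI
client = InferenceClient(base_url="http://localhost:8080")

completion = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore or remap this field
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)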

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

  • [Internal] make hf-xet (again) a required dependency #3103
  • fix conda by @han...

[v0.31.4]: strict dataclasses, support `DTensor` saving & some bug fixes

19 May 09:48

This release includes some new features and bug fixes:

  • New strict decorators for runtime dataclass validation with custom and type-based checks. by @Wauplin in #2895.
  • Added DTensor support to _get_unique_id / get_torch_storage_size helpers, enabling transformers to use save_pretrained with DTensor. by @S1ro1 in #3042.
  • Some bug fixes: #3080 & #3076.

Full Changelog: v0.31.2...v0.31.4

[v0.31.2] Hot-fix: make `hf-xet` optional again and bump the min version of the package

13 May 09:50

Patch release to make hf-xet optional. More context in #3079 and #3078.

Full Changelog: v0.31.1...v0.31.2

[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more

06 May 20:59

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai") # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto") 

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

⚠️ Note: This is now the default value for the provider argument. Previously, the default was hf-inference, so this change may be a breaking one if you're not specifying the provider name when initializing InferenceClient or AsyncInferenceClient.
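
If your code relied on the previous default, you can keep the old behavior by pinning the provider explicitly:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference")  # previous default, now opt-in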

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature extraction (embeddings) inference with the Sambanova provider.
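
A short hedged example (the model name is illustrative; use any embeddings model served by Sambanova):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

embeddings = client.feature_extraction(
    "Hugging Face is awesome!",
    model="intfloat/e5-mistral-7b-instruct",  # illustrative model id
)
print(embeddings.shape)  # numpy array; shape depends on the model's pooling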

⚡ Other Inference features

HF Inference API is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity. Cold-starting arbitrary models from the Hub is no longer supported: if a model isn't already deployed, it won't be available via HF Inference API.

Miscellaneous improvements and some bug fixes.

✅ Of course, all of those inference changes are available in the AsyncInferenceClient async equivalent 🤗

🚀 Xet

Thanks to @bpronan's PR, Xet now supports uploading byte arrays:

from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name" # `hf-xet` should be installed and Xet should be enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="my-file.txt",  # destination path in the repo
    repo_id=repo_id,
)

Additionally, we’ve added documentation for environment variables used by hf-xet to optimize file download/upload performance — including options for caching (HF_XET_CHUNK_CACHE_SIZE_BYTES), concurrency (HF_XET_NUM_CONCURRENT_RANGE_GETS), high-performance mode (HF_XET_HIGH_PERFORMANCE), and sequential writes (HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY).
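
For instance, these variables can be set in the environment before the first transfer; a small sketch with illustrative values (the repo id is the placeholder used above):

import os

# Illustrative tuning values; see the hf-xet environment variable docs for defaults
os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"
os.environ["HF_XET_NUM_CONCURRENT_RANGE_GETS"] = "16"

from huggingface_hub import snapshot_download

snapshot_download("username/model-name")  # placeholder repo_id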

Miscellaneous improvements:

  • Removing workaround for deprecated refresh route headers by @bpronan in #2993

✨ HF API

We added HTTP download support for files larger than 50GB — enabling more reliable handling of large file downloads.

We also added dynamic batching to upload_large_folder, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration — improving performance and reducing the risk of hitting the commits rate limit on large repositories.
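
No API change is needed to benefit from this; a minimal sketch of the call (the repo id and path are placeholders):

from huggingface_hub import upload_large_folder

upload_large_folder(
    repo_id="username/model-name",
    folder_path="path/to/local/folder",
    repo_type="model",
)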

We added support for new arguments when creating or updating Hugging Face Inference Endpoints.

  • add route payload to deploy Inference Endpoints by @Vaibhavs10 in #3013
  • Add the 'env' parameter to creating/updating Inference Endpoints by @tomaarsen in #3045
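
A hedged sketch of the new arguments (the parameter name is taken from the PR titles above; the exact shape may differ, so treat this as illustrative):

from huggingface_hub import update_inference_endpoint

# Assumption: `env` accepts a mapping of container environment variables, per PR #3045
endpoint = update_inference_endpoint(
    "my-endpoint-name",
    env={"LOG_LEVEL": "debug"},
)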

💔 Breaking changes

  • The default value of the provider argument in InferenceClient and AsyncInferenceClient is now "auto" instead of "hf-inference" (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings.
    If your code relied on the previous default ("hf-inference"), you may need to update it explicitly to avoid unexpected behavior.
  • HF Inference API Routing Update: The inference URL path for feature-extraction and sentence-similarity tasks has changed from https://router.huggingface.co/hf-inference/pipeline/{task}/{model} to https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}.
  • [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

Community contributions

The following contributors have made significant changes to the library over the last release: