Update dependency huggingface_hub to v0.34.4 #60
Open
red-hat-konflux wants to merge 1 commit into konflux-poc from konflux/mintmaker/konflux-poc/huggingface_hub-0.x
Conversation
Coveralls: Pull Request Test Coverage Report for Build 15505390404.
CodeRabbit: review skipped (bot user detected).
This PR contains the following updates:

| Package | Change |
|---|---|
| huggingface_hub | `==0.23.4` -> `==0.34.4` |
Release Notes
huggingface/huggingface_hub (huggingface_hub)
v0.34.4: [v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities (Compare Source)
The biggest update is support for the Image-To-Video task with the Fal AI inference provider, along with some quality-of-life improvements.
Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4
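As a hedged illustration only (not taken from the release notes): assuming the client exposes an `image_to_video` method mirroring the task name, with the provider slug and model id as placeholders, usage could look like this:

```python
from huggingface_hub import InferenceClient

# Hypothetical sketch of the new Image-To-Video task; the method name
# `image_to_video`, the provider slug, and the model id are assumptions.
client = InferenceClient(provider="fal-ai")

with open("still.png", "rb") as f:
    video = client.image_to_video(
        f.read(),
        prompt="The subject slowly turns toward the camera.",
        model="<an image-to-video model id>",  # placeholder
    )

with open("out.mp4", "wb") as f:
    f.write(video)
```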
v0.34.3: [v0.34.3] Jobs improvements and `whoami` user prefix (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.34.2...v0.34.3
v0.34.2: [v0.34.2] Bug fixes: Windows path handling & resume download size fix (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.34.1...v0.34.2
v0.34.1: [v0.34.1] [CLI] Print help if no command provided (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.34.0...v0.34.1
v0.34.0: [v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face! (Compare Source)
🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!
We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new `hf jobs` command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs, all with simple, intuitive commands.

Key features:
- Commands (`run`, `ps`, `logs`, `inspect`, `cancel`) to run and manage jobs
- `uv` support (experimental)
- All features are available both from Python (`run_job`, `list_jobs`, etc.) and the CLI (`hf jobs`)

Example usage:
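A rough sketch from the Python side, using the `run_job` and `list_jobs` helpers named above; the exact keyword names (`image`, `command`, `flavor`) are assumptions, so treat this as illustrative only:

```python
from huggingface_hub import list_jobs, run_job

# Launch a command in a Docker image on Hugging Face infrastructure
# (keyword names below are assumptions, not a confirmed signature).
job = run_job(
    image="python:3.12",
    command=["python", "-c", "print('hello from HF Jobs')"],
    flavor="cpu-basic",  # assumed hardware-selection keyword
)

# Roughly what `hf jobs ps` shows: list your jobs.
for j in list_jobs():
    print(j)
```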
You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental `uv` runner for Python scripts with inline dependencies. Check out the Jobs guide for more examples and details.
🚀 The CLI is now `hf`! (formerly `huggingface-cli`)

We're glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from `huggingface-cli` to `hf`! The legacy `huggingface-cli` remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format, `hf <resource> <action> [options]` (e.g. `hf auth login`, `hf repo create`, `hf jobs run`). Run `hf --help` to learn more about the CLI options.

⚡ Inference
🖼️ Image-to-image
Added support for the `image-to-image` task in the `InferenceClient` for the Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:
- `image-to-image` support for Replicate provider by @hanouticelina in #3188
- `image-to-image` support for fal.ai provider by @hanouticelina in #3187

In addition to this, it is now possible to directly pass a `PIL.Image` as input to the `InferenceClient`.
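For instance, a quick sketch of the fal.ai path; the full FLUX.1-Kontext-dev repo id below is an assumption:

```python
from huggingface_hub import InferenceClient
from PIL import Image

client = InferenceClient(provider="fal-ai")

# A PIL.Image can now be passed directly as the input image.
source = Image.open("cat.png")
edited = client.image_to_image(
    source,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",  # assumed repo id
)
edited.save("tiger.png")
```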
🤖 Tiny-Agents

`tiny-agents` got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the config format from VSCode more closely, e.g. an up-to-date config to run the GitHub MCP Server with a token.

🐛 Bug fixes
`InferenceClient` and `tiny-agents` got a few quality-of-life improvements and bug fixes.

📤 Xet
Integration of Xet is now stable and production-ready. A majority of file transfers are now handled using this protocol on new repos. A few improvements have been shipped to ease the developer experience during uploads, and documentation has been written to better explain the protocol and its options.
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- `healthRoute` instead of GET / to check status by @mfuntowicz in #3165
- `expand` argument when listing files in repos by @lhoestq in #3195
- `libcst` incompatibility with Python 3.13 by @hanouticelina in #3251

🏗️ internal
v0.33.5: [v0.33.5] [Inference] Fix a `UserWarning` when streaming with `AsyncInferenceClient` (Compare Source)
- `AsyncInferenceClient` #3252

Full Changelog: huggingface/huggingface_hub@v0.33.4...v0.33.5
v0.33.4: [v0.33.4] [Tiny-Agent] Fix schema validation error for default MCP tools (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.33.3...v0.33.4
v0.33.3: [v0.33.3] [Tiny-Agent] Update tiny-agents example (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.33.2...v0.33.3
v0.33.2: [v0.33.2] [Tiny-Agent] Switch to VSCode MCP format (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.33.1...v0.33.2
Breaking changes:
Find an example `agent.json` and more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents
v0.33.1: [v0.33.1] Inference Providers Bug Fixes, Tiny-Agents Message Handling Improvement, and Inference Endpoints Health Check Update (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check.
v0.33.0: [v0.33.0] Welcoming Featherless.AI and Groq as Inference Providers! (Compare Source)
⚡ New provider: Featherless.AI
Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that makes an exceptionally large catalog of models available for users. Providers often offer either a low cost of access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds offering unmatched model range and variety but with serverless pricing. Find the full list of supported models on the models page.
⚡ New provider: Groq
At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
Groq offers fast AI inference for openly-available models. They provide an API that allows developers to easily integrate these models into their applications. It offers an on-demand, pay-as-you-go model for accessing a wide range of openly-available LLMs.
🤖 MCP and Tiny-agents
It is now possible to run tiny-agents using a local server, e.g. llama.cpp. 100% local agents are right around the corner!

Fixing some DX issues in the `tiny-agents` CLI:
- `tiny-agents` CLI exit issues by @Wauplin in #3125

📚 Documentation
New translation from the Hindi-speaking community, for the community!
🛠️ Small fixes and maintenance
😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal
Significant community contributions
The following contributors have made significant changes to the library over the last release:
v0.32.6: [v0.32.6] [Upload large folder] Fix for wrongly saved upload_mode/remote_oid (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.32.5...v0.32.6
v0.32.5: [v0.32.5] [Tiny-Agents] Inject environment variables in headers (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.32.4...v0.32.5
v0.32.4: [v0.32.4] Bug fixes in `tiny-agents`, and fix input handling for question-answering task (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.32.3...v0.32.4
This release introduces bug fixes to `tiny-agents` and `InferenceClient.question_answering`:
- `asyncio.wait()` does not accept bare coroutines #3135 by @hanouticelina

v0.32.3: [v0.32.3] Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool call arguments (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.32.2...v0.32.3
This release introduces some improvements and bug fixes to `tiny-agents`:
- `tiny-agents` CLI exit issues #3125

v0.32.2: [v0.32.2] Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos (Compare Source)
Full Changelog: huggingface/huggingface_hub@v0.32.1...v0.32.2
v0.32.1: [v0.32.1] Hot-fix: Fix tiny agents on Windows (Compare Source)
Patch release to fix #3116
Full Changelog: huggingface/huggingface_hub@v0.32.0...v0.32.1
v0.32.0: [v0.32.0] MCP Client, Tiny Agents CLI and more! (Compare Source)
🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI
✨ The `huggingface_hub` library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the `InferenceClient` and provides a seamless way to connect LLMs to both local and remote tool servers!

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case an SSE server which makes the Flux image generation tool available to the LLM:
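A sketch of that flow, assuming the `MCPClient` method names shown here (`add_mcp_server`, `process_single_turn_with_tools`) and a placeholder SSE server URL:

```python
import asyncio

from huggingface_hub import MCPClient

async def main():
    # Qwen2.5-72B-Instruct served through the Nebius provider.
    client = MCPClient(model="Qwen/Qwen2.5-72B-Instruct", provider="nebius")

    # Attach a remote SSE server exposing the Flux image generation tool
    # (the URL is a placeholder).
    await client.add_mcp_server(type="sse", url="https://<flux-mcp-server>/sse")

    # Method name is an assumption; stream the model's tool-using turn.
    async for chunk in client.process_single_turn_with_tools(
        [{"role": "user", "content": "Generate a picture of a cat on the moon"}]
    ):
        print(chunk)

asyncio.run(main())
```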
For even simpler development, we now also offer a higher-level `Agent` class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around `MCPClient`. It's designed to be a simple while loop built right on top of an `MCPClient`, and you can run these Agents directly from the command line.
You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.
This is an early version of the `MCPClient`, and community contributions are welcome 🤗
- `InferenceClient` is also a `MCPClient` by @julien-c in #2986

⚡ Inference Providers
Thanks to @diadorer, feature extraction (embeddings) inference is now supported with Nebius provider!
We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥
We also fixed compatibility issues with structured outputs across providers by ensuring the `InferenceClient` follows the OpenAI API spec for structured output.

💾 Serialization
We've introduced a new `@strict` decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment; a basic example follows below. The feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.
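A minimal sketch, assuming `strict` is importable from `huggingface_hub.dataclasses` (the import path is an assumption):

```python
from dataclasses import dataclass

from huggingface_hub.dataclasses import strict  # assumed import path

@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = 16

Config(model_type="bert", hidden_size=24)    # passes validation
Config(model_type="bert", hidden_size="24")  # raises a validation error
```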
- `@strict` decorator for dataclass validation by @Wauplin in #2895

This release also brings support for `DTensor` in `_get_unique_id`/`get_torch_storage_size` helpers, allowing `transformers` to seamlessly use `save_pretrained` with `DTensor`.

✨ HF API
When creating an Endpoint, the default for `scale_to_zero_timeout` is now `None`, meaning endpoints will no longer scale to zero by default unless explicitly configured.

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.
📚 Documentation
We now have much more detailed documentation for Inference! This includes more detailed explanations and examples to clarify that the `InferenceClient` can also be effectively used with local endpoints (llama.cpp, vLLM, MLX, etc.).

🛠️ Small fixes and maintenance
😌 QoL improvements
- `api.endpoint` to arguments for `_get_upload_mode` by @matthewgrossman in #3077

🐛 Bug and typo fixes
- `read()` by @lhoestq in #3080

🏗️ internal
- `hf-xet` optional by @hanouticelina in #3079

Community contributions
- `huggingface-cli repo create` command by @Wauplin in #3094

Significant community contributions
The following contributors have made significant changes to the library over the last release:
v0.31.4: [v0.31.4] Strict dataclasses, support `DTensor` saving & some bug fixes (Compare Source)
This release includes some new features and bug fixes:
- `strict` decorators for runtime dataclass validation with custom and type-based checks, by @Wauplin in #2895.
- `DTensor` support in `_get_unique_id`/`get_torch_storage_size` helpers, enabling `transformers` to use `save_pretrained` with `DTensor`, by @S1ro1 in #3042.

Full Changelog: huggingface/huggingface_hub@v0.31.2...v0.31.4
v0.31.3 (Compare Source)
v0.31.2: [v0.31.2] Hot-fix: make `hf-xet` optional again and bump the min version of the package (Compare Source)
Patch release to make `hf-xet` optional. More context in #3079 and #3078.

Full Changelog: huggingface/huggingface_hub@v0.31.1...v0.31.2
v0.31.1 (Compare Source)
v0.31.0: [v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more (Compare Source)
🧑🎨 Introducing LoRAs with fal.ai and Replicate providers
We're introducing blazingly fast LoRA inference powered by
fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡
⚙️ `auto` mode for provider selection

You can now automatically select a provider for a model using `auto` mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers. `auto` is now the default value for the `provider` argument; previously, the default was `hf-inference`, so this change may be a breaking one if you're not specifying the provider name when initializing `InferenceClient` or `AsyncInferenceClient`.
- `provider="auto"` by @julien-c in #3011

🧠 Embeddings support with Sambanova (feature-extraction)
We added support for feature extraction (embeddings) inference with sambanova provider.
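A short sketch; the embedding model id below is a placeholder:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

# feature_extraction returns the embedding vector(s) for the input text.
embedding = client.feature_extraction(
    "Today is a sunny day.",
    model="<an embedding model id>",  # placeholder
)
print(embedding.shape)
```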
⚡ Other Inference features
HF Inference API provider is now fully integrated as an Inference Provider; this means it only supports a predefined list of deployed models, selected based on popularity. Cold-starting arbitrary models from the Hub is no longer supported — if a model isn't already deployed, it won’t be available via HF Inference API. Miscellaneous improvements and some bug fixes round out the release.
Configuration
📅 Schedule: Branch creation - "after 5am on saturday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
To execute skipped test pipelines, write the comment `/ok-to-test`.

This PR has been generated by MintMaker (powered by Renovate Bot).