⬆️ Update all non-major dependencies #8
Open
renovate wants to merge 1 commit into main from renovate/all-minor-patch
This PR contains the following updates:

| datasource | package | from | to |
| --------------- | ----------------------------- | ------ | ------ |
| pypi | huggingface-hub | 0.23.4 | 0.35.3 |
| pypi | pytest | 8.2.2 | 8.4.2 |
| github-releases | containerbase/python-prebuild | 3.11.9 | 3.14.0 |
| pypi | python-dotenv | 1.0.1 | 1.1.1 |
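For the pypi packages, the bumps above would correspond to pins like the following in a requirements file (the file name and exact pinning style are assumptions for illustration, not taken from the PR diff):

```
huggingface-hub==0.35.3
pytest==8.4.2
python-dotenv==1.1.1
```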
Release Notes
huggingface/huggingface_hub (huggingface-hub)
v0.35.3: Fix `image-to-image` target size parameter mapping & tiny-agents allow-tools list bug
This release includes two bug fixes.
Full Changelog: huggingface/huggingface_hub@v0.35.2...v0.35.3
v0.35.2: Welcoming Z.ai as Inference Providers!
Full Changelog: huggingface/huggingface_hub@v0.35.1...v0.35.2
New inference provider! 🔥
Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.
Misc:
v0.35.1: Do not retry on 429 and skip forward ref in `strict` dataclasses (#3376)
Full Changelog: huggingface/huggingface_hub@v0.35.0...v0.35.1
v0.35.0: Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!
Scheduled Jobs
In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this new release, we are announcing Scheduled Jobs to run Jobs on a regular basis. Think "cron jobs running on GPU".
This comes with a fully-fledged CLI.
It is now possible to run a command with `uv run`: `hf jobs uv run` by @lhoestq in #3303. Some other improvements have been added to the existing Jobs API for a better UX.
And finally, the Jobs documentation has been updated with new examples (and some fixes).
CLI updates
In addition to the Scheduled Jobs, some improvements have been added to the `hf` CLI.
Inference Providers
Welcome Scaleway and PublicAI!
Two new partners have been integrated into Inference Providers: Scaleway and PublicAI (as part of releases 0.34.5 and 0.34.6).
Image-to-video
Image-to-video is now supported in the `InferenceClient`.
Miscellaneous
Header `content-type` is now correctly set when sending an image or audio request (e.g. for the `image-to-image` task). It is inferred either from the filename or the URL provided by the user. If the user is directly passing raw bytes, the content-type header has to be set manually.
A `.reasoning` field has been added to the Chat Completion output. This is used by some providers to return reasoning tokens separately from the `.content` stream of tokens.
MCP & tiny-agents updates
`tiny-agents` now handles the `AGENTS.md` instruction file (see https://agents.md/).
Tools filtering has been improved to avoid loading non-relevant tools from an MCP server.
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
`HF_HUB_DISABLE_XET` in the environment dump by @hanouticelina in #3290
`apps` as a parameter to `HfApi.list_models` by @anirbanbasu in #3322
🏗️ internal
`ty` type checker by @hanouticelina in #3294
`ty` check quality by @hanouticelina in #3320
`is_jsonable` if circular reference by @Wauplin in #3348
Community contributions
The following contributors have made changes to the library over the last release. Thank you!
`apps` as a parameter to `HfApi.list_models` (#3322)
v0.34.6: Welcoming PublicAI as Inference Providers!
Full Changelog: huggingface/huggingface_hub@v0.34.5...v0.34.6
⚡ New provider: PublicAI
Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.
v0.34.5: Welcoming Scaleway as Inference Providers!
Full Changelog: huggingface/huggingface_hub@v0.34.4...v0.34.5
⚡ New provider: Scaleway
Scaleway is a European cloud provider, serving the latest LLMs through its Generative APIs alongside a complete cloud ecosystem.
v0.34.4: Support Image-to-Video inference + QoL in Jobs API, auth and utilities
The biggest update is support for the Image-to-Video task with inference provider Fal AI.
And some quality of life improvements:
Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4
v0.34.3: Jobs improvements and `whoami` user prefix
Full Changelog: huggingface/huggingface_hub@v0.34.2...v0.34.3
v0.34.2: Bug fixes: Windows path handling & resume download size fix
Full Changelog: huggingface/huggingface_hub@v0.34.1...v0.34.2
v0.34.1: [CLI] print help if no command provided
Full Changelog: huggingface/huggingface_hub@v0.34.0...v0.34.1
v0.34.0: Announcing Jobs: a new way to run compute on Hugging Face!
🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!
We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new `hf jobs` command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs, all with simple, intuitive commands.
Key features:
Docker-like commands (`run`, `ps`, `logs`, `inspect`, `cancel`) to run and manage jobs
`uv` support (experimental)
All features are available both from Python (`run_job`, `list_jobs`, etc.) and the CLI (`hf jobs`).
Example usage:
You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental `uv` runner for Python scripts with inline dependencies.
Check out the Jobs guide for more examples and details.
🚀 The CLI is now `hf`! (formerly `huggingface-cli`)
Glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from `huggingface-cli` to `hf`! The legacy `huggingface-cli` remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format `hf <resource> <action> [options]` (e.g. `hf auth login`, `hf repo create`, `hf jobs run`).
Run `hf --help` to learn more about the CLI options.
⚡ Inference
🖼️ Image-to-image
Added support for the `image-to-image` task in the `InferenceClient` for Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:
`image-to-image` support for Replicate provider by @hanouticelina in #3188
`image-to-image` support for fal.ai provider by @hanouticelina in #3187
In addition to this, it is now possible to directly pass a `PIL.Image` as input to the `InferenceClient`.
🤖 Tiny-Agents
`tiny-agents` got a nice update to deal with environment variables and secrets. We've also changed its input format to follow more closely the config format from VSCode. Here is an up-to-date config to run the GitHub MCP Server with a token:
🐛 Bug fixes
`InferenceClient` and `tiny-agents` got a few quality-of-life improvements and bug fixes.
📤 Xet
Integration of Xet is now stable and production-ready. A majority of file transfers are now handled using this protocol on new repos. A few improvements have been shipped to ease the developer experience during uploads.
Documentation has been written to better explain the protocol and its options.
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
`healthRoute` instead of GET / to check status by @mfuntowicz in #3165
`expand` argument when listing files in repos by @lhoestq in #3195
`libcst` incompatibility with Python 3.13 by @hanouticelina in #3251
🏗️ internal
v0.33.5: [Inference] Fix a `UserWarning` when streaming with `AsyncInferenceClient` (#3252)
Full Changelog: huggingface/huggingface_hub@v0.33.4...v0.33.5
v0.33.4: [Tiny-Agent] Fix schema validation error for default MCP tools
Full Changelog: huggingface/huggingface_hub@v0.33.3...v0.33.4
v0.33.3: [Tiny-Agent] Update tiny-agents example
Full Changelog: huggingface/huggingface_hub@v0.33.2...v0.33.3
v0.33.2: [Tiny-Agent] Switch to VSCode MCP format
Full Changelog: huggingface/huggingface_hub@v0.33.1...v0.33.2
Breaking changes:
Example of `agent.json`:
Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents
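A minimal sketch of what such an `agent.json` might look like; the model, provider, and server entries here are assumptions for illustration, so consult the dataset linked above for real, up-to-date examples:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  ]
}
```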
v0.33.1: Inference Providers Bug Fixes, Tiny-Agents Message Handling Improvement, and Inference Endpoints Health Check Update
Full Changelog: huggingface/huggingface_hub@v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check.
v0.33.0: Welcoming Featherless.AI and Groq as Inference Providers!
⚡ New provider: Featherless.AI
Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that makes an exceptionally large catalog of models available for users. Providers often offer either a low cost of access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds offering unmatched model range and variety but with serverless pricing. Find the full list of supported models on the models page.
⚡ New provider: Groq
At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
Groq offers fast AI inference for openly-available models. They provide an API that allows developers to easily integrate these models into their applications. It offers an on-demand, pay-as-you-go model for accessing a wide range of openly-available LLMs.
🤖 MCP and Tiny-agents
It is now possible to run tiny-agents using a local server, e.g. llama.cpp. 100% local agents are right around the corner!
Fixing some DX issues in the `tiny-agents` CLI:
`tiny-agents` CLI exit issues by @Wauplin in #3125
📚 Documentation
New translation from the Hindi-speaking community, for the community!
🛠️ Small fixes and maintenance
😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal
Significant community contributions
The following contributors have made significant changes to the library over the last release:
v0.32.6: [Upload large folder] fix for wrongly saved upload_mode/remote_oid
Full Changelog: huggingface/huggingface_hub@v0.32.5...v0.32.6
v0.32.5: [Tiny-Agents] inject environment variables in headers
Full Changelog: huggingface/huggingface_hub@v0.32.4...v0.32.5
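The header-injection idea above can be sketched as simple `${VAR}` expansion over header values. This is an illustrative sketch of the concept, not `tiny-agents`' actual implementation; the header name and token value are made up for the demo:

```python
import os
import string

def inject_env(headers: dict[str, str]) -> dict[str, str]:
    # Expand ${VAR} placeholders in header values from the environment,
    # so secrets never need to be written into the config file itself.
    return {k: string.Template(v).substitute(os.environ)
            for k, v in headers.items()}

os.environ["GITHUB_TOKEN"] = "ghp_example"  # hypothetical secret for the demo
print(inject_env({"Authorization": "Bearer ${GITHUB_TOKEN}"}))
# {'Authorization': 'Bearer ghp_example'}
```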
v0.32.4: Bug fixes in `tiny-agents`, and fix input handling for the question-answering task.
Full Changelog: huggingface/huggingface_hub@v0.32.3...v0.32.4
This release introduces bug fixes to `tiny-agents` and `InferenceClient.question_answering`:
`asyncio.wait()` does not accept bare coroutines #3135 by @hanouticelina
v0.32.3: Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool call arguments
Full Changelog: huggingface/huggingface_hub@v0.32.2...v0.32.3
This release introduces some improvements and bug fixes to `tiny-agents`:
`tiny-agents` CLI exit issues #3125
v0.32.2: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos
Full Changelog: huggingface/huggingface_hub@v0.32.1...v0.32.2
v0.32.1: hot-fix: Fix tiny agents on Windows
Patch release to fix #3116
Full Changelog: huggingface/huggingface_hub@v0.32.0...v0.32.1
v0.32.0: MCP Client, Tiny Agents CLI and more!
🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI
✨ The `huggingface_hub` library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the `InferenceClient` and provides a seamless way to connect LLMs to both local and remote tool servers!