
Conversation

renovate[bot]
Contributor

@renovate renovate bot commented Jul 20, 2024

This PR contains the following updates:

| Package | Change | Type | Update |
| --- | --- | --- | --- |
| huggingface-hub | 0.23.4 -> 0.35.3 | dependencies | minor |
| pytest (changelog) | 8.2.2 -> 8.4.2 | test | minor |
| python | 3.11.9 -> 3.14.0 | dependencies | minor |
| python-dotenv | 1.0.1 -> 1.1.1 | dependencies | minor |

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v0.35.3: [v0.35.3] Fix image-to-image target size parameter mapping & tiny agents allow tools list bug

Compare Source

This release includes two bug fixes: the image-to-image target size parameter mapping and the tiny agents allow tools list bug.

Full Changelog: huggingface/huggingface_hub@v0.35.2...v0.35.3

v0.35.2: [v0.35.2] Welcoming Z.ai as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.35.1...v0.35.2

New inference provider! 🔥

Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("\nThinking:")
print(completion.choices[0].message.reasoning_content)
print("\nOutput:")
print(completion.choices[0].message.content)
Example output:

Thinking:
Okay, the user is asking about the capital of France. That's a pretty straightforward geography question. 

Hmm, I wonder if this is just a casual inquiry or if they need it for something specific like homework or travel planning. The question is very basic though, so probably just general knowledge. 

Paris is definitely the correct answer here. It's been the capital for centuries, since the Capetian dynasty made it the seat of power. Should I mention any historical context? Nah, the user didn't ask for details - just the capital. 

I recall Paris is also France's largest city and major cultural hub. But again, extra info might be overkill unless they follow up. Better keep it simple and accurate. 

The answer should be clear and direct: "Paris". No need to overcomplicate a simple fact. If they want more, they'll ask.

Output:
The capital of France is **Paris**.  

Paris has been the political and cultural center of France for centuries, serving as the seat of government, the residence of the President (Élysée Palace), and home to iconic landmarks like the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. It is also France's largest city and a global hub for art, fashion, gastronomy, and history.

Misc:

v0.35.1: [v0.35.1] Do not retry on 429 and skip forward ref in strict dataclass

Compare Source

  • Do not retry on 429 (only on 5xx) #​3377
  • Skip unresolved forward ref in strict dataclasses #​3376

Full Changelog: huggingface/huggingface_hub@v0.35.0...v0.35.1

v0.35.0: [v0.35.0] Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!

Compare Source

Scheduled Jobs

In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this new release, we are announcing Scheduled Jobs, to run Jobs on a regular basis. Think "cron jobs running on GPU".

This comes with a fully-fledged CLI:

hf jobs scheduled run @hourly ubuntu echo hello world
hf jobs scheduled run "0 * * * *" ubuntu echo hello world
hf jobs scheduled ps -a
hf jobs scheduled inspect <id>
hf jobs scheduled delete <id>
hf jobs scheduled suspend <id>
hf jobs scheduled resume <id>
hf jobs scheduled uv run @weekly train.py

It is now possible to run a command with uv run:

hf jobs uv run --with lighteval -s HF_TOKEN lighteval endpoint inference-providers "model_name=openai/gpt-oss-20b,provider=groq" "lighteval|gsm8k|0|0"

Some other improvements have been added to the existing Jobs API for a better UX.

And finally, the Jobs documentation has been updated with new examples (and some fixes).

CLI updates

In addition to the Scheduled Jobs, some improvements have been added to the hf CLI.

Inference Providers

Welcome Scaleway and PublicAI!

Two new partners have been integrated into Inference Providers: Scaleway and PublicAI! (as part of releases 0.34.5 and 0.34.6).

Image-to-video

Image to video is now supported in the InferenceClient:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")

video = client.image_to_video(
    "cat.png",
    prompt="The cat starts to dance",
    model="Wan-AI/Wan2.2-I2V-A14B",
)
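The call returns raw video bytes, so saving the result mirrors the image-to-video example further down in these notes (the output filename here is illustrative):

with open("dancing_cat.mp4", "wb") as f:
    f.write(video)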
Miscellaneous

The content-type header is now correctly set when sending an image or audio request (e.g. for the image-to-image task). It is inferred either from the filename or from the URL provided by the user. If the user passes raw bytes directly, the content-type header has to be set manually (see the sketch below).

  • [InferenceClient] Add content-type header whenever possible + refacto by @​Wauplin in #​3321
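A rough sketch of the behavior described above (the header value is an assumption; automatic inference only works when a filename or URL is available):

from huggingface_hub import InferenceClient

# Filename or URL input: the content-type header is inferred automatically.
client = InferenceClient(provider="fal-ai")

# Raw bytes input: nothing to infer from, so set the header manually
# (assumption: the header is set at the client level via the `headers` kwarg).
client_raw_bytes = InferenceClient(
    provider="fal-ai",
    headers={"Content-Type": "image/png"},
)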

A .reasoning field has been added to the Chat Completion output. This is used by some providers to return reasoning tokens separated from the .content stream of tokens.
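A minimal sketch of reading the new field (provider and model reused from the Z.ai example above; not every provider populates it, and some expose it under a different name such as reasoning_content):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="zai-org")
completion = client.chat.completions.create(
    model="zai-org/GLM-4.5",
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)

message = completion.choices[0].message
print(message.reasoning)  # reasoning tokens, if the provider returns them separately
print(message.content)    # the final answer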

MCP & tiny-agents updates

tiny-agents now handles AGENTS.md instruction file (see https://agents.md/).

Tools filtering has also been improved to avoid loading irrelevant tools from an MCP server.

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ internal

Community contributions

The following contributors have made changes to the library over the last release. Thank you!

v0.34.6: [v0.34.6]: Welcoming PublicAI as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.5...v0.34.6

⚡ New provider: PublicAI

[!Tip]
All supported PublicAI models can be found here.

Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="publicai")
completion = client.chat.completions.create(
    model="swiss-ai/Apertus-70B-Instruct-2509",
    messages=[{"role": "user", "content": "What is the capital of Switzerland?"}],
)

print(completion.choices[0].message.content)

v0.34.5: [v0.34.5]: Welcoming Scaleway as Inference Providers!

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.4...v0.34.5

⚡ New provider: Scaleway

[!Tip]
All supported Scaleway models can be found here. For more details, check out its documentation page.

Scaleway is a European cloud provider, serving the latest LLM models through its Generative APIs alongside a complete cloud ecosystem.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

v0.34.4: [v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities

Compare Source

The biggest update is support for the image-to-video task with the inference provider Fal AI:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)

And some quality of life improvements:

Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4

v0.34.3: [v0.34.3] Jobs improvements and whoami user prefix

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.2...v0.34.3

v0.34.2: [v0.34.2] Bug fixes: Windows path handling & resume download size fix

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.1...v0.34.2

v0.34.1: [v0.34.1] [CLI] print help if no command provided

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.0...v0.34.1

v0.34.0: [v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face!

Compare Source

🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!

We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new hf jobs command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs - all with simple, intuitive commands.

Key features:

  • 🐳 Docker-like CLI: Familiar commands (run, ps, logs, inspect, cancel) to run and manage jobs
  • 🔥 Any Hardware: Instantly access CPUs, T4/A10G/A100 GPUs, and TPUs with a simple flag
  • 📦 Run Anything: Use Docker images, HF Spaces, or custom containers
  • 📊 Live Monitoring: Stream logs in real-time, just like running locally
  • 💰 Pay-as-you-go: Only pay for the seconds you use
  • 🧬 UV Runner: Run Python scripts with inline dependencies using uv (experimental)

All features are available both from Python (run_job, list_jobs, etc.) and the CLI (hf jobs).

Example usage:

# Run a Python script on the cloud
hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

# Use a GPU
hf jobs run --flavor=t4-small --namespace=huggingface ubuntu nvidia-smi

# List your jobs
hf jobs ps

# Stream logs from a job
hf jobs logs <job-id>

# Inspect job details
hf jobs inspect <job-id>

# Cancel a running job
hf jobs cancel <job-id>

# Run a UV script (experimental)
hf jobs uv run my_script.py --flavor=a10g-small --with=trl

You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental uv runner for Python scripts with inline dependencies.
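For instance, combining the flags shown above (the -e flag for environment variables is an assumption based on the Docker-like syntax; -s for secrets appears in the lighteval example earlier; the namespace is hypothetical):

hf jobs run --flavor=a10g-small --namespace=my-org -e LOG_LEVEL=debug -s HF_TOKEN python:3.12 python -c "import os; print(os.environ['LOG_LEVEL'])"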

Check out the Jobs guide for more examples and details.

🚀 The CLI is now hf! (formerly huggingface-cli)

Glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from huggingface-cli to hf! The legacy huggingface-cli remains available without any breaking change, but it is officially deprecated. We took the opportunity to update the syntax to a more modern command format, hf <resource> <action> [options] (e.g. hf auth login, hf repo create, hf jobs run).

Run hf --help to learn more about the CLI options.

✗ hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,jobs,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    jobs                Run and manage Jobs on the Hub.
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit

⚡ Inference

🖼️ Image-to-image

Added support for the image-to-image task in the InferenceClient for the Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
# or, equivalently, with Replicate:
# client = InferenceClient(provider="replicate")

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)

In addition to this, it is now possible to directly pass a PIL.Image as input to the InferenceClient.
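A minimal sketch of that input path (same model and prompt as above; the local filename is illustrative):

from huggingface_hub import InferenceClient
from PIL import Image

client = InferenceClient(provider="fal-ai")

# A PIL.Image can be passed directly; no manual byte-reading required.
input_image = Image.open("cat.png")
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)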

🤖 Tiny-Agents

tiny-agents got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the config format from VS Code more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "github-personal-access-token",
      "description": "Github Personal Access Token (read-only)",
      "password": true
    }
  ],
  "servers": [
    {
     "type": "stdio",
     "command": "docker",
     "args": [
       "run",
       "-i",
       "--rm",
       "-e",
       "GITHUB_PERSONAL_ACCESS_TOKEN",
       "-e",
       "GITHUB_TOOLSETS=repos,issues,pull_requests",
       "ghcr.io/github/github-mcp-server"
     ],
     "env": {
       "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-personal-access-token}"
     }
    }
  ]
}
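Saved as agent.json, this config can then be launched from the CLI; the command below follows the tiny-agents documentation, and the promptString input is requested interactively at startup:

tiny-agents run agent.json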
🐛 Bug fixes

InferenceClient and tiny-agents got a few quality-of-life improvements and bug fixes.

📤 Xet

Integration of Xet is now stable and production-ready. A majority of file transfers on new repos are now handled using this protocol. A few improvements have been shipped to ease the developer experience during uploads.

Documentation has also been written to better explain the protocol and its options.

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ internal

v0.33.5: [v0.33.5] [Inference] Fix a UserWarning when streaming with AsyncInferenceClient

Compare Source

  • Fix: "UserWarning: ... sessions are still open..." when streaming with AsyncInferenceClient #​3252

Full Changelog: huggingface/huggingface_hub@v0.33.4...v0.33.5

v0.33.4: [v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools

Compare Source

  • Omit parameters in default tools of tiny-agent #​3214

Full Changelog: huggingface/huggingface_hub@v0.33.3...v0.33.4

v0.33.3: [v0.33.3] [Tiny-Agent]: Update tiny-agents example

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.2...v0.33.3

v0.33.2: [v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.1...v0.33.2

Breaking changes:

  • no more config nested mapping => everything at root level
  • headers at root level instead of inside options.requestInit
  • updated the way values are pulled from ENV (based on input id)

Example of agent.json:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}

Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents

v0.33.1: [v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message handling Improvement, and Inference Endpoints Health Check Update

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.0...v0.33.1

This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check.

v0.33.0: [v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!

Compare Source

⚡ New provider: Featherless.AI

Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers often offer either low-cost access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds, offering unmatched model range and variety with serverless pricing. Find the full list of supported models on the models page.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528", 
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ], 
)

print(completion.choices[0].message)

⚡ New provider: Groq

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.

Groq offers fast AI inference for openly-available models. They provide an API that allows developers to easily integrate these models into their applications, with an on-demand, pay-as-you-go model for accessing a wide range of openly-available LLMs.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)

🤖 MCP and Tiny-agents

It is now possible to run tiny-agents using a local server, e.g. llama.cpp. 100% local agents are right around the corner!
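A sketch of what such a config could look like, based on local-endpoint examples in the tiny-agents dataset (the endpointUrl value and the model name are assumptions tied to your local llama.cpp server):

{
  "model": "local-model",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": []
}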

Fixing some DX issues in the tiny-agents CLI.

📚 Documentation

New translation from the Hindi-speaking community, for the community!

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.32.6: [v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid

Compare Source

  • Fix for wrongly saved upload_mode/remote_oid #​3113

Full Changelog: huggingface/huggingface_hub@v0.32.5...v0.32.6

v0.32.5: [v0.32.5] [Tiny-Agents] inject environment variables in headers

Compare Source

  • Inject env var in headers + better type annotations #​3142

Full Changelog: huggingface/huggingface_hub@v0.32.4...v0.32.5

v0.32.4: [v0.32.4]: Bug fixes in tiny-agents, and fix input handling for question-answering task.

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering.
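For reference, a minimal question-answering call looks like this (the model choice is illustrative):

from huggingface_hub import InferenceClient

client = InferenceClient()
answer = client.question_answering(
    question="Where does Clara live?",
    context="My name is Clara and I live in Berkeley.",
    model="deepset/roberta-base-squad2",
)
print(answer.answer)  # "Berkeley"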

v0.32.3: [v0.32.3]: Handle env variables in tiny-agents, better CLI exit and handling of MCP tool calls arguments

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #​3129
  • [Fix] tiny-agents cli exit issues #​3125
  • Improve Handling of MCP Tool Call Arguments #​3127

v0.32.2: [v0.32.2]: Add endpoint support in Tiny-Agent + fix snapshot_download on large repos

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.1...v0.32.2

  • [MCP] Add local/remote endpoint inference support #​3121
  • Fix snapshot_download on very large repo (>50k files) #​3122

v0.32.1: [v0.32.1]: hot-fix: Fix tiny agents on Windows

Compare Source

Patch release to fix #​3116

Full Changelog: huggingface/huggingface_hub@v0.32.0...v0.32.1

v0.32.0: [v0.32.0]: MCP Client, Tiny Agents CLI and more!

Compare Source

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U
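A rough sketch of the client in use (method names follow the huggingface_hub MCP documentation, but treat the exact signatures as assumptions; the stdio server command is hypothetical):

import asyncio
from huggingface_hub import MCPClient

async def main():
    # MCPClient extends InferenceClient with MCP tool handling.
    client = MCPClient(model="Qwen/Qwen2.5-72B-Instruct", provider="nebius")
    # Connect a local MCP server over stdio.
    await client.add_mcp_server(type="stdio", command="python", args=["my_mcp_server.py"])
    # Stream one turn of chat, letting the model call the server's tools.
    async for chunk in client.process_single_turn_with_tools(
        [{"role": "user", "content": "List your available tools."}]
    ):
        print(chunk)

asyncio.run(main())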


---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Saturday" in timezone Europe/Paris, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/Fabien-R/text-to-sql-assistant).

@renovate renovate bot force-pushed the renovate/all-minor-patch branch from d15ff19 to 8b0d658 on July 20, 2024 19:39
@renovate renovate bot changed the title from "⬆️ Update dependency huggingface-hub to v0.24.0" to "⬆️ Update all non-major dependencies" on Jul 20, 2024
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 4 times, most recently from ec17c48 to 2f4e838 on July 29, 2024 15:12
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 3 times, most recently from 3393c6a to 1d769d9 on August 6, 2024 10:40
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from 1d769d9 to 1d4c160 on August 8, 2024 02:19
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from 1d4c160 to a20af6f on August 19, 2024 18:17
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 3 times, most recently from d2da685 to 947bfa5 on September 12, 2024 09:54
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from cb83a8e to 866bad3 on September 23, 2024 14:06
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from 162bf2a to 7d4015e on October 8, 2024 03:48
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from 7d4015e to 5bbd4f6 on October 9, 2024 11:25
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from e689f8a to 366a4a9 on October 21, 2024 16:23
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from 366a4a9 to 622fea3 on October 28, 2024 15:57
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 3 times, most recently from 7cbb7cb to f2113dc on December 5, 2024 05:07
Contributor Author

renovate bot commented Dec 5, 2024

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: poetry.lock
Updating dependencies
Resolving dependencies...

Creating virtualenv text-to-sql-tVnws56r-py3.14 in /home/ubuntu/.cache/pypoetry/virtualenvs

The current project's supported Python range (3.14.0) is not compatible with some of the required packages Python requirement:
  - crewai requires Python <=3.13,>=3.10, so it will not be satisfied for Python 3.14.0

Because text-to-sql depends on crewai (0.32.2) which requires Python <=3.13,>=3.10, version solving failed.

  • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
    
    For crewai, a possible solution would be to set the `python` property to "<empty>"

    https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
    https://python-poetry.org/docs/dependency-specification/#using-environment-markers
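For context, the fix on the repository side would be to keep the project's Python range inside crewai's supported window instead of bumping to 3.14, e.g. in pyproject.toml (a sketch, not the project's actual file):

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"  # stay inside crewai's <=3.13,>=3.10 requirement
crewai = "0.32.2"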

@renovate renovate bot force-pushed the renovate/all-minor-patch branch from f2113dc to c4a48fc on December 6, 2024 20:21
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from c4a48fc to 1e62d88 on December 16, 2024 15:11
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from 1e62d88 to dd15106 on January 6, 2025 13:39
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from dd15106 to 493731e on January 28, 2025 17:24
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 3 times, most recently from b34b1e7 to fcaae6b on May 30, 2025 09:42
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 4 times, most recently from 0cc1316 to df88180 on June 11, 2025 19:09
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from ed6a90d to 5e72be1 on June 18, 2025 08:32
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from dd3186b to ce76941 on June 25, 2025 14:29
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from ce76941 to 56a6e07 on July 2, 2025 07:51
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from 3974f97 to 849f3c3 on July 11, 2025 16:09
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 5 times, most recently from 8079324 to def8b44 on July 29, 2025 11:28
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from a750c2e to be9dfc4 on August 8, 2025 09:27
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from be9dfc4 to f5c4830 on August 16, 2025 00:46
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from f5c4830 to ef57e3d on September 6, 2025 19:39
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 2 times, most recently from fc2dfbf to 3da16de on September 16, 2025 16:42
@renovate renovate bot force-pushed the renovate/all-minor-patch branch 3 times, most recently from 3560b85 to d1f9b26 on September 29, 2025 17:52
| datasource      | package                       | from   | to     |
| --------------- | ----------------------------- | ------ | ------ |
| pypi            | huggingface-hub               | 0.23.4 | 0.35.3 |
| pypi            | pytest                        | 8.2.2  | 8.4.2  |
| github-releases | containerbase/python-prebuild | 3.11.9 | 3.14.0 |
| pypi            | python-dotenv                 | 1.0.1  | 1.1.1  |
@renovate renovate bot force-pushed the renovate/all-minor-patch branch from d1f9b26 to 0328808 on October 8, 2025 05:10