Conversation

@red-hat-konflux red-hat-konflux bot commented May 17, 2025

This PR contains the following updates:

Package          Change
huggingface_hub  ==0.23.4 -> ==0.34.4

Release Notes

huggingface/huggingface_hub (huggingface_hub)

v0.34.4: [v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities

Compare Source

The biggest update is support for the Image-to-Video task with the Fal AI inference provider:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
 ...     f.write(video)

And some quality of life improvements:

Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4

v0.34.3: [v0.34.3] Jobs improvements and whoami user prefix

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.2...v0.34.3

v0.34.2: [v0.34.2] Bug fixes: Windows path handling & resume download size fix

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.1...v0.34.2

v0.34.1: [v0.34.1] [CLI] print help if no command provided

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.34.0...v0.34.1

v0.34.0: [v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face!

Compare Source

🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!

We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new hf jobs command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs - all with simple, intuitive commands.

Key features:

  • 🐳 Docker-like CLI: Familiar commands (run, ps, logs, inspect, cancel) to run and manage jobs
  • 🔥 Any Hardware: Instantly access CPUs, T4/A10G/A100 GPUs, and TPUs with a simple flag
  • 📦 Run Anything: Use Docker images, HF Spaces, or custom containers
  • 📊 Live Monitoring: Stream logs in real-time, just like running locally
  • 💰 Pay-as-you-go: Only pay for the seconds you use
  • 🧬 UV Runner: Run Python scripts with inline dependencies using uv (experimental)

All features are available both from Python (run_job, list_jobs, etc.) and the CLI (hf jobs).

Example usage:

# Run a Python script on the cloud
hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

# Use a GPU
hf jobs run --flavor=t4-small --namespace=huggingface ubuntu nvidia-smi

# List your jobs
hf jobs ps

# Stream logs from a job
hf jobs logs <job-id>

# Inspect job details
hf jobs inspect <job-id>

# Cancel a running job
hf jobs cancel <job-id>

# Run a UV script (experimental)
hf jobs uv run my_script.py --flavor=a10g-small --with=trl

You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental uv runner for Python scripts with inline dependencies.
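Below is a rough Python equivalent of a couple of the CLI calls above, using the run_job / list_jobs helpers mentioned earlier. The keyword arguments shown are assumptions that mirror the CLI flags, not confirmed signatures:

from huggingface_hub import run_job, list_jobs

# Launch the "hello from the cloud" job from the CLI example
# (image/command keyword names are assumed to mirror the CLI flags)
job = run_job(
    image="python:3.12",
    command=["python", "-c", "print('Hello from the cloud!')"],
)

# List your jobs, similar to `hf jobs ps`
for j in list_jobs():
    print(j)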

Check out the Jobs guide for more examples and details.

🚀 The CLI is now hf! (formerly huggingface-cli)

We're glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from huggingface-cli to hf! The legacy huggingface-cli remains available without any breaking change but is officially deprecated. We took the opportunity to update the syntax to a more modern command format, hf <resource> <action> [options] (e.g. hf auth login, hf repo create, hf jobs run).
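For reference, a few old-to-new equivalents (illustrative; hf auth login comes from the announcement above, the others follow the same hf <resource> <action> pattern and the command list printed below):

# Old (deprecated)             New
huggingface-cli login      ->  hf auth login
huggingface-cli download   ->  hf download
huggingface-cli upload     ->  hf upload
huggingface-cli env        ->  hf env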

Run hf --help to learn more about the CLI options.

✗ hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,jobs,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    jobs                Run and manage Jobs on the Hub.
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit

⚡ Inference

🖼️ Image-to-image

Added support for the image-to-image task in the InferenceClient for the Replicate and fal.ai providers, allowing quick image generation with FLUX.1-Kontext-dev:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)

In addition to this, it is now possible to directly pass a PIL.Image as input to the InferenceClient.
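Since a PIL.Image can now be passed directly, the example above can be shortened (a sketch of the same call):

from PIL import Image
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")

# Pass a PIL.Image directly instead of raw bytes
image = client.image_to_image(
    Image.open("cat.png"),
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)
image.save("tiger.png")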

🤖 Tiny-Agents

tiny-agents got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the VSCode MCP config format more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "github-personal-access-token",
      "description": "Github Personal Access Token (read-only)",
      "password": true
    }
  ],
  "servers": [
    {
     "type": "stdio",
     "command": "docker",
     "args": [
       "run",
       "-i",
       "--rm",
       "-e",
       "GITHUB_PERSONAL_ACCESS_TOKEN",
       "-e",
       "GITHUB_TOOLSETS=repos,issues,pull_requests",
       "ghcr.io/github/github-mcp-server"
     ],
     "env": {
       "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-personal-access-token}"
     }
    }
  ]
}
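Assuming the config above is saved as agent.json inside a local folder (the folder name below is hypothetical), it can be launched with the tiny-agents CLI described later in these notes:

# Launch the agent defined in ./github-agent/agent.json;
# the declared promptString input is expected to be requested at startup
tiny-agents run ./github-agent
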
🐛 Bug fixes

InferenceClient and tiny-agents got a few quality of life improvements and bug fixes:

📤 Xet

Integration of Xet is now stable and production-ready. A majority of file transfers are now handled using this protocol on new repos. A few improvements have been shipped to improve the developer experience during uploads:

Documentation has been written to better explain the protocol and its options:

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ internal

v0.33.5: [v0.33.5] [Inference] Fix a UserWarning when streaming with AsyncInferenceClient

Compare Source

  • Fix: "UserWarning: ... sessions are still open..." when streaming with AsyncInferenceClient #​3252

Full Changelog: huggingface/huggingface_hub@v0.33.4...v0.33.5

v0.33.4: [v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools

Compare Source

  • Omit parameters in default tools of tiny-agent #3214

Full Changelog: huggingface/huggingface_hub@v0.33.3...v0.33.4

v0.33.3: [v0.33.3] [Tiny-Agent]: Update tiny-agents example

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.2...v0.33.3

v0.33.2: [v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.1...v0.33.2

Breaking changes:

  • the config is no longer a nested mapping => everything is at root level
  • headers are now at root level instead of inside options.requestInit
  • updated the way values are pulled from the environment (based on input id)

Example of agent.json:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}

Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents

v0.33.1: [v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message handling Improvement, and Inference Endpoints Health Check Update

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.33.0...v0.33.1

This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check:

v0.33.0: [v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!

Compare Source

⚡ New provider: Featherless.AI

Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers often offer either low-cost access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds: an unmatched range and variety of models with serverless pricing. Find the full list of supported models on the models page.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528", 
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ], 
)

print(completion.choices[0].message)

⚡ New provider: Groq

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.

Groq offers fast AI inference for openly available models, providing an API that allows developers to easily integrate them into their applications on an on-demand, pay-as-you-go basis.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)

🤖 MCP and Tiny-agents

It is now possible to run tiny-agents against a local server, e.g. llama.cpp. 100% local agents are right around the corner!
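As a sketch, a local setup could look like the config below, written in the current config format shown earlier in these notes. The endpointUrl key is an assumption carried over from the JS tiny-agents configs, and the URL and model values are placeholders:

{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp"
    }
  ]
}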

We also fixed some DX issues in the tiny-agents CLI.

📚 Documentation

New translation from the Hindi-speaking community, for the community!

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.32.6: [v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid

Compare Source

  • Fix for wrongly saved upload_mode/remote_oid #3113

Full Changelog: huggingface/huggingface_hub@v0.32.5...v0.32.6

v0.32.5: [v0.32.5] [Tiny-Agents] inject environment variables in headers

Compare Source

  • Inject env var in headers + better type annotations #3142

Full Changelog: huggingface/huggingface_hub@v0.32.4...v0.32.5

v0.32.4: [v0.32.4]: Bug fixes in tiny-agents, and fix input handling for question-answering task.

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering:

v0.32.3: [v0.32.3]: Handle env variables in tiny-agents, better CLI exit and handling of MCP tool calls arguments

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #3129
  • [Fix] tiny-agents cli exit issues #3125
  • Improve Handling of MCP Tool Call Arguments #3127

v0.32.2: [v0.32.2]: Add endpoint support in Tiny-Agent + fix snapshot_download on large repos

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.32.1...v0.32.2

  • [MCP] Add local/remote endpoint inference support #3121
  • Fix snapshot_download on very large repo (>50k files) #3122

v0.32.1: [v0.32.1]: hot-fix: Fix tiny agents on Windows

Compare Source

Patch release to fix #3116

Full Changelog: huggingface/huggingface_hub@v0.32.0...v0.32.1

v0.32.0: [v0.32.0]: MCP Client, Tiny Agents CLI and more!

Compare Source

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U huggingface_hub[mcp]

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:

import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")
        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]
        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")

            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around MCPClient. It's designed to be a simple while loop built right on top of an MCPClient.
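A minimal Python sketch of what that wrapper looks like; the Agent constructor arguments and the shape of the server entry are assumptions inferred from the MCPClient example above, not a documented signature:

import asyncio
import os

from huggingface_hub import Agent  # higher-level wrapper around MCPClient

async def main():
    agent = Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        # Same SSE tool server as in the MCPClient example; the entry shape is an assumption
        servers=[{"type": "sse", "config": {"url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}}],
    )
    await agent.load_tools()
    async for chunk in agent.run("Generate a picture of a cat on the moon"):
        print(chunk)  # stream text deltas and tool-call messages, as in the MCPClient example

if __name__ == "__main__":
    asyncio.run(main())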

You can run these Agents directly from the command line:

> tiny-agents run --help
                                                                                                                                                                                     
 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...                                                                                                                           
                                                                                                                                                                                     
 Run the Agent in the CLI                                                                                                                                                            
                                                                                                                                                                                     
                                                                                                                                                                                     
╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                         │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)                                                                                                     │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.

This is an early version of the MCPClient, and community contributions are welcome 🤗

⚡ Inference Providers

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!

We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥

We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API spec for structured output.
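As an illustration, an OpenAI-style structured output request would look roughly like this; the response_format payload below follows the OpenAI spec and is an assumption, not an example taken from these notes:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Give me the capital of France as JSON."}],
    # OpenAI-style JSON-schema structured output (schema contents are illustrative)
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "capital",
            "schema": {
                "type": "object",
                "properties": {"capital": {"type": "string"}},
                "required": ["capital"],
            },
        },
    },
)
print(completion.choices[0].message.content)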

💾 Serialization

We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:

from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")

@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError

This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.
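For example, type-hint validation and validation on assignment (both mentioned above) mean the following should also raise, reusing the Config class from the snippet:

config = Config(model_type=42)   # wrong type for `model_type`: expected to raise StrictDataclassFieldValidationError

config = Config(model_type="bert")
config.hidden_size = -4          # validators also run on assignment, so this is expected to raise too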

This release also brings support for DTensor in the _get_unique_id / get_torch_storage_size helpers, allowing transformers to seamlessly use save_pretrained with DTensor.

✨ HF API

When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.

📚 Documentation

We now have much more detailed documentation for Inference! It includes clearer explanations and examples showing that the InferenceClient can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).
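For example, pointing the client at a local OpenAI-compatible server (the URL and model name below are placeholders):

from huggingface_hub import InferenceClient

# Any OpenAI-compatible local server works (llama.cpp, vLLM, ...)
client = InferenceClient(base_url="http://localhost:8080/v1")

completion = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore or remap this field
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message)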

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal

Community contributions

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.31.4: [v0.31.4]: strict dataclasses, support DTensor saving & some bug fixes

Compare Source

This release includes some new features and bug fixes:

  • New strict decorators for runtime dataclass validation with custom and type-based checks, by @Wauplin in #2895.
  • Added DTensor support to _get_unique_id / get_torch_storage_size helpers, enabling transformers to use save_pretrained with DTensor, by @S1ro1 in #3042.
  • Some bug fixes: #3080 & #3076.

Full Changelog: huggingface/huggingface_hub@v0.31.2...v0.31.4

v0.31.3

Compare Source

v0.31.2: [v0.31.2] Hot-fix: make hf-xet optional again and bump the min version of the package

Compare Source

Patch release to make hf-xet optional. More context in #3079 and #3078.

Full Changelog: huggingface/huggingface_hub@v0.31.1...v0.31.2

v0.31.1

Compare Source

v0.31.0: [v0.31.0] LoRAs with Inference Providers, auto mode for provider selection, embeddings models and more

Compare Source

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai") # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order
client = InferenceClient(provider="auto")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

⚠️ Note: This is now the default value for the provider argument. Previously, the default was hf-inference, so this change may be breaking if you don't specify a provider name when initializing InferenceClient or AsyncInferenceClient.
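If you relied on the old behavior, you can keep it by pinning the provider explicitly:

from huggingface_hub import InferenceClient

# Restore the previous default instead of automatic provider selection
client = InferenceClient(provider="hf-inference")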

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature extraction (embeddings) inference with the Sambanova provider.
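A short sketch of what that looks like (the embedding model id is a placeholder, not taken from these notes):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

# Returns the embedding vector(s) for the input text
embeddings = client.feature_extraction(
    "Today is a sunny day",
    model="intfloat/e5-mistral-7b-instruct",  # placeholder model id
)
print(embeddings.shape)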

⚡ Other Inference features

The HF Inference API is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity.
Cold-starting arbitrary models from the Hub is no longer supported: if a model isn't already deployed, it won't be available via the HF Inference API.

Miscellaneous improvements and some bug fixes:


Configuration

📅 Schedule: Branch creation - "after 5am on saturday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.

This PR has been generated by MintMaker (powered by Renovate Bot).

@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 999830e to 6628110 Compare May 24, 2025 11:46
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.31.2 Update dependency huggingface_hub to v0.32.0 May 24, 2025
coveralls commented May 24, 2025

Pull Request Test Coverage Report for Build 15505390404

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on konflux/mintmaker/konflux-poc/huggingface_hub-0.x at 93.407%

Totals Coverage Status
Change from base Build 15020007478: 93.4%
Covered Lines: 85
Relevant Lines: 91

💛 - Coveralls

@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 6628110 to 73043c0 Compare May 31, 2025 16:47
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.32.0 Update dependency huggingface_hub to v0.32.3 May 31, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 73043c0 to 3a085d9 Compare June 7, 2025 07:00
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.32.3 Update dependency huggingface_hub to v0.32.4 Jun 7, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 3a085d9 to 985fb42 Compare June 14, 2025 13:02
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.32.4 Update dependency huggingface_hub to v0.33.0 Jun 14, 2025
coderabbitai bot commented Jun 14, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 985fb42 to 4622a8b Compare June 28, 2025 05:22
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.33.0 Update dependency huggingface_hub to v0.33.1 Jun 28, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 4622a8b to f923a25 Compare July 5, 2025 05:02
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.33.1 Update dependency huggingface_hub to v0.33.2 Jul 5, 2025
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from f923a25 to 1e14bf3 Compare July 12, 2025 05:31
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.33.2 Update dependency huggingface_hub to v0.33.4 Jul 12, 2025
Signed-off-by: red-hat-konflux <126015336+red-hat-konflux[bot]@users.noreply.github.com>
@red-hat-konflux red-hat-konflux bot force-pushed the konflux/mintmaker/konflux-poc/huggingface_hub-0.x branch from 1e14bf3 to 4f63805 Compare August 9, 2025 08:24
@red-hat-konflux red-hat-konflux bot changed the title Update dependency huggingface_hub to v0.33.4 Update dependency huggingface_hub to v0.34.4 Aug 9, 2025